Project 2: The System Tick Interrupt

Build a precise 1 ms timebase using SysTick and prove its accuracy with GPIO and UART measurements.

Quick Reference

| Attribute | Value |
|---|---|
| Difficulty | Intermediate |
| Time Estimate | 1-2 weeks |
| Main Programming Language | C |
| Alternative Programming Languages | Rust, Ada |
| Coolness Level | High |
| Business Potential | Medium |
| Prerequisites | Project 1 completion, basic interrupt concepts |
| Key Topics | SysTick, clock tree, interrupt latency, ISR design, timebase accuracy |

1. Learning Objectives

By completing this project, you will:

  1. Configure SysTick to generate a precise 1 ms interrupt.
  2. Implement a low-latency ISR that updates a system tick counter.
  3. Validate tick accuracy and jitter using GPIO toggles and UART logs.
  4. Explain how clock configuration affects RTOS timing.

2. All Theory Needed (Per-Concept Breakdown)

2.1 SysTick Timer and System Clock

Fundamentals

SysTick is a 24-bit down-counter built into every Cortex-M core. It is designed to generate a periodic interrupt and is commonly used as the heartbeat of an RTOS. The counter decrements at a configurable rate derived from the CPU clock, and when it reaches zero it triggers an exception and reloads. To use SysTick correctly you must know the system clock frequency, configure the reload value, and decide whether to use the core clock or a prescaled reference clock. The relationship is simple: reload = (clock_hz / tick_hz) - 1. Any error in the clock frequency or reload calculation directly becomes timing drift in your scheduler. This concept connects the electrical reality of a microcontroller clock to the software abstraction of time.

Additional fundamentals: A tick is just a convention over raw clock cycles. The tick frequency determines the scheduler granularity and the resolution of all delays. If you choose a 1 ms tick, every timeout is quantized to that step size. This makes the tick a design decision, not a default setting, and it should be documented and justified.

Deep Dive into the concept

SysTick sits inside the core, in the System Control Space. It has three primary registers: CTRL, LOAD, and VAL. LOAD sets the reload value (max 0xFFFFFF), VAL holds the current count, and CTRL enables the counter and its interrupt. The counter can run from the core clock or an external reference (implementation-dependent). On STM32, SysTick often runs from the core clock divided by 8 when CLKSOURCE is cleared, or from the full core clock when set. This choice affects tick resolution and power consumption. For a 1 ms tick on a 168 MHz CPU, the reload is 168000-1 when using the core clock, but 21000-1 when using the divided clock. If SystemCoreClock is wrong because PLL or prescalers are misconfigured, your tick will drift, and every timeout, delay, or periodic task will inherit that drift.

The 24-bit limit also matters. At high clocks, the maximum period is limited; at 168 MHz, max SysTick period is about 0.1 seconds, so you must use a faster tick and build longer delays in software. The COUNTFLAG bit in CTRL is set when the counter wraps; reading it can be useful for diagnostics, but relying on it in the ISR is redundant because the exception already indicates a wrap. Another subtlety is that SysTick is a core exception with configurable priority. If you set it too high, it can preempt critical work and increase latency for higher-priority IRQs. If you set it too low, it can be delayed and inject jitter into your timebase. In an RTOS, SysTick is often a medium-priority interrupt that drives scheduling without starving peripheral ISRs.

Verification is as important as configuration. A common pattern is to toggle a GPIO in the SysTick ISR, giving you a direct, observable waveform to measure with a scope. If the tick is perfect, you will see a stable frequency; if it drifts, you can compute the error. For UART logging, you must ensure that the logging itself does not significantly perturb timing, which is why UART prints are usually deferred to a task rather than emitted in the ISR. This project is about establishing trust in your timebase so you can later build scheduling, delays, and timers on top of it.

Additional depth: SysTick configuration must be tied to your clock tree configuration. Many MCUs start from a default internal oscillator and then switch to a PLL to reach full speed. If you compute the SysTick reload before the PLL is stable, you can end up with incorrect timing that only fails under certain conditions. A good practice is to configure the clock tree first, wait for the PLL lock, update SystemCoreClock, and only then configure SysTick. If your system supports dynamic clock scaling, you must also reconfigure SysTick whenever the clock changes, otherwise your tick frequency will drift.

SysTick also has an interaction with debug features. Some MCUs pause SysTick when the debugger halts the core; others do not. This matters when you single-step or set breakpoints: if SysTick continues while you are halted, your tick count may jump unexpectedly. You should understand your MCU’s debug behavior and, if needed, disable SysTick during debug to keep timing consistent. Additionally, if you rely on COUNTFLAG polling, be aware that it is cleared on read; reading it in multiple places can hide events. For RTOS design, relying on the SysTick interrupt itself rather than COUNTFLAG is more deterministic.

Another practical detail is the choice of tick rate. A 1 ms tick is common, but not always optimal. A faster tick provides better scheduling granularity but increases CPU overhead. A slower tick reduces overhead but increases latency and reduces timer resolution. This tradeoff matters for power consumption and for meeting deadlines. Many modern RTOSes support tickless idle, where the SysTick is suppressed when no tasks are READY. Understanding these tradeoffs early will help you design systems that balance responsiveness and efficiency.

How this fits into the projects

SysTick is the heartbeat used in Sec. 3.1 and Sec. 3.7 for accuracy measurements. It is also a foundational requirement for Projects 4, 5, 8, and 10 where timing and scheduling depend on a stable tick.

Definitions & key terms

  • SysTick: Cortex-M timer for periodic interrupts.
  • Reload value: The count value that sets the tick period.
  • COUNTFLAG: Bit indicating the timer wrapped.
  • Clock tree: CPU and peripheral clock derivation network.

Mental model diagram (ASCII)

CPU clock -> [SysTick down-counter] -> interrupt -> tick++
                 | reload
                 v
              1 ms

How it works (step-by-step, with invariants and failure modes)

  1. Configure clock tree and compute SystemCoreClock.
  2. Set LOAD to (clock / tick) - 1 and clear VAL.
  3. Enable SysTick and its interrupt.
  4. ISR increments tick and toggles GPIO.
  5. Failure mode: wrong clock leads to drift; wrong priority leads to jitter.

Minimal concrete example

volatile uint32_t g_tick;

void SysTick_Handler(void) {
    g_tick++;
    /* BSRR only sets or resets; alternate the two halves to toggle PA5 */
    if (g_tick & 1u)
        GPIOA_BSRR = (1u << 5);          /* set PA5 */
    else
        GPIOA_BSRR = (1u << (5 + 16));   /* reset PA5 */
}

void systick_init(uint32_t core_hz) {
    SysTick->LOAD = (core_hz / 1000u) - 1u;
    SysTick->VAL = 0;
    SysTick->CTRL = SysTick_CTRL_CLKSOURCE_Msk |
                    SysTick_CTRL_TICKINT_Msk |
                    SysTick_CTRL_ENABLE_Msk;
}

Common misconceptions

  • “SysTick always runs at CPU clock” -> it can be prescaled.
  • “Tick accuracy is guaranteed” -> it depends on clock config.
  • “Logging in ISR is harmless” -> it increases latency and jitter.

Check-your-understanding questions

  1. How do you compute the SysTick reload for a 1 ms tick?
  2. What happens if SystemCoreClock is half of the real clock?
  3. Why might you choose the divided clock for SysTick?

Check-your-understanding answers

  1. (core_hz / 1000) - 1.
  2. The tick fires twice as fast, causing time to run too quickly.
  3. To reduce power or fit within the 24-bit reload limit.

Real-world applications

  • RTOS scheduling heartbeats.
  • Periodic sensor sampling loops.
  • Timeouts for communication protocols.

Where you’ll apply it

  • This project: Sec. 3.1, Sec. 3.7.2, Sec. 5.10 Phase 2.
  • Also used in: Project 5 and Project 10.

References

  • ARM Cortex-M SysTick documentation.
  • STM32F4 Reference Manual, RCC and clock configuration.

Key insights

A stable SysTick is the foundation of every time-based RTOS feature.

Summary

SysTick converts clock cycles into a reliable timebase. If it is wrong, every higher-level timing guarantee collapses.

Homework/Exercises to practice the concept

  1. Compute reload values for 1 ms and 10 ms ticks at 84 MHz and 168 MHz.
  2. Measure the tick frequency with a scope and calculate drift.
  3. Reconfigure SysTick to use the divided clock and compare jitter.

Solutions to the homework/exercises

  1. 84 MHz: 84000-1 and 840000-1; 168 MHz: 168000-1 and 1680000-1.
  2. Measure the GPIO toggling frequency and compare to expected.
  3. With the reload recomputed for the divided clock, the tick frequency is unchanged; if the reload is left as-is, the tick runs 8x slower. Expect similar jitter either way; validate with measurements.

2.2 Interrupt Latency and ISR Design

Fundamentals

Interrupt latency is the time from an interrupt event to the first instruction executed in its handler. In an RTOS, ISR latency determines how quickly the system can react to time-critical events and how accurately ticks are serviced. Latency is affected by CPU clock speed, the current interrupt masking level, and the stack/context saving that the hardware performs automatically. A well-designed ISR is short, deterministic, and defers heavy work to tasks. If an ISR is too long, it increases jitter for other interrupts and can violate real-time deadlines.

Additional fundamentals: ISR design is about predictability. The shorter and more deterministic the handler, the more stable your timing. This is why RTOS kernels keep ISRs minimal and defer work to tasks. If you understand the sources of latency, you can design ISRs that respect deadlines even under load.

Deep Dive into the concept

On Cortex-M, the hardware automatically pushes a subset of registers (R0-R3, R12, LR, PC, xPSR) when an interrupt occurs. This hardware stacking is fast but not free; it sets the base for minimum latency. If the CPU is already servicing another interrupt of higher or equal priority, the new interrupt will be delayed until the current handler completes (unless tail-chaining can apply). Tail-chaining is a feature where the CPU avoids redundant stack operations when switching directly from one ISR to another. This reduces latency but only helps when priorities allow. If you disable interrupts globally or set BASEPRI to mask lower-priority interrupts, SysTick latency can increase significantly and cause missed deadlines.

ISR design is about bounded work. The goal is to keep the handler deterministic in execution time (low WCET). To achieve this, ISRs should avoid loops over unbounded data, avoid printing over UART, and avoid waiting on locks. Instead, they should capture minimal state (timestamps, event flags, or a single data sample) and signal a task to handle the rest. This is known as deferred interrupt handling. In later projects, this is the core idea behind queue-based ISR-to-task communication. The same principle applies to the SysTick ISR: it should increment a tick counter and possibly set a PendSV for context switching, but not do scheduling decisions that could take variable time.

Latency measurement itself must be careful. A common technique is to toggle a GPIO pin at ISR entry and exit; the time from event to the rising edge is latency, and the pulse width is ISR duration. For SysTick, the event is internal, so you approximate latency by observing consistent phase relative to a known clock. Another method is to use the DWT cycle counter to capture timestamps. But you must ensure that reading DWT does not significantly affect the ISR time; it adds a few cycles, but it is deterministic and often acceptable.

Understanding interrupt latency is also essential for priority configuration. SysTick is typically assigned a lower priority than urgent peripheral interrupts, but higher than background tasks. If SysTick is too low, it can be delayed by long peripheral ISRs, increasing timebase jitter. If too high, it can interrupt those peripherals and cause data loss. The correct priority is a system-level tradeoff that you must decide based on your use case. This project makes you measure and reason about that tradeoff instead of accepting defaults.

Additional depth: Interrupt latency depends not only on ISR length but also on how you structure critical sections. If you use PRIMASK to disable interrupts for long periods, even high-priority interrupts will be delayed. If you instead use BASEPRI, you can allow urgent interrupts to fire while masking only lower-priority ones. This is a powerful tool for reducing jitter in the system tick without compromising critical peripheral timing. The tradeoff is complexity: BASEPRI values must be carefully chosen and aligned to the priority grouping in the NVIC.

In many RTOS designs, the SysTick ISR does the bare minimum (increment tick, maybe set PendSV). But the scheduler itself is often run at PendSV or in a separate kernel context to keep SysTick predictable. The time in SysTick should be bounded and preferably constant. If you add conditionals or loops, you can unintentionally create variable ISR time, which appears as jitter. When measuring latency, you should record both the time from the event to ISR entry and the time inside the ISR. These are different metrics with different causes.

Another subtle point is that interrupt latency may increase when the CPU services a sequence of back-to-back interrupts. Cortex-M tail-chaining improves this, but it is still limited by priority ordering. If you have a long-running high-priority ISR, lower-priority interrupts (including SysTick) will be delayed, which can manifest as missed ticks. The fix is to keep all ISRs short and to push work into tasks. This project teaches you to measure these effects, but the deeper lesson is that latency is a system-level property, not just a property of one ISR.

How this fits into the projects

You will design the SysTick ISR to be minimal (Sec. 3.2) and measure its latency and execution time (Sec. 3.7.2). The same principles apply to queues and deferred processing in Project 7.

Definitions & key terms

  • Latency: Time from interrupt event to handler execution.
  • ISR: Interrupt Service Routine.
  • Tail-chaining: Skipping redundant stacking when switching ISRs.
  • BASEPRI: Cortex-M register used to mask lower-priority interrupts.

Mental model diagram (ASCII)

IRQ event -> HW stacking -> ISR entry -> minimal work -> signal task

How it works (step-by-step, with invariants and failure modes)

  1. Interrupt event occurs.
  2. CPU saves context and switches to handler.
  3. ISR performs minimal work and exits quickly.
  4. Failure mode: long ISR delays others, causing jitter.

Minimal concrete example

void SysTick_Handler(void) {
    GPIOA_BSRR = (1u << 5);          /* set PA5: pulse marks ISR entry */
    g_tick++;
    GPIOA_BSRR = (1u << (5 + 16));   /* reset PA5: pulse width = ISR duration */
}

Common misconceptions

  • “ISR priority doesn’t matter” -> it determines who can preempt whom.
  • “UART printf in ISR is fine” -> it adds unpredictable latency.
  • “Disabling interrupts is harmless” -> it blocks critical timing.

Check-your-understanding questions

  1. What is tail-chaining and why does it reduce latency?
  2. Why should SysTick ISR be short and deterministic?
  3. How does BASEPRI affect SysTick latency?

Check-your-understanding answers

  1. It allows back-to-back ISRs without full context restore, saving cycles.
  2. Long ISRs cause jitter and can delay higher-priority interrupts.
  3. If BASEPRI masks SysTick, the tick will be delayed.

Real-world applications

  • Motor control loops where latency directly affects stability.
  • Communication stacks that require precise ISR timing.

Where you’ll apply it

  • This project: Sec. 3.2, Sec. 3.7.2.
  • Also used in: Project 7 (deferred ISR-to-task processing).

References

  • ARM Cortex-M exception entry/exit timing.
  • “Real-Time Concepts for Embedded Systems” by Qing Li, Ch. 10-11.

Key insights

Short, deterministic ISRs are the difference between predictable time and chaos.

Summary

Interrupt latency and ISR design directly govern the accuracy of your system tick and any real-time guarantee.

Homework/Exercises to practice the concept

  1. Measure ISR duration with a GPIO pulse and compute cycle counts.
  2. Add a deliberate delay to the ISR and quantify the increase in jitter.
  3. Change SysTick priority and observe how other ISRs are affected.

Solutions to the homework/exercises

  1. Use a scope and multiply pulse width by clock frequency.
  2. Insert a busy loop and observe increased tick jitter.
  3. Lower SysTick priority and observe longer delays under load.

3. Project Specification

3.1 What You Will Build

A firmware module that configures SysTick for a 1 ms period, increments a global tick counter in the ISR, toggles a GPIO for measurement, and prints a tick count to UART once per second.

Included:

  • SysTick configuration and ISR
  • GPIO timing pulse
  • UART periodic reporting task

Excluded:

  • Full scheduler (later projects)
  • Complex time services

3.2 Functional Requirements

  1. Tick Accuracy: SysTick fires every 1 ms with minimal drift.
  2. Low ISR Overhead: ISR execution < 10 us on target hardware.
  3. UART Reporting: Every 1000 ticks, send a timestamp line.

3.3 Non-Functional Requirements

  • Performance: Tick jitter < 5 us under no additional load.
  • Reliability: Tick counter does not overflow in 32-bit for at least 49 days.
  • Usability: Timing can be validated with a GPIO pin or UART output.

3.4 Example Usage / Output

[000001000] tick=1000
[000002000] tick=2000
[000003000] tick=3000

3.5 Data Formats / Schemas / Protocols

  • UART text line: [tick] tick=<value>

3.6 Edge Cases

  • Incorrect SystemCoreClock -> drifted tick.
  • Interrupts disabled too long -> missed ticks.
  • SysTick reload out of range -> no interrupts.

3.7 Real World Outcome

You can measure an exact 1 kHz GPIO toggle and see UART logs that confirm a stable 1-second cadence. The tick counter is trustworthy enough to build a scheduler.

3.7.1 How to Run (Copy/Paste)

make clean all
make flash

Exit codes:

  • make: 0 success, 2 build failure.
  • openocd: 0 flash success, 1 connection failure.

3.7.2 Golden Path Demo (Deterministic)

  1. Flash firmware and connect UART at 115200 baud.
  2. Observe a line every second with incrementing tick count.
  3. Probe the GPIO pin and measure the waveform: a toggle every 1 ms tick produces a 500 Hz square wave (1 kHz edge rate).

3.7.3 Failure Demo (Deterministic)

  1. Intentionally set the reload value to (clock/2000)-1.
  2. Flash and observe the UART counts incrementing twice as fast.
  3. Correct the value and verify timing returns to normal.

4. Solution Architecture

4.1 High-Level Design

SysTick ISR -> tick counter -> periodic UART print task
            -> GPIO pulse (measurement)

4.2 Key Components

| Component | Responsibility | Key Decisions |
|---|---|---|
| systick.c | Configure SysTick and ISR | Core clock vs divided clock |
| uart.c | Print tick status | Non-blocking transmit |
| gpio.c | Timing pulse output | Use BSRR for atomic toggle |

4.3 Data Structures (No Full Code)

volatile uint32_t g_tick;

4.4 Algorithm Overview

Key Algorithm: Tick Update

  1. SysTick ISR increments g_tick.
  2. If g_tick % 1000 == 0, signal UART print task.
  3. UART prints formatted status.

Complexity Analysis:

  • Time: O(1) per tick.
  • Space: O(1).

5. Implementation Guide

5.1 Development Environment Setup

brew install arm-none-eabi-gcc openocd

5.2 Project Structure

rtos-p02/
|-- src/
|   |-- systick.c
|   |-- uart.c
|   `-- main.c
|-- include/
|-- Makefile
`-- build/

5.3 The Core Question You’re Answering

“How do I convert raw CPU cycles into a precise, verifiable millisecond timebase?”

5.4 Concepts You Must Understand First

  1. SysTick registers and reload calculation.
  2. Clock tree configuration and SystemCoreClock.
  3. ISR timing and minimal handler design.

5.5 Questions to Guide Your Design

  1. What is the correct reload value for 1 ms?
  2. How will you confirm tick accuracy without disturbing the ISR?
  3. Which interrupt priority should SysTick use?

5.6 Thinking Exercise

Calculate the drift after 10 minutes if your clock value is off by 0.5%.

5.7 The Interview Questions They’ll Ask

  1. “Why is SysTick commonly used for RTOS ticks?”
  2. “How do you measure ISR latency on real hardware?”
  3. “What happens when interrupts are disabled for too long?”

5.8 Hints in Layers

Hint 1: Use the core clock. Start with CLKSOURCE = 1 to avoid confusion.

Hint 2: Toggle a GPIO. Use a scope to confirm the frequency quickly.

Hint 3: Defer UART. Print in a task or main loop, not inside the ISR.


5.9 Books That Will Help

| Topic | Book | Chapter |
|---|---|---|
| Timer interrupts | “Real-Time Concepts for Embedded Systems” | Ch. 11 |
| Clock configuration | “Making Embedded Systems” | Ch. 4-5 |

5.10 Implementation Phases

Phase 1: SysTick Bring-Up (2-3 days)

Goals: Configure SysTick and see the GPIO pulse.
Tasks: Implement SysTick init; toggle a GPIO in the ISR.
Checkpoint: 1 kHz toggle rate verified by scope.

Phase 2: Tick Accounting (2-3 days)

Goals: Maintain a reliable tick counter.
Tasks: Increment g_tick; add 1-second UART reporting.
Checkpoint: UART prints a line every second.

Phase 3: Jitter Measurement (2-3 days)

Goals: Quantify ISR latency and jitter.
Tasks: Measure pulse width and timing variance.
Checkpoint: Jitter < 5 us under idle conditions.

5.11 Key Implementation Decisions

| Decision | Options | Recommendation | Rationale |
|---|---|---|---|
| SysTick clock source | Core vs divided | Core | Simplifies reload calculations |
| UART logging | ISR vs task | Task | Avoids ISR delays |
| GPIO toggle method | ODR vs BSRR | BSRR | Atomic updates |

6. Testing Strategy

6.1 Test Categories

| Category | Purpose | Examples |
|---|---|---|
| Unit Tests | Validate math | Reload calculations |
| Integration Tests | Tick + UART + GPIO | Full flashing test |
| Edge Case Tests | Drift and overflow | Long run test |

6.2 Critical Test Cases

  1. Tick frequency: 1 kHz pulse measured by scope.
  2. UART cadence: 1 line per second for 5 minutes.
  3. Priority stress: Add another ISR and confirm jitter bounds.

6.3 Test Data

Expected tick frequency: 1000 Hz
Acceptable jitter: < 5 us (idle)

7. Common Pitfalls & Debugging

7.1 Frequent Mistakes

| Pitfall | Symptom | Solution |
|---|---|---|
| Wrong reload value | Tick too fast or slow | Recompute from SystemCoreClock |
| UART in ISR | Large jitter | Defer UART to main/task |
| SysTick priority too high | Peripheral ISR delays | Lower SysTick priority |

7.2 Debugging Strategies

  • Use a scope on GPIO for immediate validation.
  • Print SystemCoreClock once at boot for sanity.

7.3 Performance Traps

  • Printing per tick is catastrophic for timing; print at coarse intervals.

8. Extensions & Challenges

8.1 Beginner Extensions

  • Add a second GPIO toggle every 100 ticks.
  • Add a compile-time option for 10 ms tick.

8.2 Intermediate Extensions

  • Implement a tickless idle mode (suspend SysTick when idle).
  • Add DWT cycle counter timestamps in ISR.

8.3 Advanced Extensions

  • Implement dynamic tick rate changes for low power.
  • Measure latency under heavy DMA traffic.

9. Real-World Connections

9.1 Industry Applications

  • Periodic control loops in industrial automation.
  • Precise timeouts in wireless communication stacks.
9.2 Open-Source Examples

  • FreeRTOS: SysTick setup in the port layer.
  • Zephyr: SysTick driver and tickless mode.

9.3 Interview Relevance

  • Many embedded interviews ask how to configure SysTick and measure jitter.

10. Resources

10.1 Essential Reading

  • ARM SysTick documentation.
  • “Real-Time Concepts for Embedded Systems” by Qing Li, Ch. 10-11.

10.2 Video Resources

  • Cortex-M timer interrupt walkthroughs.

10.3 Tools & Documentation

  • OpenOCD and GDB for measurement.
  • Logic analyzer for jitter validation.

11. Self-Assessment Checklist

11.1 Understanding

  • I can compute reload values for any tick period.
  • I understand how ISR latency affects jitter.

11.2 Implementation

  • SysTick fires every 1 ms.
  • UART prints once per second without drift.

11.3 Growth

  • I can explain why ISR design must be minimal.

12. Submission / Completion Criteria

Minimum Viable Completion:

  • SysTick configured to 1 ms.
  • GPIO toggle measured at 1 kHz.

Full Completion:

  • UART logs show stable 1-second intervals.
  • Jitter measured and documented.

Excellence (Going Above & Beyond):

  • Tickless idle mode implemented and validated.
  • DWT-based latency measurement added.