Project 11: Cardputer Performance Lab (Frame Time, ISR Latency, and DSP Throughput)

Build a reproducible benchmark harness for graphics, ISR timing, CPU usage, and DSP throughput, then optimize based on measured evidence.

Quick Reference

Attribute                          Value
Difficulty                         Expert
Time Estimate                      2 weeks
Main Programming Language          C
Alternative Programming Languages  C++, Rust
Coolness Level                     Level 4
Business Potential                 Level 3
Prerequisites                      Timing instrumentation basics, display pipeline understanding
Key Topics                         Frame budgets, latency percentiles, dirty rectangles, FFT optimization

1. Learning Objectives

  1. Measure frame-time and ISR-latency percentiles correctly.
  2. Optimize UI redraw strategy using dirty rectangles and batching.
  3. Tune audio pipeline for real-time FFT without dropouts.
  4. Build a benchmark report that survives peer review.
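
Objective 2 hinges on redrawing only the regions that changed. A minimal dirty-rectangle accumulator, sketched here in C with hypothetical names (`dirty_rect_t`, `dirty_mark`), merges widget updates into one bounding box per frame:

```c
#include <stdbool.h>

/* Dirty-rectangle accumulator (names are illustrative, not from a
   specific API). Widgets mark regions dirty; the compositor then
   flushes one bounding box instead of the whole 240x135 panel. */
typedef struct { int x0, y0, x1, y1; bool valid; } dirty_rect_t;

void dirty_reset(dirty_rect_t *d) { d->valid = false; }

void dirty_mark(dirty_rect_t *d, int x0, int y0, int x1, int y1) {
    if (!d->valid) {
        d->x0 = x0; d->y0 = y0; d->x1 = x1; d->y1 = y1;
        d->valid = true;
        return;
    }
    /* Grow the bounding box to cover the new region. */
    if (x0 < d->x0) d->x0 = x0;
    if (y0 < d->y0) d->y0 = y0;
    if (x1 > d->x1) d->x1 = x1;
    if (y1 > d->y1) d->y1 = y1;
}

/* The flush cost is proportional to this area, not the full frame. */
int dirty_area(const dirty_rect_t *d) {
    return d->valid ? (d->x1 - d->x0) * (d->y1 - d->y0) : 0;
}
```

A single bounding box is the simplest policy; a real harness would also benchmark it against a small list of disjoint rectangles, since one large union can be worse than two small blits.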

2. All Theory Needed

2.1 Latency and Throughput Metrics

A frame-time average alone hides spikes. Percentile tracking (p95/p99) captures the user-visible stalls that averages smooth over, and is essential for judging real-time firmware quality.
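
Nearest-rank percentiles over a buffer of recent samples are enough for this. A host-testable sketch (the function name and the 512-sample cap are assumptions for illustration):

```c
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* Sort ascending; qsort comparator for float samples. */
static int cmp_f(const void *a, const void *b) {
    float x = *(const float *)a, y = *(const float *)b;
    return (x > y) - (x < y);
}

/* Nearest-rank percentile: rank = ceil(p/100 * n), 1-based.
   p is an integer percent (50, 95, 99). Sorts a copy so the
   caller's ring buffer stays in arrival order. Sketch assumes
   n <= 512. */
float percentile(const float *samples, size_t n, unsigned p) {
    float buf[512];
    memcpy(buf, samples, n * sizeof(float));
    qsort(buf, n, sizeof(float), cmp_f);
    size_t rank = (n * p + 99) / 100;  /* integer ceil(n*p/100) */
    if (rank < 1) rank = 1;
    if (rank > n) rank = n;
    return buf[rank - 1];
}
```

Run this once per reporting interval, not per frame, so the sort itself does not perturb the frame times being measured.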

2.2 Graphics and Audio Tradeoffs

Display and audio compete for bus time, CPU, and memory bandwidth. Meaningful tuning therefore requires controlled test scenarios with fixed, repeatable workloads.
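
One way to pin workloads down is a scenario descriptor that fixes every input to a run. The struct fields, names, and values below are assumptions, not a prescribed format:

```c
/* Fixed workload descriptor so runs are comparable across builds.
   All fields, including the scenario names, are illustrative. */
typedef struct {
    const char *name;
    int frames;          /* fixed frame count per run */
    int rects_per_frame; /* synthetic redraw load */
    int fft_size;        /* 0 = audio pipeline off */
    unsigned rng_seed;   /* deterministic pseudo-random input */
} bench_scenario_t;

static const bench_scenario_t SCENARIOS[] = {
    { "idle_ui",  600,  2,    0, 1u },
    { "heavy_ui", 600, 48,    0, 1u },
    { "ui_audio", 600, 24, 1024, 1u },
};
```

Seeding the input generator makes each scenario bit-for-bit repeatable, which is what lets a before/after comparison attribute a delta to the optimization rather than to workload noise.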

3. Project Specification

  • Build benchmark scenarios: idle UI, heavy redraw, audio+UI concurrent.
  • Log frame-time, ISR latency, CPU partition, heap trend.
  • Apply at least three optimizations and compare before/after.

Example output:

I perf: frame_ms p50=8.2 p95=13.1 p99=16.4
I perf: isr_us p50=4 p95=11
I perf: fft_rate=73fps window=hann
I perf: ui+audio underrun=0
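
Log lines in that shape can come from a tiny formatter. A sketch of the frame-time line (the function name `perf_report` is an assumption):

```c
#include <stdio.h>
#include <string.h>

/* Format the frame-time summary line into `out`. Returns the number
   of characters written (excluding the terminator), as snprintf does. */
int perf_report(char *out, size_t cap, float p50, float p95, float p99) {
    return snprintf(out, cap,
                    "I perf: frame_ms p50=%.1f p95=%.1f p99=%.1f",
                    p50, p95, p99);
}
```

Keeping the format string fixed matters: downstream regression tooling can then parse the metrics with one pattern across every firmware version.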

4. Architecture

[Workload Driver] -> [Instrumentation Layer] -> [Metrics Store]
                                             -> [Optimization Switches]
                                             -> [Report Generator]

5. Implementation Guide

Core question:

“Which changes improve real user responsiveness, not just synthetic averages?”

Design questions:

  1. What is your frame budget?
  2. What is your acceptable p95 ISR latency?
  3. Which tasks can be deprioritized under load?
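
Question 1 becomes concrete once missed deadlines are counted, not just averaged. A sketch of frame-budget bookkeeping, assuming a 60 Hz UI target (so roughly 16.7 ms per frame):

```c
#include <stdint.h>

/* Assumed target: 60 Hz UI, so ~16.7 ms per frame. */
#define FRAME_BUDGET_US 16667u

typedef struct { uint32_t frames; uint32_t misses; } budget_stats_t;

/* Record one frame's wall time; count it as a miss if over budget. */
void budget_record(budget_stats_t *s, uint32_t frame_us) {
    s->frames++;
    if (frame_us > FRAME_BUDGET_US) s->misses++;
}

/* Deadline-miss ratio in percent: this is what a p95 target
   is really guarding against, independent of the average. */
float budget_miss_pct(const budget_stats_t *s) {
    return s->frames ? 100.0f * (float)s->misses / (float)s->frames
                     : 0.0f;
}
```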

6. Testing Strategy

  • Golden benchmark scenario with fixed inputs.
  • Regression thresholds in CI for key metrics.
  • Stress run with concurrent audio + full UI interaction.
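
The regression thresholds can be expressed as simple gates comparing each metric against a stored baseline plus a tolerance. The struct, names, and tolerance value below are assumptions:

```c
#include <stdbool.h>

/* CI regression gate: a metric passes if it stays within an allowed
   percentage above its recorded baseline. Names are illustrative. */
typedef struct {
    const char *metric;
    float baseline; /* value recorded from the last accepted build */
    float tol_pct;  /* allowed regression, in percent */
} gate_t;

bool gate_pass(const gate_t *g, float measured) {
    return measured <= g->baseline * (1.0f + g->tol_pct / 100.0f);
}
```

In CI, a failing gate should block the merge and print both the baseline and the measured value, so the regression is diagnosable from the log alone.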

7. Pitfalls

  • Optimizing for average while p99 worsens.
  • Instrumentation overhead skewing results.
  • Inconsistent FFT configuration during comparisons.

8. Extensions

  • Add automated benchmark trend graphs.
  • Add feature flags for runtime optimization toggles.

9. Completion Checklist

  • Benchmarks are deterministic and versioned.
  • At least three optimizations show quantified impact.
  • No audio underruns in target stress scenario.