Project 5: Dual-Core Audio Synthesizer

Build a real‑time audio synth with DMA output and split the workload across two cores.

Quick Reference

| Attribute | Value |
|-----------|-------|
| Difficulty | Level 3: Advanced |
| Time Estimate | 1–2 weeks |
| Main Programming Language | C |
| Alternative Programming Languages | Rust |
| Coolness Level | Level 5: “Realtime DSP” |
| Business Potential | 3. Audio tools and embedded instruments |
| Prerequisites | DMA basics, timers, basic DSP math |
| Key Topics | Audio sampling, DMA double buffer, multicore scheduling |

1. Learning Objectives

  1. Generate audio waveforms at 44.1/48 kHz.
  2. Stream audio samples via DMA to PWM or I2S DAC.
  3. Split UI/control vs audio rendering across two cores.
  4. Implement envelopes and simple filters.
  5. Prove glitch‑free audio with buffer underrun detection.

2. All Theory Needed (Per-Concept Breakdown)

2.1 Digital Audio Sampling and DSP Basics

Fundamentals

Digital audio represents sound as discrete samples at a fixed sample rate. Each sample is an amplitude value, and the DAC/PWM converts it back to analog. Nyquist tells us the maximum representable frequency is half the sample rate.

Deep Dive into the concept

At 48 kHz, successive samples are ~20.8 µs apart, so if your render loop misses a deadline, you get glitches. Waveforms like sine, square, and saw are computed from phase accumulators. Envelopes shape amplitude over time (attack/decay/sustain/release), and simple filters (e.g., a one‑pole low‑pass) smooth harsh signals. Be mindful of fixed‑point vs floating‑point arithmetic: the RP2040 lacks an FPU, so fixed‑point math is usually faster. A stable audio system is a scheduling system: rendering must finish before the DMA consumes the next buffer.
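The one‑pole low‑pass mentioned above is a good first filter, and it works entirely in integer math. A minimal sketch in Q15 fixed point (the struct and function names here are illustrative, not from any particular SDK):

```c
#include <stdint.h>

/* One-pole low-pass: y[n] = y[n-1] + a * (x[n] - y[n-1]),
 * with the coefficient a in Q15 (0..32767 maps to 0.0..1.0).
 * Everything fits in 32-bit integer math -- no FPU needed. */
typedef struct {
    int32_t state;   /* previous output sample */
    int32_t coeff;   /* smoothing coefficient a, Q15 */
} lpf1_t;

static inline int16_t lpf1_process(lpf1_t *f, int16_t x)
{
    /* coeff * (x - state) fits in int32: |diff| <= 65534, coeff <= 32767 */
    f->state += (f->coeff * ((int32_t)x - f->state)) >> 15;
    return (int16_t)f->state;
}
```

With `coeff = 16384` (0.5 in Q15) each call moves the output halfway toward the input, which is easy to sanity-check by hand.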

How this fits into the project

This is central to §3.2 and §5.10 Phase 1. Also used in: P06-multicore-real-time-scheduler.md.

Definitions & key terms

  • Sample rate -> samples per second
  • Nyquist -> max frequency = sample_rate/2
  • Phase accumulator -> advances waveform phase each sample

Mental model diagram (ASCII)

Oscillator -> Envelope -> Mixer -> Buffer -> DMA -> DAC/PWM

Minimal concrete example

phase += phase_inc; sample = sin_table[phase >> 24];

Key insights

  • Audio is a real‑time deadline problem.

2.2 DMA Double Buffering for Audio

Fundamentals

DMA double buffering alternates between two buffers: while DMA plays one buffer, the CPU fills the other. This ensures continuous audio output.

Deep Dive into the concept

A double buffer is a producer/consumer pipeline. DMA consumes samples at a fixed rate; the audio renderer produces samples in blocks. If the renderer is late, the DMA reads stale or uninitialized samples, causing pops. You must choose a block size (e.g., 256 samples) and the number of audio channels. Smaller buffers reduce latency but tighten the render deadline and add per-block interrupt overhead; larger buffers relax the deadline at the cost of latency. On RP2040, DMA pacing uses a DREQ tied to the PWM or PIO peripheral.
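The swap-and-detect logic is small enough to sketch independently of the hardware. In this host-testable sketch (all names are illustrative), `on_dma_complete` stands in for the DMA IRQ handler you would register on real hardware:

```c
#include <stdint.h>
#include <stdbool.h>

#define BLOCK_SAMPLES 256

typedef struct {
    int16_t  buf[2][BLOCK_SAMPLES]; /* ping-pong pair */
    volatile uint32_t dma_idx;      /* buffer DMA is currently reading */
    volatile bool     ready[2];     /* renderer marked this buffer full */
    uint32_t underruns;
} audio_out_t;

/* Renderer fills the buffer DMA is NOT using, then marks it ready. */
static int16_t *render_target(audio_out_t *a) { return a->buf[a->dma_idx ^ 1]; }
static void     render_done(audio_out_t *a)   { a->ready[a->dma_idx ^ 1] = true; }

/* Called when DMA finishes a buffer (a DMA IRQ on real hardware).
 * If the other buffer is not ready, count an underrun and replay
 * the stale buffer rather than streaming uninitialized memory. */
static void on_dma_complete(audio_out_t *a)
{
    uint32_t next = a->dma_idx ^ 1;
    if (!a->ready[next]) {
        a->underruns++;               /* renderer missed its deadline */
        return;                       /* keep playing the old buffer */
    }
    a->ready[a->dma_idx] = false;     /* old buffer is free to render again */
    a->dma_idx = next;
}
```

Replaying the stale buffer on underrun is one recovery policy; outputting silence is another. Either way, incrementing a counter gives you the underrun log required in §3.2.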

How this fits into the project

This underpins §4.2 and §5.10 Phase 2.

Definitions & key terms

  • Double buffer -> two alternating audio buffers
  • Underrun -> DMA reads data before it is ready

Mental model diagram (ASCII)

Buffer A (DMA)  Buffer B (render)
Buffer B (DMA)  Buffer A (render)

Key insights

  • Buffer size is a tradeoff between latency and safety.

2.3 Multicore Scheduling and Audio Priority

Fundamentals

Two cores allow you to isolate time‑critical audio rendering from UI/IO tasks. Core 0 can handle USB/MIDI/UI, while core 1 renders audio buffers.

Deep Dive into the concept

If you mix control logic and audio on one core, a slow UI task can starve audio rendering. With two cores, you pin audio rendering to one core and lock it to a timer interrupt or a DMA completion interrupt. Communication happens through lock-free queues or spinlocks. The key is to ensure that parameter changes (like note on/off) are passed to the audio core at safe boundaries (e.g., buffer boundaries) to avoid clicks.
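A single-producer/single-consumer ring buffer is the usual vehicle for those cross-core parameter changes: core 0 pushes note events, core 1 drains them at buffer boundaries. A sketch using C11 atomics (the event layout and names are assumptions for illustration; on RP2040 you could also use the SDK's inter-core FIFO):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define EVQ_SIZE 16   /* must be a power of two */

typedef struct { uint8_t type, note, velocity; } note_event_t;

typedef struct {
    note_event_t slots[EVQ_SIZE];
    _Atomic uint32_t head;   /* written only by producer (core 0) */
    _Atomic uint32_t tail;   /* written only by consumer (core 1) */
} event_queue_t;

/* Core 0: push a note event; returns false if the queue is full. */
static bool evq_push(event_queue_t *q, note_event_t ev)
{
    uint32_t h = atomic_load_explicit(&q->head, memory_order_relaxed);
    uint32_t t = atomic_load_explicit(&q->tail, memory_order_acquire);
    if (h - t == EVQ_SIZE) return false;          /* full */
    q->slots[h & (EVQ_SIZE - 1)] = ev;
    atomic_store_explicit(&q->head, h + 1, memory_order_release);
    return true;
}

/* Core 1: pop one event at a buffer boundary; false if empty. */
static bool evq_pop(event_queue_t *q, note_event_t *out)
{
    uint32_t t = atomic_load_explicit(&q->tail, memory_order_relaxed);
    uint32_t h = atomic_load_explicit(&q->head, memory_order_acquire);
    if (h == t) return false;                     /* empty */
    *out = q->slots[t & (EVQ_SIZE - 1)];
    atomic_store_explicit(&q->tail, t + 1, memory_order_release);
    return true;
}
```

Because each index has exactly one writer, no lock is needed; the acquire/release pairs make the slot contents visible before the index update that publishes them.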

How this fits into the project

This is used in §4.1 and §5.10 Phase 3. Also used in: P06-multicore-real-time-scheduler.md.

Key insights

  • Real‑time audio should never share a core with unpredictable tasks.

3. Project Specification

3.1 What You Will Build

A dual‑core audio synthesizer that outputs audio via PWM or I2S DAC, with multiple oscillators, envelopes, and optional MIDI input.

3.2 Functional Requirements

  1. Generate at least 4 voices of polyphonic audio.
  2. DMA streaming with double buffering.
  3. Core 1 renders audio; core 0 handles UI/MIDI.
  4. Detect and log buffer underruns.

3.3 Non-Functional Requirements

  • Performance: stable 48 kHz audio for 10 minutes.
  • Reliability: no clicks under normal load.

3.4 Example Usage / Output

[Synth] sample_rate=48000
[Synth] voices=4
[Synth] note on: C4

3.5 Data Formats / Schemas / Protocols

  • Audio buffer: int16 samples interleaved
  • Control messages: ring buffer of note events
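"Interleaved" means frames alternate channels in memory (L0 R0 L1 R1 … for stereo). A tiny indexing helper makes the layout explicit; the stereo assumption and the helper name are illustrative:

```c
#include <stdint.h>

/* Interleaved stereo int16: sample i of channel ch (0 = left,
 * 1 = right) lives at index i * 2 + ch in the flat buffer. */
static inline int16_t *frame_sample(int16_t *buf, uint32_t i, uint32_t ch)
{
    return &buf[i * 2u + ch];
}
```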

3.6 Edge Cases

  • CPU overload causes underrun.
  • Large polyphony causes clipping.

3.7 Real World Outcome

You hear clean, glitch‑free tones; changing notes and filters feels responsive. Logs show zero underruns during a 10‑minute run.


4. Solution Architecture

4.1 High-Level Design

Core0: UI/MIDI -> Queue -> Core1: Audio Render -> DMA -> DAC/PWM

4.2 Key Components

| Component | Responsibility | Key Decisions |
|-----------|----------------|---------------|
| Audio engine | Render samples | Fixed‑point vs float |
| DMA output | Stream audio | PWM vs I2S |
| Event queue | Note events | Lock-free ring buffer |


5. Implementation Guide

5.1 Development Environment Setup

brew install cmake ninja arm-none-eabi-gcc

5.2 Project Structure

pi-synth/
├── src/
│   ├── audio.c
│   ├── dma_audio.c
│   ├── ui.c
│   └── multicore.c

5.3 The Core Question You’re Answering

“How do I guarantee real‑time audio on a tiny MCU?”

5.4 Concepts You Must Understand First

  1. Audio sampling (see §2.1)
  2. DMA buffering (see §2.2)
  3. Multicore scheduling (see §2.3)

5.10 Implementation Phases

  • Phase 1: Audio generation on single core
  • Phase 2: DMA streaming with double buffer
  • Phase 3: Split work across cores and add UI/MIDI

6. Testing Strategy

  • Sine wave frequency test (440 Hz)
  • Underrun detection test under CPU load

7. Common Pitfalls & Debugging

  • Buffer underruns (clicks)
  • Clipping due to mixing without scaling
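The clipping pitfall comes from summing int16 voices directly: four full-scale voices overflow and wrap, which sounds far worse than clipping. A minimal fix is to accumulate in 32 bits and saturate (and, ideally, attenuate each voice up front); this sketch uses an illustrative function name:

```c
#include <stdint.h>

/* Mix n voice samples into one int16 output: accumulate in 32 bits,
 * then saturate rather than wrap. Pre-scaling each voice (e.g., by
 * 1/num_voices or a master gain) avoids hitting the clamp routinely. */
static int16_t mix_voices(const int16_t *v, uint32_t n)
{
    int32_t acc = 0;
    for (uint32_t i = 0; i < n; i++)
        acc += v[i];
    if (acc >  32767) acc =  32767;   /* saturate high */
    if (acc < -32768) acc = -32768;   /* saturate low  */
    return (int16_t)acc;
}
```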

8. Extensions & Challenges

  • Add filter resonance
  • Add wavetable oscillators

9. Real-World Connections

  • Embedded instruments, alarm tones, low‑cost DSP devices

10. Submission / Completion Criteria

Minimum Viable Completion:

  • Clean 440 Hz sine wave output.

Full Completion:

  • Polyphonic synth with envelopes.

Excellence:

  • MIDI input and multi‑effect chain.