Project 11: Universal SDR Console (Capstone)
Build a unified SDR workstation with a live waterfall, click-to-tune receivers, modular demodulators/decoders, session logging, and deterministic replay.
Quick Reference
| Attribute | Value |
|---|---|
| Difficulty | Level 5: Master+ |
| Time Estimate | 6-10 weeks |
| Main Programming Language | Python (UI) + C/Rust for hot DSP (Alternatives: C++/Qt, Rust/egui) |
| Alternative Programming Languages | C++, Rust, Julia |
| Coolness Level | Extreme |
| Business Potential | 8/10 (monitoring, research, compliance tools) |
| Prerequisites | Completed P01-P10 or equivalent SDR/DSP experience |
| Key Topics | Multi-rate DSP graphs, real-time buffering, resampling, plugin architecture, observability |
1. Learning Objectives
By completing this project, you will:
- Design and implement a multi-rate DSP graph that feeds multiple decoders from a shared IQ stream.
- Build a real-time scheduling and buffering strategy that prevents UI stalls and decoder dropouts.
- Implement sample-rate correction and consistent timestamping across all pipeline stages.
- Create a plugin API for demodulators and decoders with a unified event schema.
- Deliver a deterministic replay mode that reproduces decode results from recorded IQ.
2. All Theory Needed (Per-Concept Breakdown)
2.1 Multi-Rate DSP Graphs and Channelization
Fundamentals
A universal SDR console must handle signals with wildly different bandwidths: FM broadcast (200 kHz), ADS-B (2 MHz capture but 1 MHz effective bursts), AIS (9.6 kHz), and GPS (1.023 Mcps). Processing all of these at the raw capture rate wastes CPU and introduces latency. Multi-rate DSP graphs solve this by dividing the pipeline into stages with explicit sample-rate changes. The core idea is to frequency-translate a sub-band to baseband, filter it tightly, then decimate to the smallest safe rate for that decoder. This reduces both computational cost and noise bandwidth. Channelization is the act of splitting a wideband stream into multiple narrower channels, typically using frequency translation plus low-pass filtering and decimation. Your console must implement channelization correctly or you will see aliasing, unusable SNR, and broken decoders. Understanding how decimation, filter transition bands, and FFT bin spacing interact is the difference between a stable multi-decoder system and a glitchy demo.
Deep Dive into the Concept
A DSP graph is a directed acyclic network of blocks that transform streams of samples. In a multi-rate graph, some edges change the sample rate. The rule is simple but unforgiving: whenever you decimate (reduce sample rate), you must filter the signal so that no energy above the new Nyquist limit remains. A practical channelization approach for SDR is: (1) start with wideband IQ at Fs, (2) shift the desired channel to 0 Hz using a complex oscillator, (3) low-pass filter to the channel bandwidth plus transition, (4) decimate by an integer factor, (5) optionally resample to an exact rate if the decoder needs it. For a live console, you will do this multiple times in parallel: one branch for the waterfall (wideband, maybe lightly decimated), one for audio demod (FM/AM), and one for each digital decoder.
The graph design must balance CPU cost with latency. A common pattern is a shared, wideband input buffer that feeds multiple tap points. Each tap performs its own frequency translation and decimation. If you are not careful, you will duplicate heavy filtering work. A more efficient approach uses polyphase filter banks (PFBs) or channelizers that split the entire band into equally spaced channels. That is powerful but complex. For this project, a simpler approach is acceptable: per-decoder mixing and filtering, with careful decimation choices. The cost is CPU, but the benefit is clarity and correctness.
Consider a 2.4 MSPS capture. To decode AIS at 9.6 kbps, you can mix the AIS channel to baseband, apply a 15 kHz low-pass filter, then decimate by 100 or 125 to get 24 kHz or 19.2 kHz, respectively. That is a 100x reduction in samples. Your ADS-B decoder might stay at 2.4 MSPS because pulse timing is in microseconds. Your FM audio path can decimate to 200 kHz for demod, then down to 48 kHz for audio playback. The graph is not just a set of blocks; it is a set of bandwidth promises. Each decimation stage makes a promise: “I removed out-of-band energy so aliasing is controlled.” If you break that promise, you create spectral images that corrupt your decoder.
Multi-rate graphs also require consistent metadata. A block should never be ambiguous about its sample rate, center frequency, or timestamp alignment. Your event schema must carry this metadata at each boundary. If an FM demod block assumes 200 kHz but receives 250 kHz, de-emphasis and audio filters will be wrong. If a decoder assumes timestamps are in capture time but they are in wall-clock, logs are misleading. This is why professional SDR frameworks track stream tags. In your console, you should implement a lightweight version: every buffer has a fs, fc, t0 (start time), and sample_index reference. That allows decoders and the UI to agree on frequency axes, time axes, and decode windows.
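As a sketch of that lightweight tagging, each buffer can carry its metadata explicitly (the field names here are illustrative, not a fixed API):

from dataclasses import dataclass
import numpy as np

@dataclass
class IqBuffer:
    samples: np.ndarray   # complex64 IQ samples
    fs: float             # sample rate in Hz (updated after every rate change)
    fc: float             # RF center frequency in Hz
    t0: float             # capture time of samples[0], in seconds
    sample_index: int     # index of samples[0] in the global timebase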
Designing the graph is about trade-offs. Fewer stages means lower latency but higher CPU. More stages mean better isolation but more latency and complexity. For a workstation, you want deterministic behavior: every decoder should get consistent sample rates and not be starved by a heavy waterfall rendering pass. That implies separating the graph into branches with explicit buffer boundaries and ring buffers. A shared wideband buffer feeding multiple decimators is often the best starting point.
How this fits into the projects
- You will apply channelization decisions in Section 4.1 and Section 4.2 when defining the architecture.
- You will implement decimation chains in Section 5.10 Phase 2.
- You will reuse this concept directly from P03 FM Receiver and P06 AIS Decoder.
Definitions & key terms
- Channelization: Splitting a wideband stream into narrower channels.
- Decimation: Reducing sample rate by an integer factor after filtering.
- Polyphase filter bank (PFB): Efficient filter structure for channelizers.
- Transition band: Frequency region between passband and stopband of a filter.
- Stream metadata: Sample rate, center frequency, and timing information attached to buffers.
Mental model diagram (ASCII)
[Wideband IQ Fs] --> [Tap 1: Waterfall] --> [FFT]
        |
        +--> [Mix+LPF] --> [Decimate] --> [FM Demod] --> [Audio]
        |
        +--> [Mix+LPF] --> [Decimate] --> [AIS Decoder]
        |
        +--> [Mix+LPF] --> [ADS-B Decoder]
How it works (step-by-step, with invariants and failure modes)
- Capture IQ at Fs with a known center frequency.
- For each decoder, mix to baseband using its target offset.
- Low-pass filter to the channel bandwidth plus margin.
- Decimate to a rate that preserves the signal bandwidth.
- Hand off a buffer with updated metadata.
Invariants:
- After decimation, the signal bandwidth must be < new Nyquist.
- Buffer metadata must reflect the new sample rate and center frequency.
Failure modes:
- Aliasing if filtering is too wide or missing.
- Decoder instability if sample rate metadata is wrong.
- CPU overload if too many high-rate branches are active.
Minimal concrete example
# Channelize a wideband buffer for AIS (sketch; iq and t0 assumed given)
import numpy as np
from scipy.signal import firwin, lfilter

fs = 2.4e6
t = np.arange(len(iq)) / fs
bb = iq * np.exp(-2j * np.pi * 125e3 * t)   # mix channel at +125 kHz to baseband
taps = firwin(129, 15e3, fs=fs)             # 15 kHz low-pass, 129 taps
chan = lfilter(taps, 1.0, bb)[::100]        # decimate by 100 -> fs = 24 kHz
meta = {"fs": 24e3, "fc": 162.025e6, "t0": t0}
Common misconceptions
- “I can decimate first, then filter.” This causes aliasing; filtering must come first.
- “If it looks fine in the waterfall, decoders will work.” Decoders need precise timing and SNR.
- “One sample rate fits all.” Each decoder has its own minimum rate requirement.
Check-your-understanding questions
- Why must you low-pass filter before decimation?
- If you decimate by 10, how does the noise bandwidth change?
- What metadata must be updated after a rate change?
- Why might you keep ADS-B at full rate while decimating AIS?
Check-your-understanding answers
- To remove out-of-band energy that would alias into the passband.
- The noise bandwidth shrinks by 10x, improving SNR for narrowband signals.
- Sample rate, center frequency, and timing reference.
- ADS-B pulses require microsecond timing; AIS does not.
Real-world applications
- Spectrum monitoring systems that track multiple services simultaneously.
- Multi-channel receivers in maritime or aviation monitoring stations.
Where you will apply it
- See Section 4.1 for the high-level graph and Section 5.10 Phase 2 for decimator placement.
- Also used in: P03 FM Receiver, P04 ADS-B Decoder, P06 AIS Decoder.
References
- “Understanding Digital Signal Processing” (Lyons), multi-rate processing chapters.
- “Software-Defined Radio for Engineers” (Collins), channelization sections.
Key insights
A DSP graph is a set of bandwidth promises; if you keep the promises, decoders are stable.
Summary
Multi-rate DSP graphs let you run multiple decoders efficiently by reducing sample rates only after proper filtering. The design choices you make determine CPU cost, latency, and decoder reliability.
Homework/Exercises to practice the concept
- Design a decimation chain for FM audio that ends at 48 kHz from a 2.4 MSPS capture.
- Calculate the minimum safe sample rate for AIS (9.6 kbps) and explain your choice.
- Draw a DSP graph that supports waterfall + FM + ADS-B simultaneously.
Solutions to the homework/exercises
- Example: 2.4e6 -> 240k (decimate by 10) -> 48k (decimate by 5), with a low-pass at each step; see the sketch below.
- A common rule is 4-5x the symbol rate; 48 kHz (5x) is comfortable, while 24 kHz (2.5x) can work but leaves less timing margin.
- A three-branch graph with shared capture and per-branch mix/filter/decimate blocks.
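A minimal sketch of the two-stage chain from the first solution, assuming fm_audio holds demodulated FM audio at 2.4 MSPS; scipy.signal.decimate applies its own anti-aliasing filter at each stage:

from scipy.signal import decimate

# fm_audio: demodulated audio at 2.4 MSPS (assumed given)
audio_240k = decimate(fm_audio, 10, ftype="fir")   # 2.4 MSPS -> 240 kHz
audio_48k = decimate(audio_240k, 5, ftype="fir")   # 240 kHz  -> 48 kHz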
2.2 Real-Time Streaming, Buffering, and Backpressure
Fundamentals
A live SDR console is a real-time system: samples keep arriving, and if you do not process them in time, they are lost. The system must balance throughput (processing enough samples per second) and latency (responding quickly to user actions). Real-time buffering solves this by decoupling producers (device capture) from consumers (DSP blocks and UI) using ring buffers or queues. However, buffering is not free: too small and you overflow, too large and you add latency. Backpressure is the mechanism that prevents runaway growth. In SDR, backpressure might drop frames, reduce waterfall update rate, or pause a non-critical decoder. Designing this system correctly is essential for a stable user experience.
Deep Dive into the Concept
Think of the SDR console as multiple pipelines that share a single clock: the device sample clock. A capture thread produces IQ blocks at a fixed rate. If any downstream stage blocks, the capture thread cannot wait indefinitely; it must either drop data or overwrite old buffers. This is why ring buffers are common in SDR systems. A ring buffer stores the latest N samples and overwrites old data when full. This trades completeness for freshness, which is usually acceptable in real-time monitoring.
Backpressure is how you decide which data to drop. A waterfall display can skip frames without harm; an ADS-B decoder cannot, because missing a few microseconds can destroy a frame. Therefore, you must classify pipelines by priority. High-priority decoders should receive continuous data with minimal drops. Low-priority features (like logging a full-rate IQ file) can be throttled when CPU is tight. This implies a scheduling policy. One practical approach: a capture thread writes to a shared ring buffer; each decoder reads from it with its own read pointer; if a decoder falls behind by more than a threshold, it skips ahead and logs a “drop” event. For waterfall, you simply render the most recent block.
Latency is also a function of block size. Larger blocks mean fewer context switches and better FFT efficiency, but they increase end-to-end latency. For interactive tuning, you want the waterfall and audio to respond in under 200 ms. That suggests block sizes on the order of tens of milliseconds. For ADS-B, you want microsecond-level timing, but you can still process in blocks if you keep enough overlap to catch preambles that straddle block boundaries.
A critical design decision is whether the pipeline is push-based (producers push data to consumers) or pull-based (consumers request data). Push-based systems can saturate consumers; pull-based systems can starve the pipeline if a consumer is slow. A hybrid approach works well: capture pushes into a ring buffer; consumers pull from it at their own pace, with explicit “latest” or “all” semantics. This is how you prevent UI stalls: the UI always renders the latest block, not every block.
Threading strategy matters. A typical design uses at least three threads: capture, DSP/decoders, and UI. The capture thread should do almost no work besides reading and enqueuing. The DSP thread performs heavy math and posts results to a queue. The UI thread renders and handles user input. If you let the DSP run on the UI thread, you will get visible stutters. If you let the capture thread do heavy work, you will get overruns and missing data.
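A minimal thread layout under those rules (capture_loop and dsp_loop are sketched in the example further below; run_ui_main_loop is a hypothetical UI entry point):

import threading

# Capture and DSP run on worker threads; the UI owns the main thread.
threading.Thread(target=capture_loop, daemon=True).start()
threading.Thread(target=dsp_loop, daemon=True).start()
run_ui_main_loop()  # renders the latest data only; never runs heavy DSP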
Finally, consider determinism. For replay mode, you want the same results every run. That implies deterministic block boundaries, fixed resampler phases, and a consistent scheduler (or a simulated time stepper). In replay, you can process without real-time deadlines. You can still use the same graph, but you drive it with a virtual clock. This approach makes debugging far easier because it eliminates jitter as a variable.
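One way to drive the same graph with a virtual clock in replay, as a sketch (the block size, sample format, and process_block callback are assumptions):

import numpy as np

BLOCK = 65536
FS = 2.4e6

def replay(path, process_block):
    # Deterministic replay: fixed block boundaries and simulated time,
    # with no real-time deadlines.
    iq = np.fromfile(path, dtype=np.complex64)
    t_virtual = 0.0
    for start in range(0, len(iq) - BLOCK + 1, BLOCK):
        process_block(iq[start:start + BLOCK], t_virtual)
        t_virtual += BLOCK / FS  # advance the virtual clock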
How this fits into the projects
- You will define thread responsibilities in Section 4.1 and Section 5.1.
- You will implement ring buffers and backpressure in Section 5.2 and Section 5.10 Phase 2.
- You will use this concept when integrating real-time ADS-B and AIS decoders from P04 and P06.
Definitions & key terms
- Ring buffer: Fixed-size circular buffer for streaming data.
- Backpressure: A mechanism to slow or drop data when consumers lag.
- Latency budget: Maximum acceptable delay from input to output.
- Overrun: Capture loss due to buffer exhaustion.
- Drop policy: Rules that decide which data to discard.
Mental model diagram (ASCII)
[SDR Device] --> [Capture Thread] --> [Ring Buffer]
                                     /     |      \
                                [DSP] [Waterfall] [Logger]
                                   |       |         |
                            [Decoders] [UI Render] [Disk]
How it works (step-by-step, with invariants and failure modes)
- Capture thread reads blocks and writes to a ring buffer.
- DSP thread reads blocks, processes decoders, emits events.
- UI thread renders the latest available visualization.
- Backpressure policy drops or skips if buffers overflow.
Invariants:
- Capture thread must never block on heavy DSP.
- UI must render from the latest available data.
Failure modes:
- Buffer overruns cause gaps in decoder input.
- UI stutter if rendering shares heavy DSP work.
- Hidden latency if buffer sizes are too large.
Minimal concrete example
# Pseudo-code: ring buffer with drop-on-overflow
rb = RingBuffer(capacity_samples=2_000_000)

def capture_loop():
    while running:
        block = sdr.read(num_samples)
        rb.write(block)  # overwrites old data if full

def dsp_loop():
    while running:
        block = rb.read_latest(block_size)
        process_block(block)
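A minimal RingBuffer matching that interface might look like this sketch (single writer; a production version would add per-reader indices and drop accounting):

import threading
import numpy as np

class RingBuffer:
    def __init__(self, capacity_samples):
        self.buf = np.zeros(capacity_samples, dtype=np.complex64)
        self.capacity = capacity_samples
        self.total_written = 0  # monotonically increasing sample count
        self.lock = threading.Lock()

    def write(self, block):
        # Overwrite-oldest policy: freshness beats completeness.
        with self.lock:
            n = len(block)
            start = self.total_written % self.capacity
            if start + n <= self.capacity:
                self.buf[start:start + n] = block
            else:  # wrap around
                split = self.capacity - start
                self.buf[start:] = block[:split]
                self.buf[:n - split] = block[split:]
            self.total_written += n

    def read_latest(self, n):
        # Most recent n samples (returns zeros until the buffer fills).
        with self.lock:
            end = self.total_written % self.capacity
            idx = np.arange(end - n, end) % self.capacity
            return self.buf[idx].copy()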
Common misconceptions
- “Bigger buffers always make it safer.” They increase latency and can hide bugs.
- “Dropping data means decoding is useless.” Dropping selectively can preserve real-time usability.
- “One thread is simpler.” It is simpler but not reliable for real-time work.
Check-your-understanding questions
- Why is a ring buffer preferred over an unbounded queue for IQ data?
- What happens if the DSP thread blocks the capture thread?
- How do you decide which pipeline gets priority when CPU is tight?
- Why does block size affect UI latency?
Check-your-understanding answers
- It limits memory growth and keeps the system real-time by overwriting old data.
- Samples are lost, causing decoder gaps and failed demodulation.
- Prioritize decoders with strict timing (ADS-B) over visualizations or logging.
- Larger blocks take longer to fill, so UI updates lag input changes.
Real-world applications
- Real-time spectrum monitoring stations.
- SDR-based intrusion detection or compliance monitoring systems.
Where you will apply it
- See Section 4.2 for component roles and Section 5.10 Phase 1 for threading and buffers.
- Also used in: P01 Spectrum Eye and P07 NOAA APT.
References
- “Computer Systems: A Programmer’s Perspective” (Bryant/O’Hallaron), performance and concurrency chapters.
- “Operating Systems: Three Easy Pieces”, scheduling and synchronization chapters.
Key insights
Real-time SDR is a throughput problem with a latency budget; buffering is how you pay that budget.
Summary
Buffering and backpressure allow a live SDR console to keep up with streaming IQ data. A good design isolates capture, DSP, and UI into separate execution contexts and handles overload by dropping non-critical data.
Homework/Exercises to practice the concept
- Implement a ring buffer that supports multiple independent read pointers.
- Simulate a slow DSP stage and verify that the UI remains responsive.
- Measure end-to-end latency for different block sizes.
Solutions to the homework/exercises
- Use a shared write index and per-reader read indices with wrap handling.
- Insert a sleep in DSP; the UI should render the latest data without freezing.
- Latency roughly equals block_size / sample_rate plus processing time.
2.3 Resampling, Clock Correction, and Timebase Alignment
Fundamentals
Different decoders expect specific sample rates. FM audio wants a neat 48 kHz, AIS might be comfortable at 24 kHz, and GPS acquisition needs a precise relationship between code chips and sample clock. Meanwhile, low-cost SDRs drift due to oscillator error (PPM), which effectively changes the true sample rate and center frequency. Resampling and clock correction align the sample stream to the assumptions in your DSP algorithms. Without it, decoders may work briefly and then fail as timing drifts. A universal console must make sample rate consistency a first-class concern.
Deep Dive into the Concept
Resampling changes the sample rate of a signal, usually by a rational factor L/M. The goal is to map samples from one time grid to another while preserving the underlying analog signal. The simplest method is interpolation followed by decimation, but for real-world SDR, you need band-limited interpolation to avoid imaging. Polyphase resamplers and windowed-sinc filters are standard choices. In a console, you can use a high-quality resampler library or implement a modest one yourself if performance allows.
Clock correction is related but distinct. The device sample rate is nominal; a 2.4 MSPS setting might actually be 2,399,500 samples per second due to oscillator error. If your decoder assumes exactly 2.4 MSPS, symbol timing will drift. Two strategies exist: (1) measure and correct the device PPM once, or (2) implement a feedback loop that estimates timing error from the signal and adjusts resampling continuously. For example, in FM audio, a small error is tolerable. In GPS or GSM, it is not. Therefore, your console should apply a global PPM correction at capture time and allow per-decoder fine resampling if needed.
Timebase alignment also matters when combining streams. If you feed a waterfall and a decoder from the same capture, their timestamps must match or your UI will lie. The best approach is to define a global timebase: sample index 0 corresponds to time t0. Every buffer carries an index and a timestamp. If you resample, you must compute the new timestamps. A simple rule: if you resample by L/M, then sample index i in the output corresponds to i * (M/L) samples of input time. Without this bookkeeping, you cannot align decoded frames with the waterfall or with external logs.
Resampling introduces delay. Most FIR-based resamplers have a group delay equal to half the filter length. In a console, this delay must be either compensated or at least documented in the event metadata. Otherwise, you will see decoded messages appear “late” relative to their RF bursts in the waterfall. This is an easy bug to create and a hard one to debug. Therefore, treat resampler delay as part of your timing model.
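A sketch of that bookkeeping, combining the L/M index mapping with group-delay compensation (the function and argument names are illustrative):

def output_timestamp(i_out, L, M, fs_in, t0, group_delay_in=0.0):
    # Output sample i_out sits at input index i_out * (M / L); subtracting
    # the resampler group delay (in input samples) keeps decoded events
    # aligned with their RF bursts on the waterfall.
    i_in = i_out * (M / L)
    return t0 + (i_in - group_delay_in) / fs_in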
Another subtlety is resampler determinism. If you want replay to be deterministic, your resampler must have a consistent phase start and not depend on real-time chunking. That means you need to manage filter state across block boundaries. This is a common failure mode: decoding works on full files but breaks in live streaming because state is reset between blocks. Ensure that your resampler maintains state across blocks for each pipeline.
Finally, note that not all sample-rate errors are constant. Temperature drift and USB clock jitter can produce small variations. If you want robust decoding for long sessions (hours), you need a mechanism to measure drift. For digital decoders, the timing recovery loop can feed a correction factor to a resampler. That closes the loop and keeps symbol timing stable. Even if you do not implement that fully, you should design your API so that a decoder can request small resampling adjustments.
How this fits into the projects
- You will specify resampling requirements in Section 3.2 and Section 4.2.
- You will implement PPM correction in Section 5.1 and resampler state handling in Section 5.10 Phase 2.
- You will reuse this concept from P03 FM Receiver and P10 GPS Tracker.
Definitions & key terms
- Resampling: Changing sample rate while preserving signal content.
- PPM correction: Adjusting for oscillator frequency error in parts-per-million.
- Group delay: The time delay introduced by a filter or resampler.
- Timing recovery: Estimating and correcting symbol clock error.
- Timebase: The reference mapping from sample index to time.
Mental model diagram (ASCII)
Nominal Fs: 2.400000 MSPS
Actual Fs:  2.399500 MSPS (PPM error)
         |
         v
[PPM Correction] --> [Resampler] --> [Decoder expects exact Fs]
How it works (step-by-step, with invariants and failure modes)
- Measure or configure device PPM error.
- Apply a resampler to correct the global sample rate.
- Maintain resampler state across blocks.
- Update timestamps and sample indices after resampling.
Invariants:
- Output samples must preserve the original signal bandwidth.
- Timestamp mapping must be monotonic and accurate.
Failure modes:
- Decoder drift if sample rate is assumed but not corrected.
- UI/deserializer mismatch if timestamps are not adjusted.
- Latency spikes if resampler state resets each block.
Minimal concrete example
# Apply global PPM correction (sketch; iq assumed captured at the true rate)
from fractions import Fraction
from scipy.signal import resample_poly

ppm = -1.2
fs_nom = 2.4e6
fs_true = fs_nom * (1 + ppm / 1e6)
ratio = Fraction(fs_nom / fs_true).limit_denominator(1_000_000)
corrected = resample_poly(iq, ratio.numerator, ratio.denominator)
# A real console would use an arbitrary-ratio resampler for efficiency.
Common misconceptions
- “PPM only shifts frequency.” It also changes the effective sample rate.
- “Resampling is optional.” It is required for long-duration, narrowband decodes.
- “Filter delay is negligible.” It can be tens of milliseconds in practice.
Check-your-understanding questions
- Why does PPM error affect symbol timing?
- What happens if you resample without preserving filter state?
- How do you compute timestamp mapping after resampling by L/M?
- Why does group delay matter for UI alignment?
Check-your-understanding answers
- Because the sampling clock defines the time axis for symbols.
- The output will have discontinuities and timing glitches at block boundaries.
- Output sample i corresponds to input index i * (M/L); divide by the input rate and account for the fixed filter delay to get time.
- It shifts decoded events later than their visible RF bursts.
Real-world applications
- GNSS receivers that track satellite signals over long durations.
- Cellular decoders that require precise symbol timing.
Where you will apply it
- See Section 3.5 for data formats and Section 5.10 Phase 2 for resampler integration.
- Also used in: P05 RDS Decoder and P10 GPS Tracker.
References
- “Digital Signal Processing” (Smith), resampling and interpolation chapters.
- “Understanding Digital Signal Processing” (Lyons), multi-rate sections.
Key insights
Sample rate is a promise; resampling is how you keep it honest.
Summary
Resampling and clock correction keep decoders aligned with their expected timebases. Without them, long-running decodes drift and eventually fail.
Homework/Exercises to practice the concept
- Measure the frequency offset of an FM station and estimate PPM error.
- Implement a simple rational resampler and verify tone frequency stability.
- Compute group delay for a 101-tap FIR resampler filter.
Solutions to the homework/exercises
- PPM = (measured_offset / carrier_freq) * 1e6.
- After resampling, a 1 kHz tone should remain 1 kHz within a few Hz.
- Group delay is (N-1)/2 samples, i.e. 50 samples at the rate where the filter runs.
2.4 Plugin Architecture and Observable Event Schema
Fundamentals
A universal console must be extensible. You will not want to edit core code every time you add a decoder. A plugin architecture solves this by defining a standard interface for demodulators and decoders. Each plugin receives a stream of samples plus metadata and emits standardized events (frames, audio, images, metrics). The UI consumes these events without knowing decoder internals. This separation makes the system maintainable and testable.
Deep Dive into the Concept
A good plugin architecture for SDR has three layers: (1) signal input contracts, (2) processing lifecycle, (3) event output schema. The input contract defines what the plugin expects: sample rate, center frequency, and sample format. The lifecycle defines how the plugin is initialized, configured, started, stopped, and reconfigured (e.g., when the user retunes). The output schema defines how results are communicated. A consistent schema is critical for logging and replay. For example, all decoders might emit events with fields: ts, type, source, snr, cfo, payload, and confidence. This allows the UI to render a generic event table and the logger to store consistent JSON lines.
You must design the schema so it can represent diverse outputs: ADS-B frames, AIS positions, RDS text, NOAA images, and audio. One approach is to define a base event with common fields and allow a payload object for decoder-specific data. Another approach is to define a small set of event types: frame, text, image, audio, metric. Each event can include standardized fields and a decoder-specific payload. A unified schema also supports deterministic replay: if you log events and re-play them later, the UI can render the same timeline without reprocessing IQ.
Plugins also need access to configuration. For example, an FM demod plugin might accept a de-emphasis constant and a mono/stereo toggle. An ADS-B plugin might accept a preamble threshold. The plugin API should expose a configure() function that accepts validated parameters. The UI should generate the configuration UI from plugin metadata (name, parameters, types, ranges). This is not strictly required but it improves extensibility.
From a systems perspective, plugins should be sandboxed. A buggy decoder should not crash the whole app. In practice, you can isolate plugins by running them in separate threads or processes and communicating via queues. Process isolation is safer but more complex. For this project, thread isolation with careful exception handling is acceptable. Ensure that plugin exceptions produce clear error events and do not terminate the capture thread.
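A minimal plugin interface consistent with that lifecycle, as a sketch (the method names are one reasonable choice, not a fixed standard):

class DecoderPlugin:
    name = "base"

    def configure(self, params: dict) -> None:
        # Validate and apply user parameters (e.g., de-emphasis, thresholds).
        pass

    def process(self, samples, meta: dict) -> list:
        # Consume one block (with fs/fc/t0 metadata); return a list of events.
        return []

    def shutdown(self) -> None:
        pass

def run_plugin(plugin, block, meta, emit):
    # Thread-isolated call: a plugin exception becomes an error event
    # instead of killing the capture path.
    try:
        for event in plugin.process(block, meta):
            emit(event)
    except Exception as exc:
        emit({"type": "error", "source": plugin.name, "detail": str(exc)})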
Event schema design is where observability lives. If a decoder fails, you want to know why. Therefore, include metrics like SNR, CFO, timing error, and CRC status in the event stream. These metrics allow you to correlate decoder performance with waterfall features. For example, if ADS-B CRC failures spike when the noise floor rises, you can see that in logs and UI graphs. The schema should also include a stable decoder ID so that multiple instances (e.g., two FM channels) can be distinguished.
Finally, think about versioning. If you change the schema later, old logs might break. Include a schema version in every log file and in the event stream. This is a simple but powerful design decision that saves time later.
How this fits into the projects
- You will define the plugin API in Section 4.2 and Section 5.2.
- You will build the event log in Section 3.5 and Section 5.10 Phase 3.
- You will reuse decoders from P04 ADS-B, P06 AIS, and P07 NOAA APT as plugins.
Definitions & key terms
- Plugin: A modular decoder/demodulator with a standard interface.
- Event schema: A structured format for decoder outputs and metrics.
- Lifecycle: The initialize/start/stop/reconfigure flow of a plugin.
- Isolation: Preventing one plugin from crashing the whole system.
- Schema versioning: Labeling log formats to support upgrades.
Mental model diagram (ASCII)
[IQ Stream] --+--> [Plugin A] --+
              |                 +--> [Event Bus] --+--> [UI Panels]
              \--> [Plugin B] --+                  \--> [Logger]
How it works (step-by-step, with invariants and failure modes)
- Core loads plugins and reads their declared inputs.
- UI builds configuration controls from plugin metadata.
- Plugins process sample blocks and emit events.
- Event bus routes events to UI and logger.
Invariants:
- Events must conform to the schema version.
- Plugins must not block the capture thread.
Failure modes:
- Schema mismatch breaks logs and UI rendering.
- A slow plugin causes downstream lag if not isolated.
Minimal concrete example
# Example event schema (JSON line)
{
  "ts": 1704067200.123,
  "schema": "sdr.event.v1",
  "type": "frame",
  "source": "adsb",
  "snr": 12.4,
  "cfo": -350,
  "payload": {"icao": "A12B3C", "df": 17}
}
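A small validator in the spirit of that schema (the required-field set is an assumption based on the common fields discussed above):

REQUIRED_FIELDS = {"ts", "schema", "type", "source"}
SUPPORTED_SCHEMAS = {"sdr.event.v1"}

def validate_event(event: dict) -> bool:
    # Reject events missing common fields or carrying an unknown version.
    if not REQUIRED_FIELDS.issubset(event):
        return False
    return event["schema"] in SUPPORTED_SCHEMAS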
Common misconceptions
- “Plugins are just function calls.” They need lifecycle and configuration handling.
- “Logs are optional.” They are essential for reproducible debugging.
- “One schema fits all.” You need both common fields and flexible payloads.
Check-your-understanding questions
- Why is a schema version important for logs?
- What fields should every decoder event include?
- How can you keep a slow plugin from breaking the UI?
- Why should the UI not depend on decoder-specific data structures?
Check-your-understanding answers
- So older logs remain readable after schema changes.
- Timestamp, type, source, and confidence/metric fields.
- Run plugins in separate threads and enforce timeouts.
- It would tightly couple UI to decoders and block extension.
Real-world applications
- Modular SDR toolkits used in research labs.
- Compliance monitoring tools that integrate multiple decoders.
Where you will apply it
- See Section 3.5 for log formats and Section 5.11 for key API decisions.
- Also used in: P05 RDS Decoder and P08 POCSAG.
References
- “Clean Architecture” (Martin), component boundaries.
- “Design Patterns” (GoF), plugin and factory patterns.
Key insights
A stable event schema turns decoders into interchangeable modules and makes debugging reproducible.
Summary
A plugin architecture with a unified event schema keeps the console extensible, testable, and observable. It is the glue that makes multi-decoder integration possible.
Homework/Exercises to practice the concept
- Design a JSON schema for ADS-B and AIS events that share common fields.
- Implement a minimal plugin interface and a dummy plugin that emits test events.
- Add a schema version and write a small validator.
Solutions to the homework/exercises
- Use type, source, ts, snr, payload and keep payload decoder-specific.
- Define init(), process(block), shutdown() methods with a shared event queue.
- The validator checks required fields and the version string before accepting events.
3. Project Specification
3.1 What You Will Build
A desktop SDR console that can capture live IQ, render a waterfall, tune to multiple frequencies, demodulate AM/FM audio, and run modular decoders (ADS-B, AIS, RDS, NOAA APT, GPS). The console logs every event with metadata and supports deterministic replay from IQ files.
Included:
- Live IQ capture via SoapySDR or rtl_sdr.
- Waterfall + spectrum view with markers.
- Click-to-tune with per-channel decimators.
- AM/FM demod and audio output.
- Plugin decoders for ADS-B, AIS, RDS, NOAA APT, GPS.
- Event log (JSON lines) and session metadata.
- Replay mode with deterministic timing.
Excluded:
- Transmission, jamming, or any illegal intercept.
- Decryption or decoding of protected content.
- Full-fledged signal generator.
3.2 Functional Requirements
- Live Capture: Read IQ from SDR hardware at configurable Fs and center frequency.
- Waterfall UI: Render at >=20 FPS with markers and dB scaling.
- Channelization: Support at least 3 simultaneous channels with independent tuning.
- Demodulators: AM and FM audio output with selectable de-emphasis.
- Decoders: ADS-B, AIS, RDS, NOAA APT, GPS plugins.
- Event Logging: JSON lines log with schema version and timestamps.
- Replay Mode: Deterministic decode results from a fixed IQ file.
- Metrics: Display SNR, CFO, CRC stats per decoder.
- Session Persistence: Save user presets (last tuned frequencies, gains).
3.3 Non-Functional Requirements
- Performance: Maintain UI FPS >=20 while at least one decoder is active.
- Reliability: No crashes on plugin failure; emit error events instead.
- Usability: UI includes clear labels, frequency readouts, and decoder status.
3.4 Example Usage / Output
$ ./sdr_console --device soapy=rtlsdr --fs 2.4M --center 1090M
[INFO] Session: 2026-01-01T10:00:00Z
[INFO] Waterfall: 30 FPS | RBW: 585.9 Hz
[INFO] Decoder ADS-B: enabled
[INFO] Event log: sessions/2026-01-01T10-00-00Z.jsonl
3.5 Data Formats / Schemas / Protocols
Event Log (JSON lines):
{"schema":"sdr.event.v1","ts":1704067200.123,"type":"frame","source":"adsb","snr":12.4,"cfo":-350,"payload":{"icao":"A12B3C","df":17}}
Session Metadata (JSON):
{"schema":"sdr.session.v1","fs":2400000,"center":1090000000,"device":"rtlsdr","ppm":-1.2}
3.6 Edge Cases
- Two decoders request incompatible sample rates.
- Capture overruns when disk logging is enabled.
- User retunes while a decoder is mid-frame.
- SDR device returns short read blocks.
- Event log disk full or unwritable.
3.7 Real World Outcome
You will run a live SDR workstation that visibly decodes real-world signals and produces reproducible logs.
3.7.1 How to Run (Copy/Paste)
./sdr_console --device soapy=rtlsdr --fs 2.4M --center 1090M \
--enable-adsb --enable-waterfall --log sessions/demo.jsonl --seed 42
3.7.2 Golden Path Demo (Deterministic)
Use a fixed IQ file test_sessions/adsb_fm_combo.iq with a fixed seed (42) for noise injection.
Expected:
- Waterfall shows a stable FM carrier at +200 kHz.
- ADS-B panel shows at least 10 valid frames with CRC OK.
- Event log contains deterministic frame counts and timestamps.
3.7.3 CLI Transcript (Exact)
$ ./sdr_console --input test_sessions/adsb_fm_combo.iq --replay 1.0 --log out.jsonl --seed 42
[INFO] Replay mode: 1.0x
[INFO] Waterfall: 20 FPS
[ADS-B] Frames: 12 | CRC ok: 10 | CRC fail: 2
[FM] Audio disabled in replay
[OK] Wrote out.jsonl
[EXIT] code=0
3.7.4 Failure Demo (Bad Input)
$ ./sdr_console --input missing.iq --replay 1.0
[ERROR] Input file not found: missing.iq
[EXIT] code=2
3.7.7 If GUI / Desktop / Mobile
Screens:
- Main Waterfall View
- Decoder Panels (ADS-B, AIS, RDS, NOAA, GPS)
- Session Log Viewer
- Preferences (gain, ppm, audio output)
ASCII Wireframe:
+-----------------------------------------------------+
| SDR Console | Center: 1090.000 MHz | FS: 2.4 MSPS |
+--------------------------+--------------------------+
| Waterfall + Spectrum | Decoder Panels |
| [waterfall image] | ADS-B: 12 frames/min |
| [spectrum line] | AIS: disabled |
| Marker: 1090.000 MHz | RDS: disabled |
| | NOAA: idle |
+--------------------------+--------------------------+
| Event Log (last 10) |
| 10:00:01 ADS-B ICAO A12B3C CRC OK |
| 10:00:02 ADS-B ICAO C4D5E6 CRC OK |
+-----------------------------------------------------+
4. Solution Architecture
4.1 High-Level Design
[SDR Device]
     |
     v
[Capture Thread] --> [Wideband Ring Buffer] --> [Channelizer]
     |                                          |     |     |
     |                                          v     v     v
     |                                        [FM] [ADS-B] [AIS]
     |                                          |     |     |
     v                                          v     v     v
[Waterfall]                                  [Event Bus] --> [Logger]
                                                  |
                                                  v
                                                [UI]
4.2 Key Components
| Component | Responsibility | Key Decisions |
|---|---|---|
| Capture Thread | Read IQ from device | Use minimal processing to avoid overruns |
| Ring Buffer | Store recent IQ | Overwrite-oldest policy |
| Channelizer | Mix/filter/decimate | Per-decoder taps vs PFB |
| Decoder Plugins | Decode signals | Standard plugin interface |
| Event Bus | Route events to UI/log | JSON schema + version |
| UI Renderer | Waterfall and panels | Separate UI thread |
4.3 Data Structures (No Full Code)
// Example event structure
struct SdrEvent {
    double ts;
    char type[16];
    char source[16];
    float snr;
    float cfo;
    void* payload; // decoder-specific
};
4.4 Algorithm Overview
Key Algorithm: Multi-branch Channelizer
- Read block from ring buffer.
- For each active channel, mix to baseband.
- Filter and decimate to target rate.
- Dispatch to decoder plugin.
Complexity Analysis:
- Time: O(N * K) where K is number of active channels.
- Space: O(N) for ring buffer + per-channel buffers.
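A sketch of the dispatch loop this describes (the channel objects and their methods are illustrative, not a fixed API):

def channelizer_pass(ring_buffer, channels, block_size):
    block, meta = ring_buffer.read_latest_with_meta(block_size)
    for ch in channels:              # K active channels -> O(N * K) per pass
        bb = ch.mix(block, meta)     # shift the channel to 0 Hz
        low = ch.filter(bb)          # band-limit to the channel width
        dec = low[::ch.decimation]   # reduce to the decoder's rate
        ch.plugin.process(dec, ch.updated_meta(meta))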
5. Implementation Guide
5.1 Development Environment Setup
# Python UI + C/Rust DSP
python3 -m venv .venv
source .venv/bin/activate
pip install numpy scipy pyqtgraph
# Optional: build C/Rust DSP library
5.2 Project Structure
project-root/
|-- src/
| |-- main.py
| |-- capture.py
| |-- channelizer.py
| |-- plugins/
| | |-- adsb.py
| | |-- ais.py
| | `-- fm.py
| |-- ui/
| | |-- waterfall.py
| | `-- panels.py
| `-- events.py
|-- tests/
| |-- test_channelizer.py
| `-- test_events.py
`-- README.md
5.3 The Core Question You’re Answering
“How do I build one SDR workstation that can decode multiple protocols without collapsing under real-time constraints?”
5.4 Concepts You Must Understand First
Stop and research these before coding:
- Multi-rate DSP graphs
- Can you explain why decimation must follow filtering?
- Do you know how to choose target rates for each decoder?
- Real-time buffering
- What is a ring buffer and why is it preferred for streaming?
- How do you handle backpressure without freezing the UI?
- Resampling and PPM correction
- How do you compute and apply PPM correction?
- How does resampling affect timestamps?
- Plugin architecture
- What is the minimal interface for a decoder plugin?
- How do you log events in a unified schema?
5.5 Questions to Guide Your Design
- What is the smallest set of sample rates that serve all decoders?
- Which pipelines are allowed to drop frames when overloaded?
- How will you keep the UI smooth when DSP is heavy?
- What does a “decoder event” look like across protocols?
5.6 Thinking Exercise
Sketch a pipeline that simultaneously decodes ADS-B and plays FM audio. Indicate where you mix, filter, and decimate, and where you update metadata.
5.7 The Interview Questions They’ll Ask
- How do you prevent buffer overruns in real-time SDR?
- What are the trade-offs between a ring buffer and a queue?
- How do you design a plugin architecture for decoders?
- How do you ensure deterministic replay for debugging?
5.8 Hints in Layers
Hint 1: Build the event bus early
Design the schema and logging first; it will drive decoder integration.
Hint 2: Separate threads
Capture thread: IQ only. DSP thread: heavy math. UI thread: rendering.
Hint 3: Use metadata tags
Every buffer should carry fs, fc, and t0.
Hint 4: Add replay mode
Replay mode can bypass real-time scheduling and make bugs reproducible.
5.9 Books That Will Help
| Topic | Book | Chapter |
|---|---|---|
| Multi-rate DSP | “Understanding Digital Signal Processing” | Ch. 10-13 |
| Concurrency | “Operating Systems: Three Easy Pieces” | Ch. 26-30 |
| Architecture | “Clean Architecture” | Ch. 10-12 |
| SDR systems | “Software-Defined Radio for Engineers” | Ch. 8-10 |
5.10 Implementation Phases
Phase 1: Foundation (2-3 weeks)
Goals:
- Build capture thread and ring buffer
- Render a basic waterfall
Tasks:
- Implement capture loop and buffer writes.
- Implement a simple waterfall from the latest buffer.
Checkpoint: Waterfall runs at >=20 FPS with live capture.
Phase 2: Core Functionality (2-4 weeks)
Goals:
- Channelizer and decimators
- Add AM/FM demod and ADS-B decoder plugin
Tasks:
- Implement per-decoder mix/filter/decimate chains.
- Integrate ADS-B plugin with event logging.
Checkpoint: ADS-B frames decode while waterfall remains smooth.
Phase 3: Integration & Polish (2-3 weeks)
Goals:
- Add AIS, RDS, NOAA, GPS plugins
- Add replay mode and session persistence
Tasks:
- Integrate remaining decoders.
- Implement replay engine and deterministic tests.
Checkpoint: Replay yields identical event counts across runs.
5.11 Key Implementation Decisions
| Decision | Options | Recommendation | Rationale |
|---|---|---|---|
| Buffer strategy | Ring buffer vs queue | Ring buffer | Prevents unbounded memory growth |
| Plugin isolation | Threads vs processes | Threads | Lower overhead, simpler IPC |
| Channelizer | PFB vs per-decoder mix | Per-decoder mix | Simpler to implement |
| Event schema | Rigid vs flexible | Flexible payload + fixed header | Supports diverse decoders |
6. Testing Strategy
6.1 Test Categories
| Category | Purpose | Examples |
|---|---|---|
| Unit Tests | Validate individual blocks | FIR filter, resampler, event validator |
| Integration Tests | Validate pipelines | FM demod pipeline with real IQ file |
| Stress Tests | Validate real-time stability | Replay at 4x speed, multi-decoder load |
6.2 Critical Test Cases
- Replay determinism: Same IQ file yields same event counts and timestamps (test sketch below).
- Buffer overrun handling: Overload does not crash; logs drop events.
- Retune stability: Retuning while decoding does not crash and updates metadata.
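A pytest-style sketch of the replay-determinism case (paths and CLI flags follow Section 3.7.3; the comparison fields are an assumption):

import json
import subprocess

def run_replay(log_path):
    subprocess.run(
        ["./sdr_console", "--input", "test_sessions/adsb_fm_combo.iq",
         "--replay", "1.0", "--log", log_path, "--seed", "42"],
        check=True,
    )
    with open(log_path) as f:
        return [json.loads(line) for line in f]

def test_replay_is_deterministic(tmp_path):
    a = run_replay(str(tmp_path / "a.jsonl"))
    b = run_replay(str(tmp_path / "b.jsonl"))
    assert len(a) == len(b)
    assert [e["ts"] for e in a] == [e["ts"] for e in b]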
6.3 Test Data
- test_sessions/adsb_fm_combo.iq
- test_sessions/ais_pass.iq
- test_sessions/rds_station.iq
7. Common Pitfalls & Debugging
7.1 Frequent Mistakes
| Pitfall | Symptom | Solution |
|---|---|---|
| UI freezes under load | Waterfall stutters | Move DSP off UI thread |
| Wrong sample rate metadata | Decoders fail sporadically | Audit metadata propagation |
| Missing resampler state | Audio glitches at block boundaries | Preserve filter state across blocks |
7.2 Debugging Strategies
- Event log replay: Reproduce failures with deterministic inputs.
- Metrics dashboards: Plot SNR, CFO, CRC over time.
- Binary search in graph: Disable half the decoders to find the offender.
7.3 Performance Traps
- Excessive copying of IQ buffers between threads.
- Running FFT on full-rate stream for every channel.
- Logging full-rate IQ to disk while decoding.
8. Extensions & Challenges
8.1 Beginner Extensions
- Add a simple frequency scanner that hops channels.
- Add a “bookmark” button to save a frequency preset.
8.2 Intermediate Extensions
- Implement a simple PFB channelizer to reduce CPU.
- Add a spectrum occupancy heatmap over time.
8.3 Advanced Extensions
- Add remote control API for headless operation.
- Implement GPU-accelerated FFTs for high-rate capture.
9. Real-World Connections
9.1 Industry Applications
- Regulatory compliance: monitor emissions and interference.
- Aviation and maritime: live tracking with ADS-B/AIS.
9.2 Related Open Source Projects
- GNU Radio: flowgraph-based SDR framework.
- SDR++: modern SDR console with plugins.
9.3 Interview Relevance
- Systems design questions about real-time pipelines.
- DSP questions about decimation and resampling.
10. Resources
10.1 Essential Reading
- “Software-Defined Radio for Engineers” (Collins) Ch. 8-10
- “Understanding Digital Signal Processing” (Lyons) Ch. 10-13
- “Clean Architecture” (Martin) Ch. 10-12
10.2 Video Resources
- “GNU Radio Conference: Multi-rate DSP” (talks)
- “SDR++ architecture walkthrough” (community videos)
10.3 Tools & Documentation
- SoapySDR: hardware abstraction for SDR devices.
- rtl-sdr: common RTL device tools.
10.4 Related Projects in This Series
- P01 Spectrum Eye: visualization foundation.
- P04 ADS-B Decoder: high-value decoding.
- P10 GPS Tracker: advanced tracking loops.
11. Self-Assessment Checklist
11.1 Understanding
- I can explain why decimation must follow filtering.
- I can explain backpressure and ring buffer trade-offs.
- I can explain how resampling affects timestamps.
11.2 Implementation
- Waterfall stays at >=20 FPS with decoders running.
- Event logs are deterministic in replay mode.
- Retuning updates all metadata correctly.
11.3 Growth
- I documented one major performance bottleneck and its fix.
- I can add a new decoder plugin in under 1 hour.
12. Submission / Completion Criteria
Minimum Viable Completion:
- Live waterfall with stable tuning.
- At least one decoder plugin (ADS-B) producing events.
- Deterministic replay for a fixed IQ file.
Full Completion:
- Multiple decoders running simultaneously with stable UI.
- Event logs include SNR, CFO, CRC for each decoder.
- Session presets persist and reload.
Excellence (Going Above & Beyond):
- GPU-accelerated FFTs or PFB channelizer.
- Remote control API with documented schema.