Software Defined Radio (SDR) Mastery: From RF to Decoders

Goal

Build a deep, working understanding of how radio signals move from the air into code and back into meaningful data. By the end, you will capture raw IQ samples, analyze spectra, design and validate filters, implement analog and digital demodulators, and decode real-world protocols such as ADS-B, AIS, RDS/RBDS, NOAA APT, POCSAG, GSM broadcast channels, and GPS L1 C/A. You will understand the engineering trade-offs in RF front-end hardware, sampling, synchronization, and error control that make SDR systems work in practice. You will be able to diagnose decode failures using measurements (SNR, CFO, timing error) and systematically fix them. You will also know how to build legal, ethical, receive-only workflows that respect spectrum regulations.


Introduction

Software Defined Radio (SDR) is a radio where most of the signal processing chain is implemented in software rather than fixed hardware. An antenna and RF front-end capture energy from the air, but after the first conversion and sampling, everything becomes math: filtering, tuning, demodulation, decoding, and visualization. SDR lets you move quickly from raw electromagnetic waves to meaningful data because you can reconfigure the receiver with code instead of rewiring circuitry.

In standards language, SDR is defined by its ability to change operating parameters by software (frequency range, bandwidth, modulation, power level) based on the electromagnetic environment. Regulatory definitions align with this: the radio behavior can be altered without making hardware changes that affect RF emissions. In practice, that means you can build a spectrum analyzer one minute and a protocol decoder the next, using the same hardware.

What you will build (by the end of this guide):

  • A fast IQ waterfall spectrum analyzer with live tuning and measurements
  • AM and FM broadcast receivers with proper filtering and de-emphasis
  • Digital decoders for ADS-B (aircraft), RDS/RBDS (radio text), AIS (ships), POCSAG (pagers), and GSM broadcast control channels
  • A NOAA APT image decoder that produces full weather images
  • A GPS L1 C/A acquisition and tracking pipeline that reports PRN, code phase, and Doppler
  • A capstone SDR console that ties everything together

Scope (what is included):

  • RF front-end basics (filters, LNAs, mixers, oscillators)
  • Sampling, IQ, and receiver architectures (direct conversion, low-IF)
  • Spectrum analysis (FFT, windowing, noise floor, dB)
  • Digital filtering, resampling, and channelization
  • Analog demodulation (AM/FM/PM) and digital modulation (FSK/PSK/GMSK)
  • Synchronization (AGC, carrier recovery, timing recovery, Doppler)
  • Framing, CRC, error correction, and protocol decoding

Out of scope (for this guide):

  • Decryption or interception of private communications
  • Transmitting on restricted bands without proper authorization
  • Closed, proprietary commercial systems that are not legally receivable

The Big Picture (Mental Model)

[EM Wave]
    |
    v
[Antenna] -> [LNA/Filter] -> [Mixer/LO] -> [ADC]
    |
    v
[IQ Samples] -> [DSP Blocks] -> [Demodulator]
    |
    v
[Bits / Audio / Images] -> [Framing + CRC] -> [Protocol Decoder] -> [Applications]

Key Terms You Will See Everywhere

  • IQ (In-Phase / Quadrature): Two orthogonal samples that preserve amplitude and phase.
  • Baseband: The signal after frequency shifting to near 0 Hz.
  • SNR: Signal-to-noise ratio; determines decode reliability.
  • Symbol rate: How fast digital symbols are transmitted.
  • CFO: Carrier frequency offset between transmitter and receiver.
  • PLL/Costas loop: Feedback systems used to recover phase and frequency.

How to Use This Guide

  1. Read the Theory Primer first. It is the mini-book that builds your mental models.
  2. Start Project 1 immediately. It makes the theory real with actual IQ data.
  3. Cycle between theory and projects. Each project reinforces a subset of concepts.
  4. Keep a lab notebook. Log gain, sample rate, PPM, filter cutoffs, and screenshots.
  5. Respect legal boundaries. Decode only signals that are legal to receive in your region.

Prerequisites & Background

Essential Prerequisites (Must Have)

Programming Skills:

  • Comfortable with Python (arrays, file IO, plotting)
  • Familiar with basic C (for performance or real-time pipelines)
  • Able to read and implement algorithms from pseudocode

Math & DSP Fundamentals:

  • Complex numbers and Euler’s formula
  • Logarithms and decibels (dB)
  • Sampling theory (Nyquist, aliasing)
  • Basic probability (noise as random processes)

RF & Systems Basics:

  • Understanding of frequency, bandwidth, and modulation
  • Comfort with command-line tools and Linux
  • Awareness of receive-only legal boundaries in your country

Recommended Reading:

  • “Making Embedded Systems, 2nd Edition” by Elecia White - Ch. 1-3
  • “Computer Networks” by Tanenbaum - Ch. 2 (signals, modulation basics)

Helpful But Not Required

  • Prior RF test equipment experience (signal generator, spectrum analyzer)
  • GNU Radio familiarity
  • Digital communications theory (matched filters, BER curves)

Self-Assessment Questions

  1. Can you explain what a complex number means geometrically?
  2. Do you know why 20*log10(amplitude) is used for dB?
  3. Can you write code to process a binary file in chunks?
  4. Can you explain what happens when you sample below Nyquist?
  5. Can you explain why a signal appears mirrored without IQ?

If you answered “no” to questions 1-3: Spend 1-2 weeks on fundamentals before starting.

Development Environment Setup

Required Tools:

  • Linux machine (Ubuntu 22.04 or Debian 12 recommended)
  • Python 3.11+
  • numpy, scipy, matplotlib, sounddevice
  • rtl-sdr utilities (for capture and quick sanity checks)
  • An SDR receiver (RTL-SDR v3 or better) and basic antennas (VHF/UHF whip)

Recommended Tools:

  • GNU Radio (verification of DSP blocks)
  • SoapySDR (hardware abstraction)
  • gqrx or SDR++ (visual spectrum verification)
  • ffmpeg (audio extraction and filtering)
  • A TCXO or GPSDO-referenced SDR for narrowband digital modes

Testing Your Setup:

# Verify tools exist
$ which python3 rtl_sdr rtl_fm
/usr/bin/python3
/usr/bin/rtl_sdr
/usr/bin/rtl_fm

# Verify Python DSP stack
$ python3 - <<'PY'
import numpy, scipy, matplotlib
print('ok')
PY
ok

Time Investment

  • Simple projects (1-2): 4-8 hours each
  • Moderate projects (3, 5, 7, 8): 1-2 weeks each
  • Advanced projects (4, 6, 9, 10): 2-4 weeks each
  • Total sprint: 3-5 months for full mastery

Important Reality Check

RF is noisy and unpredictable. Expect imperfect captures, drift, and real-world interference. Your first implementation will be rough. The goal is iterative refinement:

  1. Get something working (even if noisy)
  2. Measure and identify what is wrong
  3. Fix it with targeted DSP improvements
  4. Repeat until it is stable

Big Picture / Mental Model

SDR is a pipeline that converts messy RF energy into structured data:

Antenna -> Front-End -> Sampling -> IQ -> DSP -> Demod -> Sync -> Framing -> Protocol
   |         |            |          |       |        |        |         |
   v         v            v          v       v        v        v         v
RF noise   gain/filters  ADC     baseband  filters  PLLs    CRC/FEC   app data

Key insight: Every stage removes ambiguity. Each project below makes one stage concrete.


Theory Primer

This is the textbook for the projects. Work through it chapter by chapter. Each chapter includes fundamentals, deep dives, step-by-step flows, and practical exercises.


Chapter 1: RF Front-End, Sampling, IQ, and Receiver Architectures

Fundamentals (read this first)

SDR starts in the analog world. An electromagnetic wave induces a tiny voltage on the antenna. The antenna does not know which station you want; it converts everything in its frequency range into voltage. The RF front-end conditions this raw signal so the ADC can capture it. This chain includes a band-pass filter (rejects out-of-band energy), a low-noise amplifier (LNA) to boost weak signals, and one or more mixers driven by a local oscillator (LO) to shift the desired band down to a frequency the ADC can sample. This shift is called downconversion.

Sampling converts continuous-time signals into discrete samples. The Nyquist-Shannon theorem says you must sample at least twice the bandwidth of the signal to avoid aliasing. SDR often captures a wide band (e.g., 2.4 MHz), then uses digital filters to isolate a narrower channel. ADC resolution also matters: an 8-bit ADC provides limited dynamic range; a 12- or 14-bit ADC provides much better separation between weak and strong signals. Dynamic range becomes critical when a strong signal sits near your weak signal of interest.

The most important SDR idea is complex baseband (IQ) sampling. Instead of a single real signal, SDR samples two orthogonal signals: in-phase (I) and quadrature (Q). I and Q are the x and y components of a rotating vector. This preserves both amplitude and phase, which lets you distinguish positive and negative frequency components. With IQ, you can tune a signal in software by multiplying by a complex exponential, effectively shifting the spectrum without touching hardware.
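
Software tuning is easy to demonstrate. The sketch below (sample rate and shift values are illustrative) places a test tone at +250 kHz and shifts it to 0 Hz by multiplying by a complex exponential:

```python
import numpy as np

fs = 2.4e6          # sample rate in Hz (illustrative)
f_shift = -250e3    # shift the spectrum down by 250 kHz (illustrative)

# A test tone at +250 kHz stands in for a captured IQ stream.
n = np.arange(4096)
samples = np.exp(2j * np.pi * 250e3 * n / fs)

# Digital tuning: multiply by a complex exponential to shift the spectrum.
shifted = samples * np.exp(2j * np.pi * f_shift * n / fs)

# The tone now sits at 0 Hz: the strongest FFT bin is the DC bin.
peak_bin = np.argmax(np.abs(np.fft.fft(shifted)))
print(peak_bin)  # 0
```

This is exactly what an SDR application does when you click-to-tune inside a wide capture: no hardware is touched, only the multiplier changes.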

Receiver architecture matters. A direct conversion receiver mixes the desired band directly to 0 Hz. It is simple but suffers from DC offset, LO leakage, and flicker noise near 0 Hz. A low-IF receiver mixes to a small offset (e.g., 250 kHz) then shifts to DC digitally, reducing DC problems at the cost of more image rejection requirements. Understanding which architecture your hardware uses explains why you might see a stubborn DC spike or a mirrored spectrum.

Finally, gain staging is critical. Too little gain and quantization noise dominates. Too much gain and the ADC clips, causing intermodulation that cannot be undone digitally. You must learn to adjust gain so the signal is strong but not clipped. This is not a cosmetic tuning knob. It is part of the receiver design.
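
A quick way to check gain staging in code is to count raw ADC samples sitting at the rails. The helper below is a sketch assuming 8-bit unsigned samples (RTL-SDR style); the function name and the rule-of-thumb threshold are mine:

```python
import numpy as np

def clipping_fraction(raw_u8: np.ndarray) -> float:
    """Fraction of 8-bit ADC samples at or near the rails (0 or 255)."""
    near_rail = (raw_u8 <= 1) | (raw_u8 >= 254)
    return float(np.mean(near_rail))

rng = np.random.default_rng(0)

# Healthy capture: noise-like samples centered at 127.5 with modest swing.
healthy = np.clip(rng.normal(127.5, 20, 100_000), 0, 255).astype(np.uint8)
print(clipping_fraction(healthy))     # essentially zero

# Overdriven capture: too much gain pushes samples into the rails.
overdriven = np.clip(rng.normal(127.5, 120, 100_000), 0, 255).astype(np.uint8)
print(clipping_fraction(overdriven))  # a large fraction clipped
```

As a rough guide, if more than about 1% of samples touch the rails, reduce gain before trusting anything downstream.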

Front-end details matter more than most beginners expect. The antenna is not just a wire; it has polarization, impedance, bandwidth, and a radiation pattern. A mismatch between antenna impedance (often 50 ohms) and the front-end increases reflections and reduces received power. Polarization mismatch can attenuate signals by many dB. A broadband antenna is convenient but may admit strong unwanted signals. A tuned antenna or a preselector filter can greatly improve SNR by rejecting out-of-band energy before amplification. This is why some decoders suddenly work when you add a simple band-pass filter or an FM broadcast notch filter.

Sampling is also richer than just Nyquist. Real-valued sampling folds negative and positive frequencies onto each other, which is why IQ is so important. Complex sampling separates the spectrum into positive and negative frequencies, doubling usable bandwidth for a given sample rate. There is also bandpass sampling, where you intentionally let a high-frequency band alias into baseband in a controlled way. This can work if the band is narrow and the sample rate is chosen carefully, but it is fragile if the front-end filter is too wide. For SDR learning, it is better to downconvert to a known IF or baseband and avoid accidental aliasing.

Finally, there are practical metrics beyond “noise figure”. 1 dB compression point (P1dB) tells you how much signal power the front-end can handle before it becomes nonlinear. Third-order intercept (IP3) indicates how badly intermodulation products will appear when two strong signals are present. These are not just spec-sheet numbers; they predict whether your receiver will survive a strong FM station near your weak AIS target. You can use these metrics to decide whether you need additional filtering or an external LNA with better linearity.

Deep Dive into the Concept

A mixer multiplies the incoming RF signal by a sinusoid generated by the LO. In the frequency domain, this multiplication produces two shifted copies of the original spectrum: one at f + f_LO and one at f - f_LO. A filter selects the copy you care about. In older superheterodyne systems, this was repeated in multiple analog stages. SDR often uses one analog stage and then handles all further tuning digitally. This is why SDR is so flexible: you shift a spectrum by multiplying by exp(-j*2*pi*f_shift*t), not by turning coils.

IQ generation is rooted in the complex exponential representation of sinusoids. Any real sinusoid can be represented as the real part of a complex exponential. In an I/Q mixer, the LO is split into cosine and sine. Multiplying the RF by cosine produces the I channel; multiplying by sine produces the Q channel. Low-pass filters remove the high-frequency products. The result is a complex signal that contains both amplitude and phase. This complex representation eliminates the mirror-image ambiguity that real sampling suffers from. That is why a complex sampling rate of Fs yields roughly Fs of usable bandwidth instead of Fs/2.
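
The I/Q mixing described above can be simulated end to end. The sketch below (all frequencies are illustrative) mixes a real RF tone with a quadrature LO, low-pass filters both channels, and confirms that the complex baseband has a single peak with no mirror:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1e6                   # simulation sample rate (illustrative)
f_rf, f_lo = 200e3, 190e3  # RF tone and LO; baseband ends up at +10 kHz
t = np.arange(20_000) / fs
rf = np.cos(2 * np.pi * f_rf * t)

# Quadrature mixer: multiply RF by cosine (I) and minus sine (Q) of the LO.
i_raw = rf * np.cos(2 * np.pi * f_lo * t)
q_raw = rf * -np.sin(2 * np.pi * f_lo * t)

# Low-pass filters remove the f_rf + f_lo mixing products.
b, a = butter(5, 50e3 / (fs / 2))
iq = filtfilt(b, a, i_raw) + 1j * filtfilt(b, a, q_raw)

# The complex baseband tone sits at +10 kHz only, with no image at -10 kHz.
spec = np.abs(np.fft.fftshift(np.fft.fft(iq)))
freqs = np.fft.fftshift(np.fft.fftfreq(len(iq), 1 / fs))
print(freqs[np.argmax(spec)])  # ~ +10000.0 Hz
```

If you repeat this with only the I channel (a real signal), you will see peaks at both +10 kHz and -10 kHz: the mirror ambiguity that IQ removes.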

Real-world ADCs are imperfect. Quantization noise arises from finite resolution. A good rule of thumb is that each additional bit adds about 6 dB of dynamic range. Aperture jitter (timing uncertainty) becomes important at high frequencies, effectively adding phase noise. DC offsets and IQ imbalance (gain and phase mismatch between I and Q) appear as spurs and mirrored signals. LO phase noise spreads a tone into a skirt of noise, reducing performance for weak narrowband signals. These are not theoretical edge cases; you will see them in your waterfall.

Receiver performance is often summarized with noise figure (NF) and sensitivity. Noise figure tells you how much the receiver degrades SNR relative to an ideal receiver. The first LNA dominates the system noise figure because of the Friis cascade formula. This is why a low-noise LNA and a clean front-end filter have outsized impact. At the same time, strong out-of-band signals can drive the ADC into compression, producing intermodulation products inside your band. A simple band-pass filter can dramatically improve your ability to decode weak signals.

Receiver architectures are an explicit trade-off between simplicity, cost, and performance. RTL-SDR sticks use low-cost tuners with 8-bit ADCs and wide front-end bandwidth. They are excellent for learning but limited in dynamic range. Airspy devices increase dynamic range. HackRF offers wide coverage and transmit capability, but the receive chain is noisier. An SDR engineer learns to select hardware appropriate to the signal type, just like a photographer chooses the right lens.

Another subtle issue is frequency reference accuracy. A 20 PPM oscillator at 100 MHz is 2 kHz off. Digital decoders with narrow symbol rates often fail if CFO is larger than a few hundred Hz. Many SDRs allow you to apply a PPM correction or accept an external 10 MHz reference (GPSDO). Understanding why this matters is essential for reliable decoding.
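
The PPM arithmetic is worth making concrete. A minimal helper (the function name is mine):

```python
def ppm_error_hz(f_center_hz: float, ppm: float) -> float:
    """Frequency error in Hz for a given oscillator PPM offset."""
    return f_center_hz * ppm / 1e6

# A 20 PPM oscillator error at 100 MHz puts you 2 kHz off frequency.
print(ppm_error_hz(100e6, 20))    # 2000.0

# At 1090 MHz (ADS-B) the same 20 PPM error is nearly 22 kHz.
print(ppm_error_hz(1090e6, 20))   # 21800.0
```

This is why a PPM correction that seems irrelevant for FM broadcast becomes mandatory for narrowband digital modes at UHF.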

Finally, note the data representation of IQ. Many SDRs output interleaved I/Q bytes: [I0, Q0, I1, Q1, ...]. Some use unsigned bytes centered at 127; others use signed. Misinterpreting this causes mirrored spectra, inverted frequency axis, or strange artifacts. The first debugging step is always: verify the sample format and scale it correctly.

Image rejection is another real-world challenge. In a simple mixer, a signal at the image frequency 2*f_LO - f_RF mixes to the same IF as the desired signal at f_RF. If the image falls into your passband, you will see a mirrored signal. Quadrature mixers and low-IF architectures reduce this, but only if I and Q are perfectly balanced. In practice, small gain and phase errors create incomplete cancellation. Many SDRs include software IQ correction that estimates and removes this imbalance. Learning to recognize image artifacts helps you avoid misinterpreting a mirrored signal as a real transmission.

Front-end filtering strategies vary widely. Some SDRs use switchable band-pass filters to keep strong out-of-band signals out. Others rely on a wide open front-end and hope the ADC can handle it. The cost difference is real: better filtering and higher dynamic range increase price. When you decode weak signals like NOAA APT or GPS, you often need a narrow filter or LNA. When you decode strong signals like FM broadcast, you want the opposite: keep gain low and avoid overload.

It is also useful to think in terms of frequency plans. If you have a signal at 1090 MHz and you set your LO at 1088 MHz, your IF becomes 2 MHz. You might then sample at 2.4 MSPS and digitally mix down to baseband. Each stage has bandwidth and filtering requirements. If any stage is too wide, noise and interferers leak through. If any stage is too narrow, you distort the signal. SDR success is often about choosing the right bandwidth at each step, not just about demodulation algorithms.

Calibration and self-test are underrated. A quick way to verify IQ correctness is to inject a known tone (from a signal generator or a nearby FM station) and confirm its location and mirror behavior. If the mirror is nearly equal in strength, your IQ imbalance is significant. Some SDR software applies a simple complex correction matrix to reduce this, effectively scaling and rotating I and Q to be orthogonal. You can implement this yourself by measuring the ratio of the image to the main tone and solving for gain/phase correction parameters.
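
One simple blind correction, sketched below, estimates the phase error from the I/Q cross-correlation and the gain error from the channel powers. This is a textbook-style approach, not a specific library API; the function names and test signal are illustrative:

```python
import numpy as np

def correct_iq_imbalance(x: np.ndarray) -> np.ndarray:
    """Blind gain/phase IQ correction from sample statistics (sketch)."""
    i, q = x.real, x.imag
    # Remove the component of Q that is correlated with I (phase error),
    # then equalize the power of the two channels (gain error).
    alpha = np.mean(i * q) / np.mean(i * i)
    q = q - alpha * i
    gain = np.sqrt(np.mean(i * i) / np.mean(q * q))
    return i + 1j * (q * gain)

def image_rejection_db(x: np.ndarray) -> float:
    """Ratio (dB) of the tone at +bin 512 to its image at -bin 512."""
    s = np.abs(np.fft.fft(x))
    return 20 * np.log10(s[512] / s[-512])

# Synthetic imbalanced tone: Q has the wrong gain and a small phase skew.
n = np.arange(8192)
phase = 2 * np.pi * 512 * n / 8192
bad = np.cos(phase) + 1j * 1.3 * np.sin(phase + 0.1)

print(image_rejection_db(bad))                        # poor rejection
print(image_rejection_db(correct_iq_imbalance(bad)))  # much better
```

On real captures the statistics are noisier, so estimate alpha and gain over long blocks and update them slowly.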

Also note that front-end nonlinearities create intermodulation products. If two strong signals at f1 and f2 enter a nonlinear front-end, new signals appear at 2f1-f2 and 2f2-f1. These spurs can land directly inside the band you care about, making them indistinguishable from real signals. This is why you sometimes see “mystery” signals that move when you change gain. If a signal disappears when you reduce gain, it is likely an intermodulation product, not a real transmitter.

Finally, think about clocking and sample-rate stability. If your ADC clock drifts, your frequency axis shifts and your symbol timing drifts. This affects both analog audio quality and digital decoders. A GPSDO or disciplined reference stabilizes both frequency and timing. In low-cost SDRs, you often compensate with PPM correction and software resampling. This is not a hack; it is a practical requirement for stable decoding of narrowband digital signals.

Definitions & key terms

  • Downconversion: Shifting RF to a lower frequency using a mixer and LO.
  • LO (Local Oscillator): Frequency reference used for mixing.
  • Complex baseband: IQ representation at (or near) 0 Hz.
  • Noise figure: Measure of how much noise the receiver adds.
  • Dynamic range: Ratio between the largest and smallest measurable signals.

Mental model diagram

RF at 1090 MHz
    |
    v
[Mixer + LO @ 1088 MHz] -> IF at 2 MHz
    |
    v
[ADC 2.4 MSPS] -> IQ samples
    |
    v
[Digital tuning] -> ADS-B channel

How it works (step-by-step)

  1. Antenna captures RF energy across a band.
  2. Front-end filter rejects strong out-of-band signals.
  3. LNA amplifies the desired band with minimal added noise.
  4. Mixer shifts the band down using the LO.
  5. ADC samples the downconverted signal.
  6. IQ stream is delivered to software for processing.

Failure modes: clipping (too much gain), aliasing (too low sampling rate), DC spike (LO leakage), mirror images (IQ imbalance).

Minimal concrete example

# Convert interleaved uint8 IQ to complex float32 in [-1, 1]
import numpy as np
raw = np.fromfile("capture.iq", dtype=np.uint8)
I = raw[0::2].astype(np.float32) - 127.5
Q = raw[1::2].astype(np.float32) - 127.5
samples = (I + 1j * Q) / 127.5

Common misconceptions

  • “IQ is just stereo audio” -> IQ preserves phase; it is not two independent signals.
  • “Nyquist means sample at 2x carrier” -> Nyquist is about bandwidth, not center frequency.
  • “More gain always helps” -> too much gain causes clipping and intermodulation.

Check-your-understanding questions

  1. Why do we need I and Q instead of a single real sample stream?
  2. What causes a spike at 0 Hz in a spectrum?
  3. How does ADC bit depth affect weak signal decoding?

Check-your-understanding answers

  1. I and Q preserve phase and allow unique positive/negative frequency representation.
  2. DC offset or LO leakage in direct-conversion receivers.
  3. More bits improve dynamic range and lower quantization noise.

Real-world applications

  • Broadcast radio receivers
  • Aircraft and satellite tracking
  • Spectrum monitoring and interference hunting
  • Cellular and IoT signal analysis

How this fits into the projects

This chapter explains the physical and sampling realities that every project depends on. In Projects 1-3 you validate IQ parsing, gain staging, and baseband tuning. In Projects 4-10 you rely on stable sampling and accurate frequency reference to make protocol decoders work at all.

Where you’ll apply it

Projects 1-10 (especially 1-3 and 10).

References

  • “Software-Defined Radio for Engineers” (Collins et al.)
  • “Understanding Digital Signal Processing” (Lyons)

Key insight: The RF front-end transforms high-frequency energy into a low-frequency complex stream. Everything after that is software.

Summary

This chapter explained the RF-to-IQ pipeline, receiver architectures, and real-world impairments. You should now understand why IQ is the foundation of SDR.

Homework / Exercises

  1. Capture 2 seconds of IQ data and plot I and Q separately.
  2. Change gain settings and observe how the noise floor shifts.
  3. Measure the DC spike in your spectrum and remove it in software.

Solutions

  1. I and Q should be centered around zero with Gaussian noise; strong carriers appear as sinusoids.
  2. Higher gain raises the noise floor and signals; clipping appears as flat-topped waveforms.
  3. Subtract the mean of I and Q or apply a high-pass filter near DC.
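
Solution 3 can be sketched in code. Both variants below remove the DC spike: subtracting the mean works on a recorded block, while a one-pole DC blocker works on streaming samples (function names are mine):

```python
import numpy as np
from scipy.signal import lfilter

def remove_dc_block(samples: np.ndarray) -> np.ndarray:
    """Whole-capture approach: subtract the complex mean."""
    return samples - np.mean(samples)

def remove_dc_stream(samples: np.ndarray, alpha: float = 0.999) -> np.ndarray:
    """Streaming one-pole DC blocker: y[n] = x[n] - x[n-1] + alpha*y[n-1]."""
    return lfilter([1.0, -1.0], [1.0, -alpha], samples)

# A tone riding on a complex DC offset, as produced by LO leakage.
n = np.arange(8192)
x = 0.5 * np.exp(2j * np.pi * 400 * n / 8192) + (0.3 + 0.2j)

print(abs(np.mean(remove_dc_block(x))))         # ~0: spike at 0 Hz removed
print(abs(np.mean(remove_dc_stream(x)[4000:]))) # ~0 after the filter settles
```

The streaming version is what you want inside a live receiver; alpha trades settling time against how much low-frequency signal is sacrificed near DC.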

Chapter 2: Spectrum Analysis, FFT, Noise, and Measurements

Fundamentals (read this first)

Spectrum analysis is how you see radio. The FFT converts time-domain IQ samples into a frequency-domain representation. That representation lets you identify signals, measure bandwidth, and spot interference. But the FFT is not magic; it has limitations. The frequency resolution is Fs / N, where Fs is the sample rate and N is the FFT size. A larger FFT gives finer resolution but slower updates. This is the core trade-off in spectrum displays.

Noise is everywhere. Thermal noise in a 1 Hz bandwidth is about -174 dBm at room temperature. When you increase bandwidth, noise power increases. This is why narrowband signals can appear above the noise floor with the same received power: the noise is lower in a narrow bandwidth. Understanding this relationship is essential when you compare a wide FM broadcast signal to a narrow POCSAG signal. The SDR waterfall shows a noise floor that depends on your sample rate, gain, and windowing.

You also need to understand decibels (dB) and how to convert power and voltage. Power ratios use 10*log10, voltage ratios use 20*log10. Spectrum displays often show dBFS (decibels relative to full scale). A signal at -6 dBFS is half the full-scale amplitude. That is a convenient internal measurement, but it is not an absolute physical power measurement unless you calibrate the receiver.

Windowing is another key concept. An FFT assumes the signal is periodic over the window. If the signal does not fit perfectly, energy leaks into neighboring bins. Windows like Hann or Blackman reduce leakage at the cost of widening the main lobe. In SDR, you almost always apply a window before the FFT to get a cleaner spectrum.

Finally, link budget thinking helps you estimate whether a decode is possible. You compare expected signal power to noise floor, subtract losses, and estimate SNR. Even if you do not calculate exact values, you should build the habit of asking: what is my SNR, what is the required SNR, and what can I do to improve it?

Another foundational idea is the difference between spectrum and spectral density. A raw FFT shows the energy in each bin, but the bin width depends on FFT size and sample rate. If you change the FFT size, the numerical values change even if the signal power is the same. This is why engineers often use power spectral density (PSD), which normalizes by bandwidth and yields units like dB/Hz. In SDR projects, you can usually ignore absolute calibration, but you should still be aware that dBFS values shift when you change FFT size or windowing.

The waterfall is essentially a time-frequency plot. It is most useful for signals that are intermittent or bursty (ADS-B, GSM). If a signal appears and disappears, a static spectrum might miss it, but a waterfall shows it clearly. The key trade-off is time resolution vs frequency resolution: short FFT windows give fast updates but coarse frequency bins; long windows give sharp frequency detail but smear short bursts.

Finally, measurement discipline matters. If you change gain, FFT size, or window type, you have changed the measurement. Make one change at a time and record it. This is the difference between debugging a decoder in an hour and being stuck for a week.

Deep Dive into the Concept

The FFT is a computationally efficient way to compute the Discrete Fourier Transform (DFT). For each FFT bin, you are effectively projecting the signal onto a complex sinusoid at that frequency. The magnitude of the bin indicates how much energy the signal has in that frequency band. The phase indicates relative timing. In practice, most SDR visualizations focus on magnitude only. This is fine for detection, but phase is crucial for demodulation.

FFT size controls the resolution bandwidth (RBW). If you sample at 2.4 MSPS with a 2048-point FFT, your RBW is about 1172 Hz. That means any narrowband signal narrower than 1 kHz will smear into a single bin. If you use an 8192-point FFT, your RBW is about 293 Hz, letting you resolve finer signals, but your waterfall updates slower. For a responsive UI, you need a compromise between time resolution and frequency resolution.

Noise is often modeled as a complex Gaussian random process. When you compute the FFT, the noise floor appears as a Rayleigh-distributed magnitude. Averaging multiple FFTs reduces noise variance and makes weak signals stand out. This is why many SDRs use an exponential moving average on the spectrum display. It improves visibility of persistent signals at the cost of responsiveness.

A key skill is to distinguish real signals from spurious artifacts. Common SDR artifacts include DC spikes, mirror images (IQ imbalance), and images caused by strong out-of-band signals. These are not just visual annoyances; they can mask real signals or create false detections. You should learn to recognize them quickly. A mirrored signal will appear symmetrically around center frequency. A DC spike appears exactly at 0 Hz and does not move when you tune.

dBFS vs dBm: dBFS is relative to the ADC full scale, while dBm is absolute power at the antenna. Converting between them requires calibration. If you do not have a calibrated front-end, you can still use dBFS as a relative measure to compare experiments. The key is to be consistent: use the same gain and sample rate when you compare measurements.

A simple link budget uses the Friis transmission equation, antenna gains, path loss, and receiver noise figure. Even if you never compute the full equation, you should at least reason qualitatively. If your noise floor is -60 dBFS and your signal peak is -50 dBFS, you have roughly 10 dB SNR. For many digital modes, 10 dB is adequate. For weak-signal GPS, you may need processing gain from correlation to recover signals below the noise floor.
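
You can turn this habit into a helper. The sketch below estimates SNR as the ratio of the strongest FFT bin to the median bin, which approximates the noise floor. It is a rough relative measure (it overstates true channel SNR), not a calibrated one; the function name is mine:

```python
import numpy as np

def estimate_snr_db(samples: np.ndarray, nfft: int = 2048) -> float:
    """Rough SNR: strongest FFT bin vs median bin (noise-floor proxy)."""
    x = samples[:nfft] * np.hanning(nfft)
    p = np.abs(np.fft.fft(x)) ** 2
    return 10 * np.log10(p.max() / np.median(p))

# Synthetic capture: a strong tone on top of complex Gaussian noise.
rng = np.random.default_rng(1)
n = np.arange(2048)
noise = (rng.normal(size=2048) + 1j * rng.normal(size=2048)) / np.sqrt(2)
tone = 10 * np.exp(2j * np.pi * 300 * n / 2048)

print(estimate_snr_db(tone + noise))  # well above 0 dB
```

Logging this number before and after each DSP change is one of the fastest ways to debug a failing decoder.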

You should also understand that bandwidth determines noise power. If you filter a 2.4 MHz capture down to 10 kHz, the noise power drops by 10*log10(2.4e6/1e4) = 23.8 dB. This is a dramatic improvement and explains why narrowband digital modes can be decoded even when the broadband waterfall looks noisy. Channelization and filtering are not optional; they are fundamental.
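
You can verify that 23.8 dB figure numerically. The sketch below applies an ideal brick-wall low-pass (via the FFT) to complex white noise and measures the power drop:

```python
import numpy as np

fs = 2.4e6                 # capture sample rate (illustrative)
bw = 10e3                  # channel bandwidth after filtering (illustrative)

rng = np.random.default_rng(2)
n = 1 << 18
noise = rng.normal(size=n) + 1j * rng.normal(size=n)

# Ideal (brick-wall) low-pass: zero every FFT bin outside +/- bw/2.
spec = np.fft.fft(noise)
freqs = np.fft.fftfreq(n, d=1 / fs)
spec[np.abs(freqs) > bw / 2] = 0
narrow = np.fft.ifft(spec)

drop_db = 10 * np.log10(np.mean(np.abs(noise) ** 2)
                        / np.mean(np.abs(narrow) ** 2))
print(round(drop_db, 1))  # close to 10*log10(2.4e6 / 1e4) = 23.8 dB
```

A real FIR channel filter behaves the same way to within its transition-band losses; the noise power you keep is proportional to the bandwidth you keep.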

Spectral estimation has nuances that matter for real SDR data. Scalloping loss means that a tone located between FFT bin centers appears lower in amplitude than it really is. Windowing reduces leakage but also changes amplitude scaling. If you want accurate amplitude measurements (for example, to compare signal strengths), you must compensate for window loss. In practice, most SDR visualizations are qualitative, but for debugging decoders, relative measurements are still valuable if you are consistent.

There are also multiple ways to reduce noise variance in the spectrum. Welch’s method averages FFTs from overlapping windows, reducing variance at the cost of time resolution. Exponential averaging is simpler and good for live displays. In bursty systems like ADS-B, too much averaging can hide bursts; in steady systems like FM broadcast, averaging improves readability. Choosing averaging is not cosmetic; it changes detectability.
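
Welch averaging is available directly in scipy. The sketch below (tone frequency and segment sizes are illustrative) buries a weak tone in noise and recovers it by averaging periodograms of overlapping segments:

```python
import numpy as np
from scipy.signal import welch

fs = 2.4e6                      # sample rate (illustrative)
rng = np.random.default_rng(3)
n = np.arange(1 << 18)

# A weak tone at +100 kHz, well below the per-sample noise power.
x = (0.3 * np.exp(2j * np.pi * 100e3 * n / fs)
     + rng.normal(size=n.size) + 1j * rng.normal(size=n.size))

# Welch's method: average periodograms of overlapping windowed segments.
freqs, psd = welch(x, fs=fs, nperseg=4096, noverlap=2048,
                   return_onesided=False)
peak = freqs[np.argmax(psd)]
print(round(peak))  # near 100000: averaging makes the weak tone stand out
```

A single 4096-point FFT of the same data shows the tone barely above the noise variance; with roughly a hundred averaged segments it is unmistakable.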

Finally, remember that some signals are wideband and structured (FM broadcast), while others are narrowband and bursty (ADS-B). Your analyzer should let you zoom and adjust RBW. A one-size FFT setting will misrepresent at least half the signals you care about. Learning to tune these parameters is part of SDR mastery.

A practical spectrum analyzer also needs dynamic range management. If you auto-scale the spectrum to the strongest signal, you may hide weaker signals. If you fix the scale, you may saturate the display. Many SDR tools provide both “auto” and “manual” scaling; learning when to use each is crucial. When hunting weak signals, reduce the sample rate, increase FFT size, and set a fixed dB scale. When monitoring for unknown signals, use auto-scale for quick visibility, then lock it down once you find something interesting.

Detection thresholds can be formalized. A common approach is CFAR (constant false alarm rate), which sets a threshold based on local noise statistics. While you may not implement CFAR in every project, the idea is powerful: do not use a fixed threshold for all conditions. Estimate the noise floor from nearby bins and set a threshold relative to it. This dramatically reduces false detections in noisy environments and is especially useful for ADS-B and burst detection.
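
A minimal cell-averaging CFAR over FFT power bins might look like the sketch below; the function name, guard/training sizes, and scale factor are all illustrative choices:

```python
import numpy as np

def cfar_detect(power: np.ndarray, guard: int = 2, train: int = 16,
                scale: float = 8.0) -> np.ndarray:
    """Cell-averaging CFAR: threshold each bin against nearby noise bins."""
    n = len(power)
    hits = np.zeros(n, dtype=bool)
    for i in range(train + guard, n - train - guard):
        # Training cells on both sides, skipping guard cells around bin i.
        left = power[i - guard - train : i - guard]
        right = power[i + guard + 1 : i + guard + 1 + train]
        noise_est = np.mean(np.concatenate([left, right]))
        hits[i] = power[i] > scale * noise_est
    return hits

# Noise-only bins plus one strong bin: the strong bin is flagged.
rng = np.random.default_rng(4)
power = rng.exponential(scale=1.0, size=1024)  # FFT noise power ~ exponential
power[500] = 100.0
hits = np.flatnonzero(cfar_detect(power))
print(hits)  # includes 500
```

Because the threshold adapts to the local noise estimate, the same detector keeps a similar false-alarm rate whether your gain is high or low, which a fixed dB threshold cannot do.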

Another subtlety is FFT scaling. Some libraries return unnormalized FFTs, others normalize by N. If you compare spectra across tools, you may see different absolute values even with identical data. For SDR learning, consistency matters more than absolute numbers. Pick one scaling convention and stick with it. If you later calibrate against a known signal source, you can convert dBFS to approximate dBm.

Finally, consider coherent vs non-coherent integration. If you average FFT magnitudes, you reduce noise variance but also lose phase information. If you average complex FFT bins coherently, you can improve detection of stable carriers even more, but only if you have accurate frequency alignment. This is why precise CFO correction can dramatically improve weak-signal detection: it enables coherent integration over longer times without smearing the signal.

Window choice also affects amplitude accuracy. A Blackman window reduces leakage but widens the main lobe; a Hann window is a common compromise. For detection, choose a window consistent with your goal and keep it consistent across measurements.

Definitions & key terms

  • FFT: Fast Fourier Transform, efficient DFT computation.
  • RBW: Resolution bandwidth (Fs/N).
  • dBFS: Decibels relative to ADC full scale.
  • Noise floor: Average noise level in the spectrum.
  • Link budget: Power accounting from transmitter to receiver.

Mental model diagram

Time domain IQ  --->  Window  --->  FFT  --->  Magnitude  --->  Waterfall
    ^                         ^                         ^
    |                         |                         |
  Samples                 Leakage                   dB scaling

How it works (step-by-step)

  1. Collect a block of IQ samples.
  2. Apply a window to reduce spectral leakage.
  3. Perform FFT to get frequency bins.
  4. Convert magnitude to dBFS.
  5. Average or smooth over time.

Minimal concrete example

import numpy as np

def spectrum(samples, fs, nfft=2048):
    # Window one block to reduce spectral leakage.
    window = np.hanning(nfft)
    x = samples[:nfft] * window
    X = np.fft.fftshift(np.fft.fft(x))
    # Normalize by the window's coherent gain so 20*log10 approximates dBFS;
    # without calibration the values are still relative, not absolute power.
    mag = 20*np.log10(np.abs(X) / np.sum(window) + 1e-12)
    freqs = np.fft.fftshift(np.fft.fftfreq(nfft, d=1/fs))
    return freqs, mag

Common misconceptions

  • “FFT shows exact power” -> Without calibration, it is relative, not absolute.
  • “Noise floor is constant” -> It changes with bandwidth, gain, and windowing.
  • “Larger FFT always better” -> It improves resolution but reduces update rate.

Check-your-understanding questions

  1. If you double FFT size, what happens to RBW?
  2. Why does filtering reduce noise power?
  3. What causes spectral leakage?

Check-your-understanding answers

  1. RBW halves (better frequency resolution).
  2. Noise power scales with bandwidth; narrower bandwidth reduces it.
  3. Finite observation window truncates the signal, spreading energy.

Real-world applications

  • Spectrum monitoring and interference hunting
  • Measuring channel occupancy and bandwidth
  • Detecting hidden digital subcarriers

How this fits into projects

Projects 1, 4, 5, and 7 depend on your ability to see signals clearly and measure their bandwidth. You will use these concepts to set FFT sizes, select filter widths, and detect bursts that would otherwise be invisible.

Where you’ll apply it

Projects 1-10 (especially 1, 4, 5, 7, 10).

References

  • “Understanding Digital Signal Processing” (Lyons)
  • “Digital Signal Processing: A Practical Guide for Engineers and Scientists” (Smith)

Key insight: The FFT is your microscope. Learn its resolution limits and noise behavior or you will misinterpret what you see.

Summary

You now understand how FFT-based spectrum analysis works, why noise floor shifts, and how to use RBW and windowing to interpret signals correctly.

Homework / Exercises

  1. Plot spectra with FFT sizes 1024, 4096, 16384 and compare RBW.
  2. Capture a signal and compare spectra with rectangular vs Hann windows.
  3. Measure noise floor change after filtering a 2.4 MHz capture to 12 kHz.

Solutions

  1. Larger FFT yields finer bin spacing but slower updates.
  2. Hann reduces leakage but broadens the main lobe.
  3. Noise floor should drop by about 23 dB (10*log10(2.4e6/12e3) = 10*log10(200)).

Chapter 3: Digital Filtering, Resampling, and Channelization

Fundamentals (read this first)

Filtering is how you isolate a signal. In SDR, you rarely process the full wideband capture directly. Instead, you filter to extract the narrow channel of interest. A low-pass filter keeps frequencies near DC; a band-pass keeps a specific band; a high-pass removes DC offsets and slow drifts.

Resampling and decimation reduce sample rate after filtering. You never downsample without filtering; if you do, higher-frequency components alias into your band. The proper workflow is: shift the signal to baseband, low-pass filter, then decimate. This is the backbone of almost every SDR pipeline.

Channelization applies a bank of filters to split a wide capture into several channels. The concept is the same whether you build a two-channel ADS-B receiver or a 64-channel wideband spectrum monitor. Once you understand how to build one channel, you can scale it.

Filters come in two main types: FIR and IIR. FIR filters are stable and can be designed with linear phase, which preserves waveform shape. IIR filters are more computationally efficient but can introduce phase distortion. For most SDR decoders, FIR filters are preferred unless you have tight performance constraints.

Filter specifications are the language of practical SDR: passband, stopband, transition width, ripple, and attenuation. A wide transition band means fewer taps and lower CPU usage, but it also means you may leak adjacent channels or noise. A narrow transition band means more taps and higher CPU load, but better isolation. The design choice depends on the signal. For ADS-B, you can be forgiving; for RDS or GPS, you need tighter filtering to isolate weak components.

Group delay is another key concept. Linear-phase FIR filters delay the signal by half the filter length. In audio, that is fine. In bursty digital systems, excessive delay can misalign bursts with your detection windows. This is not usually fatal, but it affects how you design preamble detection and buffering.

Finally, filters are not only about isolating channels. They are also about shaping signals. For example, matched filters maximize SNR for digital symbols. Raised-cosine filters shape spectra to reduce intersymbol interference. Even if you do not implement these from scratch, you should recognize when a protocol expects them.

In practical SDR work, filtering is often the first place you gain or lose performance. If your filter is too wide, you admit noise and adjacent channels; if it is too narrow, you distort the signal. This trade-off shows up immediately in audio quality and in digital error rates. The best way to build intuition is to design a few filters with different transition widths and listen to or decode the results. The differences are not subtle once you know what to listen for.

Another practical concept is streaming vs block filtering. Many DSP libraries implement filters as block operations, but SDR pipelines are streaming: you process thousands of small chunks continuously. If you reset filter state at each block, you introduce artifacts. You must maintain filter state between blocks. This is a common source of bugs in beginner SDR code and one of the fastest ways to create audible clicks or bit errors.

Deep Dive into the Concept

A digital filter is defined by its impulse response h[n]. Filtering is convolution: y[n] = x[n] * h[n]. Convolution in time is multiplication in frequency. This is why filter design can be done by shaping a frequency response and then computing its impulse response. For example, a low-pass filter can be designed by truncating a sinc function and applying a window. The resulting FIR filter has a predictable cutoff and stopband attenuation.
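
The truncated-sinc recipe described above can be sketched directly. This is a minimal example using a Hamming window; the function name is an illustrative choice:

```python
import numpy as np

def windowed_sinc_lowpass(cutoff, fs, ntaps=101):
    """Design a low-pass FIR by truncating the ideal sinc impulse response
    and applying a Hamming window. cutoff in Hz; ntaps should be odd."""
    n = np.arange(ntaps) - (ntaps - 1) / 2
    fc = cutoff / fs                      # normalized cutoff (cycles/sample)
    h = 2 * fc * np.sinc(2 * fc * n)      # ideal low-pass impulse response
    h *= np.hamming(ntaps)                # window to control sidelobe level
    return h / np.sum(h)                  # normalize for unity gain at DC
```

More taps sharpen the transition; a different window (Blackman, Kaiser) trades main-lobe width against stopband attenuation, exactly as in the spectrum-analysis chapter.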

Decimation reduces the sample rate by an integer factor M. But if you decimate without filtering, anything above Fs/(2M) will alias into the new band. That is why every decimator is preceded by a low-pass filter with cutoff at or below the new Nyquist frequency. A multistage decimator performs decimation in steps (e.g., 2x then 3x) to reduce computational cost. This is extremely common in SDR.

Resampling by non-integer factors requires interpolation and filtering. For example, converting 2.4 MSPS to 48 kSPS is a ratio of 50:1. You can do this in stages (e.g., 2.4M -> 240k -> 48k). Each stage requires filtering. Polyphase filters and rational resamplers are standard tools for these operations.
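
The staged 2.4 MSPS to 48 kSPS example can be written with scipy's polyphase resampler, which applies an anti-alias FIR at each stage for you; the function name is an illustrative choice:

```python
from scipy.signal import resample_poly

def resample_2400k_to_48k(x):
    # Stage 1: 2.4 MSPS -> 240 kSPS (decimate by 10 with built-in
    # polyphase anti-alias filtering).
    stage1 = resample_poly(x, up=1, down=10)
    # Stage 2: 240 kSPS -> 48 kSPS (decimate by 5).
    return resample_poly(stage1, up=1, down=5)
```

Splitting the 50:1 change into 10x then 5x keeps each stage's filter short, which is why multistage designs are cheaper than a single 50:1 decimator.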

CIC filters (cascaded integrator-comb) are efficient for large integer decimation factors. They are used in hardware DDCs but have poor passband flatness, so they are often followed by a compensation filter. Understanding these structures helps you interpret why some SDR hardware outputs have slight amplitude droop across the band.

Channelization can be done with a bank of filters, or more efficiently with a polyphase FFT filter bank. For learning, start with a single channel pipeline. Once it works, you can replicate it or explore polyphase techniques. The conceptual model remains the same: shift, filter, decimate.

Filter design involves trade-offs: transition width, stopband attenuation, and computational cost. A filter with a sharp transition requires more taps (longer impulse response). Longer filters provide better selectivity but cost more CPU. Practical SDR requires you to balance these factors. For FM broadcast (200 kHz wide), you can use a relatively short filter. For RDS (1.2 kbps), you can afford a longer filter because the sample rate is lower.

Design techniques matter. Windowed-sinc FIR design is simple and often good enough for SDR, but it yields a fixed compromise between passband ripple and stopband attenuation. Parks-McClellan (equiripple) design can achieve sharper transitions for a given number of taps. If you are CPU-bound, equiripple filters can be a practical win. In learning projects, windowed-sinc is easier to implement and reason about, which is why many SDR tutorials use it.

Rational resampling can be implemented by upsampling (inserting zeros), filtering, and downsampling. But doing this naively is expensive. Polyphase filtering reorganizes computation so you only compute the samples you need. Libraries like scipy.signal.resample_poly do this for you, but understanding the concept helps you debug resampling artifacts (e.g., scalloped frequency responses or strange amplitude droop).

Channelization at scale uses polyphase filter banks (PFB) combined with FFTs. A PFB splits a wideband signal into many subbands with good isolation. While this guide focuses on single-channel pipelines, the mental model transfers directly to wideband SDR systems such as spectrum monitors or trunked radio receivers.

Finally, always verify your filters. Plot the frequency response, measure passband ripple, and check stopband attenuation. If you are seeing aliasing or adjacent-channel interference, your filter is likely too short or your decimation factor is too aggressive. Debugging filters is not glamorous, but it is often the difference between a working decoder and a frustrating failure.
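
Verification takes only a few lines with scipy. This sketch designs a filter, then measures what the design actually achieved; the band edges are arbitrary example values:

```python
import numpy as np
from scipy.signal import firwin, freqz

fs = 48000.0
taps = firwin(129, 3000.0, fs=fs)          # 3 kHz low-pass, 129 taps
w, H = freqz(taps, worN=4096, fs=fs)       # frequency response, w in Hz
mag_db = 20 * np.log10(np.abs(H) + 1e-12)

# Measure what the design actually delivered:
passband = mag_db[w < 2000]                # well inside the passband
stopband = mag_db[w > 6000]                # well inside the stopband
ripple_db = passband.max() - passband.min()
atten_db = -stopband.max()
```

If the measured attenuation falls short of what your decimation factor requires, increase the tap count or widen the transition band before blaming the demodulator.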

Resampling also introduces fractional delay issues. If you resample to an odd rate, the exact placement of symbol boundaries can shift slightly. This can matter for burst protocols and for correlation-based systems like GPS. Many DSP libraries include fractional delay filters to align timing; if your symbols look smeared after resampling, consider adding a small fractional delay correction or choosing a resampling ratio that aligns better with the symbol rate.

Finally, remember that filters can distort phase. For demodulation tasks where phase matters (PSK, QAM), linear phase is important. For amplitude-only tasks (ADS-B magnitude), phase is less critical. Your filter choice should reflect the signal type.

For very long filters, direct convolution becomes expensive. A common optimization is FFT-based convolution (overlap-save or overlap-add). Instead of convolving in time, you multiply in the frequency domain, which can be much faster for large filters. SDR frameworks like GNU Radio often use FFT-based filtering for wideband channelization. You do not need to implement this from scratch for these projects, but understanding it helps you reason about latency and CPU usage in real systems.

Another subtle issue is filter startup and transients. When you begin filtering a stream, the filter has no history, so the first few outputs are less accurate. In burst systems, this can overlap with the preamble and cause missed detections. A practical fix is to discard a short warm-up period or pre-fill the filter state with zeros or repeated samples. If you see occasional errors at the start of bursts, this is often the cause.

CIC filters deserve extra attention because many SDRs use them internally. A CIC filter is extremely efficient, but it has significant passband droop. If you rely on a hardware decimator, your signal amplitude may be lower at the band edges than at the center. This can distort wideband FM audio or reduce RDS subcarrier strength. A simple compensation FIR can restore a flatter passband, which improves decoding consistency across the channel.

Decimation factors are constrained by your target symbol rate. If you plan to decode a 1.2 kbps signal, you should choose a sample rate that yields a small integer number of samples per symbol (e.g., 8 or 16). This simplifies timing recovery. If you choose an awkward sample rate, you will be forced to use fractional resampling or more complex timing recovery. Planning the sample rate early saves you a lot of downstream complexity.
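
A quick planning check makes this concrete (a hypothetical helper, not from any library):

```python
def samples_per_symbol(fs, baud):
    # An integer samples/symbol keeps timing recovery simple.
    sps = fs / baud
    return sps, sps == int(sps)
```

For example, 19200 Hz at 1200 baud gives a clean 16 samples per symbol, while 22050 Hz does not divide evenly and would force fractional resampling.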

Definitions & key terms

  • FIR filter: Finite impulse response, typically linear phase.
  • IIR filter: Infinite impulse response, efficient but nonlinear phase.
  • Decimation: Reducing sample rate after filtering.
  • Interpolation: Increasing sample rate by inserting samples.
  • Polyphase: Efficient multirate filtering technique.

Mental model diagram

Wideband IQ
    |
    v
[Shift to baseband] -> [Low-pass filter] -> [Decimate] -> [Narrowband channel]

How it works (step-by-step)

  1. Multiply by a complex exponential to shift the target channel to DC.
  2. Apply a low-pass FIR filter with cutoff at desired bandwidth.
  3. Decimate by an integer factor.
  4. Optionally resample to a convenient rate (e.g., 48 kHz).

Minimal concrete example

import numpy as np
from scipy.signal import firwin, lfilter

def channelize(samples, fs, f_center, bw, decim):
    # Mix the target channel down to DC.
    t = np.arange(len(samples)) / fs
    mixed = samples * np.exp(-1j*2*np.pi*f_center*t)
    # Low-pass FIR; bw is the one-sided cutoff in Hz, normalized to Nyquist.
    # Choose decim so that fs/decim >= 2*bw, or the decimation will alias.
    taps = firwin(129, bw/(fs/2))
    filtered = lfilter(taps, 1.0, mixed)
    # Keep every decim-th sample; the filter above is the anti-alias guard.
    return filtered[::decim]

Common misconceptions

  • “I can downsample without filtering” -> No, aliasing will destroy the channel.
  • “IIR is always better” -> IIR saves CPU but can distort phase.
  • “Filters are one-size-fits-all” -> Filter parameters depend on signal bandwidth.

Check-your-understanding questions

  1. Why must you filter before decimation?
  2. What does a longer FIR filter give you?
  3. Why do linear-phase filters matter for PSK?

Check-your-understanding answers

  1. To prevent aliasing of higher frequencies into the new band.
  2. Sharper transition bands and better stopband attenuation.
  3. PSK relies on phase accuracy; nonlinear phase distorts symbols.

Real-world applications

  • Extracting RDS subcarrier from FM baseband
  • Narrowband AIS and POCSAG decoding
  • GPS channel extraction before correlation

How this fits into projects

Filtering and resampling are the glue between raw IQ and usable channels. Every project after the waterfall uses these steps to isolate a signal, reduce noise, and match the sample rate to the demodulator.

Where you’ll apply it

Projects 2-10 (especially 5, 6, 8, 10).

References

  • “Understanding Digital Signal Processing” (Lyons)
  • “Digital Signal Processing: A Practical Guide for Engineers and Scientists” (Smith)

Key insight: Filtering + decimation is the heart of SDR. It turns messy wideband captures into clean channels you can demodulate.

Summary

You now understand how to design and apply filters, how to decimate safely, and why multirate DSP is central to SDR pipelines.

Homework / Exercises

  1. Design a 15 kHz low-pass FIR filter and apply it to an FM audio stream.
  2. Compare CPU usage between a 129-tap and a 513-tap FIR filter.
  3. Implement a two-stage decimator (2x then 5x) and compare output quality.

Solutions

  1. The filtered audio should be clear without hiss above 15 kHz.
  2. Longer FIR filters cost more CPU but reduce aliasing.
  3. Two-stage decimation reduces CPU cost and yields similar quality.

Chapter 4: Modulation and Demodulation (Analog and Digital)

Fundamentals (read this first)

Modulation is how information rides on a carrier. Analog modulation (AM, FM, PM) varies amplitude or frequency with the message. Digital modulation (FSK, PSK, QAM, GMSK) maps symbols onto amplitude/phase changes. SDR lets you decode many modulations because the basic building blocks are similar: filtering, phase extraction, and symbol slicing.

AM is simple: the envelope of the signal is the message. FM encodes the message in the rate of phase rotation; demodulation is essentially differentiation of phase. PM encodes the message directly in phase. Digital schemes like FSK and PSK are discrete versions of these ideas. GMSK is a continuous-phase FSK with Gaussian filtering, used in GSM and AIS because it has good spectral efficiency and constant envelope.

Understanding modulation types is essential because each protocol uses a specific one. ADS-B uses pulse-position modulation; RDS uses BPSK on a subcarrier; AIS uses GMSK; POCSAG uses 2-FSK; GSM uses GMSK for control channels and higher-order PSK for some data modes; GPS uses BPSK with spread spectrum codes.

Bandwidth is not arbitrary; it is tied to modulation. FM bandwidth depends on deviation and message bandwidth (Carson’s rule). FSK bandwidth depends on tone spacing and symbol rate. PSK bandwidth depends on symbol rate and pulse shaping. If you do not choose filters based on modulation, you will either cut off useful signal energy or admit too much noise. The result is poor decode performance even if your demodulator is correct.

Another important idea is the constellation. Digital modulation maps symbols to points in the IQ plane. BPSK uses two points, QPSK uses four, QAM uses grids. The constellation tells you what noise does to decisions: points closer together require higher SNR. When you look at a constellation plot, you are literally seeing the quality of your synchronization and noise environment.

Finally, note that modulation is often layered. FM broadcast carries audio, but it also carries a stereo pilot at 19 kHz and RDS at 57 kHz. GSM uses GMSK at the physical layer but then applies coding and framing. A real SDR receiver must treat modulation as one layer in a stack, not the entire system.

It is also useful to connect modulation to spectral efficiency. Higher-order modulations like QAM carry more bits per symbol but require higher SNR and more linear amplification. Lower-order modulations (BPSK, GMSK) are more robust and tolerate nonlinear amplifiers. This is why safety-critical systems (navigation, control channels) often choose robust modulations, while high-throughput systems choose dense constellations. Understanding this trade-off helps you predict what you can decode with a low-cost SDR in real RF conditions.

Modulation also determines what filtering you need. A pulse-shaped digital signal spreads energy into sidelobes; if you filter too tightly you introduce intersymbol interference. If you filter too loosely you admit extra noise. This is why many standards specify spectral masks and pulse shaping filters. Even if you do not implement the exact mask, you should respect the expected bandwidth when you design your filters.

Deep Dive into the Concept

AM: In amplitude modulation, the transmitted signal is s(t) = (1 + m(t)) * cos(2*pi*f_c*t). The message m(t) changes the amplitude of the carrier. Demodulation is envelope detection. In SDR, you compute the magnitude of the IQ samples: sqrt(I^2 + Q^2). That yields the envelope. You then remove DC and low-pass filter to get audio. AM is sensitive to noise because noise directly perturbs amplitude.
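
The AM pipeline described above fits in a few lines; this is a sketch, and the default audio bandwidth is an arbitrary choice:

```python
import numpy as np
from scipy.signal import firwin, lfilter

def am_demod(iq, fs, audio_bw=5000.0):
    """Envelope detection: magnitude, DC removal, then low-pass to audio."""
    envelope = np.abs(iq)                 # sqrt(I^2 + Q^2)
    envelope = envelope - np.mean(envelope)  # remove the carrier's DC offset
    taps = firwin(101, audio_bw, fs=fs)   # keep only the audio band
    return lfilter(taps, 1.0, envelope)
```
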

FM: In frequency modulation, the instantaneous frequency is f_c + k_f * m(t). The phase is the integral of frequency. Demodulation in SDR is commonly done using the conjugate product method: y[n] = angle(x[n] * conj(x[n-1])). This gives an estimate of instantaneous frequency. FM is more robust to amplitude noise but sensitive to frequency errors and requires de-emphasis filtering.

FSK: Frequency-shift keying uses discrete frequency shifts to represent bits. A simple 2-FSK uses two tones. You can demodulate it by frequency discrimination (FM demod) followed by slicing. More robust approaches use matched filters for each tone or go to baseband and use a PLL. POCSAG uses 2-FSK with multiple baud rates; symbol timing recovery is essential.

PSK: Phase-shift keying encodes bits as discrete phase changes. BPSK uses two phases (0 and pi). QPSK uses four. Demodulation requires carrier recovery because phase must be referenced. This is where Costas loops and PLLs come in. RDS uses BPSK on a subcarrier, and GPS uses BPSK on a spread-spectrum code.

QAM: Quadrature amplitude modulation encodes bits on both amplitude and phase. It requires high SNR and linear amplification. While not used in the core projects here, understanding QAM helps you interpret modern digital radio systems.

GMSK: Gaussian Minimum Shift Keying is continuous-phase FSK with a Gaussian filter. It has a constant envelope and compact spectrum. GSM and AIS use GMSK because it is efficient and tolerant of nonlinear amplifiers. Demodulation can be done by FM discrimination and symbol slicing, but better performance uses matched filtering and maximum-likelihood sequence estimation (MLSE). For learning, start with FM demod + timing recovery.

Pulse Position Modulation (PPM): ADS-B uses pulses at fixed positions within a microsecond grid. It does not care about phase, only energy timing. This allows magnitude-only detection. PPM is extremely sensitive to timing and sampling rate, which is why ADS-B decoders require high sample rates and careful preamble detection.

When you design a demodulator, always ask: is the information in amplitude, phase, or frequency? Then build a pipeline that extracts that component. The rest is noise handling and synchronization.

Analog demodulation benefits from simple but correct details. For AM, a low-pass filter after envelope detection removes residual carrier ripple and high-frequency noise. For FM broadcast, de-emphasis is mandatory to undo pre-emphasis at the transmitter. In the US the time constant is 75 us; in many other regions it is 50 us. This is why FM audio sounds “tinny” without de-emphasis.
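
De-emphasis is commonly implemented as a single-pole IIR approximating the analog RC network. This is a minimal sketch, assuming an exponential (RC-matched) discretization:

```python
import numpy as np
from scipy.signal import lfilter

def deemphasis(audio, fs, tau=75e-6):
    """Single-pole de-emphasis: tau = 75e-6 (US) or 50e-6 (most other regions)."""
    # Discretized RC low-pass: y[n] = a*x[n] + (1-a)*y[n-1]
    a = 1.0 - np.exp(-1.0 / (fs * tau))
    return lfilter([a], [1.0, -(1.0 - a)], audio)
```

The filter has unity gain at DC and rolls off above roughly 1/(2*pi*tau), about 2.1 kHz for 75 us, which is what tames the treble boost applied at the transmitter.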

Digital demodulation often starts with a matched filter. For PSK and QAM, a root-raised-cosine (RRC) filter maximizes SNR and minimizes intersymbol interference. For FSK, matched filtering to each tone can improve performance, but a simple discriminator is often good enough if the SNR is high. In practice, SDR beginners frequently skip matched filtering and still decode strong signals, but weak signals require proper filtering and timing recovery.

Symbol decision is not just thresholding; it is statistical classification. A good slicer chooses the symbol that minimizes distance in the IQ plane. Soft decisions (probability estimates) improve FEC decoding, especially for convolutional codes. GSM decoders often use soft bits for Viterbi, which is why amplitude normalization and noise estimation matter.

Finally, beware of non-idealities. If your signal has frequency offset or phase noise, the constellation rotates or smears. If your timing recovery is off, symbols smear along the time axis. These effects are visible in constellation plots and eye diagrams. Learning to read these visuals is a high-leverage skill for SDR debugging.

There are also modulation-specific details that are easy to miss. FM broadcast uses a multiplexed baseband: mono audio (0-15 kHz), stereo difference around a 38 kHz subcarrier, a 19 kHz pilot, and RDS at 57 kHz. If you ignore that structure, you can still decode mono audio but you will miss metadata and stereo. Likewise, GPS BPSK is spread spectrum; you cannot detect it with a simple FFT because the energy is spread across a wide band. You must correlate with the PRN code to recover it. These are examples where modulation knowledge directly determines the algorithm you choose.

When you implement demodulators, verify them with synthetic signals. Generate known AM, FM, FSK, and BPSK waveforms in software, add noise, and then test your demod chain. This isolates algorithm bugs from RF capture issues. Once your demod works on synthetic data, you can debug real RF issues (gain, offset, filtering) with confidence.

Bandwidth prediction is also practical. Carson’s rule estimates FM bandwidth as 2*(deviation + message bandwidth). For FM broadcast, that gives roughly 180 kHz, which explains the common 200 kHz channel width. For FSK, bandwidth depends on deviation and symbol rate; larger deviation gives better noise immunity but consumes more spectrum. These relationships help you choose filter cutoffs and decide whether a signal is likely to fit within your captured bandwidth.
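
Carson's rule is simple enough to write down directly; the test values below are the standard FM broadcast numbers (75 kHz deviation, 15 kHz audio) used in the estimate above:

```python
def carson_bandwidth(deviation_hz, message_bw_hz):
    # Carson's rule: occupied bandwidth ~ 2 * (deviation + message bandwidth)
    return 2.0 * (deviation_hz + message_bw_hz)
```
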

Differential encoding is another detail that appears often. In systems like RDS, the absolute phase can flip without warning, so the transmitter encodes information as phase differences between symbols. The receiver must undo this with differential decoding. If you ignore this step, your bitstream will be inverted randomly even if the Costas loop is perfect. This is a common failure mode for first-time RDS decoders.
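
The receive-side step is a one-liner, and it makes the reason for differential encoding concrete: a wholesale phase flip inverts every bit but leaves the differences unchanged (function name is an illustrative choice):

```python
import numpy as np

def diff_decode(bits):
    # Data is the XOR of consecutive received bits, so inverting every
    # received bit (a BPSK phase flip) leaves the output unchanged.
    bits = np.asarray(bits, dtype=np.uint8)
    return bits[1:] ^ bits[:-1]
```
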

Symbol timing interacts with modulation quality. If you sample too early or too late, you introduce intersymbol interference even with perfect filtering. This is why eye diagrams are so useful: they show whether your sampling phase is centered in the eye opening. A clean eye means your demodulator has a chance; a closed eye means decoding will fail regardless of CRC or FEC.

Definitions & key terms

  • AM: Amplitude modulation; envelope carries information.
  • FM: Frequency modulation; phase rotation speed carries information.
  • FSK: Frequency-shift keying; symbols are discrete frequencies.
  • PSK: Phase-shift keying; symbols are discrete phases.
  • GMSK: Gaussian-filtered MSK; constant envelope.
  • PPM: Pulse-position modulation; timing carries information.

Mental model diagram

Modulated IQ -> Demodulator -> Baseband symbols -> Slicer -> Bits
       |              |                 |            |
       v              v                 v            v
    amplitude       phase           timing      bit decisions

How it works (step-by-step)

  1. Filter the signal to the channel bandwidth.
  2. Apply the appropriate demodulation (envelope, phase, frequency).
  3. Normalize and remove DC.
  4. Recover symbol timing if digital.
  5. Slice symbols into bits.

Minimal concrete example

# FM demod via conjugate product
import numpy as np

def fm_demod(x):
    # Phase difference between consecutive samples estimates instantaneous
    # frequency in radians per sample; scale by fs/(2*pi) to get Hz.
    return np.angle(x[1:] * np.conj(x[:-1]))

Common misconceptions

  • “AM demod is just absolute value” -> You still need filtering and DC removal.
  • “FM demod is always atan2” -> Conjugate product is cheaper and effective.
  • “Digital modulation is just thresholding” -> You must recover timing and phase.

Check-your-understanding questions

  1. Why does FM demodulation require phase differentiation?
  2. Why is GMSK preferred over plain FSK in GSM?
  3. Why does ADS-B ignore IQ phase?

Check-your-understanding answers

  1. FM encodes the message in instantaneous frequency, which is the derivative of phase.
  2. GMSK is spectrally compact and constant-envelope, good for power amplifiers.
  3. ADS-B uses pulse timing; phase is not informative.

Real-world applications

  • AM and FM broadcast receivers
  • Digital paging and maritime tracking
  • Cellular control channel decoding

How this fits into projects

This chapter maps directly to Projects 2-9. Each project uses a specific modulation, and your demodulator choice (envelope, discriminator, Costas loop, slicer) determines whether you can recover bits or audio reliably.

Where you’ll apply it

Projects 2-9.

References

  • “Understanding Digital Signal Processing” (Lyons)
  • “Digital Communications” (Proakis)
  • “Software-Defined Radio for Engineers” (Collins et al.)

Key insight: Demodulation is just extracting the correct signal dimension (amplitude, phase, or frequency), then stabilizing it with synchronization.

Summary

This chapter connected modulation types to practical SDR demodulators. You now know which algorithms are needed for each project.

Homework / Exercises

  1. Implement AM demod on a synthetic AM signal and plot the envelope.
  2. Implement FM demod on a synthetic FM signal and compare with input.
  3. Generate BPSK symbols and plot them on the IQ plane.

Solutions

  1. Envelope should match the modulating signal after low-pass filtering.
  2. FM demod output should correlate with the original message.
  3. BPSK symbols should appear at two opposite points on the IQ plane.

Chapter 5: Synchronization, Tracking, and Estimation

Fundamentals (read this first)

Synchronization is the difference between a noisy waveform and correct bits. A receiver must recover when symbols occur (timing), where the carrier is (frequency), and what the phase reference is (phase). Without these, even perfect demodulation fails. Synchronization is the most common reason SDR projects fail.

Carrier frequency offset (CFO) arises because the transmitter and receiver oscillators differ. A few parts-per-million error can create kilohertz offsets at RF. Timing recovery deals with the fact that your sampling clock does not align with the symbol boundaries. Phase recovery deals with arbitrary phase rotations introduced by the channel or oscillator.

Synchronization is often implemented with loops: phase-locked loops (PLL), delay-locked loops (DLL), and Costas loops. These are feedback systems that estimate and correct errors over time. Understanding their bandwidth and stability is essential for reliable decoding.

Oscillator stability sets the baseline difficulty. A cheap SDR with a 20 PPM oscillator can drift by kilohertz at RF, which is enough to break narrowband digital decoding. Temperature changes and USB power fluctuations can shift frequency during a capture. This is why many SDR workflows include a “PPM correction” step. For high-precision work, you may need a TCXO or an external 10 MHz reference.

Channel variation matters too. Signals can fade or multipath can distort phase and amplitude. In mobile or satellite contexts, Doppler can change rapidly. Synchronization loops must be designed to track these changes without chasing noise. The result is a delicate balance: too slow and you lose lock, too fast and you become unstable. This is the core engineering challenge of synchronization.

Sampling strategy is part of synchronization. If you sample at 2 or 4 samples per symbol, timing recovery has little margin. If you sample at 8 or 16 samples per symbol, timing recovery is easier but CPU cost rises. Many SDR designs choose a moderate oversampling factor so that timing recovery algorithms have enough resolution to adjust without excessive computation.

Think of synchronization as a three-part problem: frequency, phase, and timing. Frequency offset causes the constellation to rotate. Phase offset shifts the decision boundaries. Timing offset makes samples land between symbols, reducing eye opening. A receiver that is “almost locked” may appear to work on strong signals and fail on weak ones, which is why sync issues are often intermittent and frustrating.

Visualization helps. The eye diagram is a simple plot of multiple symbol intervals overlaid. A wide open eye means timing is correct and noise is low. A closed eye indicates timing or filtering problems. Constellation plots show phase and amplitude errors directly. When you are stuck, plot one of these; they reveal problems faster than raw IQ plots.

Finally, not all synchronization is continuous. Some systems transmit bursts (ADS-B, GSM). You often do a coarse acquisition (find the burst timing and CFO) and then do a short tracking loop across the burst. That is a different problem than continuous broadcast signals like FM or RDS, where a long-running PLL makes sense.

Deep Dive into the Concept

Carrier recovery: For PSK/QAM, you must align your receiver’s phase reference to the transmitter. A Costas loop is a specialized PLL that works for suppressed-carrier signals like BPSK and QPSK. It produces an error signal based on the product of I and Q components and adjusts the NCO to lock phase. If the loop bandwidth is too narrow, it will not track drift; if too wide, it will track noise.
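
A minimal BPSK Costas loop fits in a short function. This is a learning sketch, not production code: the gain formulas follow a common second-order loop approximation, and loop_bw is a normalized (per-sample) bandwidth:

```python
import numpy as np

def costas_bpsk(x, loop_bw=0.02):
    """Track residual carrier phase on BPSK by driving the I*Q product
    (the BPSK phase-error proxy) to zero."""
    damping = np.sqrt(2) / 2
    denom = 1 + 2*damping*loop_bw + loop_bw**2
    alpha = 4*damping*loop_bw / denom      # proportional (phase) gain
    beta = 4*loop_bw**2 / denom            # integrator (frequency) gain
    phase, freq = 0.0, 0.0
    out = np.empty_like(x)
    for n, s in enumerate(x):
        y = s * np.exp(-1j*phase)          # de-rotate by current estimate
        out[n] = y
        err = np.real(y) * np.imag(y)      # BPSK phase detector: 0.5*sin(2*phi)
        freq += beta * err                 # integrator tracks frequency offset
        phase += freq + alpha * err
    return out
```

After lock, the constellation collapses onto the real axis; a widening imaginary component is your cue that the loop bandwidth does not match the oscillator drift.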

Timing recovery: Symbol timing can be recovered using early-late gate, Gardner, or Mueller and Muller algorithms. These methods look at samples around the expected symbol boundary and adjust timing to maximize eye opening. In practice, you can start with a simple approach: oversample and use a peak-picking or zero-crossing method. For robust decoders (RDS, AIS), proper timing recovery is essential.

AGC (Automatic Gain Control): Gain must be stable for slicing. AGC measures average amplitude and applies a scaling factor. Too fast AGC can distort signal dynamics; too slow AGC can fail to track fading. In SDR decoders, a simple RMS-based gain normalization often works.
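A minimal RMS-tracking AGC looks like this; the one-pole power average and the particular `alpha` and `target_rms` values are illustrative choices, not from any standard.

```python
import numpy as np

def agc(samples, target_rms=1.0, alpha=0.01):
    """Simple feedback AGC: track signal power with a one-pole
    average and scale towards target_rms.

    alpha sets reaction speed: too fast distorts the modulation,
    too slow fails to track fading.
    """
    power = 1.0
    out = np.empty_like(samples)
    for n, x in enumerate(samples):
        power = (1 - alpha) * power + alpha * abs(x) ** 2
        out[n] = x * (target_rms / np.sqrt(power + 1e-12))
    return out
```

Because the gain is a real scale factor, phase is preserved, which is what the slicer and any downstream PLL need.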

Doppler: For satellite signals (NOAA APT, GPS), Doppler shifts can be several kHz. This must be corrected, either by tracking frequency with a PLL or by applying a Doppler model based on orbital predictions. For GPS, Doppler is part of the acquisition search.

Burst synchronization: Some protocols (ADS-B, GSM) transmit bursts. You must detect preambles and align to them. This is often done with correlation against a known preamble. A detection threshold must balance missed detections against false positives. A practical receiver adapts thresholds based on noise estimates.
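A sketch of a correlation-based burst detector with a noise-adaptive threshold. The MAD-based noise estimate and the factor `k` are our assumptions for illustration, not taken from any particular decoder.

```python
import numpy as np

def detect_bursts(mag, preamble, k=4.0):
    """Correlate a magnitude stream against a known preamble and flag
    positions where the correlation clears k times a robust estimate
    of the correlation noise floor.

    Raising k trades false alarms for missed bursts; the median
    absolute deviation makes the noise estimate robust to the bursts
    themselves.
    """
    pre = preamble - np.mean(preamble)
    corr = np.correlate(mag - np.mean(mag), pre, mode="valid")
    noise = np.median(np.abs(corr)) / 0.6745   # MAD -> std for Gaussian noise
    return np.flatnonzero(corr > k * noise), corr
```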

Phase noise and jitter: Oscillator instability manifests as phase noise, which broadens signals and makes carrier recovery harder. It is most noticeable on narrowband digital signals. Better oscillators (TCXO, OCXO, GPSDO) reduce this effect.

A key mental model is that synchronization is estimation. You are constantly estimating frequency, phase, and timing based on noisy data. Robust SDR design is about making these estimators stable and reliable under real-world conditions.

PLL design is a control-systems problem. A loop has a natural frequency and damping factor. If the loop is too slow, it will not track drift; if too fast, it will track noise. In practice, you choose a loop bandwidth based on expected drift and SNR. For example, GPS tracking loops are narrow because signals are weak and stable; GSM loops can be wider because bursts are short and SNR is higher.

Timing recovery algorithms differ in assumptions. Early-late gate is simple but needs a known pulse shape. Gardner works well for symmetric pulses and does not require a carrier reference. Mueller and Muller is popular for BPSK and QPSK because it uses decision-directed error estimates. The right choice depends on modulation and SNR. You can start with oversampling and a simple peak detector, but for robust decoding you will eventually need a real timing recovery loop.

Acquisition and tracking are distinct. Acquisition searches a grid of frequency and timing hypotheses to find a coarse lock. Tracking refines that estimate over time. GPS is the classic example: acquisition searches Doppler bins and code phases; tracking uses DLL and PLL to maintain lock. Understanding this distinction helps you design efficient decoders that do not waste CPU after lock is achieved.

Lock detection is another practical issue. A PLL can appear to be locked when it is not, especially in noisy conditions. Many receivers use lock detectors that monitor error variance or check whether the loop has converged. In SDR projects, you can implement a simple lock detector by measuring the variance of the phase error over time. If it is small and stable, you are likely locked. This helps you avoid decoding garbage when sync is not actually achieved.
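A variance-based lock detector really can be this simple; the `window` and `max_var` values are tuning choices that must be adapted to each loop, not universal constants.

```python
import numpy as np

def pll_locked(phase_errors, window=200, max_var=0.05):
    """Crude lock detector: declare lock when the variance of the
    most recent phase errors is small. A slipping loop produces
    phase errors spread over the full +/- pi range, so its variance
    stays large."""
    recent = np.asarray(phase_errors)[-window:]
    return len(recent) == window and float(np.var(recent)) < max_var
```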

Coarse-to-fine strategies improve robustness. A coarse estimator might use an FFT peak to estimate CFO within a few hundred Hz. A fine estimator then uses a PLL to remove the remaining offset. Similarly, a rough timing estimate can be obtained by correlating against a known preamble, then refined with a timing recovery loop. This layered approach is common in professional receivers because it balances speed and accuracy.
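The coarse stage can be sketched for BPSK, where squaring the signal strips the +/-1 modulation and exposes a tone at twice the CFO; the function name is ours.

```python
import numpy as np

def coarse_cfo_bpsk(samples, fs):
    """Coarse CFO estimate for BPSK.

    Squaring removes the +/-1 modulation and leaves a tone at twice
    the carrier offset; an FFT peak locates it to within fs/N.
    A PLL then removes the remaining fine offset.
    """
    spec = np.abs(np.fft.fftshift(np.fft.fft(samples ** 2)))
    freqs = np.fft.fftshift(np.fft.fftfreq(len(samples), 1.0 / fs))
    return freqs[int(np.argmax(spec))] / 2.0
```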

Loop tuning is not guesswork. A standard approach is to choose a loop noise bandwidth based on expected dynamics, then compute loop filter coefficients for a chosen damping factor (often around 0.7). This gives you stable tracking without excessive overshoot. In SDR learning projects, you can use published formulas or existing libraries to compute these coefficients. The key is to treat the loop as a control system, not as a mystery box.
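One published formulation, common in discrete-time DSP texts, maps a normalized loop noise bandwidth and damping factor to proportional and integrator gains. Treat this sketch as a starting point and verify lock behavior on real signals; the function and parameter names are ours.

```python
def loop_gains(loop_bw, damping=0.707, detector_gain=1.0, nco_gain=1.0):
    """Proportional/integrator gains for a second-order digital loop.

    loop_bw is the loop noise bandwidth normalized to the sample
    rate (Bn * Ts). damping around 0.707 gives stable tracking
    without excessive overshoot.
    """
    theta = loop_bw / (damping + 1.0 / (4.0 * damping))
    denom = 1.0 + 2.0 * damping * theta + theta * theta
    kp = (4.0 * damping * theta / denom) / (detector_gain * nco_gain)
    ki = (4.0 * theta * theta / denom) / (detector_gain * nco_gain)
    return kp, ki
```

Widening the bandwidth raises both gains, which is the "tracks drift faster but also tracks noise" trade-off in numeric form.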

Remember, too, that synchronization and demodulation are tightly coupled. If your demodulator produces a noisy error signal, your PLL will struggle. If your PLL is unstable, your slicer will produce wrong bits. This is why debugging often alternates between improving demod quality (filtering, AGC) and improving loop stability. The two are inseparable in practice.

Training sequences and pilots are practical synchronization aids. GSM bursts include training sequences designed for correlation-based timing alignment. RDS uses the 19 kHz pilot in FM broadcasts as a phase reference. When a protocol provides a pilot, use it; it simplifies synchronization dramatically. When it does not, you rely on blind techniques (like Gardner timing recovery), which are more sensitive to noise and require better filtering.

Acquisition grid resolution matters. If your Doppler bin spacing is too coarse, you may miss a weak signal entirely; if it is too fine, acquisition becomes slow. The same trade-off exists for timing search. A common practice is to start coarse, then refine around the strongest candidates. This keeps CPU usage reasonable while still providing reliable lock.

Finally, remember that synchronization affects error correction. If your timing is off by a fraction of a symbol, your BER increases and your CRC or BCH starts failing. Many “CRC errors” are actually timing errors. This is why SDR debugging often starts by improving sync before touching the decoder logic.

Definitions & key terms

  • CFO: Carrier frequency offset.
  • PLL: Phase-locked loop for phase/frequency tracking.
  • DLL: Delay-locked loop for code timing.
  • Costas loop: PLL variant for suppressed-carrier signals.
  • AGC: Automatic gain control.

Mental model diagram

Received signal -> Estimator -> NCO/Clock correction -> Locked signal
         ^             |                 |
         |             v                 v
      Errors <------ Feedback <---- Timing/Phase adjust

How it works (step-by-step)

  1. Estimate frequency offset using FFT peak or phase difference.
  2. Mix by an NCO to correct CFO.
  3. Run timing recovery to align symbols.
  4. Run phase recovery (PLL/Costas) if needed.
  5. Normalize amplitude with AGC.

Minimal concrete example

# Simple CFO estimator using phase difference
import numpy as np

def estimate_cfo(samples, fs):
    # Mean sample-to-sample phase increment, scaled to Hz.
    # Valid only while the per-sample rotation stays inside +/- pi,
    # i.e. for |CFO| < fs/2.
    phase_diff = np.angle(samples[1:] * np.conj(samples[:-1]))
    return np.mean(phase_diff) * fs / (2 * np.pi)

Common misconceptions

  • “Sync is optional” -> Without sync, digital decoders fail.
  • “PLL always locks” -> Loop bandwidth and SNR determine lock.
  • “Timing recovery is just resampling” -> It is a feedback estimation problem.

Check-your-understanding questions

  1. Why does CFO break BPSK decoding?
  2. What happens if PLL bandwidth is too wide?
  3. Why is Doppler important for satellite signals?

Check-your-understanding answers

  1. BPSK symbols rotate in the IQ plane, causing bit errors.
  2. The PLL tracks noise and becomes unstable.
  3. Doppler shifts the signal outside the narrowband filter.

Real-world applications

  • GPS acquisition and tracking
  • RDS BPSK demodulation
  • GSM burst synchronization

How this fits on projects

Synchronization is the hidden backbone of Projects 4-10. Every time a decoder fails CRC or loses lock, the fix is usually here: better CFO correction, timing recovery, or loop tuning.

Where you’ll apply it

Projects 4-10 (especially 5, 7, 9, 10).

References

  • “Understanding Digital Signal Processing” (Lyons)
  • “Digital Communications” (Proakis)

Key insight: Synchronization is estimation under noise. Robust decoding depends more on sync than on demodulator math.

Summary

You now understand how carrier, timing, and gain synchronization work, and why they dominate real-world SDR reliability.

Homework / Exercises

  1. Implement a simple CFO estimator and apply it to a detuned FM signal.
  2. Implement a moving-average AGC and compare before/after amplitudes.
  3. Simulate BPSK with a 100 Hz offset and show decoding failure without PLL.

Solutions

  1. Correcting CFO should align the spectrum to center.
  2. AGC should normalize amplitude while preserving phase.
  3. Without PLL, constellation rotates; with PLL, it stabilizes.

Chapter 6: Framing, Error Detection/Correction, and Protocol Decoding

Fundamentals (read this first)

A demodulator gives you bits. But bits are meaningless without framing. Protocols define how bits are grouped into messages, how boundaries are detected, and how errors are detected or corrected. SDR decoders must implement these layers: preamble detection, sync words, bit stuffing, CRC checks, and forward error correction (FEC).

ADS-B uses a fixed preamble and 112-bit frames with CRC. AIS uses GMSK with NRZI encoding and HDLC-style framing. RDS uses group structures and parity checks. POCSAG uses a preamble and BCH error correction. GSM uses convolutional codes and interleaving. GPS uses spread-spectrum correlation and navigation data frames. These are all examples of the same principle: raw bits are structured and protected.

In practice, framing errors are the most common reason a decoder “almost works”. You might detect bits but fail to align them, or you might align but use the wrong bit order (MSB vs LSB). Many protocols also use scramblers or whitening to avoid long runs of zeros or ones. If you forget to de-scramble, the payload will look random even though your demodulator is correct. The simplest debugging strategy is to validate each stage: preamble detection, bit extraction, error check, then field parsing.

Another important distinction is between continuous and burst protocols. Continuous protocols (RDS, AIS) can use streaming decoders with sliding windows. Burst protocols (ADS-B, GSM) require detection of packet boundaries and rapid lock within a short time. This changes how you design buffers and detection thresholds. A burst decoder must be fast and must fail quickly when frames are invalid; otherwise it falls behind real-time data.

Protocols also define bit ordering and field packing. Some transmit most-significant bit first, others least-significant. Some fields are signed, some are unsigned, some are encoded (like Gray code). A correct decoder is as much about reading the spec carefully as it is about DSP. If your parsed values are off by powers of two or appear negative when they should not, you likely have a bit-order issue, not an RF issue.

Finally, many systems use layered framing. For example, a physical layer frame might contain a link layer packet that itself contains a message format. ADS-B frames contain type codes that change the interpretation of subsequent fields. AIS messages have different payload formats depending on message type. RDS has groups with different block meanings. Understanding these layers is what turns “bits” into actual information.

There is also a trade-off between detection sensitivity and false positives. If you accept any frame that resembles a preamble, you will decode garbage. If you require strict CRC and strong thresholds, you may miss weak signals. The art is to tune thresholds so that you get usable data without flooding yourself with errors. Many practical decoders implement a two-stage check: a loose preamble detector followed by a strict CRC or parity check.

Think of CRC and FEC as complementary. CRC is fast and catches most errors; FEC is slower but can correct them. When SNR is high, CRC alone is enough. When SNR is low, FEC is what saves the frame.

Deep Dive into the Concept

Preamble and sync: A preamble is a known pattern that lets the receiver align to a frame boundary. Detection is typically done by correlation. The threshold must be set based on noise statistics. Too low and you get false frames; too high and you miss real frames. Most SDR decoders start with preamble detection, then switch to a locked tracking mode once sync is achieved.

CRC: Cyclic redundancy checks detect errors by dividing the message by a polynomial and appending the remainder. At the receiver, if the remainder is non-zero, the message is invalid. CRC does not correct errors, it only detects them. It is a powerful filter against false detections in noisy SDR captures.

BCH codes: BCH codes can correct a small number of bit errors. POCSAG uses BCH(31,21) with an additional parity bit. That means each 32-bit codeword contains 21 information bits plus error correction. The receiver computes a syndrome and corrects up to two errors per codeword.
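The syndrome computation is ordinary polynomial long division over GF(2) and can be shown directly. This sketch uses the generator polynomial specified for POCSAG's BCH(31,21) code; error location and correction (and the separate even-parity bit) are omitted.

```python
POCSAG_GEN = 0b11101101001  # x^10+x^9+x^8+x^6+x^5+x^3+1, POCSAG BCH(31,21)

def bch3121_syndrome(word31):
    """Remainder of a 31-bit codeword divided by the generator.
    Zero means no detectable error."""
    reg = word31
    for i in range(30, 9, -1):          # long division, MSB first
        if reg & (1 << i):
            reg ^= POCSAG_GEN << (i - 10)
    return reg                          # 10-bit syndrome

def bch3121_encode(data21):
    """Systematic encoding: append the 10 check bits to 21 data bits."""
    shifted = data21 << 10
    return shifted | bch3121_syndrome(shifted)
```

Encoding then checking the syndrome of the result is a quick self-test before pointing the decoder at live bits.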

Convolutional codes and Viterbi: GSM uses convolutional coding with interleaving. Bits are encoded with memory, and the Viterbi algorithm finds the most likely original sequence. This is computationally heavier but provides strong error protection.

NRZI and bit stuffing: AIS uses NRZI encoding to ensure frequent transitions for clock recovery. HDLC framing uses bit stuffing: after five consecutive 1s, a 0 is inserted. The receiver must remove these stuffed bits to reconstruct the original data.
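Both operations are short enough to show directly. This sketch assumes AIS-style NRZI (a level transition encodes 0, no transition encodes 1) and standard HDLC stuffing after five consecutive ones; the function names are ours.

```python
def nrzi_decode(levels):
    """NRZI decode: transition -> 0, no transition -> 1.
    The first level has no predecessor, so the output is one bit
    shorter than the input."""
    return [1 if a == b else 0 for a, b in zip(levels, levels[1:])]

def hdlc_destuff(bits):
    """Remove the 0 the transmitter inserts after five consecutive 1s."""
    out, ones = [], 0
    for b in bits:
        if ones == 5 and b == 0:
            ones = 0          # stuffed bit: drop it, emit nothing
            continue
        out.append(b)
        ones = ones + 1 if b else 0
    return out
```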

Spread spectrum and correlation: GPS uses direct-sequence spread spectrum. Each satellite has a unique PRN code. The receiver correlates the incoming signal with a locally generated PRN to extract the signal and achieve processing gain. This allows signals below the noise floor to be detected.
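A toy demonstration of processing gain, using a random +/-1 sequence as a stand-in for a real C/A code and noise power well above the per-sample signal power:

```python
import numpy as np

# A +/-1 "PRN" buried about 9.5 dB below the noise floor per sample
# is still recoverable by correlating over its full length.
rng = np.random.default_rng(0)
code = rng.choice([-1.0, 1.0], size=1023)               # stand-in PRN code
delay = 137
rx = np.roll(code, delay) + rng.normal(0.0, 3.0, 1023)  # noisy, delayed copy

# Correlate against every circular shift of the local replica
corr = np.array([np.dot(rx, np.roll(code, d)) for d in range(1023)])
est_delay = int(np.argmax(np.abs(corr)))                # correlation peak
```

The aligned correlation sums 1023 coherent terms while the noise adds incoherently, which is the processing gain that lets GPS work below the noise floor.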

Protocol parsing: After you extract valid frames, you must parse bit fields. For ADS-B, you decode the ICAO address, type code, altitude, and position. For AIS, you decode MMSI and position. For RDS, you decode Program Service (PS) and RadioText (RT). For GPS, you decode navigation words. Parsing is not just about bytes; it is about understanding field sizes and endianness.

Another important layer is whitening and scrambling. Many protocols intentionally randomize bit patterns before transmission to avoid long runs of zeros or ones (which can break clock recovery). The receiver must apply the inverse operation. GSM uses burst structures and interleaving that effectively scramble bits across time. RDS uses differential encoding to avoid phase ambiguities. If you skip these steps, your decoded bits will look random even if your demodulator is correct.

Error correction is often iterative. In a practical SDR receiver, you might apply soft-decision decoding for convolutional codes, then verify with CRC. If the CRC fails, you might try a different timing phase or adjust your threshold. This is how professional receivers achieve high reliability in marginal conditions. For learning projects, you can keep it simpler, but you should still understand that decoding is probabilistic, not absolute.

CRC details matter. A CRC is defined by a polynomial; different protocols use different polynomials and initial values. A CRC-24 for ADS-B is not the same as a CRC-24 used elsewhere. If you use the wrong polynomial, you will reject valid frames. When you implement CRC, validate it against known example frames from public datasets before applying it to live RF data.

BCH decoding can be implemented with precomputed syndrome tables for small codes (like POCSAG). For larger codes, algorithms like Berlekamp-Massey and Chien search are used to find error locations. You do not need to implement these from scratch in the early projects, but understanding the flow helps you debug when correction fails: first compute syndrome, then locate errors, then correct bits, then verify parity.

Viterbi decoding relies on soft decision metrics, which means you pass a confidence value for each bit rather than a hard 0/1. The quality of these metrics depends on your noise estimate and scaling. If your soft decisions are too aggressive or too weak, Viterbi performs poorly. Many GSM decoders fail not because the Viterbi algorithm is wrong, but because the soft input scaling is wrong.

Sanity checks help you catch framing errors early. For example, ADS-B ICAO addresses should be hexadecimal and aircraft altitude should be within realistic bounds. AIS MMSI values have fixed digit lengths and message types that must match payload sizes. If decoded fields are nonsensical, the problem is usually bit alignment or deinterleaving, not RF quality. Build these sanity checks into your decoder; they provide immediate feedback when you are debugging.

When available, compare your decoded frames against public datasets or other decoders. Agreement across tools is a strong validation signal.

The key is to build a reliable pipeline: detect -> validate -> decode. Debugging is easiest if you validate each stage separately.

Error detection and correction are probabilistic. A 24-bit CRC detects almost all random error patterns, but not all. BCH codes can correct only a limited number of errors; beyond that, they fail silently or output wrong bits. Viterbi decoding of convolutional codes is more robust when fed soft decisions, but soft decisions require good noise variance estimates to weight the metrics correctly. In practice, if you see a sudden spike in CRC failures, it usually means your SNR dropped or your timing loop slipped, not that the CRC is wrong.

Interleaving and deinterleaving are often overlooked but critical. Interleaving spreads burst errors across time so that FEC can correct them. The receiver must reverse this process. A single off-by-one error in deinterleaving can make all frames fail CRC. When debugging, test deinterleaving using known test vectors before hooking it into live SDR data.
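A simple row/column block interleaver pair makes the off-by-one risk concrete; the rows-by-columns layout here is illustrative, not GSM's actual interleaving scheme.

```python
def interleave(bits, rows, cols):
    """Block interleaver: write row-wise, read column-wise, so a
    burst of adjacent channel errors lands in different rows."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    """Exact inverse of interleave(); an off-by-one here makes
    every frame fail CRC."""
    assert len(bits) == rows * cols
    return [bits[c * rows + r] for r in range(rows) for c in range(cols)]
```

Round-tripping known test vectors through this pair, before any RF is involved, is exactly the debugging step the text recommends.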

Protocol parsing is also a source of subtle bugs. Many fields are not byte-aligned and use bit-level packing. Endianness can invert meanings. A good practice is to write a bit extractor function and test it against known example frames. If you decode ADS-B altitude but it makes no sense, the problem is often bit alignment rather than RF quality.
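A sketch of such a bit extractor, MSB-first with optional two's-complement sign handling; the function names are ours.

```python
def bits_from_bytes(data):
    """Unpack bytes MSB-first into a flat list of bits."""
    return [(byte >> (7 - i)) & 1 for byte in data for i in range(8)]

def extract_field(bits, start, length, signed=False):
    """Pull an arbitrary (possibly non-byte-aligned) MSB-first field."""
    val = 0
    for b in bits[start:start + length]:
        val = (val << 1) | b
    if signed and bits[start] == 1:
        val -= 1 << length          # two's-complement sign extension
    return val
```

As a sanity check, the first 5 bits of an ADS-B extended squitter starting with byte 0x8D decode to DF = 17, the extended squitter downlink format.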

Definitions & key terms

  • Preamble: Known sequence marking frame start.
  • CRC: Error detection code.
  • BCH: Error-correcting code.
  • Interleaving: Spreads errors over time to make correction easier.
  • NRZI: Encoding that represents bits via transitions.

Mental model diagram

Bits -> Preamble detect -> Frame sync -> Error check -> Decode fields -> App data

How it works (step-by-step)

  1. Detect preamble or sync word via correlation.
  2. Extract a fixed number of bits for the frame.
  3. Apply de-whitening, de-stuffing, or decoding.
  4. Validate with CRC or BCH.
  5. Parse fields into human-readable values.

Minimal concrete example

# CRC-24 (ADS-B) example skeleton
POLY = 0xFFF409  # low 24 bits of the ADS-B generator polynomial

def crc24(msg_bits):
    reg = 0
    for b in msg_bits:
        reg = (reg << 1) | b
        if reg & 0x1000000:   # bit 24 set: reduce by the generator
            reg ^= POLY
        reg &= 0xFFFFFF       # masking clears bit 24 (the x^24 term)
    return reg

Common misconceptions

  • “CRC fixes errors” -> CRC detects errors but does not correct them.
  • “Once bits are correct, parsing is easy” -> Field alignment is subtle.
  • “FEC is optional” -> Without it, many weak signals are undecodable.

Check-your-understanding questions

  1. Why is preamble detection used before decoding?
  2. How does BCH differ from CRC?
  3. Why does GPS need correlation instead of simple detection?

Check-your-understanding answers

  1. It provides alignment to the frame boundary.
  2. BCH can correct a limited number of errors; CRC only detects.
  3. GPS signals are below the noise floor and require processing gain.

Real-world applications

  • Aircraft, ship, and satellite tracking
  • Digital broadcast metadata (RDS/RBDS)
  • Paging and emergency alerts

How this fits on projects

Projects 4-10 all end here. You can demodulate perfectly and still fail if your framing, CRC, or FEC is wrong. These concepts turn raw bits into trustworthy data.

Where you’ll apply it

Projects 4-10.

References

  • “Digital Communications” (Proakis)
  • “Software-Defined Radio for Engineers” (Collins et al.)

Key insight: Real SDR decoding succeeds or fails at the framing and error-control layer, not at the FFT.

Summary

You now understand how raw bits become structured messages, and how error control protects decoding.

Homework / Exercises

  1. Implement a simple preamble detector using correlation.
  2. Implement a BCH(31,21) syndrome calculator.
  3. Decode a known ADS-B frame from a hex string and verify CRC.

Solutions

  1. Correlate against the known preamble and threshold on magnitude.
  2. Compute syndromes with the BCH generator polynomial.
  3. The CRC should match the frame parity bits.

Glossary

  • ADC: Analog-to-digital converter; samples voltage into numbers.
  • AGC: Automatic gain control; stabilizes amplitude.
  • CFO: Carrier frequency offset between transmitter and receiver.
  • CIC: Efficient decimator used in hardware.
  • C/N0: Carrier-to-noise density, used in GNSS tracking loops.
  • DC offset: Constant bias in I/Q samples; shows as spike at 0 Hz.
  • DF: Downlink format in ADS-B frames.
  • dBFS: Decibels relative to ADC full scale.
  • DLL: Delay-locked loop for timing recovery.
  • Doppler: Frequency shift due to relative motion.
  • IF: Intermediate frequency after mixing.
  • GMSK: Gaussian Minimum Shift Keying; constant envelope.
  • IQ imbalance: Mismatch between I and Q gain/phase.
  • LO: Local oscillator used for mixing.
  • Matched filter: Filter that maximizes SNR for a known symbol shape.
  • NRZI: Non-return-to-zero inverted encoding.
  • Preamble: Known bit pattern used for frame alignment.
  • PPM: Pulse-position modulation; also parts-per-million (oscillator frequency error).
  • PRN: Pseudorandom noise code (GPS).
  • RBW: Resolution bandwidth in FFT.
  • SNR: Signal-to-noise ratio.
  • Symbol timing: The receiver’s estimate of symbol boundaries.

Why SDR Matters

SDR matters because it turns radio from fixed hardware into software-defined infrastructure. It enables rapid prototyping, spectrum monitoring, and decoding of public safety, navigation, and broadcast systems without custom RF hardware for each protocol.

Real-world impact (with sources and dates):

  • ADS-B Out is mandatory in U.S. controlled airspace as of January 1, 2020, making ADS-B a foundational aviation safety system. (14 CFR 91.225; FAA ADS-B Out)
  • AIS uses the VHF channels 161.975 MHz and 162.025 MHz, the global safety channels for maritime tracking and collision avoidance. (USCG Navigation Center)
  • RDS/RBDS uses a 57 kHz subcarrier with a 1,187.5 bps data stream, enabling station metadata from FM broadcasts with low-cost SDRs. (NRSC RBDS standard)
  • NOAA APT images scan at 120 lines per minute (2 lines per second) with a 2400 Hz subcarrier, so SDR can capture full weather images with modest equipment. (NOAA KLM User’s Guide)
  • GPS L1 C/A uses 1575.42 MHz with a 1.023 Mcps code and 50 bps navigation data, illustrating how SDR can recover spread-spectrum signals below the noise floor. (IS-GPS-200)

Context & Evolution (History)

Radio moved from analog circuits to software because ADCs became fast enough and CPUs became cheap enough. Early SDRs were expensive military systems. Today, low-cost USB receivers enable hobbyist-level decoding of signals that once required specialized equipment. This democratization has accelerated innovation in aviation, maritime safety, satellite tracking, and emergency communications.

OLD APPROACH                        NEW APPROACH
┌───────────────────────┐           ┌───────────────────────┐
│ Fixed hardware radio  │           │ Software + SDR front  │
│ One modulation only   │           │ Any modulation via DSP│
│ Limited visibility    │           │ Full-spectrum insight │
└───────────────────────┘           └───────────────────────┘

Concept Summary Table

Concept Cluster           | What You Need to Internalize
RF Front-End + Sampling   | How RF energy becomes IQ samples and how front-end impairments appear.
Spectrum + Noise          | How FFTs, windows, and noise floor shape what you see.
Filtering + Resampling    | How to isolate channels and avoid aliasing.
Modulation + Demodulation | How information is encoded in amplitude/phase/frequency.
Synchronization           | How to recover timing, carrier, and phase for digital decoding.
Framing + Error Control   | How bits become messages and how CRC/FEC protect data.

Project-to-Concept Map

Project                          | What It Builds                  | Primer Chapters It Uses
Project 1: Spectrum Eye          | Spectrum analyzer + IQ pipeline | 1, 2
Project 2: AM Demodulator        | Envelope detection              | 1, 3, 4
Project 3: FM Receiver           | FM discriminator + de-emphasis  | 1, 3, 4
Project 4: ADS-B Decoder         | Preamble/CRC and PPM            | 1, 2, 6
Project 5: RDS/RBDS              | BPSK + timing recovery          | 2, 3, 4, 5, 6
Project 6: AIS                   | GMSK + NRZI + framing           | 2, 3, 4, 5, 6
Project 7: NOAA APT              | FM + line sync + Doppler        | 2, 3, 4, 5, 6
Project 8: POCSAG                | FSK + BCH decoding              | 2, 3, 4, 6
Project 9: GSM BCCH              | GMSK + burst sync + Viterbi     | 2, 3, 4, 5, 6
Project 10: GPS L1               | Correlation + tracking loops    | 1, 2, 3, 5, 6
Project 11: Capstone SDR Console | Integrated receiver + decoders  | 1, 2, 3, 4, 5, 6

Deep Dive Reading by Concept

RF Front-End + Sampling

Concept             | Book & Chapter                                                    | Why This Matters
Sampling and IQ     | “Software-Defined Radio for Engineers” (Collins et al.) - Ch. 2-3 | SDR hardware fundamentals and IQ representation.
ADC + dynamic range | “Understanding Digital Signal Processing” (Lyons) - Ch. 2         | Quantization and noise.

Spectrum + Noise

Concept          | Book & Chapter                                              | Why This Matters
FFT fundamentals | “Understanding Digital Signal Processing” (Lyons) - Ch. 4-5 | Frequency-domain thinking.
Noise + SNR      | “Digital Communications” (Proakis) - Ch. 2                  | Noise models and detection.

Filtering + Resampling

Concept         | Book & Chapter                                             | Why This Matters
FIR/IIR filters | “Digital Signal Processing” (Smith) - Ch. 14               | Filter design practice.
Multirate DSP   | “Understanding Digital Signal Processing” (Lyons) - Ch. 10 | Decimation and interpolation.

Modulation + Demodulation

Concept  | Book & Chapter                                 | Why This Matters
AM/FM/PM | “Software-Defined Radio for Engineers” - Ch. 4 | Analog modulation in SDR.
PSK/QAM  | “Digital Communications” (Proakis) - Ch. 4-5   | Digital modulation theory.

Synchronization

Concept         | Book & Chapter                             | Why This Matters
PLL/Costas      | “Digital Communications” (Proakis) - Ch. 6 | Carrier recovery theory.
Timing recovery | “Digital Communications” (Proakis) - Ch. 7 | Symbol timing control.

Framing + Error Control

Concept          | Book & Chapter                             | Why This Matters
CRC/BCH          | “Digital Communications” (Proakis) - Ch. 8 | Error control coding.
Protocol framing | “Computer Networks” (Tanenbaum) - Ch. 2    | Framing patterns.

Quick Start

Day 1 (4 hours):

  1. Read Chapter 1 (RF front-end) and Chapter 2 (Spectrum).
  2. Capture 2 seconds of IQ data with rtl_sdr.
  3. Run Project 1 and get a waterfall. Ignore artifacts for now.
  4. Identify at least one strong FM station and verify frequency.

Day 2 (4 hours):

  1. Implement Project 2 (AM demod) on a recorded file.
  2. Plot the envelope and listen to audio.
  3. Adjust gain and re-capture to improve SNR.
  4. Read Project 2’s “Core Question” and answers.

End of weekend: You can capture RF, display spectrum, and decode basic AM. That is 80% of the SDR mental model.


Path 1: The DSP Fundamentals Learner

Best for: Learners who want to master DSP fundamentals.

  1. Project 1 (Spectrum Eye)
  2. Project 2 (AM Demod)
  3. Project 3 (FM Receiver)
  4. Project 5 (RDS)
  5. Project 8 (POCSAG)
  6. Project 10 (GPS L1)

Path 2: The Signal Hunter

Best for: Learners focused on real-world signals.

  1. Project 1
  2. Project 4 (ADS-B)
  3. Project 6 (AIS)
  4. Project 7 (NOAA APT)
  5. Project 10 (GPS)

Path 3: The Mobile Networks Learner

Best for: Learners interested in cellular systems.

  1. Project 1
  2. Project 3
  3. Project 9 (GSM BCCH)
  4. Project 5 (RDS) for BPSK practice

Path 4: The Completionist

Phase 1 (Weeks 1-2): Projects 1-3
Phase 2 (Weeks 3-5): Projects 4-6
Phase 3 (Weeks 6-8): Projects 7-9
Phase 4 (Weeks 9-10): Project 10 + Capstone


Success Metrics

  • You can identify and annotate at least 10 signals in a live spectrum.
  • You can demodulate AM and FM with clean audio.
  • You can decode ADS-B, AIS, and RDS in live captures.
  • You can generate at least one full NOAA APT image.
  • You can acquire a GPS PRN and report Doppler/code phase.
  • You can explain the trade-offs between receiver architectures.
  • You can run the capstone console with at least two decoders simultaneously.

Optional Appendices

Appendix A: Debugging Checklist

  • Is the sample format correct (signed/unsigned, IQ order)?
  • Is the signal centered in the passband after tuning?
  • Is gain too high (clipping) or too low (noise floor)?
  • Are you filtering before decimation?
  • Are you compensating for CFO/PPM?
  • Are you validating frames with CRC/BCH?

Appendix B: Useful Tools

  • rtl_sdr for raw captures
  • rtl_fm for quick FM sanity checks
  • sox or ffmpeg for audio analysis
  • gqrx or SDR++ for visual validation

Appendix C: Legal and Ethical Guidelines

  • Only decode signals that are legal to receive in your jurisdiction.
  • Never transmit without proper authorization.
  • Avoid decoding private or encrypted communications.

Appendix D: Signal Parameter Cheat Sheet (Project Signals)

Signal/Protocol | Center/Carrier        | Modulation              | Symbol/Data Rate            | Notes
ADS-B (1090ES)  | 1090 MHz              | PPM                     | 1 Mbps (0.5 us chips)       | 8 us preamble, 112-bit message
AIS             | 161.975/162.025 MHz   | GMSK                    | 9600 bps                    | 25 kHz channels (A/B)
RDS/RBDS        | 57 kHz subcarrier     | Differential BPSK       | 1187.5 bps                  | 57 kHz = 3 x 19 kHz pilot
NOAA APT        | 137 MHz band          | FM + 2400 Hz subcarrier | 120 lines/min               | Analog image; 2 lines/sec
POCSAG          | VHF/UHF varies        | 2-FSK                   | 512/1200/2400 bps           | BCH(31,21) error correction
GSM BCCH        | 900/1800 MHz bands    | GMSK                    | 270.833 ksym/s              | 200 kHz channels
GPS L1 C/A      | 1575.42 MHz           | BPSK(1)                 | 1.023 Mcps code, 50 bps nav | Spread-spectrum PRN

See Appendix E for standards and source references.

Appendix E: Standards and Protocol References (Web)

  • ITU-R SDR definition (WRC-12): https://www.itu.int/net/ITU-R/index.asp?category=information&rlink=sdr&lang=en
  • FCC SDR definition (47 CFR 2.1): https://www.law.cornell.edu/cfr/text/47/2.1
  • FAA ADS-B Out mandate: https://www.faa.gov/air_traffic/technology/adsb/adsb_out
  • 14 CFR 91.225 (ADS-B Out): https://www.law.cornell.edu/cfr/text/14/91.225
  • ADS-B waveform details (1090ES): https://www.mathworks.com/help/comm/ug/ads-b-waveform-generation-and-reception.html
  • ADS-B Mode S background: https://mode-s.org/1090mhz/content/ads-b/1-basics.html
  • AIS channel allocations (USCG NAVCEN): https://www.navcen.uscg.gov/international-vhf-marine-radio-channels-freq
  • AIS physical layer summary (ITU-R M.1371 reference): https://www.docest.com/doc/28159/recommendation-itu-r-m-1371-4
  • NRSC RBDS/RDS summary (57 kHz, 1187.5 bps): https://www.nrscstandards.org/news/rbds-1998.asp
  • NOAA KLM User’s Guide (APT signal characteristics): https://www.ncei.noaa.gov/data/klmuserguide/klm/html/c4/sec4-2.htm
  • GPS ICD (IS-GPS-200N): https://www.gps.gov/technical/icwg/IS-GPS-200N.pdf
  • 3GPP TS 45.004 (GSM modulation): https://3gpp-explorer.com/spec/45004/
  • 3GPP TS 45.002 (GSM channel spacing): https://3gpp-explorer.com/spec/45002/
  • POCSAG overview: https://en.wikipedia.org/wiki/POCSAG
  • FCC FM broadcast pre-emphasis (75 us): https://www.law.cornell.edu/cfr/text/47/73.317
  • ITU-R BS.450 (50 us pre-emphasis summary): https://studylib.net/doc/18737170/itu-r-b-series-broadcasting-service–sound-

Project Overview Table

#  | Project                          | Signal/Protocol | Key Output                | Difficulty
1  | Spectrum Eye                     | Wideband IQ     | Waterfall + peak list     | Beginner+
2  | The Envelope (AM)                | AM broadcast    | Clean audio WAV           | Intermediate
3  | Freq-to-Volts (FM)               | FM broadcast    | Live audio + de-emphasis  | Advanced
4  | Aircraft Radar (ADS-B)           | 1090ES          | Live aircraft messages    | Expert
5  | Hidden Text (RDS/RBDS)           | FM subcarrier   | Station text + group decode | Advanced
6  | Ship Tracker (AIS)               | VHF AIS         | MMSI + position stream    | Expert
7  | Satellite Eye (NOAA APT)         | 137 MHz         | Weather image             | Advanced
8  | Beeping Relic (POCSAG)           | Paging          | Decoded pager messages    | Advanced
9  | Mobile Whisper (GSM BCCH)        | GSM             | Cell ID + system info     | Master
10 | Space Compass (GPS L1)           | GNSS            | PRN acquisition + tracking | Master
11 | Universal SDR Console (Capstone) | Multi           | Unified SDR workstation   | Master+

Project List

Below are 10 deep projects plus a capstone. Each project includes exact outcomes, design questions, pitfalls, and validation criteria.


Project 1: Spectrum Eye (IQ Waterfall Analyzer)

  • Main Programming Language: Python
  • Alternative Programming Languages: C, Rust
  • Coolness Level: Level 3 - The RF Microscope
  • Business Potential: 6/10 (spectrum monitoring tools)
  • Difficulty: Level 2 (Beginner+)
  • Knowledge Area: DSP + Visualization
  • Software or Tool: numpy, matplotlib, pyqtgraph (optional)
  • Main Book: “Understanding Digital Signal Processing” (Lyons)

What you’ll build: A real-time spectrum analyzer that reads IQ samples from a file or live SDR, computes FFTs, and displays a waterfall with frequency labels and dB scaling.

Why it teaches SDR: This project forces you to handle IQ parsing, FFT scaling, windowing, and dB visualization. It is the fastest way to build intuition about the RF environment.

Core challenges you’ll face:

  • Choosing FFT size vs update rate
  • Handling IQ format correctly
  • Displaying and scaling dBFS without confusing noise and signal

Real World Outcome

What you will see:

  1. A scrolling waterfall plot with frequency on the x-axis and time on the y-axis.
  2. A live power spectrum with labeled peak frequencies.
  3. A visible DC spike and mirrored images you can diagnose.

Command Line Outcome Example:

# Capture 2 seconds of IQ at 2.4 MSPS (-n limits the capture: 2 s x 2.4e6 samples)
$ rtl_sdr -f 100.1M -s 2.4M -g 30 -n 4800000 capture.iq

# Run your analyzer
$ python spectrum_eye.py --input capture.iq --fs 2.4M
[INFO] FFT size: 4096
[INFO] RBW: 585.9 Hz
[INFO] Peak @ +200 kHz: -18 dBFS
[INFO] Peak @ -200 kHz: -18 dBFS (mirror)

# Live mode with markers
$ python spectrum_eye.py --live --device rtlsdr --center 162.025M --fs 2.4M --gain 32
[INFO] Waterfall FPS: 25
[INFO] Marker: 162.025000 MHz | BW: 12.5 kHz | Peak: -42 dBFS
[INFO] CSV log: spectrum_2026-01-01.csv

The Core Question You’re Answering

“How do I turn raw IQ samples into a truthful picture of the RF world?”

This question matters because all SDR work is grounded in spectrum visibility. If your analyzer is wrong, every decoder built on top of it will be wrong.

Concepts You Must Understand First

  1. FFT and RBW
    • How does FFT size affect resolution and latency?
    • What does a frequency bin mean physically?
    • Book Reference: “Understanding Digital Signal Processing” (Lyons) - Ch. 4-5
  2. Windowing
    • What is spectral leakage?
    • Why does Hann window reduce leakage?
    • Book Reference: “Understanding Digital Signal Processing” (Lyons) - Ch. 3
  3. dBFS Scaling
    • What is full scale?
    • Why is noise floor relative?
    • Book Reference: “Digital Signal Processing” (Smith) - Ch. 2
  4. IQ sample formats
    • Is your IQ unsigned or signed?
    • How do you scale to floating-point safely?
    • Book Reference: “Software-Defined Radio for Engineers” - Ch. 2

Questions to Guide Your Design

  1. FFT Design
    • What FFT size balances resolution and update rate?
    • How often should you update the waterfall?
  2. Signal Mapping
    • How do you label frequencies relative to center?
    • How do you map FFT bins to Hz?
  3. Visualization
    • How will you map dB values to color?
    • Will you clip or normalize the color scale?
    • How will you label RBW and bin width for users?

Thinking Exercise

The Leakage Problem

Suppose a pure tone is 100.2 kHz above center. Your FFT bin spacing is 1 kHz. Where does the energy appear? Sketch how leakage spreads across bins when the tone is not exactly at a bin center.
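The exercise can be checked numerically. A minimal sketch (the 1_024_000 Hz rate and 1024-point FFT are chosen only so the bin spacing is exactly 1 kHz): place a tone 0.2 bins off a bin center and measure how much energy escapes the peak region, with and without a Hann window.

```python
import numpy as np

fs = 1_024_000                  # Hz; with nfft = 1024 the bin spacing is exactly 1 kHz
nfft = 1024
t = np.arange(nfft) / fs
tone = np.exp(2j * np.pi * 100_200 * t)   # 100.2 kHz: lands 0.2 bins off bin 100

def leakage_outside(x, guard=4):
    """Fraction of spectral energy landing more than `guard` bins from the peak."""
    p = np.abs(np.fft.fft(x)) ** 2
    k = int(np.argmax(p))
    idx = np.arange(len(p))
    dist = np.minimum((idx - k) % len(p), (k - idx) % len(p))  # circular bin distance
    return p[dist > guard].sum() / p.sum()

rect_leak = leakage_outside(tone)                     # rectangular (no) window
hann_leak = leakage_outside(tone * np.hanning(nfft))  # Hann window
print(f"far leakage  rect: {rect_leak:.4f}  hann: {hann_leak:.6f}")
```

The rectangular window leaves roughly a percent of the tone's energy scattered far from the peak; Hann pushes it orders of magnitude lower, which is why the waterfall's noise floor only looks honest with a window applied.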

The Interview Questions They’ll Ask

  1. Why do we apply a window before FFT?
  2. How does FFT size affect RBW?
  3. Why does the noise floor shift when you change sample rate?
  4. What causes a mirrored spectrum in IQ data?
  5. What is the difference between dBFS and dBm?

Hints in Layers

Hint 1: Start with IQ parsing

raw = np.fromfile(path, dtype=np.uint8)       # RTL-SDR IQ: unsigned 8-bit, interleaved I,Q
I = raw[0::2].astype(np.float32) - 127.5      # cast first, then remove the DC bias
Q = raw[1::2].astype(np.float32) - 127.5
samples = (I + 1j*Q) / 127.5                  # scale to roughly [-1, 1]

Hint 2: Apply window and FFT

window = np.hanning(nfft)
X = np.fft.fftshift(np.fft.fft(samples[:nfft] * window))

Hint 3: Convert to dBFS

# Normalize by the window's coherent gain so a full-scale tone reads ~0 dBFS
mag = 20*np.log10(np.abs(X) / np.sum(window) + 1e-12)

Hint 4: Verify frequency mapping

# Verify with a real signal: tune near a known FM station and confirm
# that its peak appears at the expected offset from center.
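As a concrete check of the mapping, a small helper (the function name is illustrative) that labels each fftshift-ed bin with its absolute frequency; with nfft = 4096 at 2.4 MSPS the bin spacing matches the 585.9 Hz RBW printed in the example log:

```python
import numpy as np

def bin_frequencies(nfft, fs, center_hz):
    """Absolute frequency in Hz for each bin of an fftshift-ed spectrum."""
    # fftfreq yields baseband offsets; fftshift orders them from -fs/2 upward
    return np.fft.fftshift(np.fft.fftfreq(nfft, d=1.0 / fs)) + center_hz

freqs = bin_frequencies(4096, 2.4e6, 100.1e6)
rbw = freqs[1] - freqs[0]          # bin spacing = fs / nfft = 585.9375 Hz
```

The center frequency must land exactly on bin nfft // 2; if your labels are off by one bin, every peak reading is wrong by one RBW.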

Books That Will Help

Topic | Book | Chapter
FFT | “Understanding Digital Signal Processing” (Lyons) | Ch. 4-5
Windowing | “Understanding Digital Signal Processing” (Lyons) | Ch. 3
Visualization | “Digital Signal Processing” (Smith) | Ch. 2

Common Pitfalls & Debugging

Problem: Waterfall shows a huge spike at center

  • Why: DC offset / LO leakage
  • Fix: Subtract mean or apply a high-pass filter
  • Quick test: Plot mean of I and Q; should be near zero

Problem: Mirrored spectrum

  • Why: IQ swapped or mis-scaled
  • Fix: Swap I/Q or invert Q

Problem: Spectrum looks inverted or shifted

  • Why: Wrong sample format (signed vs unsigned) or wrong center frequency
  • Fix: Verify IQ byte format and center frequency metadata
  • Quick test: Capture a strong FM station and verify its offset

Problem: Spectrum looks noisy with no clear peaks

  • Why: Gain too low or wrong center frequency
  • Fix: Increase gain and verify tuning

Definition of Done

  • Waterfall updates at least 10 FPS
  • DC spike is handled or minimized
  • Strong FM station appears at correct frequency offset
  • Can load a recorded IQ file and visualize it

Project 2: The Envelope (AM Demodulator)

  • Main Programming Language: Python
  • Alternative Programming Languages: C
  • Coolness Level: Level 3 - First Voice From RF
  • Business Potential: 4/10 (education tools)
  • Difficulty: Level 3 (Intermediate)
  • Knowledge Area: AM demodulation, filtering, decimation
  • Software or Tool: numpy, scipy
  • Main Book: “Understanding Digital Signal Processing” (Lyons)

What you’ll build: An AM demodulator that reads IQ, shifts the carrier to baseband, extracts the envelope, filters audio, and writes a WAV file.

Why it teaches SDR: AM is the simplest demodulation and teaches envelope detection, filtering, and decimation.

Core challenges you’ll face:

  • Proper baseband shifting
  • Choosing audio bandwidth
  • Normalizing audio output

Real World Outcome

You will decode AM broadcast audio from an IQ file.

$ python am_demod.py --input am_740kHz.iq --fs 2.4M --freq 740k
[INFO] Shifted carrier to 0 Hz
[INFO] Low-pass filter: 6 kHz
[INFO] Decimating to 48 kHz
[INFO] Writing output.wav

$ sox --i output.wav
Channels       : 1
Sample Rate    : 48000
Duration       : 00:00:10.00
Bit Precision  : 16

The Core Question You’re Answering

“How do you remove the RF carrier and keep only the audio hidden in its amplitude?”

Concepts You Must Understand First

  1. Envelope detection
    • Why does magnitude recover AM audio?
    • What is the effect of carrier offset on the envelope?
    • Book Reference: “Understanding Digital Signal Processing” (Lyons) - Ch. 10
  2. Low-pass filtering
    • How to choose cutoff for speech vs music
    • What happens if the cutoff is too high or too low?
    • Book Reference: “Digital Signal Processing” (Smith) - Ch. 14
  3. Decimation
    • Why filter before downsampling
    • What intermediate rates give a clean multi-stage path from 2.4 MHz down to 48 kHz?
    • Book Reference: “Understanding Digital Signal Processing” (Lyons) - Ch. 10
  4. Frequency shifting
    • How do you compute the complex oscillator for a given offset?
    • How do you keep phase continuous across blocks?
    • Book Reference: “Software-Defined Radio for Engineers” - Ch. 3
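The phase-continuity question above is worth prototyping before building the full demodulator. A minimal sketch (class name hypothetical): keep the oscillator phase in state so block boundaries introduce no clicks.

```python
import numpy as np

class PhaseContinuousMixer:
    """Mix successive IQ blocks down by offset_hz without phase jumps."""
    def __init__(self, offset_hz, fs):
        self.step = -2.0 * np.pi * offset_hz / fs   # radians advanced per sample
        self.phase = 0.0

    def __call__(self, block):
        ph = self.phase + self.step * np.arange(len(block))
        # carry the end phase into the next block (mod 2*pi to bound growth)
        self.phase = (self.phase + self.step * len(block)) % (2.0 * np.pi)
        return block * np.exp(1j * ph)
```

Feeding a pure tone at offset_hz through two consecutive blocks should yield a constant 1+0j with no discontinuity at the block edge; a stateless oscillator fails this test.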

Questions to Guide Your Design

  • What cutoff frequency preserves intelligible speech?
  • How many decimation stages avoid aliasing?
  • How will you normalize audio to avoid clipping?

Thinking Exercise

Draw an AM carrier with a slow-varying envelope. What does the magnitude look like?

The Interview Questions They’ll Ask

  1. Why is AM sensitive to noise?
  2. How does decimation reduce computation?
  3. Why must you filter before downsampling?
  4. What causes a “whine” in demodulated audio?

Hints in Layers

Hint 1: Mix the station to baseband with exp(-j*2*pi*f*t).

t = np.arange(len(samples)) / fs
lo = np.exp(-1j * 2*np.pi*freq_offset * t)
bb = samples * lo

Hint 2: Low-pass filter to ~5-6 kHz.

Hint 3: Take magnitude and remove DC.

audio = np.abs(bb)
audio = audio - np.mean(audio)

Hint 4: Decimate to 48 kHz and output WAV.

Books That Will Help

Topic | Book | Chapter
AM Theory | “Understanding Digital Signal Processing” (Lyons) | Ch. 10
FIR Filters | “Digital Signal Processing” (Smith) | Ch. 14

Common Pitfalls & Debugging

Problem: Audio is distorted or clipped

  • Why: No normalization
  • Fix: Scale to [-1, 1] before writing WAV

Problem: High-pitched tone in audio

  • Why: DC offset or poor filtering
  • Fix: High-pass filter below 100 Hz

Problem: Audio is weak or muffled

  • Why: Wrong center frequency or too narrow filter
  • Fix: Re-tune center frequency and widen cutoff

Definition of Done

  • Clear AM audio playable from a WAV file
  • No audible aliasing artifacts
  • Can tune at least two AM stations

Project 3: Freq-to-Volts (FM Broadcast Receiver)

  • Main Programming Language: Python
  • Alternative Programming Languages: C
  • Coolness Level: Level 4 - Real FM Stereo Feel
  • Business Potential: 5/10
  • Difficulty: Level 4 (Advanced)
  • Knowledge Area: FM demodulation, de-emphasis
  • Software or Tool: numpy, scipy, sounddevice
  • Main Book: “Understanding Digital Signal Processing” (Lyons)

What you’ll build: An FM receiver that demodulates audio and applies correct de-emphasis (50 us or 75 us).

Why it teaches SDR: FM requires phase-based demodulation and proper filtering. It is the first project where phase matters deeply.

Core challenges you’ll face:

  • Demod method choice (atan2 vs conjugate product)
  • Correct de-emphasis filtering
  • Audio scaling without clipping

Real World Outcome

$ python fm_receiver.py --freq 101.1M --fs 2.4M --gain 40
[INFO] FM demod: conjugate product
[INFO] De-emphasis: 75 us
[INFO] Audio rate: 48 kHz
[INFO] Playing audio...

# Save to WAV instead of live playback
$ python fm_receiver.py --freq 99.5M --fs 2.4M --gain 35 --out fm.wav
[INFO] Pilot tone detected: 19 kHz
[INFO] Stereo not decoded (mono output)
[INFO] Wrote fm.wav (10 s)

The Core Question You’re Answering

“How do you translate phase rotation speed into audio voltage?”

Concepts You Must Understand First

  1. Phase differentiation (FM discriminator)
    • How does phase change map to instantaneous frequency?
    • Book Reference: “Understanding Digital Signal Processing” (Lyons) - Ch. 13
  2. De-emphasis filtering
    • Why do broadcast systems apply pre-emphasis?
    • How do 75 us (US) and 50 us (EU) affect tonal balance?
    • Book Reference: “Software-Defined Radio for Engineers” - Ch. 4
  3. Wideband filtering (200 kHz)
    • How narrow can you filter without damaging the audio?
    • Book Reference: “Digital Signal Processing” (Smith) - Ch. 14
  4. FM baseband multiplex
    • What is the 19 kHz pilot used for?
    • Where does the 38 kHz stereo subcarrier live?
    • Book Reference: “Software-Defined Radio for Engineers” - Ch. 4

Questions to Guide Your Design

  • How will you prevent audio clipping after demodulation?
  • Which demod method is faster: atan2 or conjugate product?
  • Where do you apply de-emphasis in the chain?

Thinking Exercise

Simulate a rotating vector. What does faster rotation look like in time?

The Interview Questions They’ll Ask

  1. What is the capture effect in FM?
  2. Why is FM more noise-resistant than AM?
  3. What is pre-emphasis/de-emphasis?
  4. How does frequency deviation affect audio amplitude?

Hints in Layers

  1. Filter to ~200 kHz around the station.
  2. Use conjugate product for fast demodulation.
    y = samples[1:] * np.conj(samples[:-1])
    fm = np.angle(y)
    
  3. Apply low-pass for 15 kHz audio and de-emphasis.
  4. Resample to 48 kHz.
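For step 3, the de-emphasis network is just a single-pole low-pass whose time constant matches the transmitter's pre-emphasis. A sketch using scipy (the matched-pole coefficient below is one common discrete approximation, not the only one):

```python
import numpy as np
from scipy.signal import lfilter

def deemphasis(audio, fs, tau=75e-6):
    """First-order IIR de-emphasis: tau = 75e-6 (US) or 50e-6 (Europe)."""
    alpha = 1.0 - np.exp(-1.0 / (fs * tau))     # matched single-pole coefficient
    # H(z) = alpha / (1 - (1 - alpha) z^-1): unity gain at DC, treble rolled off
    return lfilter([alpha], [1.0, alpha - 1.0], audio)
```

Apply it after the discriminator and before resampling; DC passes at unity gain while treble rolls off above roughly 2.1 kHz for the 75 us constant.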

Books That Will Help

Topic | Book | Chapter
FM Demod | “Understanding Digital Signal Processing” (Lyons) | Ch. 13
Filter Design | “Digital Signal Processing” (Smith) | Ch. 14

Common Pitfalls & Debugging

Problem: Tinny audio

  • Why: Missing de-emphasis
  • Fix: Apply 75 us (US) or 50 us (EU) de-emphasis

Problem: Distortion on peaks

  • Why: Clipping or wrong scaling
  • Fix: Normalize output

Problem: Audio sounds muffled

  • Why: Cutoff too low or heavy de-emphasis
  • Fix: Verify low-pass cutoff and de-emphasis constant

Definition of Done

  • Clean FM audio without distortion
  • De-emphasis applied correctly
  • Able to tune multiple stations

Project 4: The Aircraft Radar (ADS-B Decoder)

  • Main Programming Language: Python or C
  • Alternative Programming Languages: Rust
  • Coolness Level: Level 5 - Live Aircraft Tracking
  • Business Potential: 7/10 (aviation data tools)
  • Difficulty: Level 5 (Expert)
  • Knowledge Area: PPM decoding, preamble detection, CRC
  • Software or Tool: numpy, bit parsing
  • Main Book: “Software-Defined Radio for Engineers”

What you’ll build: A decoder that extracts ADS-B frames from 1090 MHz and parses aircraft data.

Why it teaches SDR: ADS-B is a real high-value protocol with strict timing, PPM modulation, and CRC validation.

Core challenges you’ll face:

  • Microsecond timing alignment
  • Preamble correlation thresholds
  • CRC-24 verification and frame parsing

Real World Outcome

$ ./adsb_decoder --live
[MSG] ICAO: A12B3C | Alt: 35000 ft | Speed: 450 kt
[MSG] ICAO: C4D5E6 | Lat: 40.7128 | Lon: -74.0060 | Heading: 270

$ ./adsb_decoder --live --fs 2.4M --gain 45 --json adsb.json
[INFO] Preamble hits: 134 | Valid frames: 92 | CRC fail: 42
[INFO] Output: adsb.json (DF17, DF11, DF20)

The Core Question You’re Answering

“How do you detect a 112-bit digital message hidden in a noisy RF burst?”

Concepts You Must Understand First

  1. PPM modulation (ADS-B)
    • What is the 1 us chip timing and 112-bit frame length?
    • Why does ADS-B ignore IQ phase and use magnitude?
    • Book Reference: “Software-Defined Radio for Engineers” - Ch. 6
  2. Preamble detection and correlation
    • Why is the preamble 8 us long?
    • How do you set a threshold that adapts to noise?
    • Book Reference: “Digital Communications” (Proakis) - Ch. 6
  3. CRC-24 verification
    • What polynomial is used for ADS-B CRC-24?
    • How do you validate against known frames?
    • Book Reference: “Digital Communications” (Proakis) - Ch. 8
  4. Timing resolution
    • How many samples per microsecond do you have at your sample rate?
    • What happens if you sample too slowly?
    • Book Reference: “Software-Defined Radio for Engineers” - Ch. 3

Questions to Guide Your Design

  • How will you detect the 8 microsecond preamble?
  • What threshold distinguishes a “pulse” from noise?
  • How will you reject false positives quickly?

Thinking Exercise

Draw the ADS-B preamble on a 1 us grid. Where are the peaks?

The Interview Questions They’ll Ask

  1. Why is 2 MSPS the practical minimum for ADS-B?
  2. What is the ADS-B frame length?
  3. How does CRC help in noisy RF channels?
  4. Why does ADS-B use 1090 MHz?

Hints in Layers

  1. Convert IQ to magnitude; ADS-B ignores phase.
  2. Use a sliding correlation with the preamble pattern.
    # Pulses at 0, 1.0, 3.5, 4.5 us, sampled at 2 samples per us (2 MSPS)
    preamble = np.array([1,0,1,0,0,0,0,1,0,1,0,0,0,0,0,0])
    score = np.dot(mag[i:i+16], preamble)
    
  3. Decode 112 bits and verify CRC-24.
  4. Only accept frames with valid CRC.
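For step 3, the parity check is plain polynomial long division. A hedged sketch using the 24-bit Mode S generator 0xFFF409 (helper names are illustrative): a received frame passes when the remainder over all 112 bits is zero, because the transmitted parity field is itself the remainder of the message shifted up by 24 bits.

```python
GENERATOR = 0xFFF409   # Mode S / ADS-B CRC-24 generator (low 24 coefficients)

def crc24(bits):
    """Polynomial remainder of a bit sequence modulo the Mode S generator."""
    reg = 0
    for b in bits:
        reg = (reg << 1) | b
        if reg & (1 << 24):            # degree-24 term set: subtract generator
            reg ^= (1 << 24) | GENERATOR
    return reg & 0xFFFFFF

def make_frame(msg_bits):
    """Append the 24 parity bits to an 88-bit message (transmit side)."""
    parity = crc24(msg_bits + [0] * 24)
    return msg_bits + [(parity >> (23 - i)) & 1 for i in range(24)]
```

Usage: `crc24(frame) == 0` accepts the frame; any single flipped bit makes the remainder nonzero.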

Books That Will Help

Topic | Book | Chapter
PPM | “Software-Defined Radio for Engineers” | Ch. 6
CRC | “Digital Communications” (Proakis) | Ch. 8

Common Pitfalls & Debugging

Problem: Too many false frames

  • Why: Low detection threshold
  • Fix: Use correlation + SNR thresholds

Problem: No frames detected

  • Why: Center frequency drift or gain too low
  • Fix: Adjust PPM correction and gain

Problem: Preamble detected but bits are garbage

  • Why: Sample rate too low or timing phase off
  • Fix: Increase sample rate or implement fractional timing adjustment

Definition of Done

  • Detect at least 5 valid ADS-B messages per minute
  • CRC validation works
  • Aircraft data parsed correctly (ICAO, altitude, position)

Project 5: The Hidden Text (RDS/RBDS Decoder)

  • Main Programming Language: Python
  • Alternative Programming Languages: C
  • Coolness Level: Level 4 - Hidden Metadata Decoder
  • Business Potential: 5/10
  • Difficulty: Level 4 (Advanced)
  • Knowledge Area: Subcarrier extraction, BPSK, timing recovery
  • Software or Tool: numpy, scipy
  • Main Book: “Software-Defined Radio for Engineers”

What you’ll build: A decoder that extracts RDS/RBDS text from FM broadcasts.

Why it teaches SDR: It forces you to extract a weak 57 kHz subcarrier carrying a 1,187.5 bps BPSK data stream, recover phase, and decode group structures.

Core challenges you’ll face:

  • Narrowband extraction of 57 kHz subcarrier
  • Carrier recovery for BPSK
  • Symbol timing recovery

Real World Outcome

$ python rds_decoder.py --freq 99.5M
[RDS] PS: JAZZ101
[RDS] RT: NOW PLAYING - JOHN COLTRANE
[RDS] PI: 0x52AF | PTY: 24 | Groups: 0A, 2A

The Core Question You’re Answering

“How do you extract a low-bitrate data stream hidden inside an FM broadcast?”

Concepts You Must Understand First

  1. RDS subcarrier (57 kHz) and data rate (1,187.5 bps)
    • Why is the subcarrier at 57 kHz (3 x 19 kHz pilot)?
    • How does symbol rate relate to the pilot tone?
    • Book Reference: “Software-Defined Radio for Engineers” - Ch. 5
  2. BPSK demodulation + Costas loop
    • What does a Costas loop correct in BPSK?
    • Book Reference: “Digital Communications” (Proakis) - Ch. 6
  3. Differential decoding + block sync
    • Why does RDS use differential coding?
    • How do you detect group boundaries?
    • Book Reference: “Digital Communications” (Proakis) - Ch. 7
  4. Group structure and CRC
    • What do groups 0A and 2A carry?
    • How do you validate blocks with checkwords?
    • Book Reference: “Computer Networks” (Tanenbaum) - Ch. 2

Questions to Guide Your Design

  • How will you isolate the 57 kHz subcarrier?
  • How will you recover carrier phase?
  • How will you detect RDS group boundaries?

Thinking Exercise

Sketch the FM baseband spectrum including mono, stereo, pilot, and the 57 kHz RDS subcarrier.

The Interview Questions They’ll Ask

  1. Why is the RDS subcarrier at 57 kHz?
  2. Why use differential BPSK?
  3. What is the 19 kHz pilot tone used for?
  4. How do you recover symbol timing?

Hints in Layers

  1. Use a band-pass filter centered at 57 kHz.
    from scipy.signal import firwin, lfilter
    bp = firwin(numtaps=401, cutoff=[56e3, 58e3], pass_zero=False, fs=fs)
    rds = lfilter(bp, 1.0, fm_baseband)
    
  2. Square the signal to recover carrier at 114 kHz, then divide by 2.
  3. Use a Costas loop for phase tracking.
  4. Parse blocks and check syndromes.
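Step 3's Costas loop can be prototyped in a few lines. A toy second-order loop (the mapping from loop bandwidth to gains is one common rule of thumb; the constants are illustrative, and no noise handling is included):

```python
import numpy as np

def costas_bpsk(x, loop_bw=0.01):
    """Toy second-order Costas loop: de-rotates residual carrier on BPSK samples."""
    damping = np.sqrt(2) / 2
    denom = 1 + 2 * damping * loop_bw + loop_bw ** 2
    alpha = 4 * damping * loop_bw / denom       # proportional (phase) gain
    beta = 4 * loop_bw ** 2 / denom             # integral (frequency) gain
    phase, freq = 0.0, 0.0
    out = np.empty_like(x)
    for i, s in enumerate(x):
        v = s * np.exp(-1j * phase)
        err = np.real(v) * np.imag(v)           # BPSK Costas error: I*Q
        freq += beta * err
        phase += freq + alpha * err
        out[i] = v
    return out
```

After lock, the imaginary part collapses toward zero and the BPSK symbols sit on the real axis (up to the usual 180-degree ambiguity, which differential decoding removes).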

Books That Will Help

Topic | Book | Chapter
BPSK | “Digital Communications” (Proakis) | Ch. 4-6
PLL | “Digital Communications” (Proakis) | Ch. 6

Common Pitfalls & Debugging

Problem: Garbled text

  • Why: Timing or phase error
  • Fix: Improve timing recovery and Costas loop stability

Problem: No RDS detected

  • Why: Station may not transmit RDS
  • Fix: Try another FM station

Problem: PS text flickers or changes randomly

  • Why: Block sync errors or bad CRC
  • Fix: Enforce CRC validation and majority vote across groups

Definition of Done

  • Decode program service (PS) text
  • Decode radio text (RT)
  • Stable decoding for at least 5 minutes

Project 6: The Ship Tracker (AIS Decoder)

  • Main Programming Language: Python
  • Alternative Programming Languages: C
  • Coolness Level: Level 5 - Live Maritime Tracking
  • Business Potential: 7/10 (maritime analytics)
  • Difficulty: Level 5 (Expert)
  • Knowledge Area: GMSK, NRZI, HDLC framing
  • Software or Tool: numpy
  • Main Book: “Software-Defined Radio for Engineers”

What you’ll build: A decoder that extracts AIS messages and displays ship positions.

Why it teaches SDR: AIS requires GMSK demod, NRZI decoding, bit stuffing, and CRC validation on the 161.975 MHz and 162.025 MHz VHF channels.

Core challenges you’ll face:

  • GMSK demodulation in noisy VHF channels
  • Symbol timing recovery
  • HDLC framing and bit un-stuffing

Real World Outcome

$ python ais_decoder.py --freq 162.025M
[AIS] MMSI: 235123456 | Name: EVER GIVEN | Lat: 51.50 | Lon: 0.10
[AIS] Type: 1 | SOG: 12.3 kn | COG: 180.0 | Heading: 182

The Core Question You’re Answering

“How do you decode GMSK bursts and recover framed messages from ships?”

Concepts You Must Understand First

  1. AIS frequencies and channels (161.975 MHz and 162.025 MHz)
    • Why does AIS use two channels (A/B)?
    • Book Reference: “Software-Defined Radio for Engineers” - Ch. 8
  2. GMSK demodulation and BT shaping
    • What does the Gaussian filter (BT) do to spectral splatter?
    • Book Reference: “Digital Communications” (Proakis) - Ch. 5
  3. NRZI and bit stuffing
    • How does NRZI guarantee transitions during long runs of zeros?
    • Why does HDLC use bit stuffing?
    • Book Reference: “Computer Networks” (Tanenbaum) - Ch. 2
  4. HDLC framing + CRC
    • Where is the 0x7E flag used?
    • How do you verify CRC on AIS payloads?
    • Book Reference: “Computer Networks” (Tanenbaum) - Ch. 2

Questions to Guide Your Design

  • How will you detect channel A vs B?
  • How do you recover clock from a GMSK signal?
  • How will you un-stuff bits safely?

Thinking Exercise

Write out an HDLC flag and show how bit stuffing prevents false flags.

The Interview Questions They’ll Ask

  1. Why does AIS use two channels?
  2. What does NRZI buy you?
  3. Why does GMSK reduce spectral splatter?
  4. What is an MMSI?

Hints in Layers

  1. Use an FM discriminator to recover the NRZI level sequence, then decode it.
    # NRZI decode (AIS/HDLC): a level transition encodes 0, no transition encodes 1
    bits = (np.diff(levels) == 0).astype(np.uint8)
    
  2. Filter to 9.6 kHz and recover clock from zero crossings.
  3. Detect HDLC flag 0x7E and un-stuff bits.
  4. Parse AIS payload and verify CRC.
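Step 3's un-stuffing is a small but easy-to-get-wrong loop. A sketch: after five consecutive 1s the transmitter inserted a 0, so the receiver must drop it (a production decoder should additionally treat a 1 in that position as a flag or a framing error):

```python
def unstuff(bits):
    """Drop the 0 inserted after every run of five 1s (HDLC bit stuffing)."""
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        i += 1
        out.append(b)
        if b == 1:
            run += 1
            if run == 5:
                i += 1      # skip the stuffed 0 that must follow five 1s
                run = 0
        else:
            run = 0
    return out
```

Because the stuffed stream never contains six 1s in a row outside the 0x7E flags, this loop is an exact inverse of the transmitter's stuffing.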

Books That Will Help

Topic | Book | Chapter
GMSK | “Digital Communications” (Proakis) | Ch. 5
Framing | “Computer Networks” (Tanenbaum) | Ch. 2

Common Pitfalls & Debugging

Problem: Frames detected but CRC fails

  • Why: Timing recovery errors
  • Fix: Improve symbol timing or sampling phase

Problem: No AIS signals

  • Why: Antenna placement or frequency drift
  • Fix: Use a VHF antenna and correct PPM

Problem: Position fields look absurd

  • Why: Wrong bit alignment or endianness
  • Fix: Validate HDLC framing and bit extraction offsets

Definition of Done

  • Decode at least one AIS message with valid CRC
  • Parse MMSI, position, speed
  • Display results on a map

Project 7: The Satellite Eye (NOAA APT Image Decoder)

  • Main Programming Language: Python
  • Alternative Programming Languages: C
  • Coolness Level: Level 5 - Weather Images from Space
  • Business Potential: 6/10 (satellite imaging tools)
  • Difficulty: Level 4 (Advanced)
  • Knowledge Area: FM demod, sync pulses, Doppler
  • Software or Tool: numpy, PIL
  • Main Book: “Software-Defined Radio for Engineers”

What you’ll build: A decoder that produces grayscale APT images from NOAA satellites.

Why it teaches SDR: It requires FM demod, subcarrier extraction, and precise timing alignment.

Core challenges you’ll face:

  • Doppler compensation
  • Sync pulse detection
  • Line alignment and resampling

Real World Outcome

$ python noaa_apt.py --freq 137.620M --fs 2.4M
[INFO] Doppler correction enabled
[INFO] Sync pulses detected
[INFO] Image saved: noaa15_2025-12-31.png
[INFO] Image size: 2080 x 1200

The Core Question You’re Answering

“How do you turn a noisy FM subcarrier into a line-scanned image?”

Concepts You Must Understand First

  1. APT modulation and subcarrier (2400 Hz)
    • Why is APT an analog image format?
    • Book Reference: “Software-Defined Radio for Engineers” - Ch. 7
  2. FM demodulation
    • How does FM demod output map to grayscale?
    • Book Reference: “Understanding Digital Signal Processing” (Lyons) - Ch. 13
  3. Doppler shift tracking
    • How large is Doppler during a LEO pass?
    • Where should you apply Doppler correction?
    • Book Reference: “Software-Defined Radio for Engineers” - Ch. 3
  4. Line sync detection
    • What does a sync pulse look like in time?
    • How do you detect line boundaries reliably?
    • Book Reference: “Digital Signal Processing” (Smith) - Ch. 23

Questions to Guide Your Design

  • How will you correct for Doppler during a pass?
  • How do you detect sync pulses for line alignment?
  • What sample rate is needed for a clean image?

Thinking Exercise

Draw the APT spectrum: FM carrier with AM-modulated 2400 Hz subcarrier and 120 lines per minute scan structure.

The Interview Questions They’ll Ask

  1. Why is Doppler significant for LEO satellites?
  2. How do you detect line sync in APT?
  3. Why is APT considered “analog”?
  4. What limits the image resolution?

Hints in Layers

  1. FM demodulate to baseband audio.
  2. Extract the 2400 Hz subcarrier envelope.
    from scipy.signal import hilbert
    env = np.abs(hilbert(audio))
    
  3. Detect sync pulses to align scan lines.
  4. Resample to a fixed line length.
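Steps 3-4 hinge on a robust sync detector. A sketch (the 0.5 threshold factor and the square-wave template are illustrative; APT sync A is a 1040 Hz pulse train): correlate against the template, then greedily keep peaks at least one line period apart.

```python
import numpy as np

def find_syncs(lum, template, min_gap):
    """Indices where the sync template best matches, peaks >= min_gap apart."""
    # zero-mean both sides so the correlation measures shape, not brightness
    score = np.correlate(lum - lum.mean(), template - template.mean(), mode="valid")
    thresh = 0.5 * score.max()            # illustrative adaptive threshold
    keep = []
    for i in np.argsort(score)[::-1]:     # strongest candidates first
        if score[i] < thresh:
            break
        if all(abs(int(i) - j) >= min_gap for j in keep):
            keep.append(int(i))
    return sorted(keep)
```

With a 4160 Hz luminance stream, a min_gap just under half the nominal line spacing (0.5 s per line pair, 2 lines per second) rejects duplicate hits from the template's own periodicity while tolerating clock error.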

Books That Will Help

Topic | Book | Chapter
FM Demod | “Understanding Digital Signal Processing” (Lyons) | Ch. 13
Signal processing | “Digital Signal Processing” (Smith) | Ch. 23

Common Pitfalls & Debugging

Problem: Slanted or skewed image

  • Why: Sampling clock mismatch
  • Fix: Resample to correct line rate

Problem: Faint image

  • Why: Low SNR or weak pass
  • Fix: Improve antenna or use higher elevation passes

Problem: Lines are scrambled or repeated

  • Why: Sync pulse detector thresholds are wrong
  • Fix: Adjust threshold and enforce minimum line spacing

Definition of Done

  • Decode at least one full NOAA APT image
  • Sync pulses correctly align lines
  • Image has recognizable land/sea features

Project 8: The Beeping Relic (POCSAG Pager Decoder)

  • Main Programming Language: Python
  • Alternative Programming Languages: C
  • Coolness Level: Level 4 - Legacy Systems Live
  • Business Potential: 4/10
  • Difficulty: Level 4 (Advanced)
  • Knowledge Area: FSK demod, BCH decoding
  • Software or Tool: numpy
  • Main Book: “Digital Communications” (Proakis)

What you’ll build: A decoder that extracts and prints POCSAG messages.

Why it teaches SDR: It introduces multi-rate FSK, sync word detection, and BCH error correction.

Core challenges you’ll face:

  • Detecting correct baud rate (512/1200/2400)
  • Sync word alignment
  • BCH correction implementation

Real World Outcome

$ python pocsag.py --freq 929.6625M
[POCSAG] RIC: 123456 | MSG: "CALL BACK"
[POCSAG] Baud: 1200 | Mode: Alphanumeric | Errors corrected: 2

The Core Question You’re Answering

“How do you recover text from a low-bitrate FSK paging signal?”

Concepts You Must Understand First

  1. FSK demodulation and slicing
    • How do you choose a decision threshold in noisy data?
    • Book Reference: “Understanding Digital Signal Processing” (Lyons) - Ch. 12
  2. POCSAG framing
    • What is the sync word and how is it detected?
    • Book Reference: “Digital Communications” (Proakis) - Ch. 8
  3. BCH error correction
    • How many errors can BCH(31,21) correct?
    • Book Reference: “Digital Communications” (Proakis) - Ch. 8
  4. Baud rate detection
    • How do you estimate 512/1200/2400 bps from zero crossings?
    • Book Reference: “Understanding Digital Signal Processing” (Lyons) - Ch. 12

Questions to Guide Your Design

  • Which baud rate is in use and how will you detect it?
  • How will you align to the sync word?
  • How do you implement BCH correction efficiently?

Thinking Exercise

Simulate FSK with two tones and add noise. What does the discriminator output look like?

The Interview Questions They’ll Ask

  1. Why are multiple baud rates used?
  2. What does BCH correct that CRC cannot?
  3. How does FSK differ from PSK in robustness?
  4. Why is preamble detection critical?

Hints in Layers

  1. Use an FM discriminator to recover baseband.
    baud_est = estimate_baud_from_zero_crossings(fsk)
    
  2. Detect the POCSAG sync word to align frames.
  3. Use BCH tables to correct errors in each codeword.
  4. Decode ASCII or numeric messages.
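The `estimate_baud_from_zero_crossings` helper referenced in hint 1 can be sketched as follows (the 10th-percentile heuristic is illustrative): the shortest gap that occurs often between discriminator zero crossings approximates one symbol period, and the result is snapped to the nearest legal POCSAG rate.

```python
import numpy as np

def estimate_baud_from_zero_crossings(disc, fs, candidates=(512, 1200, 2400)):
    """Snap the observed symbol rate to the nearest POCSAG baud rate."""
    x = disc - np.mean(disc)
    signs = np.signbit(x).astype(np.int8)
    crossings = np.flatnonzero(np.diff(signs))    # sample index of each transition
    if len(crossings) < 2:
        return None
    gaps = np.diff(crossings)
    symbol_samples = np.percentile(gaps, 10)      # shortest common gap ~ 1 symbol
    return min(candidates, key=lambda b: abs(b - fs / symbol_samples))
```

Because transition gaps are integer multiples of the symbol period, a low percentile lands on the single-symbol gap even when long runs of identical bits dominate the stream.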

Books That Will Help

Topic | Book | Chapter
FSK | “Understanding Digital Signal Processing” (Lyons) | Ch. 12
Error correction | “Digital Communications” (Proakis) | Ch. 8

Common Pitfalls & Debugging

Problem: Garbled messages

  • Why: Wrong baud rate
  • Fix: Auto-detect symbol rate by measuring zero crossings

Problem: No sync found

  • Why: Low SNR or wrong frequency
  • Fix: Improve SNR or adjust center frequency

Problem: Random characters or partial messages

  • Why: Wrong baud rate or bit alignment
  • Fix: Re-run baud detection and verify sync alignment

Definition of Done

  • Correctly detect baud rate
  • Decode at least one message with valid BCH
  • Output decoded text to terminal

Project 9: The Mobile Whisper (GSM BCCH Decoder - Education Only)

  • Main Programming Language: C/Python
  • Alternative Programming Languages: Rust
  • Coolness Level: Level 5 - Cellular Baseband Insight
  • Business Potential: 6/10 (research tools)
  • Difficulty: Level 5 (Master)
  • Knowledge Area: GMSK bursts, TDMA framing, convolutional coding
  • Software or Tool: numpy, custom Viterbi
  • Main Book: “Digital Communications” (Proakis)

What you’ll build: A decoder that extracts GSM broadcast control channel (BCCH) data without decryption.

Why it teaches SDR: GSM is a full-stack digital system with burst timing, training sequences, and coding.

Core challenges you’ll face:

  • Burst synchronization and training sequence correlation
  • GMSK demodulation with timing recovery
  • Convolutional decoding and deinterleaving

Real World Outcome

$ ./gsm_sniffer --arfcn 45 --band 900
[GSM] Cell ID: 0x1A2B | LAC: 0x03F2 | MCC/MNC: 310/260
[GSM] BCCH ARFCN: 45 | BSIC: 12 | RXLEV: -63 dBm

The Core Question You’re Answering

“How do you synchronize to a TDMA burst and recover control channel data?”

Concepts You Must Understand First

  1. GMSK at 270.833 ksym/s (GSM symbol rate)
    • Why is the GSM symbol rate tied to 13 MHz clocking?
    • Book Reference: “Digital Communications” (Proakis) - Ch. 5
  2. TDMA burst timing + training sequences
    • How do you detect the training sequence in a burst?
    • Book Reference: “Digital Communications” (Proakis) - Ch. 7
  3. Convolutional coding + Viterbi
    • What is the constraint length for GSM BCCH coding?
    • Book Reference: “Digital Communications” (Proakis) - Ch. 8
  4. Channel spacing and ARFCN mapping
    • How does 200 kHz spacing affect your capture bandwidth?
    • Book Reference: “Software-Defined Radio for Engineers” - Ch. 8
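Concept 4's ARFCN mapping is worth having as a helper. A sketch for the primary GSM 900 band only (E-GSM, DCS 1800, and the US bands use different formulas; verify against TS 45.005 before relying on it):

```python
def gsm900_downlink_mhz(arfcn):
    """Primary GSM 900: uplink = 890 + 0.2 * ARFCN MHz for ARFCN 1..124,
    downlink 45 MHz above the uplink."""
    if not 1 <= arfcn <= 124:
        raise ValueError("ARFCN outside the primary GSM 900 band")
    return 890.0 + 0.2 * arfcn + 45.0
```

Under this formula the example capture at ARFCN 45 sits at 944.0 MHz downlink, which is where you would center the SDR.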

Questions to Guide Your Design

  • How will you locate the GSM frequency and ARFCN?
  • How will you find burst timing and training sequence?
  • How will you implement or reuse a Viterbi decoder?

Thinking Exercise

Look at a GSM time slot diagram and explain why timing precision matters.

The Interview Questions They’ll Ask

  1. What is the purpose of the training sequence in GSM?
  2. Why does GSM use TDMA?
  3. What is the difference between BCCH and traffic channels?
  4. How does convolutional coding improve reliability?

Hints in Layers

  1. Capture a wideband IQ file around a known GSM carrier.
  2. Correlate with the training sequence to find burst timing.
    idx = np.argmax(np.abs(np.correlate(burst, training_seq, mode="valid")))
    
  3. Demodulate GMSK and extract bursts.
  4. Apply deinterleaving and Viterbi decoding.

Books That Will Help

Topic | Book | Chapter
GSM PHY | “Digital Communications” (Proakis) | Ch. 5-7
Coding | “Digital Communications” (Proakis) | Ch. 8

Common Pitfalls & Debugging

Problem: Wrong timing, no valid bursts

  • Why: Incorrect sample rate or frequency offset
  • Fix: Calibrate PPM and resample precisely

Problem: Viterbi output nonsense

  • Why: Wrong interleaving or soft-decision scaling
  • Fix: Verify GSM burst format and scaling

Problem: Decodes work in file replay but not live

  • Why: Clock drift or sample rate mismatch
  • Fix: Apply PPM correction and resample to exact GSM rate

Definition of Done

  • Extract BCCH frames
  • Decode MCC/MNC and cell ID
  • Document all legal and ethical constraints

Project 10: The Space Compass (GPS L1 C/A Tracker)

  • Main Programming Language: C
  • Alternative Programming Languages: Python (slower)
  • Coolness Level: Level 5 - Signals Below the Noise Floor
  • Business Potential: 7/10 (navigation tools)
  • Difficulty: Level 5 (Master)
  • Knowledge Area: PRN correlation, acquisition, tracking loops
  • Software or Tool: FFT libraries, SIMD (optional)
  • Main Book: “Understanding Digital Signal Processing” (Lyons)

What you’ll build: A GPS L1 C/A acquisition and tracking pipeline that identifies PRN, code phase, and Doppler.

Why it teaches SDR: GPS signals are below the noise floor. Decoding them requires correlation, Doppler search, and tracking loops.

Core challenges you’ll face:

  • Efficient PRN generation
  • Coherent and non-coherent integration
  • DLL/PLL tracking stability

Real World Outcome

$ ./gps_acquire --iq gps_l1.iq --fs 5.0M
[GPS] PRN 3 acquired | Doppler: -2450 Hz | Code phase: 512
[GPS] PRN 19 acquired | Doppler: +1020 Hz | Code phase: 184
[GPS] C/N0: 34 dB-Hz | Tracking: stable

The Core Question You’re Answering

“How do you detect a signal that is hidden below the noise floor?”

Concepts You Must Understand First

  1. GPS L1 frequency (1575.42 MHz) and C/A code rate (1.023 Mcps)
    • Why is the C/A code length 1023 chips?
    • Book Reference: “Software-Defined Radio for Engineers” - Ch. 9
  2. Correlation and processing gain
    • How does coherent integration improve detection?
    • Book Reference: “Understanding Digital Signal Processing” (Lyons) - Ch. 10
  3. Acquisition search across Doppler bins
    • What Doppler range should you search at L1?
    • Book Reference: “Digital Communications” (Proakis) - Ch. 6
  4. Tracking loops (DLL/PLL)
    • How do DLL and PLL interact during tracking?
    • Book Reference: “Digital Communications” (Proakis) - Ch. 6

Questions to Guide Your Design

  • What Doppler range and bin spacing will you search?
  • How long will you integrate to improve SNR?
  • How will you generate PRN codes efficiently?

Thinking Exercise

Estimate the processing gain for a 1 ms correlation window.
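A worked version of the estimate: 1 ms of coherent integration spans exactly one C/A code period of 1023 chips, so the correlator collapses 1023 chips into one decision statistic:

```python
import math

chips = 1023                          # one C/A code period fits in 1 ms
gain_db = 10 * math.log10(chips)      # processing gain of the correlation
print(f"processing gain ≈ {gain_db:.1f} dB")

# Doubling coherent integration to 2 ms buys another 3 dB, but coherent
# time is eventually limited by the 50 bps nav-data bit edges
# (one data bit lasts 20 ms) and by residual Doppler.
```

Roughly 30 dB of gain is what lets a signal sitting ~20 dB below the noise floor produce a clearly detectable correlation peak.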

The Interview Questions They’ll Ask

  1. Why is GPS below the noise floor?
  2. What is the purpose of a delay-lock loop?
  3. How does Doppler affect code phase?
  4. Why is coherent integration limited?

Hints in Layers

  1. Mix the signal to baseband and apply a narrow filter.
  2. Generate PRN codes and perform FFT-based correlation.
    prn = generate_ca_code(prn_id)  # 1023 chips
    
  3. Search Doppler bins and look for correlation peaks.
  4. Once acquired, switch to tracking loops.
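The `generate_ca_code` helper from Hint 2 can be fleshed out with the two 10-stage LFSRs that form the C/A Gold codes (G1: 1 + x^3 + x^10; G2: 1 + x^2 + x^3 + x^6 + x^8 + x^9 + x^10), plus the FFT-based circular correlation for the code-phase search in Hint 3. Only a few PRNs' G2 tap pairs are listed here; the full 32-satellite table is in the GPS interface specification (IS-GPS-200):

```python
import numpy as np

# G2 output-tap pairs for a few PRNs (full table in IS-GPS-200).
G2_TAPS = {1: (2, 6), 2: (3, 7), 3: (4, 8), 4: (5, 9)}

def generate_ca_code(prn_id):
    """Return the 1023-chip C/A code for `prn_id` as +/-1 values."""
    t1, t2 = G2_TAPS[prn_id]
    g1 = [1] * 10                      # both LFSRs start all-ones
    g2 = [1] * 10
    code = []
    for _ in range(1023):
        chip = g1[9] ^ g2[t1 - 1] ^ g2[t2 - 1]
        code.append(1 - 2 * chip)      # map bit 0 -> +1, bit 1 -> -1
        g1 = [g1[2] ^ g1[9]] + g1[:9]                          # 1+x^3+x^10
        g2 = [g2[1] ^ g2[2] ^ g2[5] ^ g2[7] ^ g2[8] ^ g2[9]] + g2[:9]
    return np.array(code, dtype=float)

def find_code_phase(rx, code):
    """Circular cross-correlation via FFT; returns the lag of the peak."""
    corr = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(code)))
    return int(np.argmax(np.abs(corr)))

code = generate_ca_code(1)
rx = np.roll(code, 512)                # pretend the code arrives 512 chips late
print(find_code_phase(rx, code))       # -> 512
```

In a real acquisition you repeat this correlation for every Doppler bin (the received signal is mixed by each candidate Doppler before correlating) and pick the largest peak over the whole code-phase x Doppler grid.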

Books That Will Help

Topic   | Book                                              | Chapter
GPS SDR | “Software-Defined Radio for Engineers”            | Ch. 9
DSP     | “Understanding Digital Signal Processing” (Lyons) | Ch. 10

Common Pitfalls & Debugging

Problem: No correlation peaks

  • Why: Incorrect sample rate or IF
  • Fix: Verify capture settings and center frequency

Problem: Peaks but unstable tracking

  • Why: Loop bandwidth poorly chosen (too narrow loses lock under Doppler dynamics; too wide lets noise through)
  • Fix: Tune DLL/PLL parameters

Problem: Wrong PRN reported intermittently

  • Why: False peaks due to short integration time
  • Fix: Increase coherent integration or average non-coherently

Definition of Done

  • Acquire at least one PRN in recorded data
  • Report Doppler and code phase correctly
  • Demonstrate stable tracking for >10 seconds

Project 11: Universal SDR Console (Capstone)

  • Main Programming Language: Python (UI) + C or Rust for hot DSP loops
  • Alternative Programming Languages: C++, Rust, Julia
  • Coolness Level: Level 5 - RF Workbench
  • Business Potential: 8/10 (spectrum monitoring, research tooling)
  • Difficulty: Level 5 (Master+)
  • Knowledge Area: System integration, streaming DSP, UX for signal analysis
  • Software or Tool: numpy, pyqtgraph, SoapySDR, sqlite (optional)
  • Main Book: “Clean Architecture” (Martin)

What you’ll build: A unified SDR application with a live waterfall, click-to-tune receivers, modular demodulators (AM/FM), and protocol decoders (ADS-B, AIS, RDS, NOAA APT, GPS). The console persists sessions, logs decoded frames, and supports replay from IQ files.

Why it teaches SDR: Integration is the hardest part. You must synchronize multi-rate pipelines, share buffers across decoders, manage CPU and latency, and design a UI that makes RF data understandable. This project forces you to build a real SDR workstation, not just isolated scripts.

Core challenges you’ll face:

  • Building a robust multi-rate DSP graph with shared IQ buffers
  • Keeping real-time UI responsive while running heavy decoders
  • Designing a clean plugin API for demodulators and decoders

Real World Outcome

What you will see:

  1. A full-screen waterfall with click-to-tune and marker readouts.
  2. Live audio output for AM/FM stations with de-emphasis control.
  3. Live decoder panels: ADS-B aircraft list, AIS ship table, RDS text, NOAA image, GPS PRN tracker.
  4. A session log with timestamps, SNR, CFO, and CRC status.

Command Line Outcome Example:

$ ./sdr_console --device soapy=rtlsdr --fs 2.4M --center 1090M
[UI] Waterfall running @ 30 FPS | RBW: 585.9 Hz
[RX] Center: 1090.000 MHz | Gain: 34 dB | PPM: -1.2
[ADS-B] 27 frames/min | CRC ok: 24 | CRC fail: 3
[AIS] Disabled (out of band)
[RDS] Disabled (out of band)

The Core Question You’re Answering

“How do you design one coherent SDR workstation that can tune, demodulate, and decode many signals without collapsing under real-time constraints?”

Answering this teaches you how to manage streaming data, time-sensitive DSP, and user interaction at the same time. It is the difference between a demo and a tool you can rely on.

Concepts You Must Understand First

  1. Multi-rate DSP graphs
    • How do you split a wideband stream into multiple narrowband pipelines?
    • Where do you decimate to avoid wasting CPU?
    • Book Reference: “Understanding Digital Signal Processing” (Lyons) - Ch. 10
  2. Real-time scheduling and buffering
    • What happens when a decoder falls behind?
    • How do you avoid buffer overruns and underruns?
    • Book Reference: “Computer Systems: A Programmer’s Perspective” - Ch. 2, 9
  3. UI instrumentation for DSP
    • What metrics help you debug in real time (SNR, CFO, symbol timing)?
    • How do you visualize confidence vs raw data?
    • Book Reference: “Clean Architecture” (Martin) - Ch. 10-12
  4. Plugin architecture
    • How do you register new decoders without editing core code?
    • How do you pass configuration and state safely?
    • Book Reference: “Design Patterns” (GoF) - Ch. 3-5
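The "translate, filter, decimate" step from concept 1 can be sketched with plain NumPy (`channelize` is my own name; a production pipeline would use a polyphase filter bank so it never computes samples it is about to discard):

```python
import numpy as np

def channelize(iq, fs, f_offset, decim, num_taps=129):
    """Shift the sub-band at `f_offset` Hz to baseband, low-pass filter,
    and keep every `decim`-th sample. Returns (samples, new sample rate)."""
    n = np.arange(len(iq))
    mixed = iq * np.exp(-2j * np.pi * f_offset * n / fs)    # frequency shift
    # Windowed-sinc low-pass with cutoff at the new Nyquist limit
    cutoff = 0.5 / decim
    t = np.arange(num_taps) - (num_taps - 1) / 2
    taps = np.sinc(2 * cutoff * t) * np.hamming(num_taps)
    taps /= taps.sum()                                       # unity DC gain
    filtered = np.convolve(mixed, taps, mode="same")
    return filtered[::decim], fs / decim

# e.g. carve a 240 kHz channel out of a 2.4 Msps capture (illustrative):
iq = np.exp(2j * np.pi * 300e3 * np.arange(4096) / 2.4e6)  # tone at +300 kHz
narrow, new_fs = channelize(iq, fs=2.4e6, f_offset=300e3, decim=10)
print(new_fs)  # 240000.0
```

Each decoder then runs at its own modest rate: the waterfall can keep the full 2.4 Msps stream while an FM demodulator consumes the 240 ksps branch, which is the CPU-saving structure concept 1 is pointing at.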

Questions to Guide Your Design

  1. Pipeline topology
    • What is the minimal set of sample rates you need to support all decoders?
    • Where do you place frequency translation and decimation?
  2. Performance and latency
    • How will you measure DSP CPU cost per block?
    • What happens if the waterfall falls behind the decoder?
  3. Decoder integration
    • How will you normalize outputs so different decoders look consistent?
    • How will you log and replay decoded frames?

Thinking Exercise

Design a pipeline that simultaneously decodes ADS-B (1090 MHz) and plays FM audio (100 MHz) from a single wideband capture. What sample rate would that single capture need, and is it practical with your hardware? How would you decimate and split the stream, or restructure the design if one capture cannot cover both bands? Sketch the graph before writing code.

The Interview Questions They’ll Ask

  1. How would you architect an SDR app to avoid UI stalls under heavy DSP load?
  2. What is the difference between a pull-based and push-based DSP pipeline?
  3. How do you normalize power measurements across different sample rates?
  4. How would you add a new decoder without touching existing code?
  5. What metrics would you log to debug failed decodes in production?

Hints in Layers

Hint 1: Separate threads early

# UI thread: render only
# DSP thread: read IQ, process, emit events

Hint 2: Use ring buffers between stages

rb = RingBuffer(size=1<<20)  # power of 2 for fast modulo

Hint 3: Standardize decoder outputs

event = {"ts": ts, "snr": snr, "type": "adsb", "payload": msg}

Hint 4: Build a replay mode

$ ./sdr_console --input capture.iq --replay 1.0
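The ring buffer from Hint 2 can be sketched as a single-producer/single-consumer queue. This toy `RingBuffer` (my own class, not a library API) uses a lock for clarity; a real-time pipeline between the DSP and UI threads would typically use a lock-free head/tail index pair:

```python
import threading

class RingBuffer:
    """SPSC byte ring buffer. A power-of-two capacity lets indices
    wrap with a cheap bit-mask instead of a modulo division."""

    def __init__(self, size=1 << 20):
        assert size & (size - 1) == 0, "size must be a power of two"
        self.buf = bytearray(size)
        self.mask = size - 1
        self.head = 0                  # next write position (producer)
        self.tail = 0                  # next read position (consumer)
        self.lock = threading.Lock()

    def write(self, data):
        """Append bytes; returns False (drop) if there is no room."""
        with self.lock:
            if len(data) > len(self.buf) - (self.head - self.tail):
                return False           # full: caller decides to drop or block
            for b in data:
                self.buf[self.head & self.mask] = b
                self.head += 1
            return True

    def read(self, n):
        """Remove and return up to `n` buffered bytes."""
        with self.lock:
            n = min(n, self.head - self.tail)
            out = bytes(self.buf[(self.tail + i) & self.mask] for i in range(n))
            self.tail += n
            return out
```

Returning False on overflow (rather than blocking) is the policy decision that keeps the DSP thread from ever stalling the radio: it is usually better to drop a block and log it than to overrun the device buffer.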

Books That Will Help

Topic             | Book                                              | Chapter
Streaming systems | “Computer Systems: A Programmer’s Perspective”    | Ch. 2, 9
Architecture      | “Clean Architecture” (Martin)                     | Ch. 10-12
Concurrency       | “Operating Systems: Three Easy Pieces”            | Ch. 26-30
DSP integration   | “Understanding Digital Signal Processing” (Lyons) | Ch. 10-13

Common Pitfalls & Debugging

Problem: UI freezes when decoders start

  • Why: DSP running on the UI thread
  • Fix: Split UI and DSP threads; use message queues
  • Quick test: Disable decoders; UI should stay at target FPS

Problem: Waterfall looks fine but decoders fail

  • Why: Mixed sample rate assumptions between modules
  • Fix: Define a shared metadata header for each stream
  • Quick test: Log sample rate and center frequency at every stage

Problem: Audio drops or stutters

  • Why: Buffer underruns or resampling glitches
  • Fix: Add a small jitter buffer and verify resampler state
  • Quick test: Measure buffer fill level over time

Definition of Done

  • Live waterfall updates at >= 20 FPS while at least one decoder runs
  • Click-to-tune works across AM, FM, and ADS-B without restarting
  • Decoder outputs are normalized into a shared event schema
  • Session logs include SNR, CFO, and CRC status for every frame