Software-Defined Radio: Principles and Applications
Travis Collins - DSP Online Conference 2021 - Duration: 49:21
Software-defined radio (SDR) has transitioned from a primarily research topic to a common tool used by practitioners across many fields of RF. This shift has been made possible by hardware integration, economics, and the growth of processing power, which together have made SDR devices accessible to the masses while providing a high configuration ceiling for advanced users.
In this talk, we will discuss the evolution of SDR and the key driving forces and methodologies behind both the hardware and software ecosystems. We will provide insights into the philosophies of current and historical SDR architectures, including several examples and demos with modern devices such as integrated transceivers and RF data converters. Throughout, the talk will focus on the usage models and practical workflows that have been developed over the years to leverage the flexibility of SDRs, touching upon and connecting the complex tooling for FPGAs, embedded processors, and high-performance x86 used in applications like communications, radar, and instrumentation. Finally, we will look ahead to the next frontier of SDR, Direct-RF, and its related challenges.
In general, this talk will provide a solid foundation for those new to SDR and context on the next generation of solutions for seasoned professionals. It will also connect the dots across the sea of software and hardware that has flourished around SDR over the last few decades.
This guide was created with the help of AI, based on the presentation's transcript. Its goal is to give you useful context and background so you can get the most out of the session.
What this presentation is about and why it matters
This talk surveys the practical landscape of modern software-defined radio (SDR): how radios moved from analog boxes and offline analysis to highly integrated transceivers, how the software and hardware pieces fit together, and what the next generation — "direct‑RF" — looks like. For engineers in signal processing or RF systems, the subject matters because SDR blurs traditional boundaries between analog RF front ends, high‑speed data converters (ADCs/DACs), FPGAs, and host software. Understanding those tradeoffs is essential when you design a receiver or transmitter, choose hardware for a project, or move algorithms from simulation into real-time embedded systems.
Who will benefit the most from this presentation
- DSP engineers who are starting to work with real RF hardware and need to map theory to practice.
- RF engineers who want to understand how digital techniques (calibration, digital downconversion, etc.) change front‑end design.
- Students and researchers using SDR platforms (PlutoSDR, USRP, BladeRF, etc.) for prototyping communications or radar systems.
- Embedded/Firmware engineers who must partition work across FPGAs, microcontrollers, and host processors.
- Hobbyists who want to move beyond GUI listeners to building reproducible waveforms and experiments.
What you need to know
This talk assumes a basic grounding in signals and systems; the following concepts will help you get more from the presentation:
- Frequency translation / mixing: mixing shifts signal spectra. For real mixers, $\cos(\omega t)\cos(\omega_0 t)=\tfrac{1}{2}[\cos((\omega+\omega_0)t)+\cos((\omega-\omega_0)t)]$, so a single multiplication creates sum and difference components. Complex mixing via $e^{j\omega_0 t}$ produces a single-sided shift in the complex baseband view (see the first sketch after this list).
- I/Q sampling and complex baseband: zero‑IF receivers use separate I (in‑phase) and Q (quadrature) channels to preserve phase. A complex sample represents amplitude and phase and lets digital algorithms demodulate modern waveforms.
- Receiver architectures: know the tradeoffs between superheterodyne (one or more analog IF stages with image filtering), low‑IF/zero‑IF (single conversion into low or baseband with digital correction), and direct‑RF (sample at RF, move all filtering and mixing into digital domain).
- ADC/DAC performance limits: sampling rate, effective number of bits (ENOB), input bandwidth, and dynamic range determine what you can digitize. Direct‑RF pushes ADC sampling into gigasamples per second and increases processing needs (a back-of-the-envelope sketch follows the list).
- Aliasing and Nyquist zones: aliasing is usually undesirable, but direct‑RF systems sometimes exploit higher Nyquist zones intentionally. Understand how the sampling rate places signals into folded frequency bins (see the Nyquist-zone sketch after this list).
- IQ imbalance, DC offsets, LO leakage: nonidealities in mixers and converters produce images and bias terms. Modern transceivers use on‑chip calibration, digital correction, or state machines to mitigate those effects (a blind digital-correction sketch follows the list).
- Digital downconversion and decimation: after a wideband ADC you typically use digital mixers, filters, and decimators to extract narrowband signals and reduce the data rate sent to the host (a minimal DDC sketch follows the list).
- Software and tooling: high‑level prototyping (GNU Radio, MATLAB, Python) vs. real‑time embedded implementations (FPGA HDL, HLS, generated IP). Know when you need precise timing (FPGA) versus algorithm exploration (host).
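The product-to-sum identity above is easy to verify numerically. Below is a minimal NumPy sketch (not from the talk; the sample rate and tone frequencies are arbitrary illustration values) that mixes a tone with a real LO and with a complex LO and reports where the spectral peaks land.

```python
# A minimal sketch (not from the talk): real vs. complex mixing, checked with FFTs.
import numpy as np

fs = 1_000_000                        # 1 MS/s (arbitrary)
t = np.arange(1000) / fs              # 1000 samples -> 1 kHz FFT resolution
f_sig, f_lo = 100_000, 250_000        # hypothetical signal and LO frequencies

# Real mixing: cos * cos produces sum and difference components.
real_mixed = np.cos(2 * np.pi * f_sig * t) * np.cos(2 * np.pi * f_lo * t)

# Complex mixing: a complex tone times exp(j*w0*t) is shifted one way only.
complex_mixed = np.exp(2j * np.pi * f_sig * t) * np.exp(2j * np.pi * f_lo * t)

def dominant_freqs_khz(x, count):
    """Frequencies (kHz) of the strongest FFT bins of x."""
    spec = np.abs(np.fft.fft(x))
    freqs = np.fft.fftfreq(len(x), d=1 / fs) / 1e3
    return sorted(np.round(freqs[np.argsort(spec)[::-1][:count]]))

print(dominant_freqs_khz(real_mixed, 4))     # [-350, -150, 150, 350] kHz
print(dominant_freqs_khz(complex_mixed, 1))  # [350] kHz -- single-sided shift
```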
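As a back-of-the-envelope companion to the converter-limits bullet, here is a short sketch using the standard ideal-SNR formula ($6.02N + 1.76$ dB) and a raw data-rate calculation. The 4 GS/s, 9-ENOB, 16-bit figures are hypothetical, not a specific device.

```python
# Back-of-the-envelope sketch (textbook formulas, hypothetical converter numbers):
# what ENOB and sample rate imply for dynamic range and raw data rate.
def ideal_snr_db(enob_bits: float) -> float:
    """Ideal quantization-limited SNR for a full-scale sine: 6.02*N + 1.76 dB."""
    return 6.02 * enob_bits + 1.76

def raw_data_rate_gbps(fs_gsps: float, bits_per_sample: int, channels: int = 1) -> float:
    """Raw converter output rate before any decimation, in Gbit/s."""
    return fs_gsps * bits_per_sample * channels

# Hypothetical direct-RF ADC: 4 GS/s, ~9 ENOB, carried as 16-bit samples.
print(ideal_snr_db(9))             # ~55.9 dB
print(raw_data_rate_gbps(4, 16))   # 64 Gbit/s per channel to move or reduce on-chip
```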
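For the Nyquist-zone bullet, this small sketch computes which zone an RF carrier falls in and where it folds to after sampling. The zone-numbering convention (zones counted from 1, zone k spanning (k−1)·fs/2 up to k·fs/2) and the 1.8 GHz / 1 GS/s example values are assumptions for illustration.

```python
# Sketch (assumed conventions): Nyquist zone and folded frequency after sampling.
def nyquist_zone(f_in_hz: float, fs_hz: float) -> int:
    """Zones are numbered from 1; zone k spans (k-1)*fs/2 up to k*fs/2."""
    return int(f_in_hz // (fs_hz / 2)) + 1

def folded_frequency(f_in_hz: float, fs_hz: float) -> float:
    """Apparent frequency in the first Nyquist zone (0 .. fs/2) after sampling."""
    f = f_in_hz % fs_hz                      # fold by the sample rate
    return fs_hz - f if f > fs_hz / 2 else f

# Example: a 1.8 GHz carrier sampled at 1.0 GS/s (hypothetical direct-RF case).
fs, f_rf = 1.0e9, 1.8e9
print(nyquist_zone(f_rf, fs))                    # 4 (fourth Nyquist zone)
print(folded_frequency(f_rf, fs) / 1e6, "MHz")   # 200.0 MHz apparent frequency
```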
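For the IQ-imbalance and DC-offset bullet, here is one purely digital, blind correction sketch based on Gram-Schmidt orthogonalization of the I and Q channels. This is an assumed generic approach, not how any particular transceiver (e.g. the AD9361) implements its tracking loops, and it relies on the ideal signal having uncorrelated, equal-power I and Q components.

```python
# Blind digital DC-offset + IQ-imbalance correction via Gram-Schmidt
# orthogonalization (a generic assumption, not a specific chip's algorithm).
import numpy as np

def correct_dc_and_iq(x: np.ndarray) -> np.ndarray:
    """x: complex baseband samples. Returns samples with DC removed and the
    I/Q channels re-orthogonalized and power-balanced."""
    x = x - np.mean(x)                   # remove DC offset / LO-leakage term
    i, q = x.real, x.imag
    i = i / np.sqrt(np.mean(i**2))       # normalize I power
    q = q - np.mean(i * q) * i           # strip the part of Q correlated with I
    q = q / np.sqrt(np.mean(q**2))       # normalize Q power
    return i + 1j * q

# Self-test with a synthetic impaired tone (all impairment values are made up).
n = np.arange(50_000)
ideal = np.exp(2j * np.pi * 0.01 * n)                  # clean complex tone
g, phi, dc = 1.2, np.deg2rad(5.0), 0.1 + 0.05j         # gain/phase imbalance, DC
impaired = ideal.real + 1j * g * (ideal.real * np.sin(phi)
                                  + ideal.imag * np.cos(phi)) + dc
corrected = correct_dc_and_iq(impaired)
# The image tone at -0.01 cycles/sample present in `impaired` is suppressed in
# `corrected`; compare np.abs(np.fft.fft(impaired)) and np.abs(np.fft.fft(corrected)).
```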
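Finally, for the digital-downconversion bullet, a minimal host-side DDC chain (an assumed, generic flow rather than any specific FPGA IP): NCO mix, FIR low-pass, then decimation, using NumPy and SciPy.

```python
# Minimal sketch of a digital downconverter: NCO mix -> low-pass -> decimate.
import numpy as np
from scipy.signal import firwin, lfilter

fs = 100e6                 # hypothetical wideband ADC rate, 100 MS/s
f_target = 20e6            # narrowband signal of interest centered at 20 MHz
decim = 50                 # 100 MS/s -> 2 MS/s delivered to the host

def ddc(samples: np.ndarray, fs: float, f_center: float, decim: int) -> np.ndarray:
    n = np.arange(len(samples))
    nco = np.exp(-2j * np.pi * f_center / fs * n)             # digital LO (NCO)
    mixed = samples * nco                                     # shift f_center to 0 Hz
    taps = firwin(129, cutoff=0.8 * (fs / decim) / 2, fs=fs)  # anti-alias low-pass
    filtered = lfilter(taps, 1.0, mixed)
    return filtered[::decim]                                  # keep every decim-th sample

# Usage with a synthetic capture: a tone at 20.1 MHz lands at +100 kHz baseband.
t = np.arange(200_000) / fs
capture = np.cos(2 * np.pi * 20.1e6 * t)
baseband = ddc(capture, fs, f_target, decim)                  # ~2 MS/s complex stream
```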
Glossary
- Software‑Defined Radio (SDR): a radio where components that were traditionally implemented in hardware (mixers, filters, modulators) are instead implemented in software or programmable logic.
- Mixer: a nonlinear device that multiplies signals with an LO to translate frequency content (produces sum and difference frequencies).
- Local Oscillator (LO): a stable sinusoid used by mixers to shift signals in frequency.
- I/Q (Quadrature): two orthogonal channels (in‑phase and quadrature) representing a complex baseband sample.
- Zero‑IF (Direct Conversion): receiver architecture that translates the RF signal directly to baseband with I/Q demodulation.
- Superheterodyne: traditional architecture using one or more IF stages and analog filters to reject images and select channels.
- Direct‑RF: architecture that samples RF directly with very fast ADCs so frequency translation happens digitally.
- ADC / DAC: analog‑to‑digital and digital‑to‑analog converters; key specs include sampling rate and dynamic range.
- Nyquist Zone / Aliasing: the frequency folding behavior determined by sampling rate; signals above Fs/2 fold into lower bands unless handled intentionally.
- FPGA: field‑programmable gate array used for low‑latency, cycle‑accurate DSP and tight timing control in SDR systems.
Final notes
Travis Collins gives a clear, practice‑oriented tour of SDR that balances historical context, hardware realities, and software choices. If you want to bridge the gap between theory and deployable radio systems — from hobbyist experiments to production baseband racks — this presentation provides a concise roadmap. Watch it to see real examples of calibration, to better understand when to push work into FPGAs or ICs, and to get a feel for where SDR is heading with direct‑RF architectures.
The "Low IF" digital radio architectures do not suffer from IQ imbalance and DC offset issues. Those issues apply only to "Zero IF" architectures.
Zero IF is the AD9361/AD9364 architecture, which is why there was a discussion about this.
Question on slide 14 related to mixing in an FM radio - when I move the dial to change my station, is that adjusting the RF Filter, and is the local oscillator always constant?
Hello, thank you Travis for this lecture. I am interested in how quadrature tracking, RF DC, and BB DC corrections work in practice. Are those algorithms implemented in the analog or digital domain? If in the digital domain, is it done in the FPGA or in the CPU? Any guidance would be appreciated.
Cheers
Oh, yeah, and same as @napierm, I'd like the slides if you can put them out for download.
Thanks, Travis, I periodically heard about SDR's but never had the chance to look into what they were, and the little blurbs I read online, well I just had trouble understanding them. This was very clear, and the nuances to the different types - as well as connecting back to the history - was helpful.
Will you post your slides?

As a follow-up to this talk, here's a classic talk by Matt Ettus where he uses a GNU Radio model to simulate various I/Q imbalances:
https://youtu.be/PNMOwhEHE6w