
Deep Learning, Wireless Communications, and Signal Processing: Bridging the Gap

Dheeraj Sharma - DSP Online Conference 2023 - Duration: 31:40

The talk explores the role of deep learning in enhancing wireless communications, specifically how it helps optimize signal processing, increase efficiency, and address challenges such as signal interference and latency. Through real-world examples and forward-looking perspectives, the talk illustrates how the synergistic integration of these technologies is paving the way for the future of communication.

This guide was created with the help of AI, based on the presentation's transcript. Its goal is to give you useful context and background so you can get the most out of the session.

What this presentation is about and why it matters

This talk explores how modern deep learning methods can be applied to problems in wireless communications and classical signal processing — and how the three areas feed each other. The speaker shows concrete examples: transforming multivariate time‑series wireless measurements (Channel State Information, CSI) into image-like representations, then using image-based deep networks for human activity recognition; and drawing connections between optimization techniques used in DSP/wireless (like LMS adaptive filters) and stochastic gradient methods used in deep learning.

Why it matters: engineers working on radios, sensors, and embedded signal chains increasingly face complex, data‑driven problems where classical analytical methods alone are insufficient. Being able to convert time‑series to representations that leverage powerful computer‑vision models, and understanding the shared math behind adaptive filters and SGD, gives practical tools for higher accuracy, faster prototyping, and better system-level performance — while also highlighting the pressing need for interpretability in safety‑critical systems.

Who will benefit the most from this presentation

  • Signal processing engineers curious about machine learning applications to time‑series data from radios and sensors.
  • Wireless communications researchers and practitioners exploring sensing from Wi‑Fi or other RF signals.
  • Machine learning engineers who want practical ways to apply vision models to non‑image signals.
  • Graduate students learning the connections between optimization algorithms in DSP and modern deep learning.
  • Product engineers who need intuition about explainability when deploying ML in communication systems.

What you need to know

Background that will make the talk easier to follow:

  • Time‑series basics: sampling, multivariate sequences, stationarity vs non‑stationarity, common noise behaviors in wireless channels.
  • Channel State Information (CSI): CSI captures amplitude and phase per subcarrier/antenna and is sensitive to moving reflectors (people). Intuitively, activity alters multipath and can be sensed via CSI variations.
  • Signal ↔ Image transforms: The talk uses transforms that map a 1‑D time‑series into a 2‑D matrix so that image models can be applied. Two examples are Markov Transition Fields (MTF) and Gramian Angular Fields (GAF/GASF/GADF); a minimal pyts sketch of both follows this list.
  • Convolutional neural networks (CNNs): these exploit local spatial patterns in images. When a time series is transformed into an image, CNNs can learn patterns that represent temporal dynamics or correlations.
  • Optimization and gradients: be comfortable with the gradient‑descent update rule: $x_{\text{new}} = x_{\text{old}} - \alpha \nabla f(x_{\text{old}})$, where $\alpha$ is the learning rate. In machine learning the empirical loss often has the finite‑sum form $L(w)=\frac{1}{n}\sum_{i=1}^n l_i(w)$ and SGD approximates its gradient using mini‑batches.
  • LMS vs SGD: Least Mean Squares (LMS) is the classical adaptive filter update for minimizing mean squared error; mathematically it is a specific instance of stochastic gradient descent tailored to quadratic losses (see the LMS sketch after this list).
  • Explainable AI basics: techniques like Grad‑CAM produce heatmaps that highlight which parts of an input drove a network’s decision. Complement these with classic signal visualizations (spectrograms, histograms) to build interpretability; a Grad‑CAM sketch appears after this list.
  • Practical tooling: familiarity with Python, numpy, matplotlib, and higher‑level packages like pyts (for time‑series imaging), and a deep‑learning framework such as TensorFlow/Keras or PyTorch will help you reproduce examples.
  • Computational concerns: converting long CSI sequences to images can be memory‑intensive; tuning learning rates, mini‑batch sizes, and regularization remains important to avoid unstable training.
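
To make the signal-to-image step concrete, here is a minimal sketch using the `pyts` package mentioned above. The CSI amplitude series, its length, and the image size are illustrative placeholders, not details from the talk; in a real pipeline the array would come from a CSI capture tool.

```python
import numpy as np
from pyts.image import GramianAngularField, MarkovTransitionField

# Illustrative placeholder: one CSI amplitude time series (e.g., one subcarrier).
rng = np.random.default_rng(0)
csi_amplitude = np.abs(np.sin(np.linspace(0, 8 * np.pi, 256))
                       + 0.1 * rng.standard_normal(256))

# pyts expects a 2-D array of shape (n_samples, n_timestamps).
X = csi_amplitude.reshape(1, -1)

# Gramian Angular Summation Field (GASF) and Difference Field (GADF).
gasf = GramianAngularField(image_size=64, method="summation").fit_transform(X)
gadf = GramianAngularField(image_size=64, method="difference").fit_transform(X)

# Markov Transition Field with quantile-binned states.
mtf = MarkovTransitionField(image_size=64, n_bins=8).fit_transform(X)

print(gasf.shape, gadf.shape, mtf.shape)  # each: (1, 64, 64)
```

Each transform returns one square matrix per input series; stacking the GASF, GADF, and MTF outputs as channels gives an image-like tensor that an off-the-shelf CNN can consume, which is the general pattern the talk describes for activity recognition from CSI.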
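
The LMS/SGD connection can also be seen in a few lines of code. The sketch below is not code from the presentation; it is a generic LMS adaptive filter written explicitly as a stochastic-gradient step on the instantaneous squared error, applied to a toy channel-identification problem with made-up signals.

```python
import numpy as np

def lms_filter(x, d, num_taps=8, mu=0.01):
    """LMS adaptive filter written as SGD on the instantaneous squared error.

    For each sample n the loss is l_n(w) = 0.5 * (d[n] - w @ x_n)**2, whose
    gradient is -e[n] * x_n, so w <- w - mu * grad becomes w += mu * e[n] * x_n:
    one stochastic-gradient step on a "mini-batch" of a single sample.
    """
    w = np.zeros(num_taps)
    y = np.zeros(len(d))
    e = np.zeros(len(d))
    for n in range(num_taps - 1, len(x)):
        x_n = x[n - num_taps + 1:n + 1][::-1]  # [x[n], x[n-1], ..., x[n-M+1]]
        y[n] = w @ x_n                         # filter output
        e[n] = d[n] - y[n]                     # instantaneous error
        w = w + mu * e[n] * x_n                # the LMS / SGD update
    return w, y, e

# Toy usage (made-up signals): identify a short FIR channel from noisy observations.
rng = np.random.default_rng(1)
x = rng.standard_normal(5000)
h_true = np.array([0.6, -0.3, 0.1])
d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w_hat, _, _ = lms_filter(x, d, num_taps=3, mu=0.01)
print(w_hat)  # approaches h_true
```

With `mu` playing the role of the learning rate $\alpha$ above, the same update generalizes to mini-batch SGD on the finite-sum loss $L(w)$; only the loss function and the number of parameters change when training a deep network.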
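
Finally, a minimal Grad-CAM sketch, assuming a Keras/TensorFlow CNN with a single classification output and a 2-D convolutional layer identified by name. The function and argument names are illustrative, not from the presentation.

```python
import numpy as np
import tensorflow as tf

def grad_cam_heatmap(model, image, last_conv_layer_name, class_index=None):
    """Grad-CAM: weight the chosen conv layer's feature maps by the spatially
    averaged gradient of the class score, apply ReLU, and normalize."""
    grad_model = tf.keras.models.Model(
        inputs=model.inputs,
        outputs=[model.get_layer(last_conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        if class_index is None:
            class_index = tf.argmax(preds[0])
        class_score = preds[:, class_index]
    grads = tape.gradient(class_score, conv_out)           # d(score)/d(feature maps)
    pooled = tf.reduce_mean(grads, axis=(0, 1, 2))          # one weight per channel
    heatmap = tf.reduce_sum(conv_out[0] * pooled, axis=-1)  # weighted channel sum
    heatmap = tf.nn.relu(heatmap)                           # keep positive evidence
    return (heatmap / (tf.reduce_max(heatmap) + 1e-8)).numpy()
```

Upsampling the heatmap to the input resolution and overlaying it on the GASF/GADF/MTF image shows which regions (and hence which time intervals) drove the classification, complementing the spectrogram- and histogram-style checks mentioned above.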

Glossary

  • Deep learning: neural networks with many layers that learn hierarchical features from data; in this context used to classify or interpret signal‑derived images.
  • Signal processing: techniques to analyze, filter, and transform time‑series or frequency data (e.g., Fourier, filtering, adaptive filters).
  • Wireless communications: physical‑layer concepts and channel behaviors that govern how radio signals propagate and are received.
  • Channel State Information (CSI): per‑subcarrier amplitude/phase measurements that describe the instantaneous channel; sensitive to environment and motion.
  • Markov Transition Field (MTF): an image representation encoding transition probabilities between quantized states of a time series.
  • Gramian Angular Field (GAF): a polar‑coordinate based transform that encodes pairwise temporal correlations into a matrix (GASF/GADF variants).
  • Convolutional Neural Network (CNN): a deep model that learns spatially local filters (useful after mapping time series to images).
  • Stochastic Gradient Descent (SGD): optimization method that updates parameters using gradients from mini‑batches; central to training deep nets and related to LMS.
  • Explainable AI (XAI), Grad‑CAM: techniques that produce visual explanations (e.g., heatmaps) showing which input regions influenced a model’s output.

Final note — why you should watch

This presentation offers a practical, example‑driven bridge between familiar DSP/wireless concepts and modern deep learning practices. The speaker emphasizes both opportunities (e.g., transforming CSI to image domains for high‑accuracy classification) and pitfalls (notably interpretability). If you want a clear, hands‑on introduction to how signal processing techniques map to ML workflows — and why common optimization algorithms are essentially the same mathematical ideas wearing different names — this talk is a compact and useful guide. It’s well suited for practitioners who want actionable insights without losing sight of explainability and classical intuition.

Thomas.Schaertel
Score: 0 | 2 years ago | 1 reply

Dear Dheeraj, thank you for your interesting talk using a practical application of Markov fields and DL. I enjoyed it very much. Do you offer the source code you showed in your presentation somewhere (GitHub)? I would be very interested to evaluate your steps on my own. Thank you
/Thomas

Stephane.Boucher
Score: 0 | 2 years ago | no reply

The source code can now be downloaded in the column on the left-hand side

JohnP
Score: 0 | 2 years ago | no reply

The approach of converting RF field complexity to an image (with GASF, GADF, and MTF tools) and then using SGD and DL tools developed for image analysis is an interesting take. Your comments regarding XAI are duly noted.