Building A Tensorflow Lite Neural Network Vibration Classifier, With A Little Help From DSP
John Edwards - DSP Online Conference 2022 - Duration: 01:03:14
The key to developing an efficient vibration-mode classifier is using DSP algorithms to simplify the learning task.
The DSP functions pre-process the data so that a simpler neural network can be used for the classification.
The CNN uses a TensorFlow model trained on the supplied data; the trained model is then used to classify new data.
We will also include the code to generate and test both the TensorFlow and TensorFlow Lite models.
Once generated, we will test the TensorFlow Lite model to ensure it classifies the data as well as the floating-point model.
This guide was created with the help of AI, based on the presentation's transcript. Its goal is to give you useful context and background so you can get the most out of the session.
What this presentation is about and why it matters
This talk walks through a compact, practical workflow for classifying vibration modes of rotating machinery by combining classical digital signal processing (DSP) with a small convolutional neural network (CNN) that can be converted to TensorFlow Lite for embedded deployment. Instead of throwing raw time-domain data at a large neural net, the speaker demonstrates sensible pre-processing (windowing, FFT, magnitude and dB scaling) to expose the frequency-domain features that actually distinguish vibration signatures.
Why this matters: in real industrial and embedded systems you care about compute, memory, power, and predictability. Good DSP front-ends reduce input dimensionality and make the learning problem easier, which lets you use far smaller, lower-power models on microcontrollers. If you design or deploy condition-monitoring, predictive-maintenance, or low-power sensing systems, the practical trade-offs in this talk are directly applicable.
Who will benefit the most from this presentation
- Embedded and firmware engineers who must run ML models on microcontrollers and need to reduce MIPS and memory footprint.
- Signal processing engineers learning how to combine traditional DSP with machine learning for time-series classification.
- Data scientists or ML engineers who want to convert a floating-point model to a quantized TensorFlow Lite model and verify that accuracy survives quantization.
- Students and researchers looking for a compact, practical example linking framing, windows, FFTs, and simple CNNs for spectrogram-like inputs.
What you need to know
Basic familiarity with the following concepts will help you follow the demonstrations and the code walkthrough:
- Sampling and Nyquist: The data examples use a 16 kHz sample rate. Remember Nyquist: the useful frequency band is 0 to $F_s/2$.
- Framing: Signals are split into short overlapping or non-overlapping frames. This talk uses 256-sample frames (about 16 ms at 16 kHz). Frame length controls time vs frequency resolution.
- Frequency bin width: FFT resolution is approximately $\Delta f = F_s/N$. For $F_s=16\,000$ and $N=256$, $\Delta f = 62.5\,$Hz—this is the spacing between adjacent FFT bins.
- Windowing: Multiplying each frame by a smooth window (e.g., Blackman or Hanning) reduces spectral leakage before the FFT.
- FFT and magnitude: The FFT converts a time frame into frequency bins. For real signals you typically keep only the first $N/2$ bins (0..Nyquist).
- dB scaling: Converting magnitudes to logarithmic dB scale often makes spectral features more separable for classifiers.
- Data splitting: Train/validation/test splits and shuffling are crucial to assess generalization and to avoid data leakage.
- Model basics: The demo uses a very small CNN (a 1D convolution + dense output) and common training choices (Adam optimizer, sparse categorical loss, small number of epochs when data are abundant).
- Quantization & TFLite: Converting to an 8-bit quantized TensorFlow Lite model requires a representative dataset so the converter can compute appropriate scaling ranges; the talk shows validating the quantized model in Python.
- Evaluation: Confusion matrices, accuracy, and inspecting per-class errors help you judge how robust the system is (and whether you need more DSP, data augmentation, or a larger model).
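The framing, windowing, FFT, and dB-scaling steps listed above can be sketched as a single pre-processing function. This is a minimal NumPy illustration using the talk's stated parameters (16 kHz sample rate, 256-sample frames, Blackman window); the function name and non-overlapping framing are assumptions for the sketch, not the speaker's exact code.

```python
import numpy as np

FS = 16_000        # sample rate (Hz), as in the talk's dataset
FRAME_LEN = 256    # 256-sample frames, about 16 ms at 16 kHz

def frames_to_log_spectra(signal):
    """Split a 1-D signal into non-overlapping frames, apply a
    Blackman window to each frame, take the real-FFT magnitude,
    and convert to dB.  Returns shape (num_frames, FRAME_LEN // 2)."""
    num_frames = len(signal) // FRAME_LEN
    window = np.blackman(FRAME_LEN)
    spectra = []
    for i in range(num_frames):
        frame = signal[i * FRAME_LEN:(i + 1) * FRAME_LEN] * window
        # Keep only the first N/2 bins (0 up to Nyquist) for a real signal.
        mag = np.abs(np.fft.rfft(frame))[:FRAME_LEN // 2]
        # dB scaling; the small offset avoids log10(0).
        spectra.append(20.0 * np.log10(mag + 1e-12))
    return np.asarray(spectra)

# Frequency-bin spacing: Fs / N = 16000 / 256 = 62.5 Hz
bin_width = FS / FRAME_LEN
```

A 1 kHz test tone, for example, lands exactly in bin 16 (1000 / 62.5), which is a quick sanity check that the framing and bin-width arithmetic are consistent.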
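The shuffled train/validation/test split mentioned above can be illustrated with a small helper. This is a generic sketch, not the talk's code; the function name and the 70/15/15 default split are assumptions.

```python
import numpy as np

def split_dataset(features, labels, val_frac=0.15, test_frac=0.15, seed=0):
    """Shuffle once, then carve out train/validation/test partitions.
    Shuffling before splitting helps avoid leaking time-adjacent
    frames between partitions."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(features))
    features, labels = features[order], labels[order]
    n_val = int(len(features) * val_frac)
    n_test = int(len(features) * test_frac)
    val = (features[:n_val], labels[:n_val])
    test = (features[n_val:n_val + n_test], labels[n_val:n_val + n_test])
    train = (features[n_val + n_test:], labels[n_val + n_test:])
    return train, val, test
```

Fixing the random seed keeps the split reproducible across training runs, which matters when comparing the floating-point and quantized models on the same held-out data.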
Glossary
- Frame: A short contiguous block of samples taken from the time-series for analysis.
- Window (e.g., Blackman): A taper applied to each frame to reduce spectral leakage when computing the FFT.
- FFT (Fast Fourier Transform): Efficient algorithm to compute the discrete Fourier transform (DFT) of a frame.
- Nyquist frequency: Half the sampling rate; maximum unambiguous frequency component in the sampled signal.
- Spectrogram: A time-frequency representation made by plotting successive FFT magnitudes over time.
- Quantization: Mapping floating-point values to discrete integer levels (e.g., 8-bit) for embedded inference.
- TFLite (TensorFlow Lite): A framework for running TensorFlow models on resource-constrained devices.
- Representative dataset: Small sample of input data used by the quantizer to calibrate ranges for quantization.
- Confusion matrix: Table showing predicted vs actual class counts—useful to spot systematic errors.
- ReLU (Rectified Linear Unit): A common activation function used in hidden layers of neural networks.
Final notes — why you should watch
John Edwards presents a concise, hands-on example that connects textbook DSP to an end-to-end machine learning deployment pipeline. The value here is practical: you will see why careful DSP reduces model size and runtime cost, how to prepare and shape data for Keras, and how to validate that an 8-bit TFLite model still performs well. If you want real, deployable recipes (not just high-level theory) for vibration classification on constrained hardware, this talk is a friendly, well-structured place to start.
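The small-CNN-plus-quantization workflow described in the guide can be sketched as follows. This is an illustrative outline only, not the speaker's actual model: the layer sizes, the assumption of 4 vibration classes, and the 128-bin log-spectrum input are placeholders, and in a real run `model.fit` would be called on the training data before conversion. The representative-dataset generator here yields random arrays purely to show the converter's calibration hook; real calibration data should come from the training set.

```python
import numpy as np
import tensorflow as tf

NUM_BINS = 128     # assumption: FFT bins per frame (N/2 for N = 256)
NUM_CLASSES = 4    # assumption: number of vibration modes

# Tiny 1-D CNN: one convolution plus a dense softmax output,
# mirroring the "1D conv + dense" shape described in the talk.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_BINS, 1)),
    tf.keras.layers.Conv1D(8, 5, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# ... model.fit(train_x, train_y, ...) would run here ...

# Post-training quantization needs a representative dataset so the
# converter can calibrate activation ranges to 8-bit scales.
def representative_data():
    for _ in range(100):
        yield [np.random.rand(1, NUM_BINS, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
tflite_model = converter.convert()   # serialized flatbuffer (bytes)
```

Running the converted model through `tf.lite.Interpreter` on the held-out test set, and comparing its confusion matrix against the floating-point model's, is the validation step the talk demonstrates.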
Hi Kelly,
Thank you very much for your kind words.
Unfortunately, I'm not aware of any material of that kind; I guess because it is such a new topic of interest.
The best thing to do is search for published articles. This is a good example, although it uses Wavelets rather than the FFT - https://www.hindawi.com/journals/sv/2020/1650270/
Good luck in your search.
Best regards,
John
Please do submit any questions here and I will be glad to answer them.
I'll be online straight after the video finishes.
Source code and test datasets can be downloaded from here: https://github.com/Numerix-DSP/DSP_And_ML_Examples

Hi John, nice talk. Where would I find tutorial background information that would help with design tradeoffs?