Live Q&A - A Practical Guide to Audio Distortion
Samuel Fischmann - Recording Available Soon - DSP Online Conference 2024
Several across the chain! Essentially, I mean that every time you do non-linear processing you generate new intermodulation and harmonic products higher up the spectrum, getting closer to (or in many cases bouncing off) the aliasing ceiling. Putting an LPF between each non-linear process removes this new material and prevents further buildup in frequency ranges where you may not have intentionally wanted to add energy; it really is going to depend on the purpose of the signal chain. An exciter, for example, might want to keep those products! Generally, the LPF should be static, chosen according to the process.
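Here's a minimal sketch of that kind of chain (my own illustration, not code from the talk; the oversampling factor, filter order/cutoff, and the two nonlinearities are all assumptions chosen for demonstration):

```python
# Illustrative only: a 4x-oversampled chain with two nonlinear stages
# and a static LPF after each one to strip the newly created
# harmonic/intermodulation products before they feed the next stage.
import numpy as np
from scipy import signal

FS = 48_000   # base sample rate (assumed)
OS = 4        # oversampling factor (assumed)

def stage_lpf(fs_os, cutoff=20_000.0, order=8):
    # Static low-pass sized to the audible band at the oversampled rate
    return signal.butter(order, cutoff / (fs_os / 2), btype='low', output='sos')

def process(x):
    fs_os = FS * OS
    sos = stage_lpf(fs_os)
    y = signal.resample_poly(x, OS, 1)     # upsample to fs_os
    y = np.tanh(2.0 * y)                   # nonlinear stage 1 (example)
    y = signal.sosfilt(sos, y)             # LPF: remove new HF products
    y = np.clip(y, -0.5, 0.5)              # nonlinear stage 2 (example)
    y = signal.sosfilt(sos, y)             # LPF again before decimation
    return signal.resample_poly(y, 1, OS)  # back down to the base rate
```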
Audio is SO perceptual that the only way to engage with the transients is to listen. A harmonic distortion measurement will not tell you what this sounds like. For example, if you look at a graph of cubing a signal, you'll notice that the slope gets steeper as amplitude values go up. This means that peaks/transients will be emphasized. A cubic clipper scales and subtracts this value, which means that peaks/transients will be de-emphasized, which you can also see in the graph. In BOTH cases, you'd see only a third harmonic on a harmonic distortion chart... which again shows how little this chart tells you about what something might sound like.
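If you want to see that in numbers rather than on a graph, here's a tiny sketch (my own illustration, not material from the talk; the clipper coefficients are one common "scale and subtract" choice):

```python
# Cubing and a cubic clipper both produce ONLY a third harmonic on a
# sine, yet they treat peaks in opposite ways.
import numpy as np

n = 4096
x = np.sin(2 * np.pi * 8 * np.arange(n) / n)   # exactly 8 cycles of a sine

cube = x**3                    # slope 3x^2 is steepest at peaks -> emphasized
clip = 1.5 * x - 0.5 * x**3    # scaled-and-subtracted cubic clipper:
                               # slope 1.5 - 1.5x^2 flattens near peaks

for name, y in [("cube", cube), ("clip", clip)]:
    mags = np.abs(np.fft.rfft(y)) / (n / 2)
    bins = np.nonzero(mags > 1e-9)[0]
    print(name, "energy at bins:", bins)   # both: bins 8 and 24 only
```

Identical harmonic chart, opposite transient behavior.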
I really enjoyed your presentation, especially the approach of using simpler mathematics to explain the concepts. I also liked the listening exercises presented alongside visual/waveform analysis.
There is one small issue I noticed with your presentation: You refer to plots of the spectral magnitude as 'spectrograms' repeatedly; these plots are not the same, nor do they convey the same information. Spectral plots (whether magnitude, phase, real, imaginary, or some combination thereof) convey only frequency-domain information. Spectrograms convey both time-domain and frequency-domain information about the signal, with inherent resolution tradeoffs due to the inverse proportionality of time and frequency.
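To make the distinction concrete, a quick sketch (my own illustration; the chirp signal and STFT parameters are arbitrary):

```python
# A spectral magnitude plot collapses all time information,
# while a spectrogram keeps it on a time-frequency grid.
import numpy as np
from scipy import signal

fs = 8000
t = np.arange(fs) / fs                      # 1 s of audio
x = signal.chirp(t, f0=100, f1=3000, t1=1)  # frequency sweeps over time

# Spectral (magnitude) plot: one magnitude per frequency, no time axis.
mag = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1 / fs)

# Spectrogram: STFT magnitudes, with the time/frequency resolution
# tradeoff set by the window length (nperseg).
f, tt, Sxx = signal.spectrogram(x, fs, nperseg=256)
print(mag.shape, Sxx.shape)   # 1-D (frequency only) vs 2-D (frequency x time)
```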
Thanks for the feedback @RG, understood about the nomenclature! I know exactly what you mean, and I'll try to be more careful with 'spectrogram' vs something like 'spectral plot' in the future.
Thanks very much! I was one of those you mentioned who thought harmonics were the dominant type of distortion. No longer :)
I also liked your ad-lib-style remarks; they added to the enjoyment of listening to the talk.
Great! I wish I had made it clear that I also thought the same thing until I started making audio plugins and doing lots of null testing. This talk was born out of my journey learning the WHY.
Absolutely fantastic talk, and I'm only 1/2 of the way through! I've already learned so much about audio processing (I typically deal with standard comm signals). Well done!
Thanks Gary! Great to hear from somebody working on comms signals, it's really wild how different domains need such different abstractions to get what we want out of signal processing.
Thank you very much for your presentation.
Sorry if these are lame questions.
When you mention using LPF between oversampled nonlinear processes, do you mean several across a chain, or one at a specific point?
Are these LPFs implemented in tandem with spectral analysis, i.e., do they adapt dynamically as the frequencies shift in a piece of audio, or are they static?
When you say "ignore pure sine wave tests, and instead listen to what happens to transients in real material”, how would you typically engage with these transients?