Live Q&A - A Practical Guide to Audio Distortion

Samuel Fischmann - Recording Soon Available - DSP Online Conference 2024

Live Q&A with Samuel Fischmann for the talk titled A Practical Guide to Audio Distortion

RichardLyons
Score: 0 | 1 week ago | no reply

Hello Samuel. Thank you for showing the schematic of a vacuum tube amplifier. That brought back pleasant memories and warmed my heart. I once read that guitarist Keith Richards, of The Rolling Stones, prefers vacuum tube amplifiers over transistor (solid state) amplifiers. (Whenever I think of Keith Richards I always wonder why doctors and scientists don't study his body to figure out why he's still alive.)

Leandro
Score: 0 | 2 weeks ago | 1 reply

Hey Samuel,
Great talk, thanks a lot for sharing your experience! I remember back in the day I had a book called Digital Audio Effects (DAFX), and there was even a conference about it! I was amazed at the things you can do with simple math (or not so simple) over the samples.
Thanks again!

Sam FischmannSpeaker
Score: 0 | 2 weeks ago | no reply

Thanks for listening, Leandro! The DAFX conference still exists, and people submit papers every year. People like me read those papers to see if any new techniques can be applied to improve the products we make:

https://www.dafx.de/

JoaoP
Score: 0 | 2 weeks ago | 1 reply

Thank you very much for your presentation.

Sorry if these are lame questions.

When you mention using LPFs between oversampled nonlinear processes, do you mean several across a chain, or one at a specific point?

Are these LPFs implemented in tandem with spectral analysis, i.e., do they adapt dynamically as the frequencies shift in an audio piece, or are they static?

When you say "ignore pure sine wave tests, and instead listen to what happens to transients in real material", how would you typically engage with these transients?

Sam FischmannSpeaker
Score: 0 | 2 weeks ago | 1 reply

Several across the chain! Essentially, every time you do non-linear processing you create new intermodulation and harmonic products, moving closer to (or in many cases bouncing off) the aliasing ceiling. Putting an LPF between each non-linear process removes this new material and prevents further buildup in frequency ranges where you didn't intend to add energy; how aggressively you filter really depends on the purpose of the signal chain. An exciter, for example, might want to keep those products! Generally, the LPF should be static, chosen according to the process.
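As a rough sketch of that structure (illustrative code, not from the talk; the tanh saturator and filter order are my own stand-ins):

```python
import numpy as np
from scipy import signal

fs = 48_000
os_factor = 4            # oversampling factor
fs_os = fs * os_factor   # rate the nonlinear chain runs at

# Test signal: a loud 1 kHz sine at the oversampled rate
t = np.arange(fs_os) / fs_os
x = 0.9 * np.sin(2 * np.pi * 1000 * t)

def saturate(v):
    # A generic soft-clipping nonlinearity (stand-in for any waveshaper)
    return np.tanh(2.0 * v)

# Low-pass near the *original* Nyquist, so harmonics created by stage 1
# can't keep stacking up toward the oversampled Nyquist in stage 2.
sos = signal.butter(8, fs / 2, btype="low", fs=fs_os, output="sos")

y = saturate(x)             # stage 1 creates new high-frequency products
y = signal.sosfilt(sos, y)  # remove them before the next nonlinearity
y = saturate(y)             # stage 2 now sees a band-limited signal
```

In a real plugin you would of course also filter before decimating back down to the base rate; the point here is only the LPF placed between the two nonlinear stages.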

Audio is SO perceptual that the only way to engage with the transients is to listen. A harmonic distortion measurement will not tell you what this sounds like. For example, if you look at a graph of cubing a signal, you'll notice that the slope gets steeper as amplitude goes up, which means peaks/transients will be emphasized. A cubic clipper scales the signal and subtracts the cubed term, so peaks/transients are de-emphasized, which you can also see in its graph. In BOTH cases you'd see only a third harmonic on a harmonic distortion chart... which again shows how little that chart tells you about what something might sound like.
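A small sketch of that comparison (illustrative; the specific clipper coefficients 1.5 and 0.5 are one common cubic soft-clip choice, not necessarily what any given product uses):

```python
import numpy as np

def cube(v):
    return v ** 3

def cubic_clip(v):
    # Scaled signal minus the cubed term, valid for |v| <= 1
    return 1.5 * v - 0.5 * v ** 3

def slope(f, x0, h=1e-5):
    # Numerical derivative: the local gain applied around level x0
    return (f(x0 + h) - f(x0 - h)) / (2 * h)

# Cubing: local gain grows toward the peak, so transients are emphasized.
print(slope(cube, 0.1), slope(cube, 0.9))
# Cubic clipper: local gain shrinks toward the peak; transients de-emphasized.
print(slope(cubic_clip, 0.1), slope(cubic_clip, 0.9))

# Yet both, applied to a sine, put energy only at the fundamental and the
# third harmonic (sin^3 t = (3 sin t - sin 3t) / 4):
t = np.linspace(0, 1, 4096, endpoint=False)
spec = np.abs(np.fft.rfft(cube(np.sin(2 * np.pi * 8 * t))))
# Bins 8 and 24 dominate: fundamental and third harmonic only.
```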

JoaoP
Score: 0 | 2 weeks ago | no reply

Will try it out. Thank you!

RG
Score: 0 | 2 weeks ago | 1 reply

I really enjoyed your presentation, especially the approach of using simpler mathematics to explain the concepts. I also liked the listening exercises presented alongside visual/waveform analysis.

There is one small issue I noticed with your presentation: you refer to plots of the spectral magnitude as 'spectrograms' repeatedly; these plots are not the same, nor do they convey the same information. Spectral plots (whether magnitude, phase, real, imaginary, or some combination thereof) convey only frequency-domain information. Spectrograms convey both time-domain and frequency-domain information about the signal, with inherent resolution tradeoffs due to the inverse proportionality of time and frequency.
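The distinction can be sketched in a few lines of Python (illustrative only; a chirp makes the difference obvious, since its frequency content changes over time):

```python
import numpy as np
from scipy import signal

fs = 8000
t = np.arange(fs) / fs  # 1 second
# A chirp sweeps 200 Hz -> 2 kHz, so the two views differ sharply
x = signal.chirp(t, f0=200, f1=2000, t1=1.0)

# Spectral plot: one magnitude per frequency bin, time collapsed away
spectrum = np.abs(np.fft.rfft(x))               # 1-D: frequency only

# Spectrogram: magnitude per (frequency, time) cell via the STFT
f, tt, Sxx = signal.spectrogram(x, fs=fs, nperseg=256)
print(spectrum.shape, Sxx.shape)                # 1-D vs 2-D
```

The spectral plot shows broadband energy with no hint of the sweep; the spectrogram shows the rising ridge in time.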

Sam FischmannSpeaker
Score: 0 | 2 weeks ago | no reply

Thanks for the feedback @RG, understood about the nomenclature! I know exactly what you mean, and I'll try to be more careful with 'spectrogram' vs something like 'spectral plot' in the future.

SlightlyChaotic
Score: 0 | 2 weeks ago | 1 reply

Thanks very much! I was one of those you mentioned who thought harmonics were the dominant type of distortion. No longer :)
I also liked your ad-lib type remarks; it added to the enjoyment of listening to the talk.

Sam FischmannSpeaker
Score: 0 | 2 weeks ago | no reply

Great! I wish I had made it clear that I also thought the same thing until I started making audio plugins and doing lots of null testing. This talk was born out of my journey learning the WHY.

Gary
Score: 0 | 2 weeks ago | 1 reply

Absolutely fantastic talk, and I'm only 1/2 of the way through! I've already learned so much about audio processing (I typically deal with standard comm signals). Well done!

Sam FischmannSpeaker
Score: 0 | 2 weeks ago | no reply

Thanks, Gary! Great to hear from somebody working on comms signals; it's really wild how different domains need such different abstractions to get what we want out of signal processing.
