
Bending Sound to Suit the Musician

David McClain - Watch Now - DSP Online Conference 2023 - Duration: 44:28

Many musicians around the world have developed hearing impairment (excessively loud music, industrial noise, etc.). Hearing aids help only with speech perception and badly distort the timbres of musical instruments. Using DSP we can pre-warp sounds so that music playback sounds correct to the impaired listener, instead of turning oboes into muted jazz trumpets. The secret is understanding human loudness perception and performing spectral multi-channel nonlinear compression to compensate for degraded hearing. I aim to describe how our perception operates, and then how to easily perform the nonlinear compression needed to compensate for sensorineural hearing impairment.
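
As a rough illustration of the kind of processing described above, here is a minimal sketch in Python (NumPy/SciPy) of multi-channel nonlinear compression: split the signal into a few bands, measure each band's short-term level, and apply a level-dependent gain so that quiet components are lifted more than loud ones. The band edges, threshold elevations, and gain law below are illustrative assumptions, not the model presented in the talk or used in Crescendo.

```python
# Minimal sketch of multi-band nonlinear (loudness-compensating) compression.
# Band edges, threshold elevations, and the gain law are illustrative only.
import numpy as np
from scipy.signal import butter, sosfilt

FS = 44_100  # sample rate in Hz

# Illustrative band edges (Hz) and assumed per-band hearing-threshold elevations (dB).
BAND_EDGES = [(125, 500), (500, 2000), (2000, 8000)]
THRESHOLD_ELEVATION_DB = [10.0, 25.0, 40.0]

def band_split(x, lo, hi, fs=FS):
    """Band-pass one channel with a 4th-order Butterworth filter."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfilt(sos, x)

def envelope_db(x, win=1024):
    """Short-term RMS level in dBFS, smoothed over roughly `win` samples."""
    power = np.convolve(x ** 2, np.ones(win) / win, mode="same")
    return 10.0 * np.log10(power + 1e-12)

def compensate(x, ratio=2.0):
    """Upward-compress each band: quiet passages get more gain than loud ones."""
    out = np.zeros_like(x)
    for (lo, hi), loss_db in zip(BAND_EDGES, THRESHOLD_ELEVATION_DB):
        band = band_split(x, lo, hi)
        level = envelope_db(band)
        # Level-dependent gain: largest for soft passages, falling to 0 dB at
        # full scale, and never exceeding the band's threshold elevation.
        gain_db = np.clip(loss_db * (1.0 - (level + 60.0) / 60.0) / ratio, 0.0, loss_db)
        out += band * 10.0 ** (gain_db / 20.0)
    return out

if __name__ == "__main__":
    t = np.arange(FS) / FS
    test = 0.1 * np.sin(2 * np.pi * 440 * t) + 0.02 * np.sin(2 * np.pi * 3000 * t)
    print(compensate(test)[:5])
```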

JohnP
Score: 0 | 1 year ago | 1 reply

Thank you for your presentation, most interesting. I do lots of work for the Govt too, and would have enjoyed working on muscle cars. At the turn-on sessions of cochlear implants, recipients often get to hear high frequencies like never before. However, they complain of robotic-sounding speech, like that of a ring modulator. The audiologist assures them that their brain will get used to the stimulation, recognizing voices and sounds, and that their life will improve. Their world becomes much more noisy, confusing and exhausting, but at least they are not missing out. They often remove the external "audio processor" for a break and relax using a prior, limited hearing aid. In auditoriums, the mush can be even worse. I wonder whether tools like Crescendo might ease that effort, packaged on the necessarily limited processors, even if written in LISP?

DBMcClain (Speaker)
Score: 0 | 1 year ago | no reply

Hi John,
Thanks for chiming in. Crescendo itself is not written in Lisp; I used Lisp to help write the high-performance C/C++ code that is Crescendo. Just the same, in the early days only a DSP had the horsepower to run the Crescendo engine. Today, measured on my M1 iMac, it takes less than 2.5% of CPU capacity.

The main quest for Crescendo has been accurate restoration of the harmonic content of musical sound, so that oboes continue to sound more like they should, instead of being transformed into Miles Davis muted jazz trumpets. And to that end, Crescendo performs astonishingly well.

But Crescendo does not have many of the features of hearing aids - those solve a different problem - helping to comprehend human speech. And so hearing aids take great liberties, often producing intentional harmonic distortion and eroding droning background sounds. Both of those are harmful to music, but often help in discerning the spoken word. A case in point is the limited-bandwidth, high-pass nature of telephone conversations. Without the bass, speech is often easier to comprehend. But that would be disastrous to music.
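
For the curious, that kind of band limiting is easy to try for yourself. The sketch below (assuming the classic ~300 Hz to 3.4 kHz telephone voice band, placeholder file names, and the third-party soundfile package for WAV I/O) lets you hear what a speech-oriented band limit does to music:

```python
# Keep only the telephone-style voice band; listen to speech vs. music afterwards.
from scipy.signal import butter, sosfilt
import soundfile as sf   # assumed available for WAV read/write

x, fs = sf.read("input.wav")                     # placeholder input file
sos = butter(6, [300.0, 3400.0], btype="bandpass", fs=fs, output="sos")
y = sosfilt(sos, x, axis=0)                      # strip the bass and the upper treble
sf.write("telephone_band.wav", y, fs)            # speech stays intelligible; music loses its body
```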

Just the same, I use Crescendo for myself 24/7, and with it I can hear my own sibilant, fricative, and glottal sounds that I cannot hear without aid. And I am using it right now during the conference to help me hear everyone. But Crescendo's main aim is accurate musical restoration, not speech.

Cheers,

  • DM

I should add that today's hearing assistance technology is mostly focused on fixed-width frequency bins - WOLA filter banks, which are close cousins of FFTs. Those fixed-width bandpass filters are not well matched to our hearing. And so when you see them brag of 7 bands, or 13 bands, those are 7 or 13 FFT bins, poorly matched to our hearing, and typically covering only 500 Hz to 5 kHz.
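
To put rough numbers on that mismatch, here is a small sketch. The sample rate and FFT size are assumptions chosen only for illustration; the ERB formula is the standard Glasberg & Moore approximation of auditory-filter bandwidth.

```python
# Compare the constant bin width of a WOLA/FFT filter bank with the ERB
# auditory-filter bandwidth (Glasberg & Moore 1990).
fs = 16_000               # assumed hearing-aid sample rate (Hz)
n_fft = 128               # assumed WOLA filter-bank size
bin_width = fs / n_fft    # every bin is this wide, regardless of frequency

def erb_hz(f_hz):
    """Equivalent Rectangular Bandwidth of the auditory filter at f_hz."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

for f in (250, 500, 1000, 2000, 4000, 8000):
    print(f"{f:5d} Hz: FFT bin = {bin_width:6.1f} Hz, auditory ERB = {erb_hz(f):6.1f} Hz")
```

With these assumed numbers every bin is 125 Hz wide: more than twice as wide as the auditory filter near 250 Hz, yet roughly seven times narrower than it near 8 kHz.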
