
An Acoustic Camera

Chris Bore - Watch Now - DSP Online Conference 2021 - Duration: 22:04

Phased array radar and sonar are common applications of signal processing. In this talk Chris Bore outlines the application of similar techniques to make a simple acoustic camera that renders a sound scene visually.
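For readers curious how the direction-finding step of such a camera works, here is a minimal frequency-domain delay-and-sum beamformer for a uniform linear microphone array. This is an editor's sketch of the general technique, not code from the talk; the array geometry and far-field assumption are my own illustrative choices.

```python
import numpy as np

def delay_and_sum(signals, mic_positions, fs, angles, c=343.0):
    """Far-field delay-and-sum beamformer for a linear mic array.

    signals:       (n_mics, n_samples) time-domain recordings
    mic_positions: (n_mics,) mic positions along the array axis, metres
    fs:            sample rate in Hz
    angles:        steering angles in radians (0 = broadside)
    c:             speed of sound in m/s
    Returns the beam power for each steering angle.
    """
    n_mics, n_samples = signals.shape
    spectra = np.fft.rfft(signals, axis=1)            # per-mic spectra
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)    # bin frequencies
    power = np.empty(len(angles))
    for i, theta in enumerate(angles):
        # far-field arrival delay of each mic relative to the origin
        tau = mic_positions * np.sin(theta) / c
        # undo those delays as phase shifts in the frequency domain
        shifts = np.exp(2j * np.pi * freqs[None, :] * tau[:, None])
        beam = (spectra * shifts).sum(axis=0)         # coherent sum
        power[i] = np.sum(np.abs(beam) ** 2)
    return power
```

Scanning the steering angle over a grid and plotting the returned power gives a one-dimensional "image" of the sound scene; a 2-D camera repeats the idea with a planar array and two steering angles.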

fred_h
Score: 0 | 3 years ago | no reply

Hello Chris,

I enjoyed your acoustic camera presentation. In the not-so-distant past I took a leave of absence from school and did the phased-array beamformers on the B-2 bomber... I was a kid in a candy store. Different frequency bands and different bandwidths, but the same ideas. In fact, we do make acoustic cameras: they are called side-looking sonars. We located the Titanic as well as other shipwrecks using these acoustic cameras, and many fishermen use acoustic cameras as depth finders when they are out doing what fishermen do... fish!
The other acoustic camera is the geophone (and hydrophone) arrays that certain communities use to probe the ground for likely oil deposits, and then there are the acoustic arrays used for earthquake detection and, in a different time, for detecting nuclear events.

We have used a similar experiment to image a moving toy train, measuring the distance to the engine as it moved in our field of view. I used to use a speaker and two microphones in class to show what a wavelength looked like in space: I would sample the output of the two microphones and cross-correlate the two time signals while moving the microphones apart. I would display the phasor showing the transform of the single tone and watch it rotate as I moved the microphones further apart. When the phasor rotated a whole circle I would say... there is a wavelength! I would repeat using white noise and show the echoes from different surfaces of the classroom.
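That classroom demo can be sketched numerically: a single tone reaches two microphones, and the phase of the cross-spectrum at the tone frequency rotates through a full circle as the separation grows by one wavelength. This is my own reconstruction of the idea, not the commenter's code; the tone frequency and sample rate are arbitrary choices.

```python
import numpy as np

fs = 48000           # sample rate, Hz
f0 = 1000.0          # test tone, Hz
c = 343.0            # speed of sound, m/s
lam = c / f0         # wavelength, about 0.34 m
t = np.arange(4800) / fs
ref = np.exp(-2j * np.pi * f0 * t)   # demodulation reference

def tone_phasor(x):
    """Complex phasor of the f0 component of x (digital lock-in)."""
    return np.mean(x * ref)

mic1 = np.sin(2 * np.pi * f0 * t)    # reference microphone
for d in [0.0, lam / 4, lam / 2, 3 * lam / 4, lam]:
    # mic 2 sits d metres further from the source: extra delay d/c
    mic2 = np.sin(2 * np.pi * f0 * (t - d / c))
    # relative phase = angle of the cross-spectrum at f0
    dphi = np.angle(tone_phasor(mic2) * np.conj(tone_phasor(mic1)))
    print(f"separation {d:5.3f} m -> phase {np.degrees(dphi):7.1f} deg")
```

The printed phase wraps back to zero when the separation reaches one wavelength, which is exactly the "there is a wavelength!" moment in the demo.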

Again... nice talk.

fred h

napierm
Score: 0 | 3 years ago | no reply

Thank you for the presentation. Nice!
Another option for the microphones would be the ones that have an I2S interface. They are L/R, so you can wind up with one data wire for every two microphones, with the A/D already done. An FPGA like an Artix 200T or a Zynq has plenty of DSP and memory blocks. I have the beginnings of a system for my digital class based on the PYNQ-Z2 and the AIY Voice Kit HAT for the RPi. Nothing great about it, just an example of interfacing in raw Verilog in a minimalist style. https://aiyprojects.readthedocs.io/en/latest/voice.html
https://www.adafruit.com/product/3421

Darkphibre
Score: 0 | 3 years ago | no reply

Chris, a question: you say at the very end that you don't care about the distance; it's the direction that you care about. Maybe the question you were answering wasn't fully stated, but I'd thought that coloration in the soundscape to reflect distance was one of the driving factors, and the reason for looking at phase shift. Could you elaborate?

tyoung77
Score: 0 | 3 years ago | no reply

Great start, IMHO! I've been thinking about the same thing as I sit in my back yard listening to birds in the trees. Did you consider setting the distance between elements of the phased array based on the frequency range? You may want to consider beginning with a linear array and then progressing to a two-dimensional array.
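The spacing question has a standard answer: to avoid grating lobes (spatial aliasing), the element spacing should not exceed half the shortest wavelength of interest, the spatial analogue of the Nyquist criterion. A tiny helper makes the arithmetic concrete (the 8 kHz birdsong figure is my own illustrative assumption):

```python
def max_spacing(f_max_hz, c=343.0):
    """Largest alias-free element spacing, d <= lambda/2 = c / (2 * f_max).

    f_max_hz: highest frequency of interest, Hz
    c:        speed of sound, m/s
    Returns the spacing in metres.
    """
    return c / (2.0 * f_max_hz)

# e.g. birdsong up to ~8 kHz calls for roughly 2 cm element spacing
print(f"{max_spacing(8000.0) * 100:.1f} cm")
```

Wider spacing still works for lower frequencies, which is why broadband arrays are sometimes nested: closely spaced elements for the high band inside a wider sub-array for the low band.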
