Pragmatic Methods to Decide Filter Requirements
Chris Bore - DSP Online Conference 2020 - Duration: 50:25
While digital filter design, given a filter specification, is very well covered in numerous works, the essential practical question of how to arrive at the filter specification from the application is largely ignored: the practicing engineer is faced with a well-developed design methodology, but very little information to guide the decision of what to design. Similarly, there is little information available to guide decisions on the hardware platforms suitable to implement such a specification - balancing issues such as numeric precision, speed, cost and power consumption - and to assess whether the requirement can be met in a cost-effective way: the practicing engineer has limited guidance in deciding whether the application requirement can be met at all under practical constraints. The result is that engineers have limited ability to quickly assess whether a filtering requirement can be met, and digital filters very often under- or over-perform and are often implemented on needlessly costly or power-hungry hardware platforms.
This talk addresses these twin gaps in the filter designer's toolbox and outlines, with specific methods and examples, how the specification for a digital filter may be arrived at from consideration of the application's aim and requirements. It also describes, again with specific methods and examples, how to arrive at a specification for the hardware platform needed to implement such a filter. It does not address the design of the filter itself, which is very well covered in numerous texts elsewhere.
Hi Chris, thanks for the talk, I'm listening for the second time. I never thought beyond the FFTs I was told to implement until now! This is very eye-opening for me.
I have a mathematics background, but not an engineering one, so I have a question, if it's not too late: at about 14:41 of your presentation, you take 7 mV RMS and 200 Ohms of resistance and come up with 35 uW of power. But the formula is (Vrms^2)/R = Watts. That tells me (0.007)(0.007)/200 = (49x10^-6)/(2x10^2) = 24.5x10^-8 W
Can you help me with this? Thanks.
Actually, looking more closely, it looks like a question of whether you square the Vrms, or not, to get power. There seem to be conflicting definitions of RMS on your slide, unless I am missing something.
This is one of those times when I 'thank' you for pointing out a mistake :-)
The equations on the slide are correct (P = Vrms^2/R) but the numbers are wrong. The arguments up to the numbers, and after the numbers, are valid, but there is a disconnect where the equations are evaluated numerically. I don't know why that happened, but I will drill back through the slides from which I simplified this talk to find out.
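For anyone checking along at home, here is a minimal numeric check of P = Vrms^2/R using only the figures quoted above (7 mV RMS, 200 Ohms); it agrees with the 24.5x10^-8 W worked out in the question:

```python
# Check P = Vrms^2 / R with the figures quoted on the slide
v_rms = 7e-3     # 7 mV RMS
r = 200.0        # 200 ohms
p = v_rms ** 2 / r
print(p)         # 2.45e-07 W, i.e. about 0.245 uW (245 nW), not 35 uW
```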
'Thank you' :-)
Thanks a lot for this interesting presentation that brought me back to the basics.
I have a short question here: why did you choose to estimate an FIR filter rather than a cascade of biquads?
I see you found the answer: yes, IIR filters don't really lend themselves to estimating quantization effects, because they are recursive, so the errors can grow and are signal-dependent. That doesn't mean you shouldn't consider or use them - just that you should simulate rather than estimate to quantify arithmetic precision - which puts you in the situation of having access to a computer, whereas I chose to think about a telephone or elevator conversation. The FIR being in general less efficient also means it is a safe estimate - again, apart from arithmetic precision.
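To illustrate the "simulate rather than estimate" point, here is a rough sketch (not from the talk; the sample rate, cutoff, order and Q2.14 scaling are arbitrary choices for illustration) that quantizes the coefficients of a biquad-cascade lowpass to 16 bits and compares its response to the unquantized design. Note this only captures coefficient quantization - the roundoff noise that accumulates in the recursion is signal-dependent and needs a full fixed-point simulation, which is exactly the point above:

```python
import numpy as np
from scipy.signal import butter, sosfreqz

fs = 8000.0                                                 # illustrative sample rate
sos = butter(4, 300.0, btype='low', fs=fs, output='sos')    # 4th-order lowpass at 300 Hz

# Quantize the coefficients to 16-bit fixed point (Q2.14: range +/-2, 14 fractional bits)
scale = 2 ** 14
sos_q = np.round(sos * scale) / scale

w, h = sosfreqz(sos, worN=2048, fs=fs)
_, h_q = sosfreqz(sos_q, worN=2048, fs=fs)

# Worst-case magnitude deviation due to coefficient quantization alone
dev_db = np.max(np.abs(20 * np.log10(np.abs(h_q) + 1e-12)
                       - 20 * np.log10(np.abs(h) + 1e-12)))
print(f"worst-case response deviation: {dev_db:.2f} dB")
```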
I just heard the answer in the Q&A session. IIRs are indeed nonlinear and can be unpredictable, but modern calculation techniques can guarantee a stable IIR as long as the coefficients are not changing in real time, especially in the audio domain which I come from.
I am curious to know what attendees think of the speech to text feature that Chris used for his presentation. Do you think this is helpful and something other speakers should consider for future conferences?
Even though these automatic subtitles mess up DSP terminology, they helped me a few times during the video.
Yes, but I quite liked the alchemist angle to designing philtres :-)
I normally prefer subtitles, but I found these automated ones a bit distracting due to delay / inaccuracy.
Very helpful :)
Even late at night (11pm) your talk kept me engaged
A great resource for anyone new to DSP.
Thanks Chris,
John
Very insightful presentation. I come from a digital communications background where we use software-defined radios with an already-decided bit depth and processing power. I wonder whether it would be possible to reuse the techniques you presented in my domain.
It's very common for bit depth and hardware to be pre-determined. That can be for good reasons - but it is worth thinking it through, because cost reductions or other efficiencies may show up. In SDR I think it is often the case that someone used a hardware platform once and it sort of drifted into becoming a standard - also because that platform was already equipped with the appropriate peripherals, which is something I did not address.
Great talk, nicely done Chris. Love all the "figures of merit" for estimation purposes. Will slides be available for download?
Check on the left-hand side, under 'Slides'.
Yes, they should be online now.
Chris, it would be nice if the slides had talking points on them as well as just the pictures.
I can provide that to Stephane - will try to do so early next week
Actually, I see I already did - the downloadable slide set is interspersed with text points.
For more depth and detail, watch Taylor and Francis... :-)
Excellent talk, Chris.
Every DSP beginner should watch this one !
Many thanks
Thanks very much
Cool thing.
I will definitely need to go through this session once again, with the slides on the second screen.
Me too :-)
Thank you all for watching...
Regarding implementation: modern processors have come a long way since the days of the 8051! Using Arm's CMSIS-DSP library and a modern C compiler, the challenge of efficient implementation on a Cortex-M device has been greatly simplified. These modern MCUs are low power and DSP-ready. Even the new Cortex-M55 can apparently do ML! This means that we can focus on the application, and not have to break our heads over an efficient implementation.
That's true sometimes - but I am usually looking at devices that must cost a few dollars at most, and often run on battery for weeks: so there the cost efficiencies do come into play very much. I agree that when cost, power consumption or size are not critical considerations then we can relax. Also on fast systems - ultrasound or radio at a 250 MHz sample rate, for example - computation speed again becomes significant.
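As a quick figure-of-merit for why speed matters at those rates (the tap count here is my own illustrative number, not from the talk):

```python
fs = 250e6        # 250 MHz sample rate, as in the comment above
taps = 64         # illustrative FIR length
mac_rate = fs * taps
print(mac_rate)   # 1.6e10 multiply-accumulates per second - far beyond a small MCU
```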
Correction: it's not the state variables so much as the accumulated MAC for each sample...
Strictly speaking, 16 bit arithmetic - as we will see later when I look at DSP processors with extended arithmetic precision
I set the coefficients to have a bit depth similar to that of the samples.
The sample bit depth is determined by the input SNR...
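A common way to turn that input SNR into a bit-depth estimate is the usual ~6 dB-per-bit figure for an ideal quantizer (a general rule of thumb, not a formula from the talk; the 60 dB input SNR below is an arbitrary illustration):

```python
import math

def bits_for_snr(snr_db):
    """Bits needed so quantization noise sits at or below the input noise,
    using the ideal-quantizer relation SNR ~ 6.02*B + 1.76 dB."""
    return math.ceil((snr_db - 1.76) / 6.02)

print(bits_for_snr(60))   # -> 10 bits for a 60 dB input SNR
```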
I also used to find it quite cool to leap out at students with a brief requirement and ask them to guesstimate things like filter bandwidth and processing gain on the spot - a bit like times tables or mental arithmetic that you ought to be able to do in your head.
Agreed! The good old sanity check is a must that many overlook.
FYI: At the top right of a post in the forum, you'll see a curved arrow. This is a reply button to be used when replying to someone else's question or comment.
cool
Yes, I agree - here I am dealing with quick estimates, for example during a phone call or conversation or meeting: also, these offer a sanity check on design tools.
Certainly some useful tips, Chris! I agree that translating the real-world specification to a DSP specification is a must. However, there are several graphical design tools available that can speed up the whole process, as my experience has been that the filter specifications need to be fine-tuned to the dataset. In many cases it's not apparent at the beginning which design prototype (for IIRs) you need.
An even simpler estimate is to use the fact that a filter's duration is about the inverse of its bandwidth: T = 1/ENBW. At the sample rate Fs the sample interval is dt = 1/Fs, so we can estimate the number of coefficients as N = T/dt = Fs/ENBW.
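A quick worked number for that estimate (the sample rate and ENBW below are my own illustrative choices):

```python
fs = 8000.0     # sample rate, Hz (illustrative)
enbw = 300.0    # target equivalent noise bandwidth, Hz (illustrative)

T = 1.0 / enbw          # filter duration ~ 3.3 ms
N = round(fs / enbw)    # ~ 27 coefficients
print(T, N)
```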
Here, for simplicity, I am using minimal estimates to avoid having to always refer to headroom - in reality we would bear headroom in mind at all stages.
Here, I am assuming the 3 mV noise cited earlier.
For people who will be watching 'on-demand', I'd suggest adding in your comment a reference to the time in the video where your comment applies. For example: @21:54, ...
As with the DFT, a filter collects noise only in its band ENBW, so the noise power passed is reduced by the factor ENBW/BW; equivalently, the processing gain for a filter is BW/ENBW.
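For example (illustrative numbers of mine: noise spread over a 4 kHz band, filter ENBW of 100 Hz):

```python
import math

bw = 4000.0     # input noise bandwidth, Hz (illustrative)
enbw = 100.0    # filter equivalent noise bandwidth, Hz (illustrative)

noise_factor = enbw / bw                  # only 1/40 of the noise power gets through
gain_db = 10 * math.log10(bw / enbw)      # ~16 dB processing gain
print(noise_factor, gain_db)
```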
In this specification, I omitted the noise....
Don't forget to join Chris for a live Q&A about this talk at 8:30 EST. The link to the zoom meeting can be found in the left column on this page: https://www.dsponlineconference.com/session/Live_Discussion_Pragmatic_Methods_to_Decide_Filter_Requirements
Hello again, Chris,
Okay, this question is really late, I know, and I understand you may not even see it at this point! But as I watched again for review, I wondered if there was a particular reason for choosing 7 mV RMS in your example of measuring sound pressure. I am guessing you just decided to start with a pressure of 1 Pa, which is within the range of [0.1 Pa, 20 Pa] in your simplified statement of the client requirements. That would yield 10 mV from the microphone (10 mV/Pa), which in turn would yield 7 mV RMS. Is that correct, and is there a particular reason you did not choose the ends of the ranges (perhaps to limit the length of the lecture)?
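For what it's worth, the arithmetic behind that guess does check out if the 10 mV is treated as a peak value, as the question assumes (the 1 Pa starting point is the questioner's guess, not confirmed in the talk):

```python
import math

sensitivity = 10e-3   # 10 mV/Pa, as quoted
pressure = 1.0        # 1 Pa - the questioner's assumed starting point
v_peak = sensitivity * pressure
v_rms = v_peak / math.sqrt(2)
print(v_rms)          # ~7.07e-3 V, i.e. about 7 mV RMS
```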