How to Design Nonlinear Approximations for DSP
Christopher Hansen - DSP Online Conference 2025 - Duration: 45:41
Nonlinear functions, such as arctangent, logarithm, and square root, are commonly used in Digital Signal Processing. In practice, a textbook approximation algorithm is often used to compute these functions. These approximations are typically of mysterious origin and optimized for a certain application or implementation. Consequently, they may not be ideal for the application at hand. This talk describes a method for designing approximations using Chebfun (www.chebfun.org), an open-source software system for numerical computation with functions. With Chebfun, it is possible to quickly determine polynomial and rational approximations for any function with as many interpolation points as needed. This talk will cover a few basic topics in approximation theory and then work through several practical examples that can be directly employed in fixed-point and floating-point DSP applications.
This guide was created with the help of AI, based on the presentation's transcript. Its goal is to give you useful context and background so you can get the most out of the session.
What this presentation is about and why it matters
This talk explains practical ways to build accurate, efficient approximations for common nonlinear functions that arise in DSP — examples include arctangent (phase from I/Q), logarithms (dB conversions), and square roots (RMS values). Instead of blindly copying textbook formulas or standard library calls, the speaker shows how to design approximations tailored to your accuracy, speed, and resource constraints. That matters because in many real systems (embedded processors, FPGAs, ASICs, or high-throughput software) built-in math routines are too slow, too large, or not precise in the ranges you care about. Good approximations can reduce latency, save silicon, avoid expensive divisions, and make fixed‑point implementations feasible while meeting application requirements.
Who will benefit the most from this presentation
- DSP engineers implementing signal chains on embedded processors, FPGAs, or ASICs who need fast math kernels.
- Software engineers optimizing math-heavy code for throughput or power.
- Students and researchers who want a practical bridge between approximation theory and applied DSP.
- Anyone porting floating‑point algorithms to fixed‑point or constrained hardware and needing to control error vs. cost tradeoffs.
What you need to know
The talk is practical but assumes familiarity with some basic math and DSP ideas. Here are the key concepts to review so you get the most from the examples:
- Polynomials: A polynomial approximation is often written as $p(x)=\sum_{k=0}^n a_k x^k$. Polynomials are attractive because they require only multiplies and adds (see the Horner's-rule sketch after this list).
- Taylor series: A local expansion around a point $a$ of the form $\sum_{k=0}^{\infty} \frac{f^{(k)}(a)}{k!}(x-a)^k$. Useful near the expansion point but not automatically good across an interval.
- Uniform (minimax) error: In many DSP uses we care about worst-case (max) error over an interval, not just RMS error. The presentation emphasizes controlling the maximum error.
- Chebyshev points: Nonuniform sample points (clustering near interval ends) that produce stable, near‑optimal polynomial interpolants over an interval. They avoid large oscillations that occur with equally spaced samples.
- Lagrange interpolation: A constructive form that produces the unique polynomial of degree ≤ n passing through n+1 samples. Useful to understand how interpolation maps samples to coefficients.
- Rational approximations: Functions approximated as a ratio of two polynomials $r(x)=p(x)/q(x)$. Often reduce required degree vs. a single polynomial and can be attractive if your implementation already performs division.
- Range reduction and piecewise methods: Breaking the input interval into subintervals (or scaling inputs) lets you use low‑order polynomials locally, dramatically reducing coefficient count and computation.
- Fixed-point vs floating-point concerns: Coefficient magnitudes, dynamic range, and numerical stability matter for implementation. The talk shows how to obtain coefficients and then refine them for fixed‑point deployment.
- Chebfun: An open-source MATLAB-based tool used in the talk to automate high-quality polynomial and rational fits across intervals — it handles the heavy lifting so you can iterate quickly on design choices.
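To make the "multiplies and adds" point concrete, here is a minimal Horner's-rule sketch in Python (not from the talk; the coefficients are arbitrary placeholders):

```python
def horner(coeffs, x):
    # Evaluate p(x) = a0 + a1*x + ... + an*x^n using n multiplies
    # and n adds; coeffs is [a0, a1, ..., an].
    acc = 0.0
    for a in reversed(coeffs):
        acc = acc * x + a
    return acc

# p(x) = 1 + 2x + 3x^2 evaluated at x = 0.5 -> 2.75
print(horner([1.0, 2.0, 3.0], 0.5))
```

Horner's form also maps naturally onto multiply-accumulate hardware, which is why approximation coefficients are usually deployed this way.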
Glossary
- Polynomial approximation: Approximating a function by a polynomial $p(x)$ to reduce computation to adds and multiplies.
- Rational approximation: Approximating a function by $p(x)/q(x)$, often using lower-order polynomials in numerator and denominator.
- Chebyshev points: Cosine-spaced points on an interval used for stable interpolation and lower maximum error.
- Lagrange interpolant: A polynomial built to pass exactly through a set of sampled points using basis polynomials.
- Taylor series: Local derivative-based polynomial expansion around a point; accurate near the center but can be poor over wide intervals.
- Range reduction: Transforming or partitioning the input so approximations operate on a smaller, better-conditioned domain.
- Minimax / uniform error: The maximum absolute error over an interval; a common design metric for worst-case guarantees.
- Fixed-point implementation: Numeric implementation with limited fractional precision and dynamic range — demands careful coefficient and scaling choices.
- atan2: Two-argument arctangent used to compute angle from I/Q; commonly benefits from rational or piecewise approximations.
- dB conversion: Computing $10\log_{10}(x)$ over large dynamic ranges; often requires piecewise or rational fits to get small errors with low cost.
Preparation tip: if you can, skim a short introduction to Chebfun or have MATLAB available to try a toy fit (e.g., approximate $\sin(x)$ on $[-\pi,\pi]$ with Chebyshev points). That will make the speaker's examples and automation scripts easier to follow.
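If you don't have MATLAB handy, numpy can stand in for a rough version of that toy fit (an illustrative sketch, not the speaker's code; the degree is an arbitrary choice):

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

# Interpolate sin(x) at Chebyshev points on [-pi, pi], then check the
# worst-case (minimax-style) error on a dense grid.
p = Chebyshev.interpolate(np.sin, deg=9, domain=[-np.pi, np.pi])
xs = np.linspace(-np.pi, np.pi, 10001)
print(np.max(np.abs(p(xs) - np.sin(xs))))
```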
Thanks for the questions.
For 1) Sure. Scaling works basically the same way as interval splitting, but instead of generating, say, 5 sets of coefficients for 5 different intervals, you generate one set of coefficients for one of the intervals. Then scale the input to match that interval and adjust the output to match. For example, for the 10*log10 approximation it is possible to just use the polynomial for the [10^-10, 10^-8] interval. If the value 10^-5 comes in, scale it by 10^-4 to get 10^-9. Then apply the polynomial and adjust the output by +40.
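A minimal sketch of that scaling idea in Python (the base interval [10^-10, 10^-8] and the degree-8 fit are assumptions for illustration, not the talk's actual design):

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

# Fit 10*log10(x) once, on the base interval only.
BASE = [1e-10, 1e-8]
p = Chebyshev.interpolate(lambda x: 10 * np.log10(x), deg=8, domain=BASE)

def db_approx(x):
    # Scale x into the base interval by a power of 100 (the interval
    # spans two decades), then undo the scaling in the dB domain:
    # 10*log10(x) = 10*log10(x * 100**-k) + 20*k.
    # A real fixed-point design would derive k from exponent bits or a
    # leading-zero count rather than calling log10 here.
    k = int(np.ceil(np.log10(x / BASE[1]) / 2))
    return p(x * 100.0 ** -k) + 20.0 * k

print(db_approx(1e-5))  # scaled by 10^-4 to 10^-9, output adjusted by +40 -> ~ -50 dB
```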
2) For the mathematical libraries, I believe they use polynomial and rational approximations, as well as a number of tricks. The source code for the GNU C library is publicly available (see https://sourceware.org/glibc/sources.html). I have not read it, however.
Sure - I can use the arctan(Q/I) example. The arctan1 and arctan2 approximations are both of the form ax/(b+cx^2). Substituting x = Q/I gives:
(aQ/I)/(b+cQ^2/I^2)
and multiplying the numerator and denominator by I^2 gives:
aIQ/(bI^2 + cQ^2)
In this form, arctan1(Q/I) = IQ/(I^2 + 0.2815Q^2) and arctan2(Q/I) = 0.873IQ/(0.886I^2 + 0.2280Q^2).
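In code, the rearranged arctan1 form looks like this (a quick illustrative sketch):

```python
import numpy as np

def arctan1_iq(i, q):
    # arctan(Q/I) ~ IQ / (I^2 + 0.2815 Q^2): a single division and no
    # Q/I pre-divide. Valid where the original x = Q/I approximation
    # is (typically |Q| <= |I|).
    return i * q / (i * i + 0.2815 * q * q)

i, q = 1.0, 0.4
print(arctan1_iq(i, q), np.arctan2(q, i))  # ~0.383 vs ~0.381
```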
Chris Hansen
That makes perfect sense. I am a bit embarrassed that I missed it. Thank you.
Hello Dr Hansen,
I thoroughly enjoyed your presentation, and I would like to thank you for it. I have a question regarding rational approximations. On slide 23, you mentioned that rational approximations are especially useful for functions that already have division in them. Can you elaborate on this a little?
Initially, I thought there was going to be a trick that enabled us to do only one division for both the function input and the rational polynomial. For instance, for the arctan example: arctan(Q/I) = p(Q)/q(I). But this is not the case.
If you can help me understand the reason behind the usefulness of the rational approximations in cases where the function input already has a division, I would be very happy.
Kind regards

Great presentation Chris! A few questions:
1) Can you elaborate on how the scaling method works for reducing the polynomial order, as opposed to the interval-splitting method? You mentioned it would involve scaling multiplications but at the same time only one set of coefficients (?), rather than several sets for different intervals that have to be muxed. Can you explain more how that method would work?
2) How are these kinds of functions - arctan, sqrt, etc - approximated in software library functions in C, MATLAB, etc? Or in something like a TI-84 calculator? Is it a completely different method or also polynomial-based? I'm sure those methods are slower/more expensive and have accuracy that may be overkill for your application, but I'm curious how they do it.