How to Design Nonlinear Approximations for DSP
Christopher Hansen - DSP Online Conference 2025 - Available 2025-11-05 06:00 EST - Duration: 45:41
Nonlinear functions, such as arctangent, logarithm, and square root, are commonly used in Digital Signal Processing. In practice, a textbook approximation algorithm is often used to compute these functions. These approximations are typically of mysterious origin and optimized for a certain application or implementation. Consequently, they may not be ideal for the application at hand. This talk describes a method for designing approximations using Chebfun (www.chebfun.org), an open-source software system for numerical computation with functions. With Chebfun, it is possible to quickly determine polynomial and rational approximations for any function with as many interpolation points as needed. This talk will cover a few basic topics in approximation theory and then work through several practical examples that can be directly employed in fixed point and floating point DSP applications.
This guide was created with the help of AI, based on the presentation's transcript. Its goal is to give you useful context and background so you can get the most out of the session.
What this presentation is about and why it matters
This talk explains practical ways to build accurate, efficient approximations for common nonlinear functions that arise in DSP — examples include arctangent (phase from I/Q), logarithms (dB conversions), and square roots (RMS values). Instead of copying a textbook formula or falling back on a standard library call, the speaker shows how to design approximations tailored to your accuracy, speed, and resource constraints. That matters because in many real systems (embedded processors, FPGAs, ASICs, or high-throughput software), built-in math routines are too slow, too large, or not precise enough in the ranges you care about. Good approximations can reduce latency, save silicon, avoid expensive divisions, and make fixed‑point implementations feasible while still meeting application requirements.
Who will benefit the most from this presentation
- DSP engineers implementing signal chains on embedded processors, FPGAs, or ASICs who need fast math kernels.
- Software engineers optimizing math-heavy code for throughput or power.
- Students and researchers who want a practical bridge between approximation theory and applied DSP.
- Anyone porting floating‑point algorithms to fixed‑point or constrained hardware and needing to control error vs. cost tradeoffs.
 
What you need to know
The talk is practical but assumes familiarity with some basic math and DSP ideas. Here are the key concepts to review so you get the most from the examples:
- Polynomials: A polynomial approximation is often written as $p(x)=\sum_{k=0}^n a_k x^k$. Polynomials are attractive because they require only multiplies and adds (see the Horner sketch after this list).
- Taylor series: A local expansion around a point $a$ of the form $\sum_{k=0}^{\infty} \frac{f^{(k)}(a)}{k!}(x-a)^k$. Useful near the expansion point but not automatically good across an interval.
- Uniform (minimax) error: In many DSP uses we care about worst-case (maximum) error over an interval, not just RMS error. The presentation emphasizes controlling the maximum error.
- Chebyshev points: Nonuniform sample points (clustering near the interval ends) that produce stable, near‑optimal polynomial interpolants over an interval. They avoid the large oscillations that can occur with equally spaced samples (illustrated in the first sketch after this list).
- Lagrange interpolation: A constructive form that produces the unique polynomial of degree ≤ n passing through n+1 samples. Useful for understanding how interpolation maps samples to coefficients.
- Rational approximations: Functions approximated as a ratio of two polynomials, $r(x)=p(x)/q(x)$. These often reduce the required degree compared with a single polynomial and can be attractive if your implementation already performs division.
- Range reduction and piecewise methods: Breaking the input interval into subintervals (or scaling inputs) lets you use low‑order polynomials locally, dramatically reducing coefficient count and computation.
- Fixed-point vs floating-point concerns: Coefficient magnitudes, dynamic range, and numerical stability matter for implementation. The talk shows how to obtain coefficients and then refine them for fixed‑point deployment.
- Chebfun: An open-source MATLAB-based tool used in the talk to automate high-quality polynomial and rational fits across intervals; it handles the heavy lifting so you can iterate quickly on design choices.
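
To make the interpolation ideas concrete, here is a small sketch (not from the presentation) that fits arctangent on $[-1,1]$ at Chebyshev points and measures the worst-case error on a dense grid. The talk does this with Chebfun in MATLAB; the snippet below uses NumPy's Chebyshev helpers instead, and the degree of 7 is an arbitrary choice for illustration.

```python
# Illustrative only: Chebyshev-point interpolation of arctan(x) on [-1, 1] in NumPy.
# Not the speaker's code; the degree and interval are arbitrary choices.
import numpy as np
from numpy.polynomial import chebyshev as C

deg = 7                                              # polynomial degree (assumption)
k = np.arange(deg + 1)
x_cheb = np.cos(np.pi * k / deg)                     # Chebyshev points on [-1, 1]
coeffs = C.chebfit(x_cheb, np.arctan(x_cheb), deg)   # interpolant in the Chebyshev basis

# Estimate the worst-case (uniform) error on a dense grid.
xs = np.linspace(-1.0, 1.0, 10001)
err_cheb = np.max(np.abs(C.chebval(xs, coeffs) - np.arctan(xs)))

# Same degree, but interpolating at equally spaced points, for comparison.
x_eq = np.linspace(-1.0, 1.0, deg + 1)
coeffs_eq = C.chebfit(x_eq, np.arctan(x_eq), deg)
err_eq = np.max(np.abs(C.chebval(xs, coeffs_eq) - np.arctan(xs)))

print(f"max |error|, Chebyshev points     : {err_cheb:.2e}")
print(f"max |error|, equally spaced points: {err_eq:.2e}")
```

Raising the degree, shrinking the interval, or moving to a true minimax (Remez-style) fit of the kind Chebfun automates are the main levers for trading accuracy against cost.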
 
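Once coefficients are in hand, evaluation cost and fixed-point behavior are what matter in the signal chain. The sketch below (again just an illustration, not code from the talk) evaluates a polynomial with Horner's rule, one multiply-add per coefficient, then crudely quantizes the coefficients to a fixed-point grid to see how the worst-case error moves. The cubic coefficients and the 14-bit fraction are arbitrary placeholders.

```python
# Illustrative only: Horner evaluation plus a crude coefficient-quantization check.
# The coefficient set and the 14-bit fractional format are placeholder assumptions.
import numpy as np

def horner(a, x):
    """Evaluate p(x) = a[0] + a[1]*x + ... + a[n]*x**n with one multiply-add per term."""
    acc = np.zeros_like(np.asarray(x, dtype=float))
    for c in reversed(a):
        acc = acc * x + c
    return acc

def quantize(a, frac_bits=14):
    """Round coefficients to a signed fixed-point grid with `frac_bits` fractional bits."""
    return [round(c * 2**frac_bits) / 2**frac_bits for c in a]

# Placeholder cubic for atan(x) on [-1, 1]; substitute the coefficients you design.
a = [0.0, 0.9724, 0.0, -0.1919]
aq = quantize(a)

xs = np.linspace(-1.0, 1.0, 10001)
err_float = np.max(np.abs(horner(a, xs) - np.arctan(xs)))
err_fixed = np.max(np.abs(horner(aq, xs) - np.arctan(xs)))
print(f"max |error| with float coefficients    : {err_float:.2e}")
print(f"max |error| with quantized coefficients: {err_fixed:.2e}")
```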
Glossary
- Polynomial approximation: Approximating a function by a polynomial $p(x)$ to reduce computation to adds and multiplies.
- Rational approximation: Approximating a function by $p(x)/q(x)$, often using lower-order polynomials in the numerator and denominator.
- Chebyshev points: Cosine-spaced points on an interval used for stable interpolation and lower maximum error.
- Lagrange interpolant: A polynomial built to pass exactly through a set of sampled points using basis polynomials.
- Taylor series: Local derivative-based polynomial expansion around a point; accurate near the center but can be poor over wide intervals.
- Range reduction: Transforming or partitioning the input so approximations operate on a smaller, better-conditioned domain.
- Minimax / uniform error: The maximum absolute error over an interval; a common design metric for worst-case guarantees.
- Fixed-point implementation: Numeric implementation with limited fractional precision and dynamic range; it demands careful coefficient and scaling choices.
- atan2: Two-argument arctangent used to compute angle from I/Q; commonly benefits from rational or piecewise approximations (see the first sketch after this glossary).
- dB conversion: Computing $10\log_{10}(x)$ over large dynamic ranges; often requires piecewise or rational fits to get small errors at low cost (see the second sketch after this glossary).
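
As an example of range reduction in action, the sketch below builds a full-range atan2 from a core approximation that is only ever evaluated for $|z| \le 1$. The core formula used here is a commonly quoted low-cost approximation standing in for whatever polynomial or rational fit you would actually design; it is not the approximation from the presentation.

```python
# Illustrative only: atan2 by reducing to |z| <= 1, then applying a low-order core
# approximation. The core formula is a placeholder; swap in your own designed fit.
import math

def atan_core(z):
    """Approximate atan(z) for |z| <= 1 (placeholder formula, error of a few mrad)."""
    return z * (math.pi / 4 + 0.273 * (1.0 - abs(z)))

def atan2_approx(y, x):
    """Full-range atan2 built from the |z| <= 1 core by quadrant range reduction."""
    if x == 0.0 and y == 0.0:
        return 0.0                                   # conventional choice for the undefined case
    if abs(x) >= abs(y):
        a = atan_core(y / x)
        if x < 0.0:                                  # fold the left half-plane back in
            a += math.pi if y >= 0.0 else -math.pi
        return a
    # |y| > |x|: atan2(y, x) = sign(y)*pi/2 - atan(x/y)
    return math.copysign(math.pi / 2, y) - atan_core(x / y)

# Quick accuracy check around the unit circle against the math library.
worst = max(abs(atan2_approx(math.sin(t), math.cos(t)) - math.atan2(math.sin(t), math.cos(t)))
            for t in (i * 1e-3 - 3.141 for i in range(6283)))
print(f"max |angle error| ~ {worst:.2e} rad")
```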
 
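Logarithms reduce especially cleanly: splitting the input into a mantissa in $[0.5, 1)$ and an integer exponent means the polynomial only has to cover one octave. A rough NumPy sketch, with the degree-4 fit chosen arbitrarily rather than taken from the talk:

```python
# Illustrative only: 10*log10(x) via range reduction. frexp() splits x into a
# mantissa in [0.5, 1) and an integer exponent, so the fit covers one octave.
import math
import numpy as np
from numpy.polynomial import chebyshev as C

deg = 4                                           # low-order fit (assumption)
k = np.arange(deg + 1)
m = 0.75 + 0.25 * np.cos(np.pi * k / deg)         # Chebyshev points mapped to [0.5, 1]
coeffs = C.chebfit(m, np.log2(m), deg)            # interpolate log2 on the reduced range

DB_PER_OCTAVE = 10.0 * math.log10(2.0)            # ~3.0103 dB per doubling

def db10(x):
    """Approximate 10*log10(x) for x > 0 using exponent/mantissa range reduction."""
    mant, exp = math.frexp(x)                     # x = mant * 2**exp, mant in [0.5, 1)
    return DB_PER_OCTAVE * (exp + C.chebval(mant, coeffs))

for x in (1e-6, 0.5, 1.0, 2.0, 1e6):
    print(f"x = {x:>8g}: approx {db10(x):9.4f} dB, exact {10*math.log10(x):9.4f} dB")
```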
Preparation tip: if you can, skim a short introduction to Chebfun or have MATLAB available to try a toy fit (e.g., approximate $\sin(x)$ on $[-\pi,\pi]$ with Chebyshev points). That will make the speaker's examples and automation scripts easier to follow.

