Adaptive Simpson's method


Adaptive Simpson's method, also called adaptive Simpson's rule, is a method of numerical integration proposed by G.F. Kuncir in 1962.[1] It is probably the first recursive adaptive algorithm for numerical integration to appear in print,[2] although more modern adaptive methods based on Gauss–Kronrod quadrature and Clenshaw–Curtis quadrature are now generally preferred. Adaptive Simpson's method uses an estimate of the error we get from calculating a definite integral using Simpson's rule. If the error exceeds a user-specified tolerance, the algorithm calls for subdividing the interval of integration in two and applying adaptive Simpson's method to each subinterval in a recursive manner. The technique is usually much more efficient than composite Simpson's rule since it uses fewer function evaluations in places where the function is well-approximated by a cubic function.

Simpson's rule is an interpolatory quadrature rule which is exact when the integrand is a polynomial of degree three or lower. Using Richardson extrapolation, the more accurate Simpson estimate S(a,m) + S(m,b) for six function values is combined with the less accurate estimate S(a,b) for three function values by applying the correction [S(a,m) + S(m,b) − S(a,b)]/15. So, the obtained estimate is exact for polynomials of degree five or less.
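This can be checked numerically. The sketch below (our own code, not part of any cited algorithm) integrates x⁵ over [0, 1]: the corrected estimate is exact up to round-off, while plain Simpson's rule has a visible error.

```python
def simpson(f, a, b):
    """Basic Simpson's rule estimate S(a, b)."""
    m = (a + b) / 2
    return (b - a) / 6 * (f(a) + 4 * f(m) + f(b))

f = lambda x: x**5          # degree-5 test integrand
exact = 1 / 6               # integral of x^5 over [0, 1]
a, b = 0.0, 1.0
m = (a + b) / 2

S = simpson(f, a, b)                         # three function values
S2 = simpson(f, a, m) + simpson(f, m, b)     # six function values
corrected = S2 + (S2 - S) / 15               # Richardson extrapolation

print(abs(corrected - exact))   # ~0, up to round-off
print(abs(S - exact))           # plain Simpson's rule is not exact here
```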

Mathematical procedure


Defining terms


A criterion for determining when to stop subdividing an interval, suggested by J.N. Lyness,[3] is |S(a,m) + S(m,b) − S(a,b)| < 15ε,

where [a, b] is an interval with midpoint m = (a + b)/2, while S(a,b), S(a,m), and S(m,b) given by Simpson's rule are the estimates of the integral of f over [a, b], [a, m], and [m, b] respectively, and ε is the desired maximum error tolerance for the interval.

Note that the tolerance is halved at each level of subdivision: εi+1 = εi/2.
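As a concrete illustration (a sketch with invented names, taking f(x) = eˣ on [0, 1] as an example integrand), one evaluation of the criterion looks like this:

```python
from math import exp

def simpson(f, a, b):
    """Simpson's rule estimate S(a, b)."""
    m = (a + b) / 2
    return (b - a) / 6 * (f(a) + 4 * f(m) + f(b))

f, a, b, eps = exp, 0.0, 1.0, 1e-6
m = (a + b) / 2
delta = simpson(f, a, m) + simpson(f, m, b) - simpson(f, a, b)

if abs(delta) < 15 * eps:
    action = "accept S(a,m) + S(m,b)"
else:
    # the interval is split, and each half gets half the tolerance
    action = "subdivide with eps = %g" % (eps / 2)
print(action)
```

For this interval the discrepancy is far above 15ε, so the routine would subdivide.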

Procedural steps


To perform adaptive Simpson's method, do the following: if |S(a,m) + S(m,b) − S(a,b)| < 15εi, add S(a,m) and S(m,b) to the sum of Simpson's rules which are used to approximate the integral; otherwise, apply the same test recursively to the two halves, using |S(a, (a+m)/2) + S((a+m)/2, m) − S(a,m)| < 15εi+1 on [a, m] and |S(m, (m+b)/2) + S((m+b)/2, b) − S(m,b)| < 15εi+1 on [m, b].
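The steps above can be transcribed directly into a short recursive routine. This is a deliberately naive sketch that re-evaluates f at shared points; the implementations in the sample-code section avoid that waste.

```python
from math import sin, cos

def simpson(f, a, b):
    """Simpson's rule estimate S(a, b)."""
    m = (a + b) / 2
    return (b - a) / 6 * (f(a) + 4 * f(m) + f(b))

def adaptive_simpson(f, a, b, eps):
    """Accept S(a,m) + S(m,b) when the Lyness criterion holds;
    otherwise bisect, giving each half half the tolerance."""
    m = (a + b) / 2
    S = simpson(f, a, b)
    S2 = simpson(f, a, m) + simpson(f, m, b)
    if abs(S2 - S) < 15 * eps:
        return S2
    return adaptive_simpson(f, a, m, eps / 2) + \
           adaptive_simpson(f, m, b, eps / 2)

approx = adaptive_simpson(sin, 0, 1, 1e-9)
print(abs(approx - (1 - cos(1))))  # error comfortably below eps for this smooth integrand
```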

Numerical consideration


Some inputs will fail to converge quickly in adaptive Simpson's method, resulting in the tolerance underflowing and producing an infinite loop. Simple methods of guarding against this problem include adding a depth limitation (as in the C sample and in McKeeman), verifying that ε/2 ≠ ε in floating-point arithmetic, or both (as in Kuncir). The interval size may also approach the local machine epsilon, giving a = m.
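In Python, these guards might be collected as follows (a sketch with our own names, not code from any of the cited algorithms):

```python
def can_subdivide(a, b, eps, depth, max_depth=50):
    """Guards against non-termination: a depth cap, tolerance
    underflow, and interval width reaching machine precision."""
    m = (a + b) / 2
    if depth >= max_depth:   # depth limitation
        return False
    if eps / 2 == eps:       # tolerance has underflowed (eps/2 == eps)
        return False
    if a == m or m == b:     # interval width at local machine epsilon
        return False
    return True

print(can_subdivide(0.0, 1.0, 1e-9, depth=0))                  # safe to recurse
print(can_subdivide(0.0, 1.0, 0.0, depth=0))                   # eps/2 == eps
print(can_subdivide(1.0, 1.0000000000000002, 1e-9, depth=0))   # midpoint collapses onto a
```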

Lyness's 1969 paper includes a "Modification 4" that addresses this problem in a more concrete way:[3]: 490–2 

  • Let the initial interval be [A, B]. Let the original tolerance be ε0.
  • For each subinterval [a, b], define D(a, b), the error estimate, as D(a, b) = 12/(b − a) · |S(2)(a, b) − S(a, b)| = |f(a) − 4f(lm) + 6f(m) − 4f(rm) + f(b)|, where S(2)(a, b) = S(a, m) + S(m, b) and lm, rm are the midpoints of [a, m] and [m, b]. Define E = 180 ε0 / (B − A). The original termination criterion then becomes D ≤ E.
  • If D(a, m) ≥ D(a, b), either the round-off level has been reached or a zero of f(4) lies in the interval. A change in the tolerance ε0 to ε′0 is necessary.
    • The recursive routines now need to return a D level for the current interval. A routine-static variable E' = 180 ε'0 / (B - A) is defined and initialized to E.
    • (Modification 4 i, ii) If further recursion is used on an interval:
      1. If round-off appears to have been reached, change E' to D(a, m).[a]
      2. Otherwise, adjust E' to max(E, D(a, m)).
    • Some control of the adjustments is necessary. Significant increases and minor decreases of the tolerances should be inhibited.
  • To calculate the effective ε′0 over the entire interval:
    • Log each xi at which E' changes into an array of (xi, εi' ) pairs. The first entry should be (a, ε′0).
    • The actual εeff is the mean of all εi′, weighted by the widths of the corresponding intervals.
  • If the current E' for an interval is higher than E, then the fifth-order acceleration/correction would not apply:[b]
    • The "15" factor in the termination criteria is disabled.
    • The correction term should not be used.
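The equivalence between the scaled Simpson difference and the fourth difference of function values can be verified numerically. In this sketch (our own names), lm and rm denote the midpoints of [a, m] and [m, b]:

```python
from math import exp

def simpson(f, a, b):
    """Simpson's rule estimate S(a, b)."""
    m = (a + b) / 2
    return (b - a) / 6 * (f(a) + 4 * f(m) + f(b))

def D(f, a, b):
    """Lyness's error estimate, scaled to be independent of interval width."""
    m = (a + b) / 2
    S = simpson(f, a, b)
    S2 = simpson(f, a, m) + simpson(f, m, b)
    return 12 / (b - a) * abs(S2 - S)

def D_fourth_difference(f, a, b):
    """The same quantity written as a fourth difference of f values."""
    m = (a + b) / 2
    lm, rm = (a + m) / 2, (m + b) / 2
    return abs(f(a) - 4 * f(lm) + 6 * f(m) - 4 * f(rm) + f(b))

print(abs(D(exp, 0.0, 1.0) - D_fourth_difference(exp, 0.0, 1.0)))  # ~0
```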

The epsilon-raising maneuver allows the routine to be used in a "best effort" mode: given a zero initial tolerance, the routine will try to get the most precise answer and return an actual error level.[3]: 492 

Sample code implementations


A common implementation technique, shown below, is to pass down f(a), f(b), and f(m) along with the interval [a, b]. These values, already used when evaluating S(a, b) at the parent level, are reused for the subintervals, cutting the cost of each recursive call from 6 to 2 evaluations of the input function. The stack space used grows linearly with the recursion depth.

Python


Here is an implementation of adaptive Simpson's method in Python.

from __future__ import division # python 2 compat
# "structured" adaptive version, translated from Racket
def _quad_simpsons_mem(f, a, fa, b, fb):
    """Evaluates the Simpson's Rule, also returning m and f(m) to reuse"""
    m = (a + b) / 2
    fm = f(m)
    return (m, fm, abs(b - a) / 6 * (fa + 4 * fm + fb))

def _quad_asr(f, a, fa, b, fb, eps, whole, m, fm):
    """
    Efficient recursive implementation of adaptive Simpson's rule.
    Function values at the start, middle, end of the intervals are retained.
    """
    lm, flm, left  = _quad_simpsons_mem(f, a, fa, m, fm)
    rm, frm, right = _quad_simpsons_mem(f, m, fm, b, fb)
    delta = left + right - whole
    if abs(delta) <= 15 * eps:
        return left + right + delta / 15
    return _quad_asr(f, a, fa, m, fm, eps/2, left , lm, flm) +\
           _quad_asr(f, m, fm, b, fb, eps/2, right, rm, frm)

def quad_asr(f, a, b, eps):
    """Integrate f from a to b using Adaptive Simpson's Rule with max error of eps."""
    fa, fb = f(a), f(b)
    m, fm, whole = _quad_simpsons_mem(f, a, fa, b, fb)
    return _quad_asr(f, a, fa, b, fb, eps, whole, m, fm)

from math import sin
print(quad_asr(sin, 0, 1, 1e-09))

C

Here is an implementation of adaptive Simpson's method in C99 that avoids redundant evaluations of f and redundant quadrature computations. It includes all three "simple" defenses against numerical problems.

#include <math.h>  // include file for fabs and sin
#include <stdio.h> // include file for printf and perror
#include <errno.h>

/** Adaptive Simpson's Rule, Recursive Core */
float adaptiveSimpsonsAux(float (*f)(float), float a, float b, float eps,
                          float whole, float fa, float fb, float fm, int rec) {
    float m   = (a + b)/2,  h   = (b - a)/2;
    float lm  = (a + m)/2,  rm  = (m + b)/2;
    // serious numerical trouble: it won't converge
    if ((eps/2 == eps) || (a == lm)) { errno = EDOM; return whole; }
    float flm = f(lm),      frm = f(rm);
    float left  = (h/6) * (fa + 4*flm + fm);
    float right = (h/6) * (fm + 4*frm + fb);
    float delta = left + right - whole;

    if (rec <= 0 && errno != EDOM) errno = ERANGE;  // depth limit too shallow
    // Lyness 1969 + Richardson extrapolation; see article
    if (rec <= 0 || fabs(delta) <= 15*eps)
        return left + right + (delta)/15;
    return adaptiveSimpsonsAux(f, a, m, eps/2, left,  fa, fm, flm, rec-1) +
           adaptiveSimpsonsAux(f, m, b, eps/2, right, fm, fb, frm, rec-1);
}

/** Adaptive Simpson's Rule Wrapper
 *  (fills in cached function evaluations) */
float adaptiveSimpsons(float (*f)(float),     // function ptr to integrate
                       float a, float b,      // interval [a,b]
                       float epsilon,         // error tolerance
                       int maxRecDepth) {     // recursion cap
    errno = 0;
    float h = b - a;
    if (h == 0) return 0;
    float fa = f(a), fb = f(b), fm = f((a + b)/2);
    float S = (h/6)*(fa + 4*fm + fb);
    return adaptiveSimpsonsAux(f, a, b, epsilon, S, fa, fb, fm, maxRecDepth);
}

/** usage example */
#include <stdlib.h> // for the hostile example (rand function)
static int callcnt = 0;
static float sinfc(float x) { callcnt++; return sinf(x); } 
static float frand48c(float x) { callcnt++; return drand48(); } 
int main() {
    // Let I be the integral of sin(x) from 0 to 2
    float I = adaptiveSimpsons(sinfc, 0, 2, 1e-5, 3);
    printf("integrate(sinf, 0, 2) = %lf\n", I);   // print the result
    perror("adaptiveSimpsons");                   // Was it successful? (depth=3 is too shallow)
    printf("(%d evaluations)\n", callcnt);

    callcnt = 0; srand48(0);
    I = adaptiveSimpsons(frand48c, 0, 0.25, 1e-5, 25); // a random function
    printf("integrate(frand48, 0, 0.25) = %lf\n", I);
    perror("adaptiveSimpsons");                        // won't converge
    printf("(%d evaluations)\n", callcnt);
    return 0;
}

This implementation has been incorporated into a C++ ray tracer intended for X-Ray Laser simulation at Oak Ridge National Laboratory,[4] among other projects. The ORNL version has been enhanced with a call counter, templates for different datatypes, and wrappers for integrating over multiple dimensions.[4]

Racket


Here is an implementation of the adaptive Simpson method in Racket with a behavioral software contract. The exported function computes the definite integral for some given function f.[5]

;; -----------------------------------------------------------------------------
;; interface, with contract 
(provide/contract
 [adaptive-simpson 
  (->i ((f (-> real? real?)) (L real?) (R  (L) (and/c real? (>/c L))))
       (#:epsilon (ε real?))
       (r real?))])

;; -----------------------------------------------------------------------------
;; implementation 
(define (adaptive-simpson f L R #:epsilon [ε .000000001])
  (define f@L (f L))
  (define f@R (f R))
  (define-values (M f@M whole) (simpson-1call-to-f f L f@L R f@R))
  (asr f L f@L R f@R ε whole M f@M))

;; the "efficient" implementation
(define (asr f L f@L R f@R ε whole M f@M)
  (define-values (leftM  f@leftM  left*)  (simpson-1call-to-f f L f@L M f@M))
  (define-values (rightM f@rightM right*) (simpson-1call-to-f f M f@M R f@R))
  (define delta* (- (+ left* right*) whole))
  (cond
    [(<= (abs delta*) (* 15 ε)) (+ left* right* (/ delta* 15))]
    [else (define epsilon1 (/ ε 2))
          (+ (asr f L f@L M f@M epsilon1 left*  leftM  f@leftM) 
             (asr f M f@M R f@R epsilon1 right* rightM f@rightM))]))

;; evaluate half an interval (1 func eval)
(define (simpson-1call-to-f f L f@L R f@R)
  (define M (mid L R))
  (define f@M (f M))
  (values M f@M (* (/ (abs (- R L)) 6) (+ f@L (* 4 f@M) f@R))))

(define (mid L R) (/ (+ L R) 2.))
Related algorithms

  • Henriksson (1961) is a non-recursive variant of Simpson's Rule. It "adapts" by integrating from left to right and adjusting the interval width as needed.[2]
  • Kuncir's Algorithm 103 (1962) is the original recursive, bisecting, adaptive integrator. Algorithm 103 consists of a larger routine with a nested subroutine (loop AA), made recursive by the use of the goto statement. It guards against the underflowing of interval widths (loop BB), and aborts as soon as the user-specified eps is exceeded. The termination criterion is |S(2)(a, b) − S(a, b)| < 2⁻ⁿε, where n is the current level of recursion and S(2) is the more accurate estimate.[1]
  • McKeeman's Algorithm 145 (1962) is a similarly recursive integrator that splits the interval into three instead of two parts. The recursion is written in a more familiar manner.[6] The 1962 algorithm, found to be over-cautious, uses |S(3)(a, b) − S(a, b)| < 3⁻ⁿε for termination, so a 1963 improvement uses (√3)⁻ⁿε instead.[3]: 485, 487
  • Lyness (1969) is almost the current integrator. Created as a set of four modifications of McKeeman 1962, it replaces trisection with bisection to lower computational costs (Modifications 1+2, coinciding with the Kuncir integrator) and improves McKeeman's 1962/63 error estimates to the fifth order (Modification 3), in a way related to Boole's rule and Romberg's method.[3]: 489  Modification 4, not implemented here, contains provisions for roundoff error that allows for raising the ε to the minimum allowed by current precision and returning the new error.[3]

Notes

  1. ^ The original 4i only mentions raising E'. However, later text mentions that it can also be lowered in large steps.
  2. ^ This likely also applies to the tolerance/interval-width underflows in the simplistic case.

Bibliography

  1. ^ a b Kuncir, G. F. (1962). "Algorithm 103: Simpson's rule integrator". Communications of the ACM. 5 (6): 347.
  2. ^ a b For an earlier, non-recursive adaptive integrator more reminiscent of ODE solvers, see Henriksson (1961).
  3. ^ a b c d e f Lyness, J. N. (1969). "Notes on the adaptive Simpson quadrature routine". Journal of the ACM. 16 (3): 483–495.
  4. ^ a b [full citation not recoverable from the source]
  5. ^ [full citation not recoverable from the source]
  6. ^ McKeeman, W. M. (1962). "Algorithm 145: Adaptive numerical integration by Simpson's rule". Communications of the ACM. 5 (12): 604.