On the occasion of the new edition, the text has been enlarged with several new sections. Finally, we thank Springer-Verlag for the smooth cooperation and expertise that led to a rapid realization of the new edition.
Representation of Numbers
A floating-point representation is normalized if the first digit (bit) of the mantissa is different from 0. The significant digits (bits) of a number are the digits of its mantissa, not counting leading zeros.
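As a small illustration (not part of the text), Python's standard library exposes exactly this normalized decomposition for binary floating-point numbers:

```python
import math

# math.frexp decomposes x as m * 2**e with 0.5 <= |m| < 1: the leading
# bit of the mantissa is nonzero, i.e. the representation is normalized.
m, e = math.frexp(12.25)
print(m, e)   # 0.765625 4, since 12.25 = 0.765625 * 2**4
```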
Roundoff Errors and Floating-Point Arithmetic
In the following, we will only consider normalized floating-point representations and the corresponding floating-point arithmetic. It should be noted that floating-point operations do not satisfy the familiar laws of arithmetic: addition, for example, is commutative but in general not associative.
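A quick demonstration (our example, using IEEE double precision) of how the associative law fails in floating-point arithmetic:

```python
# Associativity fails: the computed sum depends on the order in which
# the operands are combined.
s1 = (0.1 + 0.2) + 0.3
s2 = 0.1 + (0.2 + 0.3)
print(s1, s2)          # 0.6000000000000001 0.6

a, b, c = 1e16, -1e16, 1.0
print((a + b) + c)     # 1.0
print(a + (b + c))     # 0.0: b + c rounds back to -1e16
```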
Error Propagation
The amplification factors (a+b)/(a+b+c) and 1, respectively, measure the effect of the rounding errors ε1, ε2 on the error εy of the result. It is therefore the size of the Jacobian matrix Dψ(i) of the remaining map ψ(i) that is critical for the effect of the intermediate rounding errors αi on the final result.
Examples
Comparison of (1.4.2) and (1.4.3) shows that for large k and |kx| ≈ 1 the influence of the rounding error εc is considerably larger than that of the inherent errors, while the rounding error ε is harmless. The algorithm is then numerically stable, at least as far as the influence of the rounding error ε is concerned.
Interval Arithmetic; Statistical Roundoff Estimation
The first of the above formulas follows from the linearity of the expected value operator. Assuming that the distribution of the final error is normal, the actual relative error of the final result is bounded with probability 0.9 by 2 εr.
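The statistical point of view can be illustrated by a small simulation (our sketch, not from the text): if each of n elementary operations contributes an independent relative error uniformly distributed in [−eps, eps], the accumulated first-order error grows statistically like sqrt(n) rather than n.

```python
import random
import statistics

# Model assumption: independent relative errors, uniform in [-eps, eps],
# accumulated to first order by simple summation.
random.seed(1)
eps = 2.0 ** -52        # unit roundoff of IEEE double precision
n, trials = 1000, 500

final_errors = [
    sum(random.uniform(-eps, eps) for _ in range(n))
    for _ in range(trials)
]
sigma = statistics.pstdev(final_errors)

# Each uniform error has variance eps^2 / 3, so sigma ≈ eps * sqrt(n / 3).
predicted = eps * (n / 3.0) ** 0.5
print(sigma / predicted)   # close to 1
```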
Interpolation by Polynomials
- Theoretical Foundation: The Interpolation Formula of Lagrange
- Neville’s Algorithm
- Newton’s Interpolation Formula: Divided Differences
- The Error in Polynomial Interpolation
- Hermite Interpolation
 
The coefficients of the above expansion are found in the upper descending diagonal of the divided-difference scheme (2.1.3.7). Example 3. We illustrate the calculation of the generalized divided differences with the data of Example 2 (m = 1, n0 = 2, n1 = 3).
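A minimal Python sketch (our illustration, with arbitrary sample data) of the ordinary divided-difference scheme and the evaluation of the resulting Newton form by a Horner-like recursion:

```python
def divided_differences(xs, fs):
    """Return f[x0], f[x0,x1], ..., f[x0,...,xn]: the coefficients on the
    upper descending diagonal of the divided-difference scheme."""
    coef = list(fs)
    n = len(xs)
    for k in range(1, n):
        # update in place from the bottom so lower-order entries survive
        for i in range(n - 1, k - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - k])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate the Newton interpolation formula by a Horner-like scheme."""
    p = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        p = p * (x - xs[i]) + coef[i]
    return p

xs = [0.0, 1.0, 3.0]
fs = [1.0, 3.0, 2.0]
c = divided_differences(xs, fs)
print(newton_eval(xs, c, 1.0))   # 3.0, reproducing the support point
```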
Interpolation by Rational Functions
- General Properties of Rational Interpolation
- Inverse and Reciprocal Differences. Thiele’s Continued Fraction
- Algorithms of the Neville Type
- Comparing Rational and Polynomial Interpolation
 
To examine this situation more closely, we must distinguish between different representations of the same rational function Φµ,ν, which arise from each other by canceling or introducing a common polynomial factor in numerator and denominator. Note that the converse of the above theorem does not hold: a rational expression Φ1 may well solve Sµ,ν while some equivalent rational expression Φ2 does not. Not only is Aµ,ν solvable in this case, but so are all problems Aκ,λ for κ + λ + 1 of the original support points with κ + λ ≤ µ + ν.
Most of the following discussion will be about recursive procedures for solving rational interpolation problems Aµ,ν. Because of the available choices, the recursive methods for rational interpolation are more varied than those for polynomial interpolation. Now assume that pµ−1,ν−1(xs+1) = 0 and multiply the numerator and denominator of the above fraction by ….
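One such Neville-type recursion can be sketched as follows (a Python illustration of the Bulirsch–Stoer scheme with numerator and denominator degrees as equal as possible; it assumes no division by zero occurs for the given data, which holds in this example):

```python
def rational_interp(xs, fs, x):
    n = len(xs)
    # T[i][k] is the value at x of the rational interpolant through
    # the support points (x_{i-k}, f_{i-k}), ..., (x_i, f_i).
    T = [[0.0] * n for _ in range(n)]
    for i in range(n):
        T[i][0] = fs[i]
        for k in range(1, i + 1):
            below = T[i - 1][k - 2] if k >= 2 else 0.0
            num = T[i][k - 1] - T[i - 1][k - 1]
            den = ((x - xs[i - k]) / (x - xs[i])) \
                * (1.0 - num / (T[i][k - 1] - below)) - 1.0
            T[i][k] = T[i][k - 1] + num / den
    return T[n - 1][n - 1]

# A rational function of matching degree is reproduced exactly:
r = rational_interp([0.0, 1.0, 2.0], [1.0, 0.5, 1.0 / 3.0], 0.5)
print(r)   # 1/(1 + 0.5) = 0.666...
```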
Trigonometric Interpolation
- Basic Facts
- Fast Fourier Transforms
- The Algorithms of Goertzel and Reinsch
- The Calculation of Fourier Coefficients. Attenuation Factors
 
The Cooley-Tukey approach is best understood in terms of the interpolation problem described in Section 2.3.1. In the following pseudo-ALGOL formulation of the classical Cooley-Tukey method, we assume that the array β̃[ ] is initialized by setting …. This algorithm is well behaved in terms of error propagation: ∆λ = ελ λ with |ελ| ≤ eps.
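The divide-and-conquer structure of the Cooley-Tukey method can be sketched in a few lines of Python (a recursive radix-2 version for illustration; the text's pseudo-ALGOL formulation is an iterative in-place variant):

```python
import cmath

def fft(a):
    """Radix-2 Cooley-Tukey FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return list(a)
    even = fft(a[0::2])          # transform of the even-indexed part
    odd = fft(a[1::2])           # transform of the odd-indexed part
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle factor
        out[k] = even[k] + w
        out[k + n // 2] = even[k] - w
    return out

# DFT of a constant vector: all energy in the zero frequency.
y = fft([1, 1, 1, 1])
print(y)   # [4, 0, 0, 0] up to rounding and signed zeros
```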
The algorithm therefore behaves well in terms of the propagation of the error ∆λ. It is useful to distinguish linear approximation methods P. The vector space IF has finite dimension N; a basis is formed by the series …. As a by-product of the above proof, we obtained an explicit formula (2.3.4.8) for the attenuation factors.
Interpolation by Spline Functions
- Theoretical Foundations
- Determining Interpolating Cubic Spline Functions
- Convergence Properties of Cubic Spline Functions
- B-Splines
- Multi-Resolution Methods and B-Splines
 
We will confirm that each of these three sets of conditions by itself ensures the uniqueness of the interpolating spline function S∆(Y; ·). The value ‖f″‖² therefore gives us an approximate measure of the total curvature of the function f in the interval [a, b]. We will first show that the moments (2.4.2.1) of the interpolating spline function converge to the second derivative of the given function.
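To make the role of the moments concrete, here is a Python sketch (our illustration, not the book's procedures) that sets up and solves the tridiagonal system for the moments of a natural cubic spline; the natural end conditions M0 = Mn = 0 and the sample data are our choices:

```python
from bisect import bisect_right

def natural_spline_moments(xs, ys):
    """Moments M_i = S''(x_i) of the natural cubic spline, from the
    tridiagonal system mu_i M_{i-1} + 2 M_i + lam_i M_{i+1} = d_i."""
    n = len(xs) - 1
    h = [xs[i + 1] - xs[i] for i in range(n)]
    mu, lam, d = [0.0] * (n + 1), [0.0] * (n + 1), [0.0] * (n + 1)
    for i in range(1, n):
        lam[i] = h[i] / (h[i - 1] + h[i])
        mu[i] = 1.0 - lam[i]
        d[i] = 6.0 / (h[i - 1] + h[i]) * (
            (ys[i + 1] - ys[i]) / h[i] - (ys[i] - ys[i - 1]) / h[i - 1])
    # Thomas algorithm; M_0 = M_n = 0 enter as known boundary values.
    cp, dp = [0.0] * (n + 1), [0.0] * (n + 1)
    for i in range(1, n):
        denom = 2.0 - mu[i] * cp[i - 1]
        cp[i] = lam[i] / denom
        dp[i] = (d[i] - mu[i] * dp[i - 1]) / denom
    M = [0.0] * (n + 1)
    for i in range(n - 1, 0, -1):
        M[i] = dp[i] - cp[i] * M[i + 1]
    return M

def spline_eval(xs, ys, M, x):
    """Evaluate the spline from its moments on the interval containing x."""
    i = min(max(bisect_right(xs, x) - 1, 0), len(xs) - 2)
    h = xs[i + 1] - xs[i]
    u, v = xs[i + 1] - x, x - xs[i]
    return (M[i] * u ** 3 + M[i + 1] * v ** 3) / (6.0 * h) \
        + (ys[i] - M[i] * h * h / 6.0) * u / h \
        + (ys[i + 1] - M[i + 1] * h * h / 6.0) * v / h

xs, ys = [0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 0.0, 1.0]
M = natural_spline_moments(xs, ys)
print(M)   # [0.0, -4.0, 4.0, 0.0]
```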
If S∆ is the spline function which interpolates the values of the function f at the nodes x0, …, xn ∈ ∆ and satisfies S′∆(x) = f′(x) for x = a, b, then there exist constants ck ≤ 2, which do not depend on the partition ∆, such that for x ∈ [a, b], …. Therefore, the function Bi,r,t coincides with a polynomial of degree at most r − 1 on each of the open intervals of the following set. The dual function Φ̃ can thus be used to also calculate the coefficients cj,l of the series f = ….
According to these equations, the sequence (ak)k≥1 is a solution of the homogeneous linear difference equation. In the context of multi-resolution methods, it is essential that the functions Φj,k, k ∈ ℤ, form a Riesz basis of the spaces Vj.
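The B-splines Bi,r,t themselves are readily evaluated pointwise. A minimal Python sketch of the standard Cox–de Boor recursion (the uniform knot grid and the partition-of-unity check are our choices for illustration):

```python
def bspline(i, r, t, x):
    """Value at x of the B-spline B_{i,r,t} of order r (degree r-1)
    over the knot sequence t, via the Cox-de Boor recursion."""
    if r == 1:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = 0.0
    if t[i + r - 1] != t[i]:
        left = (x - t[i]) / (t[i + r - 1] - t[i]) * bspline(i, r - 1, t, x)
    right = 0.0
    if t[i + r] != t[i + 1]:
        right = (t[i + r] - x) / (t[i + r] - t[i + 1]) * bspline(i + 1, r - 1, t, x)
    return left + right

# Partition of unity: on a uniform knot grid the order-3 B-splines sum to 1
# at interior points.
t = list(range(10))
s = sum(bspline(i, 3, t, 4.5) for i in range(7))
print(s)   # 1.0
```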
The Integration Formulas of Newton and Cotes
For larger n, some of the values σi become negative, and the corresponding formulas are not suitable for numerical purposes, since cancellations can occur when the sum (3.1.3) is evaluated. Additional integration rules can be found by Hermite interpolation [see Section 2.1.5] of the integrand f with a polynomial P ∈ Πn of degree n or less. The full integral is then approximated by the sum of the approximations to the subintegrals.
As the step length decreases (increasing n), the approximation error approaches zero as fast as h², so we have a method of order 2. Comparing this error with that of the trapezoidal sum, we notice that the order of the method is improved by 2 with a minimum of additional effort, namely the computation of f′(a) and f′(b). By replacing f′(a), f′(b) by difference quotients with a sufficiently high order of approximation, we obtain simple modifications [“end corrections”; see Henrici (1964)] of the trapezoidal sum which do not involve derivatives, but still lead to methods of order higher than 2.
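The effect of such an end correction can be checked numerically; the following Python sketch (our example, using the h² correction term (h²/12)(f′(b) − f′(a)) from the Euler–Maclaurin formula) compares the plain and corrected trapezoidal sums for ∫₀¹ eˣ dx:

```python
import math

def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

def trapezoid_corrected(f, df, a, b, n):
    """Trapezoidal sum with the h^2 end correction: order 4 instead of 2."""
    h = (b - a) / n
    return trapezoid(f, a, b, n) - h * h / 12.0 * (df(b) - df(a))

exact = math.e - 1.0
t = trapezoid(math.exp, 0.0, 1.0, 16)
tc = trapezoid_corrected(math.exp, math.exp, 0.0, 1.0, 16)
print(abs(t - exact), abs(tc - exact))  # the correction gains several orders of magnitude
```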
Peano’s Error Representation
Formula (3.1.6) for M(h) can be extended to an error formula for the compound rule U(h) in the same way as before. Before proving the theorem, let us discuss its application in the case of Simpson's rule. To convert this representation of R(f) into the desired one, we need to interchange the operator Rx with the integration.
The latter integral is differentiable as a function of x because the integrand is continuous in x and t; therefore …. Note that Peano's integral representation of the error is not restricted to operators of the form (3.2.1). In general, the Newton-Cotes formulas integrate without error the polynomials P ∈ Πn if n is odd and P ∈ Πn+1 if n is even [see Exercise 2].
The Euler–Maclaurin Summation Formula
Integration by Extrapolation
The expansion then acts like a polynomial in h² which yields the value τ0 of the integral for h = 0. Some of the resulting rules, but not all, turn out to be of Newton-Cotes type [see Section 3.1]. For the sequence (3.4.5a), half of the function values needed for the computation of the trapezoidal sum T(hi+1) have previously been encountered in the computation of T(hi), and their recalculation can be avoided.
ALGOL procedures which compute the tableau (3.4.4) for given m and interval [a, b] using the Romberg sequence (3.4.5a) are given below. To save memory space, the tableau is built up by adding upward diagonals to its bottom. Such an approximation s can be obtained at the same time as one of the trapezoidal sums T(hi) is calculated.
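For comparison with the ALGOL procedures mentioned above, here is a compact Python sketch (our version, storing whole rows rather than upward diagonals) of the Romberg tableau with the step sequence hi = (b − a)/2^i; note how each new trapezoidal sum reuses the previous one and evaluates f only at the new midpoints:

```python
import math

def romberg(f, a, b, m):
    """Romberg tableau T[i][k] for the sequence h_i = (b-a)/2^i;
    returns the most extrapolated value T[m][m]."""
    h = b - a
    n = 1
    T = [[0.5 * h * (f(a) + f(b))]]
    for i in range(1, m + 1):
        h *= 0.5
        n *= 2
        # T(h_i) from T(h_{i-1}): only the new midpoints are evaluated.
        t = 0.5 * T[i - 1][0] + h * sum(
            f(a + (2 * j - 1) * h) for j in range(1, n // 2 + 1))
        row = [t]
        for k in range(1, i + 1):
            row.append(row[k - 1] + (row[k - 1] - T[i - 1][k - 1]) / (4 ** k - 1))
        T.append(row)
    return T[m][m]

err = abs(romberg(math.sin, 0.0, math.pi, 5) - 2.0)
print(err)   # far below the error of the plain trapezoidal sum
```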
About Extrapolation Methods
The coefficients τi are independent of h, the function αm+1(h) is bounded for h → 0, and τ0 = limh→0 T(h) is the exact solution of the problem at hand. We will see later that the central difference quotient is a better approximation to base an extrapolation method on as far as convergence is concerned, because its asymptotic expansion contains only even powers of the step length. For the following discussion of the discretization errors, we will assume that the exponents in the asymptotic expansion are of the form γk = γ·k.
Note that the increase in the order of convergence from column to column that can be achieved by extrapolation methods is equal to γ: γ = 2 is twice as good as γ = 1. This explains the advantage of discretization methods whose corresponding asymptotic expansions contain only even powers of h, e.g., the trapezoidal sum with its asymptotic expansion (3.4.1), or the central difference quotient discussed in this section. Formula (3.5.10) furthermore shows that the sign of the error remains constant for fixed k and sufficiently large i, provided τk+1 ≠ 0.
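The advantage of γ = 2 can be seen directly by extrapolating the central difference quotient (our Python sketch; since the expansion of D(h) contains only even powers of h, each tableau column gains two orders):

```python
import math

def central_diff_extrapolated(f, x, h, m):
    """Extrapolate the central difference quotient
    D(h) = (f(x+h) - f(x-h)) / (2h) with step halving; the factor
    4**k - 1 reflects the even-power expansion (gamma = 2)."""
    T = [[(f(x + h) - f(x - h)) / (2 * h)]]
    for i in range(1, m + 1):
        h *= 0.5
        row = [(f(x + h) - f(x - h)) / (2 * h)]
        for k in range(1, i + 1):
            row.append(row[k - 1] + (row[k - 1] - T[i - 1][k - 1]) / (4 ** k - 1))
        T.append(row)
    return T[m][m]

d = central_diff_extrapolated(math.exp, 1.0, 0.5, 4)
print(abs(d - math.e))   # much smaller than the O(h^2) error of D(0.5)
```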
Gaussian Integration Methods
The Newton-Cotes formulas [see Section 3.1] are of this form, but the abscissas xi were required to form a uniform partition of the interval [a, b]. To establish these results and determine the exact form of the Gaussian integration rules, we need some basic facts about orthogonal polynomials. The condition of the theorem is known as the Haar condition. Any sequence of functions p0, p1, ….
Note that the abscissas xi must be mutually distinct, since otherwise we could formulate the same integration rule using only n − 1 of the abscissas xi, which contradicts (3.6.12c). We will examine this problem under the assumption that the coefficients δi, γi of the recursion (3.6.5) are given. If the QR method is used for determining the eigenvalues of Jn, then the calculation of the first components v(i)1 of the eigenvectors v(i) is readily included in that algorithm: the calculation of the abscissas xi and the weights wi can be done simultaneously [Golub and Welsch (1969)].
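The Golub–Welsch idea can be sketched for the Legendre case (our Python illustration, using a general symmetric eigensolver rather than the tailored QR method of the text): the abscissas are the eigenvalues of the symmetric tridiagonal Jacobi matrix Jn, and the weights come from the first components of the normalized eigenvectors.

```python
import numpy as np

def gauss_legendre(n):
    """Golub-Welsch sketch for Gauss-Legendre quadrature on [-1, 1]."""
    k = np.arange(1, n)
    beta = k / np.sqrt(4.0 * k * k - 1.0)      # off-diagonal of J_n
    J = np.diag(beta, 1) + np.diag(beta, -1)   # diagonal is zero for Legendre
    x, V = np.linalg.eigh(J)                   # eigenvalues = abscissas
    w = 2.0 * V[0, :] ** 2                     # 2 = integral of the weight function
    return x, w

x, w = gauss_legendre(5)
# The n-point rule is exact for polynomials of degree <= 2n - 1:
print(np.sum(w * x ** 8))   # ≈ 2/9, the exact value of the integral of x^8 over [-1, 1]
```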
Integrals with Singularities
The difficulty lies in the choice of ε: if ε is chosen too small, then the proximity of the singularity at x = 0 causes the rate of convergence to deteriorate when we calculate the remaining integral. If the new integrand is still singular at 0, one of the above approaches can be tried. By construction, the next Newton-Cotes formula gives the exact value of the integral for integrands which are polynomials of degree at most ….
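One of the approaches alluded to is a substitution that removes the singularity altogether; a Python sketch (our example, not from the text) for an algebraic singularity: the substitution x = t² turns ∫₀¹ cos(x)/√x dx into ∫₀¹ 2 cos(t²) dt, a smooth integral that any standard rule handles.

```python
import math

def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

# After x = t^2, dx = 2t dt, the 1/sqrt(x) singularity cancels:
smooth = trapezoid(lambda t: 2.0 * math.cos(t * t), 0.0, 1.0, 1000)
print(smooth)   # ≈ 1.8090, while the raw integrand blows up at x = 0
```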
Note that Cm(θ) → ∞ as θ → 1, so the stability of the extrapolation method deteriorates strongly as θ approaches 1. Consider the Legendre polynomials pj(x) in (3.6.18). a) Show that the leading coefficient of pj(x) has the value 1. Hint: integration by parts; note that d2i+1 …. What is the form of the tridiagonal matrix (3.6.19) in this case? In the Chebyshev case, the weights are all equal.
Gaussian Elimination. The Triangular Decomposition of a Matrix