
Interpolation

2.1 Interpolation by Polynomials

2.1.1 Theoretical Foundation: The Interpolation Formula of Lagrange

In what follows, we denote by $\Pi_n$ the set of all real or complex polynomials $P$ whose degrees do not exceed $n$:

$$P(x) = a_0 + a_1 x + \dots + a_n x^n.$$

(2.1.1.1) Theorem. For $n + 1$ arbitrary support points $(x_i, f_i)$, $i = 0, \dots, n$, $x_i \ne x_k$ for $i \ne k$, there exists a unique polynomial $P \in \Pi_n$ with

$$P(x_i) = f_i, \qquad i = 0, 1, \dots, n.$$

PROOF. Uniqueness: For any two polynomials $P_1, P_2 \in \Pi_n$ with $P_1(x_i) = P_2(x_i) = f_i$, $i = 0, 1, \dots, n$, the polynomial $P := P_1 - P_2 \in \Pi_n$ has degree at most $n$, and it has at least $n + 1$ different zeros, namely $x_i$, $i = 0, \dots, n$. $P$ must therefore vanish identically, and $P_1 = P_2$.

Existence: We will construct the interpolating polynomial $P$ explicitly with the help of polynomials $L_i \in \Pi_n$, $i = 0, \dots, n$, for which

(2.1.1.2) $$L_i(x_k) = \delta_{ik} = \begin{cases} 1 & \text{for } i = k, \\ 0 & \text{for } i \ne k. \end{cases}$$

The following Lagrange polynomials satisfy the above conditions:

(2.1.1.3) $$L_i(x) := \frac{(x - x_0) \cdots (x - x_{i-1})(x - x_{i+1}) \cdots (x - x_n)}{(x_i - x_0) \cdots (x_i - x_{i-1})(x_i - x_{i+1}) \cdots (x_i - x_n)} = \frac{\omega(x)}{(x - x_i)\,\omega'(x_i)} \quad \text{with} \quad \omega(x) := \prod_{i=0}^{n} (x - x_i).$$

Note that our proof so far shows that the Lagrange polynomials are uniquely determined by (2.1.1.2).

The solution $P$ of the interpolation problem can now be expressed directly in terms of the polynomials $L_i$, leading to the Lagrange interpolation formula:

(2.1.1.4) $$P(x) \equiv \sum_{i=0}^{n} f_i\,L_i(x) \equiv \sum_{i=0}^{n} f_i \prod_{\substack{k=0 \\ k \ne i}}^{n} \frac{x - x_k}{x_i - x_k}. \qquad \square$$

The above interpolation formula shows that the coefficients of $P$ depend linearly on the support ordinates $f_i$. While theoretically important, Lagrange's formula is, in general, not as suitable for actual calculations as some other methods to be described below, particularly for large numbers $n$ of support points. Lagrange's formula may, however, be useful in some situations in which many interpolation problems are to be solved for the same support abscissae $x_i$, $i = 0, \dots, n$, but different sets of support ordinates $f_i$, $i = 0, \dots, n$.
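This re-use can be made concrete in a short program. The following sketch is ours (in Python rather than the ALGOL used elsewhere in this text; all function names are our own): it precomputes the denominators $\omega'(x_i)$, which depend only on the abscissae, and then evaluates (2.1.1.4) for any set of support ordinates. The numbers are those of the example that follows.

  def lagrange_weights(xs):
      # w[i] = 1 / prod_{k != i} (x_i - x_k); depends only on the abscissae x_i
      w = []
      for i in range(len(xs)):
          d = 1.0
          for k in range(len(xs)):
              if k != i:
                  d *= xs[i] - xs[k]
          w.append(1.0 / d)
      return w

  def lagrange_eval(xs, fs, w, x):
      # P(x) = sum_i f_i * w[i] * prod_{k != i} (x - x_k), cf. (2.1.1.4)
      total = 0.0
      for i in range(len(xs)):
          p = w[i]
          for k in range(len(xs)):
              if k != i:
                  p *= x - xs[k]
          total += fs[i] * p
      return total

  xs = [0.0, 1.0, 3.0]
  w = lagrange_weights(xs)                            # reused for every new set of ordinates
  print(lagrange_eval(xs, [1.0, 3.0, 2.0], w, 2.0))   # 10/3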

EXAMPLE. Given for $n = 2$:

  $x_i$:  0  1  3
  $f_i$:  1  3  2

Wanted: $P(2)$, where $P \in \Pi_2$, $P(x_i) = f_i$ for $i = 0, 1, 2$.

Solution:

$$L_0(x) = \frac{(x - 1)(x - 3)}{(0 - 1)(0 - 3)}, \qquad L_1(x) = \frac{(x - 0)(x - 3)}{(1 - 0)(1 - 3)}, \qquad L_2(x) = \frac{(x - 0)(x - 1)}{(3 - 0)(3 - 1)},$$

$$P(2) = 1 \cdot L_0(2) + 3 \cdot L_1(2) + 2 \cdot L_2(2) = 1 \cdot \left(-\tfrac{1}{3}\right) + 3 \cdot 1 + 2 \cdot \tfrac{1}{3} = \tfrac{10}{3}.$$

2.1.2 Neville's Algorithm

Instead of solving the interpolation problem all at once, one might consider solving the problem for smaller sets of support points first and then updating these solutions to obtain the solution to the full interpolation problem. This idea will be explored in the following two sections.

For a given set of support points $(x_i, f_i)$, $i = 0, 1, \dots, n$, we denote by $P_{i_0 i_1 \dots i_k}$ that polynomial in $\Pi_k$ for which

$$P_{i_0 i_1 \dots i_k}(x_{i_j}) = f_{i_j}, \qquad j = 0, 1, \dots, k.$$

These polynomials are linked by the following recursion:

(2.1.2.1a) $$P_i(x) \equiv f_i,$$

(2.1.2.1b) $$P_{i_0 i_1 \dots i_k}(x) \equiv \frac{(x - x_{i_0})\,P_{i_1 \dots i_k}(x) - (x - x_{i_k})\,P_{i_0 \dots i_{k-1}}(x)}{x_{i_k} - x_{i_0}}.$$

PROOF. (2.1.2.1a) is trivial. To prove (2.1.2.1b), we denote its right-hand side by $R(x)$, and go on to show that $R$ has the characteristic properties of $P_{i_0 i_1 \dots i_k}$. The degree of $R$ is clearly no greater than $k$. By the definitions of $P_{i_0 \dots i_{k-1}}$ and $P_{i_1 \dots i_k}$,

$$R(x_{i_0}) = P_{i_0 \dots i_{k-1}}(x_{i_0}) = f_{i_0}, \qquad R(x_{i_k}) = P_{i_1 \dots i_k}(x_{i_k}) = f_{i_k},$$

and

$$R(x_{i_j}) = \frac{(x_{i_j} - x_{i_0})\,f_{i_j} - (x_{i_j} - x_{i_k})\,f_{i_j}}{x_{i_k} - x_{i_0}} = f_{i_j}$$

for $j = 1, 2, \dots, k - 1$. Thus $R = P_{i_0 i_1 \dots i_k}$, in view of the uniqueness of polynomial interpolation [Theorem (2.1.1.1)]. $\square$

Neville's algorithm aims at determining the value of the interpolating polynomial P for a single value of x. It is less suited for determining the interpolating polynomial itself. Algorithms that are more efficient for the

latter task, and also more efficient if values of P are sought for several arguments x simultaneously, will be described in Section 2.1.3.

Based on the recursion (2.1.2.1), Neville's algorithm constructs a symmetric tableau of the values of some of the partially interpolating polynomials $P_{i_0 i_1 \dots i_k}$ for fixed $x$:

(2.1.2.2)

  k = 0                 1              2               3

  x_0   f_0 = P_0(x)
                        P_{01}(x)
  x_1   f_1 = P_1(x)                   P_{012}(x)
                        P_{12}(x)                      P_{0123}(x)
  x_2   f_2 = P_2(x)                   P_{123}(x)
                        P_{23}(x)
  x_3   f_3 = P_3(x)

The first column of the tableau contains the prescribed support ordinates $f_i$. Subsequent columns are filled by calculating each entry recursively from its two "neighbors" in the previous column according to (2.1.2.1b). The entry $P_{123}(x)$, for instance, is given by

$$P_{123}(x) = \frac{(x - x_1)\,P_{23}(x) - (x - x_3)\,P_{12}(x)}{x_3 - x_1}.$$

EXAMPLE. Determine $P_{012}(2)$ for the same support points as in Section 2.1.1.

  k = 0                    1                 2

  x_0 = 0   f_0 = P_0(2) = 1
                             P_{01}(2) = 5
  x_1 = 1   f_1 = P_1(2) = 3                 P_{012}(2) = 10/3
                             P_{12}(2) = 5/2
  x_2 = 3   f_2 = P_2(2) = 2

$$P_{01}(2) = \frac{(2 - 0)\cdot 3 - (2 - 1)\cdot 1}{1 - 0} = 5, \qquad P_{12}(2) = \frac{(2 - 1)\cdot 2 - (2 - 3)\cdot 3}{3 - 1} = \frac{5}{2},$$

$$P_{012}(2) = \frac{(2 - 0)\cdot \frac{5}{2} - (2 - 3)\cdot 5}{3 - 0} = \frac{10}{3}.$$

We will now discuss slight variants of Neville's algorithm, employing a frequently used abbreviation,

(2.1.2.3) $$T_{i+k,\,k} := P_{i,\,i+1,\,\dots,\,i+k}.$$

The tableau (2.1.2.2) becomes

(2.1.2.4)

  x_0   f_0 = T_{00}
  x_1   f_1 = T_{10}   T_{11}
  x_2   f_2 = T_{20}   T_{21}   T_{22}
  x_3   f_3 = T_{30}   T_{31}   T_{32}   T_{33}

An additional upward diagonal $T_{i0}, T_{i1}, \dots, T_{ii}$ is constructed whenever one more support point $(x_i, f_i)$ is added.

The recursion (2.1.2.1) may be modified for more efficient evaluation:

(2.1.2.5a) $$T_{i0} := f_i,$$

(2.1.2.5b) $$T_{ik} := \frac{(x - x_{i-k})\,T_{i,k-1} - (x - x_i)\,T_{i-1,k-1}}{x_i - x_{i-k}} = T_{i,k-1} + \frac{T_{i,k-1} - T_{i-1,k-1}}{\dfrac{x - x_{i-k}}{x - x_i} - 1}, \qquad 1 \le k \le i, \quad i = 0, 1, \dots, n.$$

The following ALGOL algorithm is based on this modified recursion:

  for i := 0 step 1 until n do
    begin t[i] := f[i];
      for j := i - 1 step -1 until 0 do
        t[j] := t[j + 1] + (t[j + 1] - t[j]) × (z - x[i]) / (x[i] - x[j])
    end;

After the inner loop has terminated, $t[j] = T_{i,\,i-j}$, $0 \le j \le i$. The desired value $T_{nn} = P_{01\dots n}(z)$ of the interpolating polynomial can be found in $t[0]$.
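For readers who prefer a modern language, here is a rough Python transcription of the ALGOL loop above (ours, not part of the original text); it returns $T_{nn} = P_{01\dots n}(z)$.

  def neville(xs, fs, z):
      # After the inner loop, t[j] = T_{i, i-j}; at the end t[0] = T_{nn}.
      n = len(xs) - 1
      t = [0.0] * (n + 1)
      for i in range(n + 1):
          t[i] = fs[i]
          for j in range(i - 1, -1, -1):
              t[j] = t[j + 1] + (t[j + 1] - t[j]) * (z - xs[i]) / (xs[i] - xs[j])
      return t[0]

  print(neville([0.0, 1.0, 3.0], [1.0, 3.0, 2.0], 2.0))   # 10/3, as in the example above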

Still another modification of Neville's algorithm serves to improve somewhat the accuracy of the interpolated polynomial value. For $i = 0, 1, \dots, n$, let the quantities $Q_{ik}$, $D_{ik}$ be defined by

$$Q_{i0} := D_{i0} := f_i, \qquad Q_{ik} := T_{ik} - T_{i,k-1}, \quad D_{ik} := T_{ik} - T_{i-1,k-1}, \qquad 1 \le k \le i.$$

The recursion (2.1.2.5) then translates into

(2.1.2.6) $$Q_{ik} := (D_{i,k-1} - Q_{i-1,k-1})\,\frac{x_i - x}{x_{i-k} - x_i}, \qquad D_{ik} := (D_{i,k-1} - Q_{i-1,k-1})\,\frac{x_{i-k} - x}{x_{i-k} - x_i}, \qquad 1 \le k \le i, \quad i = 0, 1, \dots, n.$$

Starting with $Q_{i0} := D_{i0} := f_i$, one calculates $Q_{ik}$, $D_{ik}$ from the above recursion. Finally

$$T_{nn} := f_n + \sum_{k=1}^{n} Q_{nk}.$$

If the values $f_0, \dots, f_n$ are close to each other, the quantities $Q_{ik}$ will be small compared to the $f_i$. This suggests forming the sum of the "corrections" $Q_{n1}, \dots, Q_{nn}$ first [contrary to (2.1.2.5)] and then adding it to $f_n$, thereby avoiding unnecessary roundoff errors.
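A minimal sketch of this correction variant (ours; the array and function names are not from the text) keeps only the current column of the quantities $Q_{ik}$, $D_{ik}$ and accumulates the corrections $Q_{nk}$ before adding $f_n$:

  def neville_corrections(xs, fs, x):
      # Q[i], D[i] hold Q_{i,k} and D_{i,k} of the current column k, cf. (2.1.2.6).
      n = len(xs) - 1
      Q = list(fs)
      D = list(fs)
      s = 0.0                                   # running sum of the corrections Q_{n,k}
      for k in range(1, n + 1):
          for i in range(n, k - 1, -1):         # downward, so old Q[i-1] and D[i] are still available
              diff = D[i] - Q[i - 1]
              denom = xs[i - k] - xs[i]
              Q[i] = diff * (xs[i] - x) / denom
              D[i] = diff * (xs[i - k] - x) / denom
          s += Q[n]
      return fs[n] + s                          # T_{nn} = f_n + sum_k Q_{nk}

  print(neville_corrections([0.0, 1.0, 3.0], [1.0, 3.0, 2.0], 2.0))   # 10/3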

Note finally that for $x = 0$ the recursion (2.1.2.5) takes a particularly simple form,

(2.1.2.7a) $$T_{i0} := f_i,$$

(2.1.2.7b) $$T_{ik} := T_{i,k-1} + \frac{T_{i,k-1} - T_{i-1,k-1}}{\dfrac{x_{i-k}}{x_i} - 1}, \qquad 1 \le k \le i,$$

as does its analog (2.1.2.6). These forms are encountered when applying extrapolation methods.
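A typical use of the special form (2.1.2.7) is extrapolation of a quantity $T(h)$, known for a few step sizes $h_i > 0$, to the limit $h = 0$. The sketch below is ours (the difference-quotient illustration is an assumption, not an example from the text); it applies the recursion with evaluation point $x = 0$ and abscissae $x_i = h_i$:

  import math

  def extrapolate_to_zero(hs, ts):
      # Neville recursion (2.1.2.7) with evaluation point x = 0 and abscissae h_i.
      t = list(ts)
      for i in range(len(hs)):
          for j in range(i - 1, -1, -1):
              t[j] = t[j + 1] + (t[j + 1] - t[j]) / (hs[j] / hs[i] - 1.0)
      return t[0]

  # Illustration: central difference quotients of sin at 1.0, extrapolated to h = 0.
  hs = [0.8 / 2 ** k for k in range(5)]
  ts = [(math.sin(1.0 + h) - math.sin(1.0 - h)) / (2 * h) for h in hs]
  print(extrapolate_to_zero(hs, ts), math.cos(1.0))   # both approximately 0.5403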

For historical reasons mainly, we mention Aitken's algorithm. It is also based on (2.1.2.1), but uses different intermediate polynomials. Its tableau is of the form

  x_0   f_0 = P_0(x)
  x_1   f_1 = P_1(x)   P_{01}(x)
  x_2   f_2 = P_2(x)   P_{02}(x)   P_{012}(x)
  x_3   f_3 = P_3(x)   P_{03}(x)   P_{013}(x)   P_{0123}(x)
  x_4   f_4 = P_4(x)   P_{04}(x)   P_{014}(x)   P_{0124}(x)   P_{01234}(x)

The first column again contains the prescribed values $f_i$. Each subsequent entry derives from the previous entry in the same row and the top entry in the previous column according to (2.1.2.1b).

2.1.3 Newton's Interpolation Formula: Divided Differences

Neville's algorithm is geared towards determining interpolating values rather than polynomials. If the interpolating polynomial itself is needed, or if one wants to find interpolating values for several arguments $\xi_j$ simultaneously, then Newton's interpolation formula is to be preferred. Here we write the interpolating polynomial $P \in \Pi_n$, $P(x_i) = f_i$, $i = 0, 1, \dots, n$, in the form

(2.1.3.1) $$P(x) \equiv P_{01\dots n}(x) \equiv a_0 + a_1(x - x_0) + a_2(x - x_0)(x - x_1) + \dots + a_n(x - x_0) \cdots (x - x_{n-1}).$$

Note that the evaluation of (2.1.3.1) for $x = \xi$ may be done recursively as indicated by the following expression:

$$P(\xi) = \bigl( \cdots \bigl( a_n(\xi - x_{n-1}) + a_{n-1} \bigr)(\xi - x_{n-2}) + \dots + a_1 \bigr)(\xi - x_0) + a_0.$$

This requires fewer operations than evaluating (2.1.3.1) term by term. It corresponds to the so-called Horner scheme for evaluating polynomials which are given in the usual form, i.e. in terms of powers of $x$, and it shows that the representation (2.1.3.1) is well suited for evaluation.

It remains to determine the coefficients $a_i$ in (2.1.3.1). In principle, they can be calculated successively from

$$\begin{aligned} f_0 &= P(x_0) = a_0, \\ f_1 &= P(x_1) = a_0 + a_1(x_1 - x_0), \\ f_2 &= P(x_2) = a_0 + a_1(x_2 - x_0) + a_2(x_2 - x_0)(x_2 - x_1), \\ &\;\vdots \end{aligned}$$

This can be done with $n$ divisions and $n(n - 1)$ multiplications. There is, however, a better way, which requires only $n(n + 1)/2$ divisions and which produces useful intermediate results.

Observe that the two polynomials $P_{i_0 i_1 \dots i_k}(x)$ and $P_{i_0 i_1 \dots i_{k-1}}(x)$ differ by a polynomial of degree $k$ with $k$ zeros $x_{i_0}, x_{i_1}, \dots, x_{i_{k-1}}$, since both polynomials interpolate the corresponding support points. Therefore there exists a unique coefficient

(2.1.3.2) $$f_{i_0 i_1 \dots i_k}, \qquad k = 0, 1, \dots, n,$$

such that

(2.1.3.3) $$P_{i_0 i_1 \dots i_k}(x) \equiv P_{i_0 i_1 \dots i_{k-1}}(x) + f_{i_0 i_1 \dots i_k}(x - x_{i_0})(x - x_{i_1}) \cdots (x - x_{i_{k-1}}).$$

From this and from the identity $P_{i_0}(x) \equiv f_{i_0}$ it follows immediately that

(2.1.3.4) $$P_{i_0 i_1 \dots i_k}(x) \equiv f_{i_0} + f_{i_0 i_1}(x - x_{i_0}) + \dots + f_{i_0 i_1 \dots i_k}(x - x_{i_0})(x - x_{i_1}) \cdots (x - x_{i_{k-1}})$$

is a Newton representation of the partially interpolating polynomial $P_{i_0 i_1 \dots i_k}$. The coefficients (2.1.3.2) are called $k$th divided differences.

The recursion (2.1.2.1) for the partially interpolating polynomials translates into the recursion

(2.1.3.5) $$f_{i_0 i_1 \dots i_k} = \frac{f_{i_1 \dots i_k} - f_{i_0 \dots i_{k-1}}}{x_{i_k} - x_{i_0}}$$

for the divided differences, since by (2.1.3.3), $f_{i_1 \dots i_k}$ and $f_{i_0 \dots i_{k-1}}$ are the coefficients of the highest terms of the polynomials $P_{i_1 i_2 \dots i_k}$ and $P_{i_0 i_1 \dots i_{k-1}}$,

respectively. The above recursion starts for $k = 0$ with the given support ordinates $f_i$, $i = 0, \dots, n$. It can be used in various ways for calculating the divided differences $f_{i_0}, f_{i_0 i_1}, \dots, f_{i_0 i_1 \dots i_n}$, which then characterize the desired interpolating polynomial $P = P_{i_0 i_1 \dots i_n}$.

Because the polynomial $P_{i_0 i_1 \dots i_k}$ is uniquely determined by the support points it interpolates [Theorem (2.1.1.1)], the polynomial is invariant to any permutation of the indices $i_0, i_1, \dots, i_k$, and so is its coefficient $f_{i_0 i_1 \dots i_k}$ of $x^k$.

Thus:

(2.1.3.6). The divided differences $f_{i_0 i_1 \dots i_k}$ are invariant to permutations of the indices $i_0, i_1, \dots, i_k$: if $(j_0, j_1, \dots, j_k)$ is a permutation of $(i_0, i_1, \dots, i_k)$, then

$$f_{j_0 j_1 \dots j_k} = f_{i_0 i_1 \dots i_k}.$$

If we choose to calculate the divided differences in analogy to Neville's method-instead of, say, Aitken's method-then we are led to the following tableau, called the divided-difference scheme:

(2.1.3.7)

  k = 0      k = 1       k = 2        ...      k = n

  x_0   f_0
               f_{01}
  x_1   f_1                f_{012}
               f_{12}
  x_2   f_2                   ⋱                 f_{012...n}
   ⋮     ⋮                 f_{n-2,n-1,n}
               f_{n-1,n}
  x_n   f_n

The entries in the second column are of the form

$$f_{i,\,i+1} = \frac{f_{i+1} - f_i}{x_{i+1} - x_i}, \qquad i = 0, 1, \dots, n - 1;$$

those in the third column, of the form

$$f_{i,\,i+1,\,i+2} = \frac{f_{i+1,\,i+2} - f_{i,\,i+1}}{x_{i+2} - x_i}, \qquad \text{for instance} \quad f_{012} = \frac{f_{12} - f_{01}}{x_2 - x_0};$$

and so on.

Clearly,

$$P(x) \equiv P_{01\dots n}(x) = f_0 + f_{01}(x - x_0) + \dots + f_{01\dots n}(x - x_0)(x - x_1) \cdots (x - x_{n-1})$$

is the desired solution to the interpolation problem at hand. The coefficients of the above expansion are found in the top descending diagonal of the divided-difference scheme (2.1.3.7).

EXAMPLE. With the numbers of the example in Sections 2.1.1 and 2.1.2, we have:

  x_0 = 0   f_0 = 1
                      f_{01} = 2
  x_1 = 1   f_1 = 3                 f_{012} = -5/6
                      f_{12} = -1/2
  x_2 = 3   f_2 = 2

$$P_{012}(x) = 1 + 2(x - 0) - \tfrac{5}{6}(x - 0)(x - 1), \qquad P_{012}(2) = \bigl(-\tfrac{5}{6}(2 - 1) + 2\bigr)(2 - 0) + 1 = \tfrac{10}{3}.$$

Instead of building the divided-difference scheme column by column, one might want to start with the upper left corner and add successive ascending diagonal rows. This amounts to adding new support points one at a time after having interpolated the previous ones. In the following ALGOL procedure, the entries in an ascending diagonal of (2.1.3.7) are found, after each increase of $i$, in the top portion of the array $t$, and the coefficients $f_0, f_{01}, \dots, f_{01\dots i}$ are found in the array $a$.

  for i := 0 step 1 until n do
    begin t[i] := f[i];
      for j := i - 1 step -1 until 0 do
        t[j] := (t[j + 1] - t[j]) / (x[i] - x[j]);
      a[i] := t[0]
    end;

Afterwards, the interpolating polynomial (2.1.3.1) may be evaluated for any desired argument $z$:

  p := a[n];
  for i := n - 1 step -1 until 0 do
    p := p × (z - x[i]) + a[i];
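The same two steps read as follows in Python (our transcription, not part of the original; the names mirror the ALGOL arrays):

  def newton_coefficients(xs, fs):
      # Builds (2.1.3.7) by ascending diagonals; a[i] = f_{0 1 ... i}.
      n = len(xs) - 1
      t = [0.0] * (n + 1)
      a = [0.0] * (n + 1)
      for i in range(n + 1):
          t[i] = fs[i]
          for j in range(i - 1, -1, -1):
              t[j] = (t[j + 1] - t[j]) / (xs[i] - xs[j])
          a[i] = t[0]
      return a

  def newton_eval(xs, a, z):
      # Horner-type evaluation of the Newton form (2.1.3.1).
      p = a[-1]
      for i in range(len(a) - 2, -1, -1):
          p = p * (z - xs[i]) + a[i]
      return p

  xs, fs = [0.0, 1.0, 3.0], [1.0, 3.0, 2.0]
  a = newton_coefficients(xs, fs)     # [1.0, 2.0, -5/6], cf. the example above
  print(newton_eval(xs, a, 2.0))      # 10/3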

Some Newton representations of the same polynomial are numerically more trustworthy to evaluate than others. Choosing the permutation of the indices so that the evaluation argument $\xi$ satisfies

$$|\xi - x_{i_{k+1}}| \ge |\xi - x_{i_j}| \quad \text{for } j \le k, \qquad k = 0, 1, \dots, n - 1,$$

dampens the error (see Section 1.3) during the Horner evaluation of (2.1.3.8)

(2.1.3.8) $$P(x) \equiv P_{i_0 \dots i_n}(x) \equiv f_{i_0} + f_{i_0 i_1}(x - x_{i_0}) + \dots + f_{i_0 i_1 \dots i_n}(x - x_{i_0}) \cdots (x - x_{i_{n-1}}).$$

All Newton representations of the above kind can be found in the single divided-difference scheme which arises if the support arguments $x_i$, $i = 0, \dots, n$, are ordered by size: $x_i < x_{i+1}$ for $i = 0, \dots, n - 1$. The preferred sequence of indices $i_0, i_1, \dots, i_k$ is then such that each index $i_k$ is "adjacent" to some previous index: more precisely, either $i_k = \min\{\,i_l \mid 0 \le l < k\,\} - 1$ or $i_k = \max\{\,i_l \mid 0 \le l < k\,\} + 1$. Therefore the coefficients of (2.1.3.8) are found along a zigzag path of the divided-difference scheme, instead of along the upper descending diagonal. Starting with $f_{i_0}$, the path proceeds to the upper right neighbor if $i_k < i_{k-1}$, or to the lower right neighbor if $i_k > i_{k-1}$.

EXAMPLE. In the previous example, a preferred sequence for $\xi = 2$ is $i_0 = 1$, $i_1 = 2$, $i_2 = 0$. The corresponding path in the divided-difference scheme is indicated below:

  x_0 = 0   f_0 = 1
                      f_{01} = 2
  x_1 = 1   f_1 = 3                 f_{012} = -5/6
                      f_{12} = -1/2
  x_2 = 3   f_2 = 2

The desired Newton representation is:

$$P_{120}(x) \equiv 3 - \tfrac{1}{2}(x - 1) - \tfrac{5}{6}(x - 1)(x - 3), \qquad P_{120}(2) = \bigl(-\tfrac{5}{6}(2 - 3) - \tfrac{1}{2}\bigr)(2 - 1) + 3 = \tfrac{10}{3}.$$

Frequently, the support ordinates $f_i$ are the values $f(x_i) = f_i$ of a given function $f(x)$, which one wants to approximate by interpolation. In this case, the divided differences may be considered as multivariate functions of the support arguments $x_i$, and are historically written as

$$f[x_{i_0}, x_{i_1}, \dots, x_{i_k}] := f_{i_0 i_1 \dots i_k}.$$

These functions satisfy (2.1.3.5). For instance,

$$f[x_0] \equiv f(x_0), \qquad f[x_0, x_1] \equiv \frac{f[x_1] - f[x_0]}{x_1 - x_0} = \frac{f(x_1) - f(x_0)}{x_1 - x_0},$$

$$f[x_0, x_1, x_2] = \frac{f[x_1, x_2] - f[x_0, x_1]}{x_2 - x_0} = \frac{f(x_2)(x_1 - x_0) - f(x_1)(x_2 - x_0) + f(x_0)(x_2 - x_1)}{(x_1 - x_0)(x_2 - x_0)(x_2 - x_1)}.$$

Also, (2.1.3.6) gives immediately:

(2.1.3.9) Theorem. The divided differences $f[x_{i_0}, \dots, x_{i_k}]$ are symmetric functions of their arguments, i.e., they are invariant to permutations of the arguments $x_{i_0}, \dots, x_{i_k}$.

If the function $f(x)$ is itself a polynomial, then we have the

(2.1.3.10) Theorem. If $f(x)$ is a polynomial of degree $N$, then

$$f[x_0, \dots, x_k] = 0 \qquad \text{for } k > N.$$

PROOF. Because of the unique solvability of the interpolation problem [Theorem (2.1.1.1)], $P_{0\dots k}(x) \equiv f(x)$ for $k \ge N$. The coefficient of $x^k$ in $P_{0\dots k}(x)$ must therefore vanish for $k > N$. This coefficient, however, is given by $f[x_0, \dots, x_k]$ according to (2.1.3.3). $\square$

EXAMPLE. $f(x) = x^2$:

  k = 0     1     2     3     4

  0     0
             1
  1     1          1
             3          0
  2     4          1          0
             5          0
  3     9          1
             7
  4    16

If the function $f(x)$ is sufficiently often differentiable, then its divided differences $f[x_0, \dots, x_k]$ can also be defined if some of the arguments $x_i$ coincide. For instance, if $f(x)$ has a derivative at $x_0$, then it makes sense for certain purposes to define

$$f[x_0, x_0] := f'(x_0).$$

For a corresponding modification of the divided-difference scheme (2.1.3.7) see Section 2.1.5 on Hermite interpolation.

2.1.4 The Error in Polynomial Interpolation

Once again we consider a given function $f(x)$ and certain of its values

$$f_i = f(x_i), \qquad i = 0, 1, \dots, n,$$

which are to be interpolated. We wish to ask how well the interpolating polynomial $P(x) \equiv P_{0\dots n}(x)$ with

$$P(x_i) = f_i, \qquad i = 0, 1, \dots, n,$$

reproduces $f(x)$ for arguments different from the support arguments $x_i$. The error

$$f(x) - P(x), \qquad x \ne x_i, \quad i = 0, 1, \dots, n,$$

can clearly become arbitrarily large for suitable functions $f$ unless some restrictions are imposed on $f$. Under certain conditions, however, it is possible to bound the error. We have, for instance:

(2.1.4.1) Theorem. If the function $f$ has an $(n + 1)$st derivative, then for every argument $\bar{x}$ there exists a number $\xi$ in the smallest interval $I[x_0, \dots, x_n, \bar{x}]$ which contains $\bar{x}$ and all support abscissas $x_i$, satisfying

$$f(\bar{x}) - P_{01\dots n}(\bar{x}) = \frac{\omega(\bar{x})\,f^{(n+1)}(\xi)}{(n + 1)!},$$

where

$$\omega(x) \equiv (x - x_0)(x - x_1) \cdots (x - x_n).$$

PROOF. Let $P(x) \equiv P_{01\dots n}(x)$ be the polynomial which interpolates the function at $x_i$, $i = 0, 1, \dots, n$, and suppose $\bar{x} \ne x_i$ (for $\bar{x} = x_i$ there is nothing to show). We can find a constant $K$ such that the function

$$F(x) := f(x) - P(x) - K\omega(x)$$

vanishes for $x = \bar{x}$: $F(\bar{x}) = 0$. Consequently, $F(x)$ has at least the $n + 2$ zeros $x_0, \dots, x_n, \bar{x}$ in the interval $I[x_0, \dots, x_n, \bar{x}]$. By Rolle's theorem, applied repeatedly, $F'(x)$ has at least $n + 1$ zeros in the above interval, $F''(x)$ at least $n$ zeros, and finally $F^{(n+1)}(x)$ at least one zero $\xi \in I[x_0, \dots, x_n, \bar{x}]$. Since $P^{(n+1)}(x) \equiv 0$,

$$F^{(n+1)}(\xi) = f^{(n+1)}(\xi) - K(n + 1)! = 0 \qquad \text{or} \qquad K = \frac{f^{(n+1)}(\xi)}{(n + 1)!}.$$

This proves the proposition

$$f(\bar{x}) - P(\bar{x}) = K\omega(\bar{x}) = \omega(\bar{x})\,\frac{f^{(n+1)}(\xi)}{(n + 1)!}. \qquad \square$$

A different error term can be derived from Newton's interpolation formula (see Section 2.1.3):

$$P(x) \equiv P_{01\dots n}(x) \equiv f[x_0] + f[x_0, x_1](x - x_0) + \dots + f[x_0, x_1, \dots, x_n](x - x_0) \cdots (x - x_{n-1}).$$

Here $f[x_0, x_1, \dots, x_k]$ are the divided differences of the given function $f$. If in addition to the $n + 1$ support points

$$(x_i, f_i): \quad f_i = f(x_i), \qquad i = 0, 1, \dots, n,$$

we introduce an $(n + 2)$nd support point

$$(x_{n+1}, f_{n+1}): \quad x_{n+1} := \bar{x}, \quad f_{n+1} := f(\bar{x}), \qquad \text{where } \bar{x} \ne x_i, \ i = 0, \dots, n,$$

then by Newton's formula

$$f(\bar{x}) = P_{0\dots n+1}(\bar{x}) = P_{0\dots n}(\bar{x}) + f[x_0, \dots, x_n, \bar{x}]\,\omega(\bar{x}),$$

or

(2.1.4.2) $$f(\bar{x}) - P_{0\dots n}(\bar{x}) = \omega(\bar{x})\,f[x_0, \dots, x_n, \bar{x}].$$

The difference on the left-hand side appears in Theorem (2.1.4.1), and since $\omega(\bar{x}) \ne 0$, we must have

$$f[x_0, \dots, x_n, \bar{x}] = \frac{f^{(n+1)}(\xi)}{(n + 1)!} \qquad \text{for some } \xi \in I[x_0, \dots, x_n, \bar{x}].$$

This also yields

(2.1.4.3) $$f[x_0, \dots, x_n] = \frac{f^{(n)}(\xi)}{n!} \qquad \text{for some } \xi \in I[x_0, \dots, x_n],$$

which relates derivatives and divided differences.

EXAMPLE. $f(x) = \sin x$,

$$x_i = \frac{\pi i}{10}, \quad i = 0, 1, 2, 3, 4, 5, \qquad n = 5,$$

$$\sin x - P(x) = (x - x_0)(x - x_1) \cdots (x - x_5)\,\frac{-\sin \xi}{6!},$$

$$|\sin x - P(x)| \le \frac{1}{6!}\,\bigl|(x - x_0)(x - x_1) \cdots (x - x_5)\bigr| = \frac{|\omega(x)|}{720}.$$

We end this section with two brief warnings, one against trusting the interpolating polynomial outside of $I[x_0, \dots, x_n]$, and one against expecting too much of polynomial interpolation inside $I[x_0, \dots, x_n]$.

In the exterior of the interval $I[x_0, \dots, x_n]$, the value of $|\omega(x)|$ in Theorem (2.1.4.1) grows very fast. The use of the interpolation polynomial $P$ for approximating $f$ at some location outside the interval $I[x_0, \dots, x_n]$, called extrapolation, should be avoided if possible.

Within $I[x_0, \dots, x_n]$, on the other hand, it should not be assumed that finer and finer samplings of the function $f$ will lead to better and better approximations through interpolation.

Consider a real function $f$ which is infinitely often differentiable in a given interval $[a, b]$. To every interval partition

$$\Delta = \{a = x_0 < x_1 < \dots < x_n = b\}$$

there exists an interpolating polynomial $P_\Delta \in \Pi_n$ with $P_\Delta(x_i) = f_i$ for $x_i \in \Delta$. A sequence of interval partitions

$$\Delta_m = \{a = x_0^{(m)} < x_1^{(m)} < \dots < x_{n_m}^{(m)} = b\}$$

gives rise to a sequence of interpolating polynomials $P_{\Delta_m}$. One might expect the polynomials $P_{\Delta_m}$ to converge toward $f$ if the fineness

$$\|\Delta_m\| := \max_i\,\bigl|x_{i+1}^{(m)} - x_i^{(m)}\bigr|$$

of the partitions tends to $0$ as $m \to \infty$. In general this is not true. For example, it has been shown for the functions

$$f(x) \equiv \frac{1}{1 + x^2}, \quad [a, b] = [-5, 5], \qquad \text{or} \qquad f(x) \equiv \sqrt{x}, \quad [a, b] = [0, 1],$$

that the polynomials $P_{\Delta_m}$ do not converge pointwise to $f$ for arbitrarily fine uniform partitions $\Delta_m$, $x_i^{(m)} = a + i\,\dfrac{b - a}{m}$, $i = 0, \dots, m$.
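This behavior is easy to observe numerically. The following sketch is ours (it reuses the Neville routine from Section 2.1.2 and samples the error on a fine grid): for $f(x) = 1/(1 + x^2)$ on $[-5, 5]$, the maximum interpolation error on uniform partitions grows as $m$ increases instead of shrinking.

  def neville(xs, fs, z):
      # Neville's algorithm, as in Section 2.1.2.
      t = [0.0] * len(xs)
      for i in range(len(xs)):
          t[i] = fs[i]
          for j in range(i - 1, -1, -1):
              t[j] = t[j + 1] + (t[j + 1] - t[j]) * (z - xs[i]) / (xs[i] - xs[j])
      return t[0]

  def max_interpolation_error(m, a=-5.0, b=5.0):
      f = lambda x: 1.0 / (1.0 + x * x)
      xs = [a + i * (b - a) / m for i in range(m + 1)]
      fs = [f(x) for x in xs]
      grid = [a + j * (b - a) / 1000 for j in range(1001)]
      return max(abs(f(x) - neville(xs, fs, x)) for x in grid)

  for m in (5, 10, 20):
      print(m, max_interpolation_error(m))   # the error grows with m instead of shrinking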

2.1.5 Hermite Interpolation

Consider the real numbers $\xi_i$, $y_i^{(k)}$, $k = 0, 1, \dots, n_i - 1$, $i = 0, 1, \dots, m$, with

$$\xi_0 < \xi_1 < \dots < \xi_m.$$

The Hermite interpolation problem for these data consists of determining a polynomial $P$ whose degree does not exceed $n$, where

$$n + 1 := \sum_{i=0}^{m} n_i,$$

and which satisfies the following interpolation conditions:

(2.1.5.1) $$P^{(k)}(\xi_i) = y_i^{(k)}, \qquad k = 0, 1, \dots, n_i - 1, \quad i = 0, 1, \dots, m.$$

This problem differs from the usual interpolation problem for polynomials in that it prescribes at each support abscissa $\xi_i$ not only the value but also the first $n_i - 1$ derivatives of the desired polynomial. The polynomial interpolation of Section 2.1.1 is the special case $n_i = 1$, $i = 0, 1, \dots, m$.

There are exactly $\sum_i n_i = n + 1$ conditions (2.1.5.1) for the $n + 1$ coefficients of the interpolating polynomial, leading us to expect that the Hermite interpolation problem can be solved uniquely:

(2.1.5.2) Theorem. For arbitrary numbers $\xi_0 < \xi_1 < \dots < \xi_m$ and $y_i^{(k)}$, $k = 0, 1, \dots, n_i - 1$, $i = 0, 1, \dots, m$, there exists precisely one polynomial $P \in \Pi_n$,

$$n + 1 := \sum_{i=0}^{m} n_i,$$

which satisfies (2.1.5.1).

PROOF. We first show uniqueness. Consider the difference polynomial $Q(x) := P_1(x) - P_2(x)$ of two polynomials $P_1, P_2 \in \Pi_n$ for which (2.1.5.1) holds. Since

$$Q^{(k)}(\xi_i) = 0, \qquad k = 0, 1, \dots, n_i - 1, \quad i = 0, 1, \dots, m,$$

$\xi_i$ is at least an $n_i$-fold root of $Q$, so that $Q$ has altogether $\sum_i n_i = n + 1$ roots, each counted according to its multiplicity. Thus $Q$ must vanish identically, since its degree is less than $n + 1$.

Existence is a consequence of uniqueness: for (2.1.5.1) is a system of $n + 1$ linear equations for the $n + 1$ unknown coefficients $c_j$ of $P(x) = c_0 + c_1 x + \dots + c_n x^n$. The matrix of this system is not singular, because of the uniqueness of its solutions. Hence the linear system (2.1.5.1) has a unique solution for arbitrary right-hand sides $y_i^{(k)}$. $\square$

Hermite interpolating polynomials can be given explicitly in a form analogous to the interpolation formula of Lagrange (2.1.1.4). The polynomial

$P \in \Pi_n$ given by

(2.1.5.3) $$P(x) = \sum_{i=0}^{m} \sum_{k=0}^{n_i - 1} y_i^{(k)}\,L_{ik}(x)$$

satisfies (2.1.5.1). The polynomials $L_{ik} \in \Pi_n$ are generalized Lagrange polynomials. They are defined as follows: starting with the auxiliary polynomials

$$l_{ik}(x) := \frac{(x - \xi_i)^k}{k!}\,\prod_{\substack{j=0 \\ j \ne i}}^{m} \left(\frac{x - \xi_j}{\xi_i - \xi_j}\right)^{n_j}$$

[compare (2.1.1.3)], put

$$L_{i,\,n_i-1}(x) := l_{i,\,n_i-1}(x), \qquad i = 0, 1, \dots, m,$$

and recursively for $k = n_i - 2,\; n_i - 3,\; \dots,\; 0$,

$$L_{ik}(x) := l_{ik}(x) - \sum_{v=k+1}^{n_i - 1} l_{ik}^{(v)}(\xi_i)\,L_{iv}(x).$$

By induction one verifies that

$$L_{ik}^{(\sigma)}(\xi_j) = \begin{cases} 1 & \text{if } i = j \text{ and } k = \sigma, \\ 0 & \text{otherwise.} \end{cases}$$

Thus $P$ in (2.1.5.3) is indeed the desired Hermite interpolating polynomial.

In order to describe alternative methods for determining $P$, it will be useful to represent the data $\xi_i$, $y_i^{(k)}$, $i = 0, 1, \dots, m$, $k = 0, 1, \dots, n_i - 1$, in a somewhat different form as a sequence $\mathcal{F}_n = \{(x_i, f_i)\}_{i=0,\dots,n}$ of $n + 1$ pairs of numbers. The pairs

$$(x_0, f_0),\ (x_1, f_1),\ \dots,\ (x_{n_0-1}, f_{n_0-1}),\ (x_{n_0}, f_{n_0}),\ \dots,\ (x_n, f_n)$$

of $\mathcal{F}_n$ denote consecutively the pairs

$$(\xi_0, y_0^{(0)}),\ (\xi_0, y_0^{(1)}),\ \dots,\ (\xi_0, y_0^{(n_0-1)}),\ (\xi_1, y_1^{(0)}),\ \dots,\ (\xi_m, y_m^{(n_m-1)}).$$

Note that $x_0 \le x_1 \le \dots \le x_n$ and that the number $\xi_i$ occurs exactly $n_i$ times in the sequence $\{x_i\}_{i=0,\dots,n}$.

EXAMPLE 1. Suppose $m = 1$, $n_0 = 2$, $n_1 = 3$ and

$$\xi_0 = 0,\quad y_0^{(0)} = -1,\quad y_0^{(1)} = -2; \qquad \xi_1 = 1,\quad y_1^{(0)} = 0,\quad y_1^{(1)} = 10,\quad y_1^{(2)} = 40.$$

This problem is described by the sequence $\mathcal{F}_4 = \{(x_i, f_i)\}_{i=0,\dots,4}$:

$$(x_0, f_0) = (0, -1),\quad (x_1, f_1) = (0, -2),\quad (x_2, f_2) = (1, 0),\quad (x_3, f_3) = (1, 10),\quad (x_4, f_4) = (1, 40).$$

Given any Hermite interpolation problem, it uniquely determines a sequence $\mathcal{F}_n$ as above. Conversely, every sequence $\mathcal{F}_n = \{(x_i, f_i)\}_{i=0,\dots,n}$ of $n + 1$ pairs of numbers with $x_0 \le x_1 \le \dots \le x_n$ determines a Hermite interpolation problem, which will be referred to simply as $\mathcal{F}_n$. It also will be convenient to denote by

(2.1.5.4) $$[x - x_0]^0 := 1, \qquad [x - x_0]^j := (x - x_0)(x - x_1) \cdots (x - x_{j-1})$$

the polynomials of degree $j$.

Our next goal is to represent the polynomial $P$ which interpolates $\mathcal{F}_n$ in Newton form [compare (2.1.3.1)]:

(2.1.5.5) $$P(x) = a_0 + a_1[x - x_0] + a_2[x - x_0]^2 + \dots + a_n[x - x_0]^n,$$

and to determine the coefficients $a_i$ with the help again of divided differences

(2.1.5.6) $$a_k = f[x_0, x_1, \dots, x_k], \qquad k = 0, 1, \dots, n.$$

However, the recursive definition (2.1.3.5) of the divided differences has to be modified because there may be repetitions among the support abscissae $x_0 \le x_1 \le \dots \le x_n$. For instance, if $x_0 = x_1$, then the divided difference $f[x_0, x_1]$ can no longer be defined as $(f[x_1] - f[x_0])/(x_1 - x_0)$.

The extension of the definition of divided differences to the case of repeated arguments involves transition to a limit. To this end, let

$$\zeta_0 < \zeta_1 < \dots < \zeta_n$$

be mutually distinct support abscissas, and consider the divided differences $f[\zeta_i, \dots, \zeta_{i+k}]$ which belong to the function $f(x) := P(x)$, where the polynomial $P$ is the solution of the Hermite interpolation problem $\mathcal{F}_n$. These divided differences are now well defined by the recursion (2.1.3.5), if we let $f_i := P(\zeta_i)$ initially. Therefore, and by (2.1.3.5),

(2.1.5.7a) $$P(x) = \sum_{j=0}^{n} a_j\,[x - \zeta_0]^j,$$

(2.1.5.7b) $$a_j := f[\zeta_0, \zeta_1, \dots, \zeta_j],$$

(2.1.5.7c) $$f[\zeta_i, \zeta_{i+1}, \dots, \zeta_{i+k}] = \frac{f[\zeta_{i+1}, \zeta_{i+2}, \dots, \zeta_{i+k}] - f[\zeta_i, \zeta_{i+1}, \dots, \zeta_{i+k-1}]}{\zeta_{i+k} - \zeta_i}$$

for $i = 0, 1, \dots, n$, $k = 1, \dots, n - i$. Since $x_0 \le x_1 \le \dots \le x_n$, all limits

$$f[x_i, x_{i+1}, \dots, x_{i+k}] := \lim_{\substack{\zeta_j \to x_j \\ i \le j \le i+k}} f[\zeta_i, \zeta_{i+1}, \dots, \zeta_{i+k}]$$

exist, provided they exist for indices $i$, $k$ with $x_i = x_{i+1} = \dots = x_{i+k}$. The latter follows from (2.1.4.3), which yields

(2.1.5.8) $$\lim_{\substack{\zeta_j \to x_j \\ i \le j \le i+k}} f[\zeta_i, \zeta_{i+1}, \dots, \zeta_{i+k}] = \frac{1}{k!}\,P^{(k)}(x_i) \qquad \text{if } x_i = x_{i+1} = \dots = x_{i+k}.$$

We now denote by $r = r(i) \ge 0$ the smallest index such that

$$x_r = x_{r+1} = \dots = x_i.$$

Then, due to the interpolation properties of $P$ with respect to $\mathcal{F}_n$,

$$P^{(k)}(x_i) = P^{(k)}(x_r) = f_{r+k},$$

so that by (2.1.5.8)

$$f[x_i, x_{i+1}, \dots, x_{i+k}] = \frac{f_{r(i)+k}}{k!} \qquad \text{if } x_i = x_{i+1} = \dots = x_{i+k}.$$

In the limit $\zeta_j \to x_j$, (2.1.5.7) becomes

(2.1.5.9a) $$P(x) = \sum_{j=0}^{n} a_j\,[x - x_0]^j, \qquad a_j := f[x_0, x_1, \dots, x_j],$$

(2.1.5.9b) $$f[x_i, x_{i+1}, \dots, x_{i+k}] := \frac{f_{r(i)+k}}{k!} \qquad \text{if } x_i = x_{i+k},$$

(2.1.5.9c) $$f[x_i, x_{i+1}, \dots, x_{i+k}] := \frac{f[x_{i+1}, \dots, x_{i+k}] - f[x_i, \dots, x_{i+k-1}]}{x_{i+k} - x_i} \qquad \text{otherwise.}$$

(Note that $x_0 \le x_1 \le \dots \le x_n$ has been assumed.) These formulas now permit a recursive calculation of the divided differences and thereby of the coefficients $a_j$ of the interpolating polynomial $P$ in Newton form.

EXAMPLE 2. We illustrate the calculation of the divided differences with the data of Example 1 ($m = 1$, $n_0 = 2$, $n_1 = 3$):

$$\mathcal{F}_4 = \{(0, -1),\ (0, -2),\ (1, 0),\ (1, 10),\ (1, 40)\}.$$

The following difference scheme results:

  x_0 = 0   -1* = f[x_0]
                           -2* = f[x_0, x_1]
  x_1 = 0   -1* = f[x_1]                       3 = f[x_0, x_1, x_2]
                            1 = f[x_1, x_2]                           6 = f[x_0, ..., x_3]
  x_2 = 1    0* = f[x_2]                       9 = f[x_1, x_2, x_3]                         5 = f[x_0, ..., x_4]
                           10* = f[x_2, x_3]                         11 = f[x_1, ..., x_4]
  x_3 = 1    0* = f[x_3]                      20* = f[x_2, x_3, x_4]
                           10* = f[x_3, x_4]
  x_4 = 1    0* = f[x_4]

The entries marked * have been calculated using (2.1.5.9b) rather than (2.1.5.9c). The coefficients of the Hermite interpolating polynomial can be found in the upper diagonal of the difference scheme:

$$P(x) = -1 - 2[x - x_0] + 3[x - x_0]^2 + 6[x - x_0]^3 + 5[x - x_0]^4 = -1 - 2x + 3x^2 + 6x^2(x - 1) + 5x^2(x - 1)^2.$$
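For completeness, here is a small Python sketch (ours, not from the text) of the rules (2.1.5.9b, c): the data are given as the sequence $\mathcal{F}_n$, i.e. as nondecreasing abscissae together with the values $f(\xi_i), f'(\xi_i), \dots$ listed consecutively for each repeated abscissa. It reproduces the coefficients of Example 2.

  from math import factorial

  def hermite_newton_coefficients(xs, fs):
      # xs nondecreasing; within a run of equal abscissae, fs lists f, f', f'', ...
      n = len(xs) - 1
      r = [0] * (n + 1)                          # r[i]: smallest index with x_r = x_i
      for i in range(1, n + 1):
          r[i] = r[i - 1] if xs[i] == xs[i - 1] else i
      table = [[fs[r[i]] for i in range(n + 1)]]  # k = 0: f[x_i] = f_{r(i)}, by (2.1.5.9b)
      for k in range(1, n + 1):
          col = []                               # table[k][i] = f[x_i, ..., x_{i+k}]
          for i in range(n - k + 1):
              if xs[i] == xs[i + k]:             # (2.1.5.9b)
                  col.append(fs[r[i] + k] / factorial(k))
              else:                              # (2.1.5.9c)
                  col.append((table[k - 1][i + 1] - table[k - 1][i]) / (xs[i + k] - xs[i]))
          table.append(col)
      return [table[k][0] for k in range(n + 1)] # a_0, ..., a_n

  print(hermite_newton_coefficients([0, 0, 1, 1, 1], [-1, -2, 0, 10, 40]))
  # -1, -2, 3, 6, 5: the coefficients of P(x) in Example 2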

The interpolation error which is incurred by Hermite interpolation can be estimated in the same fashion as for the usual interpolation by polynomials. In particular, the proof of the following theorem is entirely analogous to the proof of Theorem (2.1.4.1):

(2.1.5.10) Theorem. Let the real function $f$ be $n + 1$ times differentiable on the interval $[a, b]$, and consider $m + 1$ support abscissae $\xi_i \in [a, b]$,

$$\xi_0 < \xi_1 < \dots < \xi_m.$$

If the polynomial $P(x)$ is of degree at most $n$,

$$\sum_{i=0}^{m} n_i = n + 1,$$

and satisfies the interpolation conditions

$$P^{(k)}(\xi_i) = f^{(k)}(\xi_i), \qquad k = 0, 1, \dots, n_i - 1, \quad i = 0, 1, \dots, m,$$

then to every $\bar{x} \in [a, b]$ there exists a $\xi \in I[\xi_0, \dots, \xi_m, \bar{x}]$ such that

$$f(\bar{x}) - P(\bar{x}) = \frac{\omega(\bar{x})\,f^{(n+1)}(\xi)}{(n + 1)!},$$

where

$$\omega(x) := (x - \xi_0)^{n_0}(x - \xi_1)^{n_1} \cdots (x - \xi_m)^{n_m}.$$

Hermite interpolation is frequently used to approximate a given real function $f$ by a piecewise polynomial function $\varphi$. Given a partition

$$\Delta:\quad a = \xi_0 < \xi_1 < \dots < \xi_m = b$$

of an interval $[a, b]$, the corresponding Hermite function space $H_\Delta^{(v)}$ is defined as consisting of all functions $\varphi\colon [a, b] \to \mathbb{R}$ with the following properties (2.1.5.11):

(a) $\varphi \in C^{v-1}[a, b]$: the $(v - 1)$st derivative of $\varphi$ exists and is continuous on $[a, b]$.
(b) $\varphi|_{I_i} \in \Pi_{2v-1}$: on each subinterval $I_i := [\xi_i, \xi_{i+1}]$, $i = 0, 1, \dots, m - 1$, $\varphi$ agrees with a polynomial of degree at most $2v - 1$.

Thus the function $\varphi$ consists of polynomial pieces of degree $2v - 1$ or less which are $v - 1$ times differentiable at the "knots" $\xi_i$. In order to approxi-
