
2.4 Interpolation by Spline Functions

Spline functions yield smooth interpolating curves which are less likely to exhibit the large oscillations characteristic of high-degree polynomials. They are finding applications in graphics and, increasingly, in numerical methods.

For instance, spline functions may be used as trial functions in connection with the Rayleigh-Ritz-Galerkin method for solving boundary-value problems of ordinary and partial differential equations. Introductions are given, for instance, by Greville (1969), Schultz (1973), Böhmer (1974), and de Boor (1978).

2.4.1 Theoretical Foundations

Let $\Delta := \{a = x_0 < x_1 < \cdots < x_n = b\}$ be a partition of the interval $[a, b]$.

(2.4.1.1) Definition. A cubic spline (function) $S_\Delta$ on $\Delta$ is a real function $S_\Delta : [a, b] \to \mathbb{R}$ with the properties:

(a) $S_\Delta \in C^2[a, b]$, that is, $S_\Delta$ is twice continuously differentiable on $[a, b]$.

(b) $S_\Delta$ coincides on every subinterval $[x_i, x_{i+1}]$, $i = 0, 1, \ldots, n - 1$, with a polynomial of degree three.

Thus a cubic spline consists of cubic polynomials pieced together in such a fashion that their values and those of their first two derivatives coincide at the interior knots $x_i$, $i = 1, \ldots, n - 1$.

Consider a set $Y := \{y_0, y_1, \ldots, y_n\}$ of $n + 1$ real numbers. We denote by $S_\Delta(Y; \cdot)$ an interpolating spline function $S_\Delta$ with $S_\Delta(Y; x_i) = y_i$ for $i = 0, 1, \ldots, n$.

Such an interpolating spline function $S_\Delta(Y; \cdot)$ is not uniquely determined by the set $Y$ of support ordinates. Roughly speaking, there are still two degrees of freedom left, calling for suitable additional requirements. The following three additional requirements are most commonly considered:

(2.4.1.2)

(a) $S''_\Delta(Y; a) = S''_\Delta(Y; b) = 0$,
(b) $S^{(k)}_\Delta(Y; a) = S^{(k)}_\Delta(Y; b)$ for $k = 0, 1, 2$: $S_\Delta(Y; \cdot)$ is periodic,
(c) $S'_\Delta(Y; a) = y'_0$, $S'_\Delta(Y; b) = y'_n$ for given numbers $y'_0$, $y'_n$.

We will confirm that each of these three conditions by itself ensures uniqueness of the interpolating spline function $S_\Delta(Y; \cdot)$. A prerequisite of condition (2.4.1.2b) is, of course, that $y_n = y_0$.

For this purpose, and to establish a characteristic minimum property of spline functions, we consider the sets

(2.4.1.3) $\mathcal{K}^m[a, b]$,

$m > 0$ an integer, of real functions $f : [a, b] \to \mathbb{R}$ for which $f^{(m-1)}$ is absolutely continuous³ on $[a, b]$ and $f^{(m)} \in L^2[a, b]$.⁴ By

$\mathcal{K}^m_p[a, b]$

we denote the set of all functions in $\mathcal{K}^m[a, b]$ with $f^{(k)}(a) = f^{(k)}(b)$ for $k = 0, 1, \ldots, m - 1$. We call such functions periodic, because they arise as restrictions to $[a, b]$ of functions which are periodic with period $b - a$.

Note that $S_\Delta \in \mathcal{K}^3[a, b]$, and that $S_\Delta(Y; \cdot) \in \mathcal{K}^3_p[a, b]$ if (2.4.1.2b) holds.

If $f \in \mathcal{K}^2[a, b]$, then we can define
$$\|f\|^2 := \int_a^b |f''(x)|^2\,dx.$$
Note that $\|f\| \ge 0$. However, $\|f\| = 0$ may hold for functions which are not identically zero, for instance, for all linear functions $f(x) = cx + d$.

We proceed to show a fundamental identity due to Holladay [see for instance Ahlberg, Nilson, and Walsh (1967)].

(2.4.1.4) Theorem. If $f \in \mathcal{K}^2[a, b]$, if $\Delta = \{a = x_0 < x_1 < \cdots < x_n = b\}$ is a partition of the interval $[a, b]$, and if $S_\Delta$ is a spline function with knots $x_i \in \Delta$, then
$$\|f - S_\Delta\|^2 = \|f\|^2 - \|S_\Delta\|^2 - 2\left[(f'(x) - S'_\Delta(x))\,S''_\Delta(x)\Big|_a^b - \sum_{i=1}^{n} (f(x) - S_\Delta(x))\,S'''_\Delta(x)\Big|_{x_{i-1}^+}^{x_i^-}\right].$$

3 See footnote 2 in Section 2.3.4.

4 The set $L^2[a, b]$ denotes the set of all real functions $f$ whose squares are integrable on the interval $[a, b]$, i.e., for which $\int_a^b |f(t)|^2\,dt$ exists and is finite.

Here $g(x)\big|_v^u$ stands for $g(u) - g(v)$, as it is commonly understood in the calculus of integrals. It should be realized, however, that $S'''_\Delta(x)$ is piecewise constant with possible discontinuities at the knots $x_1, \ldots, x_{n-1}$. Hence we have to use the left and right limits of $S'''_\Delta(x)$ at the locations $x_i$ and $x_{i-1}$, respectively, in the above formula. This is indicated by the notation $x_i^-$, $x_{i-1}^+$.

PROOF. By the definition of $\|\cdot\|$,
$$\|f - S_\Delta\|^2 = \int_a^b |f''(x) - S''_\Delta(x)|^2\,dx = \|f\|^2 - 2\int_a^b f''(x)S''_\Delta(x)\,dx + \|S_\Delta\|^2$$
$$= \|f\|^2 - 2\int_a^b (f''(x) - S''_\Delta(x))S''_\Delta(x)\,dx - \|S_\Delta\|^2.$$

Integration by parts gives, for $i = 1, 2, \ldots, n$,
$$\int_{x_{i-1}}^{x_i} (f''(x) - S''_\Delta(x))S''_\Delta(x)\,dx = (f'(x) - S'_\Delta(x))S''_\Delta(x)\Big|_{x_{i-1}}^{x_i} - \int_{x_{i-1}}^{x_i} (f'(x) - S'_\Delta(x))S'''_\Delta(x)\,dx$$
$$= (f'(x) - S'_\Delta(x))S''_\Delta(x)\Big|_{x_{i-1}}^{x_i} - (f(x) - S_\Delta(x))S'''_\Delta(x)\Big|_{x_{i-1}^+}^{x_i^-} + \int_{x_{i-1}}^{x_i} (f(x) - S_\Delta(x))S^{(4)}_\Delta(x)\,dx.$$

Now $S^{(4)}_\Delta(x) \equiv 0$ on the subintervals $(x_{i-1}, x_i)$, and $f'$, $S'_\Delta$, $S''_\Delta$ are continuous on $[a, b]$. Adding these formulas for $i = 1, 2, \ldots, n$ yields the proposition of the theorem, since
$$\sum_{i=1}^{n} (f'(x) - S'_\Delta(x))S''_\Delta(x)\Big|_{x_{i-1}}^{x_i} = (f'(x) - S'_\Delta(x))S''_\Delta(x)\Big|_a^b. \qquad\square$$

With the help of this theorem we will prove the important minimum-norm property of spline functions.

(2.4.1.5) Theorem. Given a partition $\Delta := \{a = x_0 < x_1 < \cdots < x_n = b\}$ of the interval $[a, b]$, values $Y := \{y_0, \ldots, y_n\}$, and a function $f \in \mathcal{K}^2[a, b]$ with $f(x_i) = y_i$ for $i = 0, 1, \ldots, n$, then $\|f\|^2 \ge \|S_\Delta(Y; \cdot)\|^2$, and more precisely
$$\|f - S_\Delta(Y; \cdot)\|^2 = \|f\|^2 - \|S_\Delta(Y; \cdot)\|^2 \ge 0$$
holds for every spline function $S_\Delta(Y; \cdot)$, provided one of the conditions [compare (2.4.1.2)]

(a) $S''_\Delta(Y; a) = S''_\Delta(Y; b) = 0$,
(b) $f \in \mathcal{K}^2_p[a, b]$, $S_\Delta(Y; \cdot)$ periodic,
(c) $f'(a) = S'_\Delta(Y; a)$, $f'(b) = S'_\Delta(Y; b)$,

is met. In each of these cases, the spline function $S_\Delta(Y; \cdot)$ is uniquely determined.

The existence of such spline functions will be shown in Section 2.4.2.

PROOF. In each of the above three cases (2.4.1.5a, b, c), the expression
$$(f'(x) - S'_\Delta(x))S''_\Delta(x)\Big|_a^b - \sum_{i=1}^{n} (f(x) - S_\Delta(x))S'''_\Delta(x)\Big|_{x_{i-1}^+}^{x_i^-} = 0$$
vanishes in the Holladay identity (2.4.1.4) if $S_\Delta \equiv S_\Delta(Y; \cdot)$. This proves the minimum property of the spline function $S_\Delta(Y; \cdot)$. Its uniqueness can be seen as follows: suppose $\bar{S}_\Delta(Y; \cdot)$ is another spline function having the same properties as $S_\Delta(Y; \cdot)$. Letting $\bar{S}_\Delta(Y; \cdot)$ play the role of the function $f \in \mathcal{K}^2[a, b]$ in the theorem, the minimum property of $S_\Delta(Y; \cdot)$ requires that
$$\|\bar{S}_\Delta(Y; \cdot) - S_\Delta(Y; \cdot)\|^2 = \|\bar{S}_\Delta(Y; \cdot)\|^2 - \|S_\Delta(Y; \cdot)\|^2 \ge 0,$$
and since $S_\Delta(Y; \cdot)$ and $\bar{S}_\Delta(Y; \cdot)$ may switch roles,
$$\|\bar{S}_\Delta(Y; \cdot) - S_\Delta(Y; \cdot)\|^2 = \int_a^b \big(\bar{S}''_\Delta(Y; x) - S''_\Delta(Y; x)\big)^2\,dx = 0.$$
Since $\bar{S}''_\Delta(Y; \cdot)$ and $S''_\Delta(Y; \cdot)$ are both continuous, $\bar{S}''_\Delta(Y; x) \equiv S''_\Delta(Y; x)$, from which
$$\bar{S}_\Delta(Y; x) \equiv S_\Delta(Y; x) + cx + d$$
follows by integration. But $\bar{S}_\Delta(Y; x) = S_\Delta(Y; x)$ holds for $x = a, b$, and this implies $c = d = 0$. $\square$

The minimum-norm property of the spline function expressed in Theorem (2.4.1.5) implies in case (2.4.1.2a) that, among all functions $f$ in $\mathcal{K}^2[a, b]$ with $f(x_i) = y_i$, $i = 0, 1, \ldots, n$, it is precisely the spline function $S_\Delta(Y; \cdot)$ with $S''_\Delta(Y; x) = 0$ for $x = a, b$ that minimizes the integral
$$\|f\|^2 = \int_a^b |f''(x)|^2\,dx.$$
The spline function of case (2.4.1.2a) is therefore often referred to as the natural spline function. (In cases (2.4.1.2b) and (2.4.1.2c), the corresponding spline functions $S_\Delta(Y; \cdot)$ minimize $\|f\|$ over the more restricted sets $\mathcal{K}^2_p[a, b]$ and $\{f \in \mathcal{K}^2[a, b] \mid f'(a) = y'_0,\ f'(b) = y'_n\} \cap \{f \mid f(x_i) = y_i \text{ for } i = 0, 1, \ldots, n\}$, respectively.)
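This minimum property is easy to observe numerically. The sketch below is a rough illustration, not part of the original text: it interpolates a handful of arbitrary support points once with the natural end conditions of case (a) and once with deliberately chosen end slopes as in case (c). Both interpolants belong to $\mathcal{K}^2[a, b]$ and match the data, so by Theorem (2.4.1.5a) the natural spline must yield the smaller value of $\int_a^b |f''(x)|^2\,dx$. SciPy's CubicSpline is assumed to be available; the data values and slopes are made up.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Arbitrary support points (x_i, y_i), chosen only for illustration.
x = np.array([0.0, 1.0, 2.5, 3.0, 4.5])
y = np.array([1.0, -0.5, 2.0, 0.0, 1.5])

natural = CubicSpline(x, y, bc_type="natural")               # case (a): S'' = 0 at a and b
clamped = CubicSpline(x, y, bc_type=((1, 5.0), (1, -5.0)))   # case (c) with arbitrary end slopes

xx = np.linspace(x[0], x[-1], 4001)
dx = xx[1] - xx[0]
for name, s in (("natural", natural), ("clamped", clamped)):
    norm2 = dx * np.sum(s(xx, 2) ** 2)   # crude Riemann sum for the integral of (S'')^2
    print(f"{name:8s}  integral of (S'')^2 = {norm2:.4f}")
# Theorem (2.4.1.5a) guarantees that the natural spline gives the smaller integral.
```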

The expressionf"(x)(l

+

f'(k)2t 3/2

indicates the curvature of the function f(x) at x E [a, b]. If f'(x) is small compared to 1, then the curvature is approximately equal tof"(x). The value II

f

II provides us therefore with an approximate measure of the total curva- ture of the function

f

in the interval [a, b]. In this sense, the natural spline function is the "smoothest" function to interpolate given support points (Xi' Yi), i = 0,1, ... , n.

The concept of spline functions has been generalized and modified in many ways. Polynomials of degree $2m + 1$ are sometimes used to define spline functions $S^m_\Delta \in C^{2m}[a, b]$, that is, piecewise polynomial functions with continuous $2m$th derivatives. These functions share many properties with the "cubic" spline functions ($m = 1$) defined in this section [see Greville (1969), de Boor (1972)].

2.4.2 Determining Interpolating Spline Functions

In this section, we will describe computational methods for determining cubic spline functions which assume prescribed values at their knots and satisfy one of the side conditions (2.4.1.2). In the course of this, we will have also proved the existence of such spline functions; their uniqueness has already been established by Theorem (2.4.1.5).

In what follows, $\Delta = \{x_i \mid i = 0, 1, \ldots, n\}$ will be a fixed partition of the interval $[a, b]$ by knots $a = x_0 < x_1 < \cdots < x_n = b$, and $Y = \{y_i \mid i = 0, 1, \ldots, n\}$ will be a set of $n + 1$ prescribed real numbers. In addition let
$$h_{j+1} := x_{j+1} - x_j, \qquad j = 0, 1, \ldots, n - 1.$$

We refer to the values of the second derivatives at the knots $x_j \in \Delta$,
$$(2.4.2.1)\qquad M_j := S''_\Delta(Y; x_j), \qquad j = 0, 1, \ldots, n,$$
of the desired spline function $S_\Delta(Y; \cdot)$ as the moments $M_j$ of $S_\Delta(Y; \cdot)$. We will show that spline functions are readily characterized by their moments, and that the moments of the interpolating spline function can be calculated as the solution of a system of linear equations.

Note that the second derivative $S''_\Delta(Y; \cdot)$ of the spline function coincides with a linear function on each interval $[x_j, x_{j+1}]$, $j = 0, \ldots, n - 1$, and that these linear functions can be described in terms of the moments $M_i$ of $S_\Delta(Y; \cdot)$:
$$S''_\Delta(Y; x) = M_j\,\frac{x_{j+1} - x}{h_{j+1}} + M_{j+1}\,\frac{x - x_j}{h_{j+1}} \qquad \text{for } x \in [x_j, x_{j+1}].$$

By integration,
$$(2.4.2.2)\qquad S'_\Delta(Y; x) = -M_j\,\frac{(x_{j+1} - x)^2}{2h_{j+1}} + M_{j+1}\,\frac{(x - x_j)^2}{2h_{j+1}} + A_j,$$
$$S_\Delta(Y; x) = M_j\,\frac{(x_{j+1} - x)^3}{6h_{j+1}} + M_{j+1}\,\frac{(x - x_j)^3}{6h_{j+1}} + A_j(x - x_j) + B_j$$

for $x \in [x_j, x_{j+1}]$, $j = 0, 1, \ldots, n - 1$, where $A_j$, $B_j$ are constants of integration. From $S_\Delta(Y; x_j) = y_j$, $S_\Delta(Y; x_{j+1}) = y_{j+1}$, we obtain the following equations for these constants $A_j$ and $B_j$:
$$M_j\,\frac{h_{j+1}^2}{6} + B_j = y_j, \qquad M_{j+1}\,\frac{h_{j+1}^2}{6} + A_j h_{j+1} + B_j = y_{j+1}.$$
Consequently,
$$(2.4.2.3)\qquad B_j = y_j - M_j\,\frac{h_{j+1}^2}{6}, \qquad A_j = \frac{y_{j+1} - y_j}{h_{j+1}} - \frac{h_{j+1}}{6}\,(M_{j+1} - M_j).$$

This yields the following representation of the spline function in terms of its moments:
$$(2.4.2.4)\qquad S_\Delta(Y; x) = \alpha_j + \beta_j(x - x_j) + \gamma_j(x - x_j)^2 + \delta_j(x - x_j)^3 \qquad \text{for } x \in [x_j, x_{j+1}],$$
where
$$\alpha_j := y_j, \qquad \beta_j := S'_\Delta(Y; x_j^+) = \frac{y_{j+1} - y_j}{h_{j+1}} - \frac{2M_j + M_{j+1}}{6}\,h_{j+1},$$
$$\gamma_j := \frac{S''_\Delta(Y; x_j)}{2} = \frac{M_j}{2}, \qquad \delta_j := \frac{S'''_\Delta(Y; x_j^+)}{6} = \frac{M_{j+1} - M_j}{6 h_{j+1}}.$$

Thus S&{Y; .) has been characterized by its moments Mj • The task ofca1cu- lating these moments will now be addressed.

The continuity of $S'_\Delta(Y; \cdot)$ at the knots $x = x_j$, $j = 1, 2, \ldots, n - 1$ [namely, the relations $S'_\Delta(Y; x_j^-) = S'_\Delta(Y; x_j^+)$] yields $n - 1$ equations for the moments $M_j$. Substituting the values (2.4.2.3) for $A_j$ and $B_j$ in (2.4.2.2) gives

$$S'_\Delta(Y; x) = -M_j\,\frac{(x_{j+1} - x)^2}{2h_{j+1}} + M_{j+1}\,\frac{(x - x_j)^2}{2h_{j+1}} + \frac{y_{j+1} - y_j}{h_{j+1}} - \frac{h_{j+1}}{6}\,(M_{j+1} - M_j).$$

For $j = 1, 2, \ldots, n - 1$, we have therefore
$$S'_\Delta(Y; x_j^-) = \frac{y_j - y_{j-1}}{h_j} + \frac{h_j}{3}M_j + \frac{h_j}{6}M_{j-1},$$
and since $S'_\Delta(Y; x_j^-) = S'_\Delta(Y; x_j^+)$,

$$(2.4.2.5)\qquad \frac{h_j}{6}M_{j-1} + \frac{h_j + h_{j+1}}{3}M_j + \frac{h_{j+1}}{6}M_{j+1} = \frac{y_{j+1} - y_j}{h_{j+1}} - \frac{y_j - y_{j-1}}{h_j}$$
for $j = 1, 2, \ldots, n - 1$. These are $n - 1$ equations for the $n + 1$ unknown moments. Two further equations can be gained separately from each of the side conditions (a), (b), and (c) listed in (2.4.1.2).

Case (a): $S''_\Delta(Y; a) = M_0 = 0 = M_n = S''_\Delta(Y; b)$.

Case (b): $S''_\Delta(Y; a) = S''_\Delta(Y; b) \;\Rightarrow\; M_0 = M_n$,
$$S'_\Delta(Y; a) = S'_\Delta(Y; b) \;\Rightarrow\; \frac{h_n}{6}M_{n-1} + \frac{h_n + h_1}{3}M_n + \frac{h_1}{6}M_1 = \frac{y_1 - y_0}{h_1} - \frac{y_n - y_{n-1}}{h_n}.$$
The latter condition is identical with (2.4.2.5) for $j = n$ if we put $M_{n+1} := M_1$, $y_{n+1} := y_1$, $h_{n+1} := h_1$. Recall that (2.4.1.2b) requires $y_n = y_0$.

Case (c): $S'_\Delta(Y; a) = y'_0 \;\Rightarrow\; \dfrac{h_1}{3}M_0 + \dfrac{h_1}{6}M_1 = \dfrac{y_1 - y_0}{h_1} - y'_0$,
$$S'_\Delta(Y; b) = y'_n \;\Rightarrow\; \frac{h_n}{6}M_{n-1} + \frac{h_n}{3}M_n = y'_n - \frac{y_n - y_{n-1}}{h_n}.$$

The last two equations, as well as those in (2.4.2.5), can be written in a common format:
$$\mu_j M_{j-1} + 2M_j + \lambda_j M_{j+1} = d_j, \qquad j = 1, 2, \ldots, n - 1,$$
upon introducing the abbreviations
$$(2.4.2.6)\qquad \lambda_j := \frac{h_{j+1}}{h_j + h_{j+1}}, \qquad \mu_j := 1 - \lambda_j = \frac{h_j}{h_j + h_{j+1}}, \qquad d_j := \frac{6}{h_j + h_{j+1}}\left\{\frac{y_{j+1} - y_j}{h_{j+1}} - \frac{y_j - y_{j-1}}{h_j}\right\}$$
for $j = 1, 2, \ldots, n - 1$.

In case (a), we define in addition
$$(2.4.2.7)\qquad \lambda_0 := 0, \quad d_0 := 0, \qquad \mu_n := 0, \quad d_n := 0,$$
and in case (c)
$$(2.4.2.8)\qquad \lambda_0 := 1, \quad d_0 := \frac{6}{h_1}\left(\frac{y_1 - y_0}{h_1} - y'_0\right), \qquad \mu_n := 1, \quad d_n := \frac{6}{h_n}\left(y'_n - \frac{y_n - y_{n-1}}{h_n}\right).$$

This leads in cases (a) and (c) to the following system of linear equations for the moments $M_i$:
$$\begin{aligned}
2M_0 + \lambda_0 M_1 &= d_0,\\
\mu_1 M_0 + 2M_1 + \lambda_1 M_2 &= d_1,\\
&\ \,\vdots\\
\mu_{n-1} M_{n-2} + 2M_{n-1} + \lambda_{n-1} M_n &= d_{n-1},\\
\mu_n M_{n-1} + 2M_n &= d_n.
\end{aligned}$$

In matrix notation, we have
$$(2.4.2.9)\qquad
\begin{bmatrix}
2 & \lambda_0 & & & 0\\
\mu_1 & 2 & \lambda_1 & &\\
& \mu_2 & 2 & \ddots &\\
& & \ddots & \ddots & \lambda_{n-1}\\
0 & & & \mu_n & 2
\end{bmatrix}
\begin{bmatrix} M_0\\ M_1\\ \vdots\\ M_{n-1}\\ M_n \end{bmatrix}
=
\begin{bmatrix} d_0\\ d_1\\ \vdots\\ d_{n-1}\\ d_n \end{bmatrix}.$$

The periodic case (b) also requires further definitions,
$$(2.4.2.10)\qquad \lambda_n := \frac{h_1}{h_n + h_1}, \qquad \mu_n := 1 - \lambda_n = \frac{h_n}{h_n + h_1}, \qquad d_n := \frac{6}{h_n + h_1}\left\{\frac{y_1 - y_n}{h_1} - \frac{y_n - y_{n-1}}{h_n}\right\},$$

which then lead to the following linear system of equations for the moments $M_1, M_2, \ldots, M_n$ ($= M_0$):
$$(2.4.2.11)\qquad
\begin{bmatrix}
2 & \lambda_1 & & & \mu_1\\
\mu_2 & 2 & \lambda_2 & &\\
& \ddots & \ddots & \ddots &\\
& & \mu_{n-1} & 2 & \lambda_{n-1}\\
\lambda_n & & & \mu_n & 2
\end{bmatrix}
\begin{bmatrix} M_1\\ M_2\\ \vdots\\ M_{n-1}\\ M_n \end{bmatrix}
=
\begin{bmatrix} d_1\\ d_2\\ \vdots\\ d_{n-1}\\ d_n \end{bmatrix}.$$

The coefficients $\lambda_i$, $\mu_i$, $d_i$ in (2.4.2.9) and (2.4.2.11) are well defined by (2.4.2.6) and the additional definitions (2.4.2.7), (2.4.2.8), and (2.4.2.10), respectively. Note in particular that in (2.4.2.9) and (2.4.2.11)
$$(2.4.2.12)\qquad \lambda_i \ge 0, \qquad \mu_i \ge 0, \qquad \lambda_i + \mu_i = 1$$
for all coefficients $\lambda_i$, $\mu_i$, and that these coefficients depend only on the location of the knots $x_j \in \Delta$ and not on the prescribed values $y_i \in Y$ nor on $y'_0$, $y'_n$ in case (c). We will use this observation when proving the following:

(2.4.2.13) Theorem. The systems (2.4.2.9) and (2.4.2.11) of linear equations are nonsingular for any partition $\Delta$ of $[a, b]$.

This means that the above systems of linear equations have unique solutions for arbitrary right-hand sides, and that consequently the problem of interpolation by cubic splines has a unique solution in each of the three cases (a), (b), (c) of (2.4.1.2).

PROOF. Consider the $(n + 1) \times (n + 1)$ matrix
$$A = \begin{bmatrix}
2 & \lambda_0 & & & 0\\
\mu_1 & 2 & \lambda_1 & &\\
& \mu_2 & 2 & \ddots &\\
& & \ddots & \ddots & \lambda_{n-1}\\
0 & & & \mu_n & 2
\end{bmatrix}$$
of the linear system (2.4.2.9). This matrix has the following property:

$$(2.4.2.14)\qquad Az = w \;\Longrightarrow\; \max_i |z_i| \le \max_i |w_i|$$
for every pair of vectors $z = (z_0, \ldots, z_n)^T$, $w = (w_0, \ldots, w_n)^T$, $z, w \in \mathbb{R}^{n+1}$.

Indeed, let $r$ be such that $|z_r| = \max_i |z_i|$. From $Az = w$,
$$\mu_r z_{r-1} + 2z_r + \lambda_r z_{r+1} = w_r \qquad (\mu_0 := 0,\ \lambda_n := 0).$$
By the definition of $r$ and because $\mu_r + \lambda_r = 1$,
$$\max_i |w_i| \ge |w_r| \ge 2|z_r| - \mu_r|z_{r-1}| - \lambda_r|z_{r+1}| \ge 2|z_r| - \mu_r|z_r| - \lambda_r|z_r| = (2 - \mu_r - \lambda_r)|z_r| = |z_r| = \max_i |z_i|.$$

Suppose the matrix $A$ were singular. Then there would exist a solution $z \ne 0$ of $Az = 0$, and (2.4.2.14) would lead to the contradiction
$$0 < \max_i |z_i| \le 0.$$

The nonsingularity of the matrix in (2.4.2.11) is shown similarly. $\square$

To solve the equations (2.4.2.9), we may proceed as follows: subtract $\mu_1/2$ times the first equation from the second, thereby annihilating $\mu_1$, and then a suitable multiple of the second equation from the third to annihilate $\mu_2$, and so on. This leads to a "triangular" system of equations which can be solved in a straightforward fashion [note that this method is the Gaussian elimination algorithm applied to (2.4.2.9); compare Section 4.1]:

(2.4.2.15)
$q_0 := -\lambda_0/2$; $u_0 := d_0/2$;
for $k := 1, 2, \ldots, n$ do
  begin $p_k := \mu_k q_{k-1} + 2$;
    $q_k := -\lambda_k/p_k$;
    $u_k := (d_k - \mu_k u_{k-1})/p_k$
  end;
$M_n := u_n$;
for $k := n - 1, n - 2, \ldots, 0$ do
  $M_k := q_k M_{k+1} + u_k$;

[It can be shown that $p_k > 0$, so that (2.4.2.15) is well defined; see Exercise 25.] The linear system (2.4.2.11) can be solved in a similar, but not as straightforward, fashion. An ALGOL program by C. Reinsch can be found in Bulirsch and Rutishauser (1968).
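The Python sketch below sets up the coefficients (2.4.2.6)-(2.4.2.8) and carries out the recursion (2.4.2.15) for cases (a) and (c); it is an illustrative rendering of the formulas above, and the function name, keyword names, and interface are made up rather than taken from any of the programs referenced here.

```python
import numpy as np

def spline_moments(x, y, bc="natural", yp0=None, ypn=None):
    """Moments M_0, ..., M_n of the interpolating cubic spline, cases (a) and (c).

    Builds lambda_j, mu_j, d_j from (2.4.2.6) together with (2.4.2.7) (natural
    spline, case (a), bc="natural") or (2.4.2.8) (prescribed end slopes yp0, ypn,
    case (c), bc="clamped"), then solves (2.4.2.9) by the recursion (2.4.2.15).
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = len(x) - 1
    h = np.diff(x)                    # h[j-1] = h_j = x_j - x_{j-1}
    lam = np.zeros(n + 1)             # lambda_0, ..., lambda_n (lambda_n stays 0)
    mu = np.zeros(n + 1)              # mu_0, ..., mu_n (mu_0 stays 0)
    d = np.zeros(n + 1)
    # interior coefficients (2.4.2.6), j = 1, ..., n-1
    lam[1:n] = h[1:] / (h[:-1] + h[1:])
    mu[1:n] = 1.0 - lam[1:n]
    d[1:n] = 6.0 / (h[:-1] + h[1:]) * ((y[2:] - y[1:-1]) / h[1:] - (y[1:-1] - y[:-2]) / h[:-1])
    if bc == "natural":               # case (a), (2.4.2.7): M_0 = M_n = 0
        lam[0] = d[0] = mu[n] = d[n] = 0.0
    elif bc == "clamped":             # case (c), (2.4.2.8)
        lam[0] = mu[n] = 1.0
        d[0] = 6.0 / h[0] * ((y[1] - y[0]) / h[0] - yp0)
        d[n] = 6.0 / h[-1] * (ypn - (y[-1] - y[-2]) / h[-1])
    else:
        raise ValueError("bc must be 'natural' or 'clamped'")
    # forward elimination and back substitution, (2.4.2.15)
    q = np.zeros(n + 1)
    u = np.zeros(n + 1)
    M = np.zeros(n + 1)
    q[0], u[0] = -lam[0] / 2.0, d[0] / 2.0
    for k in range(1, n + 1):
        p = mu[k] * q[k - 1] + 2.0    # p_k > 0, see Exercise 25
        q[k] = -lam[k] / p
        u[k] = (d[k] - mu[k] * u[k - 1]) / p
    M[n] = u[n]
    for k in range(n - 1, -1, -1):
        M[k] = q[k] * M[k + 1] + u[k]
    return M
```

Together with the evaluation sketch given after (2.4.2.4), a call such as `M = spline_moments(x, y)` followed by evaluation at arbitrary points reproduces, for example, the natural interpolating spline.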

The reader can find more details in Greville (1969) and de Boor (1972), ALGOL programs in Herriot and Reinsch (1971), and FORTRAN programs in de Boor (1978). These references also contain information and algorithms for the higher spline functions $S^m_\Delta$, $m \ge 2$, which are piecewise polynomial of degree $2m + 1$, and other generalizations.

2.4.3 Convergence Properties of Spline Functions

Interpolating polynomials may not converge to a function $f$ whose values they interpolate, even if the partitions $\Delta$ are chosen arbitrarily fine (see Section 2.1.4). In contrast, we will show in this section that, under mild conditions on the function $f$ and the partitions $\Delta$, the interpolating spline functions do converge towards $f$ as the fineness of the underlying partitions approaches zero.

We will show first that the moments (2.4.2.1) of the interpolating spline function converge to the second derivatives of the given function. More precisely, consider a fixed partition $\Delta = \{a = x_0 < x_1 < \cdots < x_n = b\}$ of $[a, b]$, and let
$$M := (M_0, M_1, \ldots, M_n)^T$$
be the vector of moments $M_j$ of the spline function $S_\Delta(Y; \cdot)$ with $f(x_j) = y_j$ for $j = 0, 1, \ldots, n$, as well as
$$S'_\Delta(Y; a) = f'(a), \qquad S'_\Delta(Y; b) = f'(b).$$
We are thus dealing with case (c) of (2.4.1.2). The vector $M$ of moments satisfies the equation
$$AM = d,$$

which expresses the linear system of equations (2.4.2.9) in matrix form. The components $d_j$ of $d$ are given by (2.4.2.6) and (2.4.2.8). Let $F$ and $r$ be the vectors
$$F := \begin{bmatrix} f''(x_0)\\ f''(x_1)\\ \vdots\\ f''(x_n) \end{bmatrix}, \qquad r := d - AF = A(M - F).$$

Writing $\|z\| := \max_i |z_i|$ for vectors $z$, and $\|\Delta\|$ for the fineness
$$(2.4.3.1)\qquad \|\Delta\| := \max_j |x_{j+1} - x_j|$$
of the partition $\Delta$, we have:

(2.4.3.2) If $f \in C^4[a, b]$ and $|f^{(4)}(x)| \le L$ for $x \in [a, b]$, then
$$\|M - F\| \le \|r\| \le \tfrac{3}{4}L\|\Delta\|^2.$$

PROOF. By definition, $r_0 = d_0 - 2f''(x_0) - f''(x_1)$, and by (2.4.2.8),
$$r_0 = \frac{6}{h_1}\left(\frac{y_1 - y_0}{h_1} - y'_0\right) - 2f''(x_0) - f''(x_1).$$
Using Taylor's theorem to express $y_1 = f(x_1)$ and $f''(x_1)$ in terms of the value and the derivatives of the function $f$ at $x_0$ yields
$$r_0 = \frac{6}{h_1}\left[f'(x_0) + \frac{h_1}{2}f''(x_0) + \frac{h_1^2}{6}f'''(x_0) + \frac{h_1^3}{24}f^{(4)}(\xi_1) - f'(x_0)\right] - 2f''(x_0) - \left[f''(x_0) + h_1 f'''(x_0) + \frac{h_1^2}{2}f^{(4)}(\xi_2)\right]$$
$$= \frac{h_1^2}{4}f^{(4)}(\xi_1) - \frac{h_1^2}{2}f^{(4)}(\xi_2)$$
with $\xi_1, \xi_2 \in [x_0, x_1]$. Therefore
$$|r_0| \le \tfrac{3}{4}L\|\Delta\|^2.$$

Analogously, we find for
$$r_n = d_n - f''(x_{n-1}) - 2f''(x_n)$$
that
$$|r_n| \le \tfrac{3}{4}L\|\Delta\|^2.$$

For the remaining components of $r = d - AF$, we find similarly
$$r_j = d_j - \mu_j f''(x_{j-1}) - 2f''(x_j) - \lambda_j f''(x_{j+1})$$
$$= \frac{6}{h_j + h_{j+1}}\left[\frac{y_{j+1} - y_j}{h_{j+1}} - \frac{y_j - y_{j-1}}{h_j}\right] - \frac{h_j}{h_j + h_{j+1}}f''(x_{j-1}) - 2f''(x_j) - \frac{h_{j+1}}{h_j + h_{j+1}}f''(x_{j+1}).$$
Taylor's formula at $x_j$ then gives
$$r_j = \frac{1}{h_j + h_{j+1}}\Bigg\{6\left[f'(x_j) + \frac{h_{j+1}}{2}f''(x_j) + \frac{h_{j+1}^2}{6}f'''(x_j) + \frac{h_{j+1}^3}{24}f^{(4)}(\xi_1) - f'(x_j) + \frac{h_j}{2}f''(x_j) - \frac{h_j^2}{6}f'''(x_j) + \frac{h_j^3}{24}f^{(4)}(\xi_2)\right]$$
$$\qquad - h_j\left[f''(x_j) - h_j f'''(x_j) + \frac{h_j^2}{2}f^{(4)}(\xi_3)\right] - 2f''(x_j)(h_j + h_{j+1}) - h_{j+1}\left[f''(x_j) + h_{j+1}f'''(x_j) + \frac{h_{j+1}^2}{2}f^{(4)}(\xi_4)\right]\Bigg\}$$
$$= \frac{1}{h_j + h_{j+1}}\left[\frac{h_{j+1}^3}{4}f^{(4)}(\xi_1) + \frac{h_j^3}{4}f^{(4)}(\xi_2) - \frac{h_j^3}{2}f^{(4)}(\xi_3) - \frac{h_{j+1}^3}{2}f^{(4)}(\xi_4)\right].$$
Here $\xi_1, \ldots, \xi_4 \in [x_{j-1}, x_{j+1}]$. Therefore
$$|r_j| \le \tfrac{3}{4}L\|\Delta\|^2$$
for $j = 1, 2, \ldots, n - 1$. In sum,
$$\|r\| \le \tfrac{3}{4}L\|\Delta\|^2,$$
and since $r = A(M - F)$, (2.4.2.14) implies
$$\|M - F\| \le \|r\|. \qquad\square$$

(2.4.3.3) Theorem. Suppose $f \in C^4[a, b]$ and $|f^{(4)}(x)| \le L$ for $x \in [a, b]$. Let $\Delta = \{a = x_0 < \cdots < x_n = b\}$ be a partition of the interval $[a, b]$, and $K$ a constant such that
$$\frac{\|\Delta\|}{|x_{j+1} - x_j|} \le K \qquad \text{for } j = 0, 1, \ldots, n - 1.$$
If $S_\Delta$ is the spline function which interpolates the values of the function $f$ at the knots $x_0, \ldots, x_n \in \Delta$ and satisfies $S'_\Delta(x) = f'(x)$ for $x = a, b$, then there exist constants $c_k \le 2$, which do not depend on the partition $\Delta$, such that for $x \in [a, b]$,
$$|f^{(k)}(x) - S^{(k)}_\Delta(x)| \le c_k L K \|\Delta\|^{4-k}, \qquad k = 0, 1, 2, 3.$$
Note that the constant $K \ge 1$ bounds the deviation of the partition $\Delta$ from uniformity.

PROOF. We prove the proposition first for $k = 3$. For $x \in [x_{j-1}, x_j]$,
$$S'''_\Delta(x) - f'''(x) = \frac{M_j - M_{j-1}}{h_j} - f'''(x)$$
$$= \frac{M_j - f''(x_j)}{h_j} - \frac{M_{j-1} - f''(x_{j-1})}{h_j} + \frac{f''(x_j) - f''(x) - [f''(x_{j-1}) - f''(x)]}{h_j} - f'''(x).$$
Using (2.4.3.2) and Taylor's theorem at $x$, we conclude that
$$|S'''_\Delta(x) - f'''(x)| \le \frac{3}{2}L\,\frac{\|\Delta\|^2}{h_j} + \frac{1}{h_j}\left|(x_j - x)f'''(x) + \frac{(x_j - x)^2}{2}f^{(4)}(\eta_1) - (x_{j-1} - x)f'''(x) - \frac{(x_{j-1} - x)^2}{2}f^{(4)}(\eta_2) - h_j f'''(x)\right|$$
$$\le \frac{3}{2}L\,\frac{\|\Delta\|^2}{h_j} + \frac{L}{2}\,\frac{\|\Delta\|^2}{h_j}, \qquad \eta_1, \eta_2 \in [x_{j-1}, x_j].$$
By hypothesis, $\|\Delta\|/h_j \le K$ for every $j$. Thus
$$|f'''(x) - S'''_\Delta(x)| \le 2LK\|\Delta\|.$$

To prove the proposition for $k = 2$, we observe: for each $x \in (a, b)$ there exists a closest knot $x_j = x_j(x)$, for which $|x_j(x) - x| \le \tfrac{1}{2}\|\Delta\|$. From
$$f''(x) - S''_\Delta(x) = f''(x_j(x)) - S''_\Delta(x_j(x)) + \int_{x_j(x)}^{x} \big(f'''(t) - S'''_\Delta(t)\big)\,dt,$$
and since $K \ge 1$,
$$|f''(x) - S''_\Delta(x)| \le \tfrac{3}{4}L\|\Delta\|^2 + \tfrac{1}{2}\|\Delta\| \cdot 2LK\|\Delta\| \le \tfrac{7}{4}LK\|\Delta\|^2, \qquad x \in [a, b].$$

We consider $k = 1$ next. In addition to the boundary points $\xi_0 := a$, $\xi_{n+1} := b$, there exist, by Rolle's theorem, $n$ further points $\xi_j \in (x_{j-1}, x_j)$, $j = 1, \ldots, n$, with
$$f'(\xi_j) - S'_\Delta(\xi_j) = 0, \qquad j = 0, 1, \ldots, n + 1.$$
For any $x \in [a, b]$ there exists a closest one of the above points $\xi_j = \xi_j(x)$, for which consequently $|x - \xi_j(x)| \le \|\Delta\|$. Thus
$$f'(x) - S'_\Delta(x) = \int_{\xi_j(x)}^{x} \big(f''(t) - S''_\Delta(t)\big)\,dt,$$
and
$$|f'(x) - S'_\Delta(x)| \le \|\Delta\| \cdot \tfrac{7}{4}LK\|\Delta\|^2 = \tfrac{7}{4}LK\|\Delta\|^3, \qquad x \in [a, b].$$

The case $k = 0$ remains. Since
$$f(x) - S_\Delta(x) = \int_{x_j(x)}^{x} \big(f'(t) - S'_\Delta(t)\big)\,dt,$$
it follows from the above result for $k = 1$ that
$$|f(x) - S_\Delta(x)| \le \tfrac{7}{4}LK\|\Delta\|^3 \cdot \tfrac{1}{2}\|\Delta\| = \tfrac{7}{8}LK\|\Delta\|^4$$
for $x \in [a, b]$.⁵ $\square$

Clearly, (2.4.3.3) implies that for sequences
$$\Delta_m = \{a = x_0^{(m)} < x_1^{(m)} < \cdots < x_{n_m}^{(m)} = b\}, \qquad m = 0, 1, \ldots,$$
of partitions with $\|\Delta_m\| \to 0$ and
$$\sup_{m, j} \frac{\|\Delta_m\|}{x_{j+1}^{(m)} - x_j^{(m)}} \le K < +\infty,$$
the corresponding spline functions $S_{\Delta_m}$ and their first three derivatives converge to $f$ and its corresponding derivatives uniformly on $[a, b]$. Note that even the third derivative $f'''$ is uniformly approximated by $S'''_{\Delta_m}$, a usually discontinuous sequence of step functions.

5 The estimates of Theorem (2.4.3.3) have been improved by Hall and Meyer (1976).
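The $O(\|\Delta\|^4)$ bound of Theorem (2.4.3.3) can be checked numerically. The sketch below is only an illustration, not part of the text: it uses SciPy's CubicSpline, which is assumed to accept prescribed end slopes via bc_type (corresponding to case (c)), uniform partitions (so $K = 1$), and the arbitrary test function $\sin x$ on $[0, \pi]$, and prints the observed convergence order, which should approach 4 in agreement with the $k = 0$ estimate.

```python
import numpy as np
from scipy.interpolate import CubicSpline

f, fp = np.sin, np.cos          # smooth test function and its derivative (for case (c))
a, b = 0.0, np.pi
xx = np.linspace(a, b, 2001)    # fine grid on which the maximum error is sampled

prev = None
for n in (4, 8, 16, 32, 64):
    x = np.linspace(a, b, n + 1)                                 # uniform partition, K = 1
    s = CubicSpline(x, f(x), bc_type=((1, fp(a)), (1, fp(b))))   # prescribed end slopes
    err = np.max(np.abs(f(xx) - s(xx)))
    order = np.log2(prev / err) if prev is not None else float("nan")
    print(f"n = {n:3d}   max |f - S| = {err:.3e}   observed order = {order:.2f}")
    prev = err
```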

EXERCISES FOR CHAPTER 2

1. Let $L_i(x)$ be the Lagrange polynomials (2.1.1.3) for pairwise different support abscissas $x_0, \ldots, x_n$, and let $c_i := L_i(0)$. Show that

(a) $\displaystyle\sum_{i=0}^{n} c_i x_i^j = \begin{cases} 1 & \text{for } j = 0,\\ 0 & \text{for } j = 1, 2, \ldots, n,\\ (-1)^n x_0 x_1 \cdots x_n & \text{for } j = n + 1; \end{cases}$

(b) $\displaystyle\sum_{i=0}^{n} L_i(x) \equiv 1$.

2. Interpolate the function $\ln x$ by a quadratic polynomial at $x = 10, 11, 12$.
(a) Estimate the error committed for $x = 11.1$ when approximating $\ln x$ by the interpolating polynomial.
(b) How does the sign of the error depend on $x$?

3. Consider a function $f$ which is twice continuously differentiable on the interval $I = [-1, 1]$. Interpolate the function by a linear polynomial through the support points $(x_i, f(x_i))$, $i = 0, 1$, $x_0, x_1 \in I$. Verify that
$$\alpha = \tfrac{1}{2}\max_{\xi \in I} |f''(\xi)| \cdot \max_{x \in I} |(x - x_0)(x - x_1)|$$
is an upper bound for the maximal absolute interpolation error on the interval $I$. Which values $x_0$, $x_1$ minimize $\alpha$? What is the connection between $(x - x_0)(x - x_1)$ and $\cos(2 \arccos x)$?

4. Suppose a function $f(x)$ is interpolated on the interval $[a, b]$ by a polynomial $P_n(x)$ whose degree does not exceed $n$. Suppose further that $f$ is arbitrarily often differentiable on $[a, b]$ and that there exists $M$ such that $|f^{(i)}(x)| \le M$ for $i = 0, 1, 2, \ldots$ and any $x \in [a, b]$. Can it be shown, without additional hypotheses about the location of the support abscissas $x_i \in [a, b]$, that $P_n(x)$ converges uniformly on $[a, b]$ to $f(x)$ as $n \to \infty$?

5. (a) The Bessel function of order zero,
$$J_0(x) = \frac{1}{\pi}\int_0^{\pi} \cos(x \sin t)\,dt,$$
is to be tabulated at equidistant arguments $x_i = x_0 + ih$, $i = 0, 1, 2, \ldots$. How small must the increment $h$ be chosen so that the interpolation error remains below $10^{-6}$ if linear interpolation is used?
(b) What is the behavior of the maximal interpolation error
$$\max_{0 \le x \le 1} |P_n(x) - J_0(x)|$$
as $n \to \infty$, if $P_n(x)$ interpolates $J_0(x)$ at $x = x_i^{(n)} := i/n$, $i = 0, 1, \ldots, n$?
Hint: It suffices to show that $|J_0^{(k)}(x)| \le 1$ for $k = 0, 1, \ldots$.
(c) Compare the above result with the behavior of the error
$$\max_{0 \le x \le 1} |S_{\Delta_n}(x) - J_0(x)|$$
as $n \to \infty$, where $S_{\Delta_n}$ is the interpolating spline function with knot set $\Delta_n = \{x_i^{(n)}\}$ and $S'_{\Delta_n}(x) = J_0'(x)$ for $x = 0, 1$.

6. Interpolation on product spaces: Suppose every linear interpolation problem stated in terms of functions $\varphi_0, \varphi_1, \ldots, \varphi_n$ has a unique solution
$$\Phi(x) = \sum_{i=0}^{n} \alpha_i \varphi_i(x)$$
with $\Phi(x_k) = f_k$, $k = 0, \ldots, n$, for prescribed support arguments $x_0, \ldots, x_n$ with $x_i \ne x_j$ for $i \ne j$. Show the following: If $\psi_0, \ldots, \psi_m$ is also a set of functions for which every linear interpolation problem has a unique solution, then for every choice of abscissas
$$x_0, x_1, \ldots, x_n, \quad x_i \ne x_j \text{ for } i \ne j, \qquad y_0, y_1, \ldots, y_m, \quad y_i \ne y_j \text{ for } i \ne j,$$
and support ordinates $f_{ik}$, $i = 0, \ldots, n$, $k = 0, \ldots, m$, there exists a unique function of the form
$$\Phi(x, y) = \sum_{v=0}^{n}\sum_{\mu=0}^{m} \alpha_{v\mu}\,\varphi_v(x)\,\psi_\mu(y)$$
with $\Phi(x_i, y_k) = f_{ik}$, $i = 0, 1, \ldots, n$, $k = 0, \ldots, m$.

7. Specialize the general result of Exercise 6 to interpolation by polynomials. Give the explicit form of the function $\Phi(x, y)$ in this case.

8. Given abscissas $y_0, \ldots, y_m$ and, for each $k = 0, \ldots, m$, support abscissas $x_0, \ldots, x_{n_k}$ and support ordinates $f_{ik}$, $i = 0, \ldots, n_k$, suppose without loss of generality that the $y_k$ are numbered in such a fashion that $n_0 \ge n_1 \ge \cdots \ge n_m$. Prove by induction over $m$ that exactly one polynomial
$$P(x, y) = \sum_{\mu=0}^{m}\sum_{v=0}^{n_\mu} \alpha_{v\mu}\,x^v y^\mu$$
exists with
$$P(x_i, y_k) = f_{ik}, \qquad i = 0, \ldots, n_k, \quad k = 0, \ldots, m.$$

9. Is it possible to solve the interpolation problem of Exercise 8 by other polynomials
$$P(x, y) = \sum_{\mu=0}^{M}\sum_{v=0}^{N_\mu} \alpha_{v\mu}\,x^v y^\mu,$$
requiring only that the number of parameters $\alpha_{v\mu}$ agree with the number of support points, that is,
$$\sum_{\mu=0}^{m} (n_\mu + 1) = \sum_{\mu=0}^{M} (N_\mu + 1)?$$
Hint: Study a few simple examples.

10. Calculate the inverse and reciprocal differences for the support points
$$\begin{array}{c|cccc} x_i & 0 & -1 & 2 & -2\\ \hline f_i & 3 & \tfrac{1}{2} & 3 & \tfrac{1}{2} \end{array}$$
and use them to determine the rational expression $\Phi^{2,2}(x)$ whose numerator and denominator are quadratic polynomials and for which $\Phi^{2,2}(x_i) = f_i$, first in continued-fraction form and then as the ratio of polynomials.

11. Let $\Phi^{m,n}$ be the rational function which solves the system $S^{m,n}$ for given support points $(x_k, f_k)$, $k = 0, 1, \ldots, m + n$:
$$(a_0 + a_1 x_k + \cdots + a_m x_k^m) - f_k(b_0 + b_1 x_k + \cdots + b_n x_k^n) = 0, \qquad k = 0, 1, \ldots, m + n.$$
Show that $\Phi^{m,n}(x)$ can be represented as follows by determinants:
$$\Phi^{m,n}(x) = \frac{\big|\,f_k,\ x_k - x,\ \ldots,\ (x_k - x)^m,\ (x_k - x)f_k,\ \ldots,\ (x_k - x)^n f_k\,\big|_{k=0}^{m+n}}{\big|\,1,\ x_k - x,\ \ldots,\ (x_k - x)^m,\ (x_k - x)f_k,\ \ldots,\ (x_k - x)^n f_k\,\big|_{k=0}^{m+n}}.$$
Here the following abbreviation has been used:
$$\big|\,\alpha_k, \beta_k, \ldots, \zeta_k\,\big|_{k=0}^{m+n} := \det\begin{bmatrix} \alpha_0 & \beta_0 & \cdots & \zeta_0\\ \vdots & & & \vdots\\ \alpha_{m+n} & \beta_{m+n} & \cdots & \zeta_{m+n} \end{bmatrix}.$$

12. Generalize Theorem (2.3.1.11):
(a) For $2n + 1$ support abscissas $x_k$ with
$$a \le x_0 < x_1 < \cdots < x_{2n} < a + 2\pi$$
and support ordinates $y_0, \ldots, y_{2n}$, there exists a unique trigonometric polynomial
$$T(x) = \tfrac{1}{2}a_0 + \sum_{j=1}^{n} (a_j \cos jx + b_j \sin jx)$$
with
$$T(x_k) = y_k \qquad \text{for } k = 0, 1, \ldots, 2n.$$
(b) If $y_0, \ldots, y_{2n}$ are real numbers, then so are the coefficients $a_j$, $b_j$.
Hint: Reduce the interpolation by trigonometric polynomials in (a) to (complex) interpolation by polynomials using the transformation $T(x) = \sum_{j=-n}^{n} c_j e^{ijx}$. Then show $c_{-j} = \bar{c}_j$ to establish (b).

13. (a) Show that, for real $x_1, \ldots, x_{2n}$, the function
$$t(x) = \prod_{k=1}^{2n} \sin\frac{x - x_k}{2}$$
is a trigonometric polynomial
$$\tfrac{1}{2}a_0 + \sum_{j=1}^{n} (a_j \cos jx + b_j \sin jx)$$
with real coefficients $a_j$, $b_j$.
Hint: Substitute $\sin\varphi = \frac{1}{2i}(e^{i\varphi} - e^{-i\varphi})$.

(b) Prove, using (a), that the interpolating trigonometric polynomial with support abscissas $x_k$, $a \le x_0 < x_1 < \cdots < x_{2n} < a + 2\pi$, and support ordinates $y_0, \ldots, y_{2n}$ is identical with
$$T(x) = \sum_{j=0}^{2n} y_j t_j(x),$$
where
$$t_j(x) := \prod_{\substack{k=0\\ k \ne j}}^{2n} \sin\frac{x - x_k}{2} \Bigg/ \prod_{\substack{k=0\\ k \ne j}}^{2n} \sin\frac{x_j - x_k}{2}.$$

14. Show that for $n + 1$ support abscissas $x_k$ with
$$a \le x_0 < x_1 < \cdots < x_n < a + \pi$$
and support ordinates $y_0, \ldots, y_n$, a unique "cosine polynomial"
$$C(x) = \sum_{j=0}^{n} a_j \cos jx$$
exists with $C(x_k) = y_k$, $k = 0, 1, \ldots, n$.
Hint: See Exercise 12.

15. (a) Show that for any integer $j$
$$\sum_{k=0}^{2m} \cos jx_k = (2m + 1)h(j), \qquad \sum_{k=0}^{2m} \sin jx_k = 0,$$
with
$$x_k := \frac{2\pi k}{2m + 1}, \qquad k = 0, 1, \ldots, 2m,$$
and
$$h(j) := \begin{cases} 1 & \text{for } j \equiv 0 \mod 2m + 1,\\ 0 & \text{otherwise.} \end{cases}$$
