
2.1 Interpolation by Polynomials

2.1.5 Hermite-Interpolation

We end this section with two brief warnings, one against trusting the interpolating polynomial outside of $I[x_0, \dots, x_n]$, and one against expecting too much of polynomial interpolation inside $I[x_0, \dots, x_n]$.

In the exterior of the interval $I[x_0, \dots, x_n]$, the value of $\omega(x)$ in Theorem (2.1.4.1) grows very fast. The use of the interpolation polynomial $P$ for approximating $f$ at some location outside the interval $I[x_0, \dots, x_n]$ — called extrapolation — should be avoided if possible.

Within $I[x_0, \dots, x_n]$, on the other hand, it should not be assumed that finer and finer samplings of the function $f$ will lead to better and better approximations through interpolation.

Consider a real function $f$ which is infinitely often differentiable in a given interval $[a, b]$. To every interval partition $\Delta = \{a = x_0 < x_1 < \cdots < x_n = b\}$ there exists an interpolating polynomial $P \in \Pi_n$ with $P(x_i) = f_i$ for $x_i \in \Delta$. A sequence of interval partitions

$$ \Delta_m = \bigl\{\, a = x_0^{(m)} < x_1^{(m)} < \cdots < x_{n_m}^{(m)} = b \,\bigr\} $$

gives rise to a sequence of interpolating polynomials $P_{\Delta_m}$. One might expect the polynomials $P_{\Delta_m}$ to converge toward $f$ if the fineness

$$ \|\Delta_m\| := \max_i \bigl|\, x_{i+1}^{(m)} - x_i^{(m)} \,\bigr| $$

of the partitions tends to $0$ as $m \to \infty$. In general this is not true. For example, it has been shown for the functions

$$ f(x) := \frac{1}{1 + x^2}, \quad [a, b] = [-5, 5], \qquad\text{and}\qquad f(x) := \sqrt{x}, \quad [a, b] = [0, 1], $$

that the polynomials $P_{\Delta_m}$ do not converge pointwise to $f$ for arbitrarily fine uniform partitions $\Delta_m$, $x_i^{(m)} = a + i(b - a)/m$, $i = 0, \dots, m$.
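A minimal sketch of this non-convergence (assuming NumPy is available; the helper name `lagrange_eval` and the node counts are our illustrative choices, not from the text): the maximum interpolation error of $f(x) = 1/(1+x^2)$ on $[-5, 5]$ grows as the uniform partition is refined.

```python
import numpy as np

def lagrange_eval(nodes, values, x):
    """Evaluate the interpolating polynomial in Lagrange form at the points x."""
    x = np.asarray(x, dtype=float)
    p = np.zeros_like(x)
    for i, (xi, fi) in enumerate(zip(nodes, values)):
        li = np.ones_like(x)                      # Lagrange basis polynomial l_i(x)
        for j, xj in enumerate(nodes):
            if j != i:
                li *= (x - xj) / (xi - xj)
        p += fi * li
    return p

f = lambda x: 1.0 / (1.0 + x**2)                  # Runge's example
a, b = -5.0, 5.0
xs = np.linspace(a, b, 2001)                      # dense evaluation grid
for m in (4, 8, 16, 32):
    nodes = a + np.arange(m + 1) * (b - a) / m    # uniform partition Delta_m
    err = np.max(np.abs(lagrange_eval(nodes, f(nodes), xs) - f(xs)))
    print(f"m = {m:2d}:  max |f - P_m| on [a, b] = {err:.2e}")
```

The printed maxima increase with $m$, in line with the warning above; the error is concentrated near the endpoints of the interval.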

The Hermite interpolation problem is the following: given support abscissae $x_0 < x_1 < \cdots < x_m$, integers $n_i \ge 1$, and numbers $f_i^{(k)}$, find a polynomial $P$ such that

$$ (2.1.5.1) \qquad P^{(k)}(x_i) = f_i^{(k)}, \qquad i = 0, 1, \dots, m, \quad k = 0, 1, \dots, n_i - 1. $$

This problem differs from the usual interpolation problem for polynomials in that it prescribes at each support abscissa $x_i$ not only the value but also the first $n_i - 1$ derivatives of the desired polynomial. The polynomial interpolation of Section 2.1.1 is the special case $n_i = 1$, $i = 0, 1, \dots, m$.

There are exactly $\sum_i n_i = n + 1$ conditions (2.1.5.1) for the $n + 1$ coefficients of the interpolating polynomial, leading us to expect that the Hermite interpolation problem can be solved uniquely:

(2.1.5.2) Theorem. For arbitrary numbers $x_0 < x_1 < \cdots < x_m$ and $f_i^{(k)}$, $i = 0, 1, \dots, m$, $k = 0, 1, \dots, n_i - 1$, there exists precisely one polynomial

$$ P \in \Pi_n, \qquad n + 1 := \sum_{i=0}^{m} n_i, $$

which satisfies (2.1.5.1).

Proof. We first show uniqueness. Consider the difference polynomial $Q(x) := P_1(x) - P_2(x)$ of two polynomials $P_1, P_2 \in \Pi_n$ for which (2.1.5.1) holds. Since

$$ Q^{(k)}(x_i) = 0, \qquad k = 0, 1, \dots, n_i - 1, \quad i = 0, 1, \dots, m, $$

$x_i$ is at least an $n_i$-fold root of $Q$, so that $Q$ has altogether $\sum_i n_i = n + 1$ roots, each counted according to its multiplicity. Thus $Q$ must vanish identically, since its degree is less than $n + 1$.

Existence is a consequence of uniqueness: (2.1.5.1) is a system of linear equations for the $n + 1$ unknown coefficients $c_j$ of $P(x) = c_0 + c_1 x + \cdots + c_n x^n$. The matrix of this system is nonsingular because of the uniqueness of its solutions. Hence the linear system (2.1.5.1) has a unique solution for arbitrary right-hand sides $f_i^{(k)}$.
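The existence argument is constructive. A small sketch (assuming NumPy; the function name is ours) sets up this linear system in the monomial basis: row $(i, k)$ contains the $k$-th derivatives of the monomials $x^j$ evaluated at $x_i$. For concreteness it uses the data that will reappear in Example 2 below.

```python
import math
import numpy as np

def hermite_coeffs(xs, fvals):
    """Solve (2.1.5.1) for the monomial coefficients c_0, ..., c_n.

    xs    : distinct abscissae x_0, ..., x_m
    fvals : fvals[i] = [f_i^(0), f_i^(1), ..., f_i^(n_i - 1)]
    """
    n1 = sum(len(fi) for fi in fvals)                 # n + 1 conditions / unknowns
    A = np.zeros((n1, n1))
    rhs = np.zeros(n1)
    row = 0
    for xi, fi in zip(xs, fvals):
        for k, fik in enumerate(fi):
            # k-th derivative of x^j at x_i is j!/(j-k)! * x_i^(j-k) for j >= k
            for j in range(k, n1):
                A[row, j] = math.factorial(j) // math.factorial(j - k) * xi**(j - k)
            rhs[row] = fik
            row += 1
    return np.linalg.solve(A, rhs)                    # unique by Theorem (2.1.5.2)

# Data of Example 2 below: x_0 = 0 with n_0 = 2, x_1 = 1 with n_1 = 3.
c = hermite_coeffs([0.0, 1.0], [[-1.0, -2.0], [0.0, 10.0, 40.0]])
print(c)          # coefficients c_0, ..., c_4 of the interpolating quartic
```

For well-separated abscissae of moderate number this direct solve is adequate; the Newton form developed later in this section is the numerically preferable construction.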

Hermite interpolating polynomials can be given explicitly in a form analogous to the interpolation formula of Lagrange (2.1.1.4). The polynomial $P \in \Pi_n$ given by

$$ (2.1.5.3) \qquad P(x) = \sum_{i=0}^{m} \sum_{k=0}^{n_i - 1} f_i^{(k)} L_{ik}(x) $$

satisfies (2.1.5.1). The polynomials $L_{ik} \in \Pi_n$ are generalized Lagrange polynomials. They are defined as follows: starting with the auxiliary polynomials

$$ l_{ik}(x) := \frac{(x - x_i)^k}{k!} \prod_{\substack{j = 0 \\ j \ne i}}^{m} \left( \frac{x - x_j}{x_i - x_j} \right)^{n_j}, \qquad 0 \le i \le m, \quad 0 \le k \le n_i - 1, $$

[compare (2.1.1.3)], put

$$ L_{i, n_i - 1}(x) := l_{i, n_i - 1}(x), \qquad i = 0, 1, \dots, m, $$

and recursively, for $k = n_i - 2, n_i - 3, \dots, 0$,

$$ L_{ik}(x) := l_{ik}(x) - \sum_{\nu = k + 1}^{n_i - 1} l_{ik}^{(\nu)}(x_i)\, L_{i\nu}(x). $$

By induction

$$ L_{ik}^{(\sigma)}(x_j) = \begin{cases} 1, & \text{if } i = j \text{ and } k = \sigma, \\ 0, & \text{otherwise.} \end{cases} $$

Thus $P$ in (2.1.5.3) is indeed the desired Hermite interpolating polynomial.
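A short symbolic sketch of this construction (assuming SymPy; the function name and variable names are ours) builds the auxiliary polynomials $l_{ik}$, applies the backward recursion to obtain the $L_{ik}$, and assembles $P$ via (2.1.5.3).

```python
import sympy as sp

def generalized_lagrange(xs, ns):
    """Return the generalized Lagrange polynomials L_{ik} for abscissae xs
    with multiplicities ns, as a dict {(i, k): polynomial in x}."""
    x = sp.Symbol('x')
    L = {}
    for i, (xi, ni) in enumerate(zip(xs, ns)):
        base = sp.Integer(1)
        for j, (xj, nj) in enumerate(zip(xs, ns)):
            if j != i:
                base = base * ((x - xj) / (xi - xj)) ** nj
        l = [(x - xi)**k / sp.factorial(k) * base for k in range(ni)]   # l_{ik}
        L[(i, ni - 1)] = sp.expand(l[ni - 1])        # L_{i, n_i - 1} := l_{i, n_i - 1}
        for k in range(ni - 2, -1, -1):              # backward recursion in k
            corr = sum(sp.diff(l[k], x, nu).subs(x, xi) * L[(i, nu)]
                       for nu in range(k + 1, ni))
            L[(i, k)] = sp.expand(l[k] - corr)
    return x, L

# Data of Example 2 below: x_0 = 0 (n_0 = 2), x_1 = 1 (n_1 = 3).
x, L = generalized_lagrange([0, 1], [2, 3])
fik = {(0, 0): -1, (0, 1): -2, (1, 0): 0, (1, 1): 10, (1, 2): 40}
P = sp.expand(sum(f * L[ik] for ik, f in fik.items()))    # formula (2.1.5.3)
print(P)    # should agree with the Newton form found in Example 3 below
```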

An alternative way to describe Hermite interpolation is important for Newton- and Neville-type algorithms to construct the Hermite interpolating polynomial, and, in particular, for developing the theory of B-splines [Section 2.4.4]. The approach is to generalize divided differences $f[x_0, x_1, \dots, x_k]$ [see Section 2.1.3] so as to accommodate repeated abscissae. To this end, we expand the sequence of abscissae $x_0 < x_1 < \cdots < x_m$ occurring in (2.1.5.1) by replacing each $x_i$ by $n_i$ copies of itself:

$$ \underbrace{x_0 = \cdots = x_0}_{n_0} \;<\; \underbrace{x_1 = \cdots = x_1}_{n_1} \;<\; \cdots \;<\; \underbrace{x_m = \cdots = x_m}_{n_m}. $$

The $n + 1 = \sum_{i=0}^{m} n_i$ elements in this sequence are then renamed

$$ t_0 = x_0 \le t_1 \le \cdots \le t_n = x_m, $$

and will be referred to as virtual abscissae.

We now wish to reformulate the Hermite interpolation problem (2.1.5.1) in terms of the virtual abscissae, without reference to the numbers $x_i$ and $n_i$, $i = 0, \dots, m$. Clearly, this is possible, since the virtual abscissae determine the “true” abscissae $x_i$ and the integers $n_i$, $i = 0, 1, \dots, m$. In order to stress the dependence of the Hermite interpolant $P(\cdot)$ on $t_0, t_1, \dots, t_n$ we write $P_{01 \dots n}(\cdot)$ for $P(\cdot)$. Also it will be convenient to write $f^{(r)}(x_i)$ instead of $f_i^{(r)}$ in (2.1.5.1). The interpolant $P_{01 \dots n}$ is uniquely defined by the $n + 1 = \sum_{i=0}^{m} n_i$ interpolation conditions (2.1.5.1), which are as many as there are index pairs $(i, k)$ with $i = 0, 1, \dots, m$, $k = 0, 1, \dots, n_i - 1$, and are as many as there are virtual abscissae. An essential observation is that the interpolation conditions in (2.1.5.1), taken in the following linear ordering of the index pairs $(i, k)$,

$$ (0, 0), (0, 1), \dots, (0, n_0 - 1), (1, 0), \dots, (1, n_1 - 1), \dots, (m, n_m - 1), $$

have the form

$$ (2.1.5.4) \qquad P_{01 \dots n}^{(s_j - 1)}(t_j) = f^{(s_j - 1)}(t_j), \qquad j = 0, 1, \dots, n, $$

if we define $s_j$, $j = 0, 1, \dots, n$, as the number of times the value of $t_j$ occurs in the subsequence

$$ t_0 \le t_1 \le \cdots \le t_j. $$

The equivalence of (2.1.5.1) and (2.1.5.4) follows directly from

$$ x_0 = t_0 = t_1 = \cdots = t_{n_0 - 1} < x_1 = t_{n_0} = \cdots = t_{n_0 + n_1 - 1} < \cdots, $$

and the definition of the $s_j$,

$$ s_0 = 1, \; s_1 = 2, \; \dots, \; s_{n_0 - 1} = n_0; \qquad s_{n_0} = 1, \; \dots, \; s_{n_0 + n_1 - 1} = n_1; \qquad \dots $$

We now use this new formulation in order to express the existence and uniqueness result of Theorem (2.1.5.2) algebraically. Note that any polynomial $P(t) \in \Pi_n$ can be written in the form

$$ P(t) = \sum_{j=0}^{n} b_j \frac{t^j}{j!} = \Pi(t)\, b, \qquad b := [b_0, b_1, \dots, b_n]^T, $$

where $\Pi(t)$ is the row vector

$$ \Pi(t) := \Bigl( 1, \; t, \; \dots, \; \frac{t^n}{n!} \Bigr). $$

Therefore, by Theorem (2.1.5.2) and (2.1.5.4), the following system of linear equations

$$ \Pi^{(s_j - 1)}(t_j)\, b = f^{(s_j - 1)}(t_j), \qquad j = 0, 1, \dots, n, $$

has a unique solution $b$. This proves the following corollary, which is equivalent to Theorem (2.1.5.2):

(2.1.5.5) Corollary. For any nondecreasing finite sequence

$$ t_0 \le t_1 \le \cdots \le t_n $$

of $n + 1$ real numbers, the $(n + 1) \times (n + 1)$ matrix

$$ V_n(t_0, t_1, \dots, t_n) := \begin{bmatrix} \Pi^{(s_0 - 1)}(t_0) \\ \Pi^{(s_1 - 1)}(t_1) \\ \vdots \\ \Pi^{(s_n - 1)}(t_n) \end{bmatrix} $$

is nonsingular.

The matrix $V_n(t_0, t_1, \dots, t_n)$ is related to the well-known Vandermonde matrix if the numbers $t_j$ are distinct (then $s_j = 1$ for all $j$):

$$ V_n(t_0, t_1, \dots, t_n) = \begin{bmatrix} 1 & t_0 & \dots & t_0^n \\ \vdots & \vdots & & \vdots \\ 1 & t_n & \dots & t_n^n \end{bmatrix} \begin{bmatrix} 1 & & & & \\ & 1! & & & \\ & & 2! & & \\ & & & \ddots & \\ & & & & n! \end{bmatrix}^{-1}. $$

Example 1. For $t_0 = t_1 < t_2$, one finds

$$ V_2(t_0, t_1, t_2) = \begin{bmatrix} 1 & t_0 & \tfrac{t_0^2}{2} \\ 0 & 1 & t_1 \\ 1 & t_2 & \tfrac{t_2^2}{2} \end{bmatrix}. $$

In preparation for a Neville-type algorithm for Hermite interpolation, we associate with each segment

$$ t_i \le t_{i+1} \le \cdots \le t_{i+k}, \qquad 0 \le i \le i + k \le n, $$

of virtual abscissae the solution $P_{i, i+1, \dots, i+k} \in \Pi_k$ of the partial Hermite interpolation problem belonging to this subsequence, that is, the solution of [see (2.1.5.4)]

$$ P_{i, i+1, \dots, i+k}^{(s_j - 1)}(t_j) = f^{(s_j - 1)}(t_j), \qquad j = i, i + 1, \dots, i + k. $$

Here, of course, the integers $s_j$, $i \le j \le i + k$, are defined with respect to the subsequence, that is, $s_j$ is the number of times the value of $t_j$ occurs within the sequence $t_i, t_{i+1}, \dots, t_j$.

Example 2. Suppose $n_0 = 2$, $n_1 = 3$, and

$$ x_0 = 0, \quad f_0^{(0)} = -1, \quad f_0^{(1)} = -2, $$
$$ x_1 = 1, \quad f_1^{(0)} = 0, \quad f_1^{(1)} = 10, \quad f_1^{(2)} = 40. $$

This leads to virtual abscissae $t_j$, $j = 0, 1, \dots, 4$, with

$$ t_0 = t_1 := x_0 = 0, \qquad t_2 = t_3 = t_4 := x_1 = 1. $$

For the subsequence $t_1 \le t_2 \le t_3$, that is, $i = 1$ and $k = 2$, one has

$$ t_1 = x_0 < t_2 = t_3 = x_1, \qquad s_1 = s_2 = 1, \quad s_3 = 2. $$

It is associated with the interpolating polynomial $P_{123}(x) \in \Pi_2$, satisfying

$$ P_{123}^{(s_1 - 1)}(t_1) = P_{123}(0) = f^{(s_1 - 1)}(t_1) = f^{(0)}(0) = -1, $$
$$ P_{123}^{(s_2 - 1)}(t_2) = P_{123}(1) = f^{(s_2 - 1)}(t_2) = f^{(0)}(1) = 0, $$
$$ P_{123}^{(s_3 - 1)}(t_3) = P_{123}'(1) = f^{(s_3 - 1)}(t_3) = f^{(1)}(1) = 10. $$

This illustrates (2.1.5.4).

Using this notation, the following analogs to Neville's formulae (2.1.2.1) hold: if $t_i = t_{i+1} = \cdots = t_{i+k} = x_l$, then

$$ (2.1.5.6a) \qquad P_{i, i+1, \dots, i+k}(x) = \sum_{r=0}^{k} \frac{f_l^{(r)}}{r!} (x - x_l)^r, $$

and if $t_i < t_{i+k}$,

$$ (2.1.5.6b) \qquad P_{i, i+1, \dots, i+k}(x) = \frac{(x - t_i)\, P_{i+1, \dots, i+k}(x) - (x - t_{i+k})\, P_{i, i+1, \dots, i+k-1}(x)}{t_{i+k} - t_i}. $$

The first of these formulae follows directly from definition (2.1.5.4); the second is proved in the same way as its counterpart (2.1.2.1b): one verifies first that the right-hand side of (2.1.5.6b) satisfies the same uniquely solvable [Theorem (2.1.5.2)] interpolation conditions (2.1.5.4) as $P_{i, i+1, \dots, i+k}(x)$.

The details of the proof are left to the reader.
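A direct transcription of (2.1.5.6) into a recursive evaluator (plain Python; function and variable names are ours). The input is the list of virtual abscissae together with, for each distinct abscissa, the prescribed derivative values.

```python
from math import factorial

def hermite_neville(ts, derivs, x):
    """Evaluate the Hermite interpolant P_{01...n}(x) via (2.1.5.6).

    ts     : virtual abscissae t_0 <= t_1 <= ... <= t_n
    derivs : derivs[t] = [f^(0)(t), f^(1)(t), ...] for each distinct abscissa t
    """
    def P(i, k):                                    # P_{i, i+1, ..., i+k}(x)
        if ts[i] == ts[i + k]:                      # (2.1.5.6a): all abscissae equal
            xl = ts[i]
            return sum(derivs[xl][r] / factorial(r) * (x - xl)**r
                       for r in range(k + 1))
        return ((x - ts[i]) * P(i + 1, k - 1)       # (2.1.5.6b)
                - (x - ts[i + k]) * P(i, k - 1)) / (ts[i + k] - ts[i])
    return P(0, len(ts) - 1)

# Data of Example 2: f(0) = -1, f'(0) = -2, f(1) = 0, f'(1) = 10, f''(1) = 40.
ts = [0.0, 0.0, 1.0, 1.0, 1.0]
derivs = {0.0: [-1.0, -2.0], 1.0: [0.0, 10.0, 40.0]}
print(hermite_neville(ts, derivs, 0.5))             # value of P at x = 0.5
```

The naive recursion recomputes intermediate interpolants; for repeated evaluation one would tabulate the $P_{i, \dots, i+k}(x)$ column by column, exactly as in the Neville scheme of Section 2.1.2.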

In analogy to (2.1.3.2), we now define the generalized divided difference

$$ f[t_i, t_{i+1}, \dots, t_{i+k}] $$

as the coefficient of $x^k$ in the polynomial $P_{i, i+1, \dots, i+k}(x) \in \Pi_k$. By comparing the coefficients of $x^k$ in (2.1.5.6) [cf. (2.1.3.5)] we find, if $t_i = \cdots = t_{i+k} = x_l$,

$$ (2.1.5.7a) \qquad f[t_i, t_{i+1}, \dots, t_{i+k}] = \frac{1}{k!} f_l^{(k)}, $$

and if $t_i < t_{i+k}$,

$$ (2.1.5.7b) \qquad f[t_i, t_{i+1}, \dots, t_{i+k}] = \frac{f[t_{i+1}, \dots, t_{i+k}] - f[t_i, t_{i+1}, \dots, t_{i+k-1}]}{t_{i+k} - t_i}. $$

By means of the generalized divided differences

$$ a_k := f[t_0, t_1, \dots, t_k], \qquad k = 0, 1, \dots, n, $$

the solution $P(x) = P_{01 \cdots n}(x) \in \Pi_n$ of the Hermite interpolation problem (2.1.5.1) can be represented explicitly in its Newton form [cf. (2.1.3.1)]

$$ (2.1.5.8) \qquad P_{01 \cdots n}(x) = a_0 + a_1 (x - t_0) + a_2 (x - t_0)(x - t_1) + \cdots + a_n (x - t_0)(x - t_1) \cdots (x - t_{n-1}). $$

This follows from the observation that the difference polynomial

$$ Q(x) := P_{01 \cdots n}(x) - P_{01 \cdots (n-1)}(x) = f[t_0, t_1, \dots, t_n]\, x^n + \cdots $$

is of degree at most $n$ and has, because of (2.1.5.1) and (2.1.5.4), for $i \le m - 1$ the abscissa $x_i$ as a zero of multiplicity $n_i$, and $x_m$ as a zero of multiplicity $n_m - 1$. The polynomial of degree $n$

$$ (x - t_0)(x - t_1) \cdots (x - t_{n-1}) $$

has the same zeros with the same multiplicities. Hence

$$ Q(x) = f[t_0, t_1, \dots, t_n]\, (x - t_0)(x - t_1) \cdots (x - t_{n-1}), $$

which proves (2.1.5.8).

Example 3. We illustrate the calculation of the generalized divided differences with the data of Example 2 ($m = 1$, $n_0 = 2$, $n_1 = 3$). The following difference scheme results:

$$
\begin{array}{llllll}
t_0 = 0 & f[t_0] = -1 & & & & \\
 & & f[t_0, t_1] = -2^{\,*} & & & \\
t_1 = 0 & f[t_1] = -1 & & f[t_0, t_1, t_2] = 3 & & \\
 & & f[t_1, t_2] = 1 & & f[t_0, \dots, t_3] = 6 & \\
t_2 = 1 & f[t_2] = 0 & & f[t_1, t_2, t_3] = 9 & & f[t_0, \dots, t_4] = 5 \\
 & & f[t_2, t_3] = 10^{\,*} & & f[t_1, \dots, t_4] = 11 & \\
t_3 = 1 & f[t_3] = 0 & & f[t_2, t_3, t_4] = 20^{\,*} & & \\
 & & f[t_3, t_4] = 10^{\,*} & & & \\
t_4 = 1 & f[t_4] = 0 & & & &
\end{array}
$$

The entries marked $*$ have been calculated using (2.1.5.7a) rather than (2.1.5.7b).

The coefficients of the Hermite interpolating polynomial can be found in the upper diagonal of the difference scheme:

$$ P(x) = -1 - 2(x - 0) + 3(x - 0)(x - 0) + 6(x - 0)(x - 0)(x - 1) + 5(x - 0)(x - 0)(x - 1)(x - 1) $$
$$ \phantom{P(x)} = -1 - 2x + 3x^2 + 6x^2(x - 1) + 5x^2(x - 1)^2. $$
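The difference scheme and the Newton form can be generated mechanically. Below is a sketch (plain Python; function names are ours) implementing (2.1.5.7) and (2.1.5.8); run on the data of Example 2, the first row of the returned table contains the coefficients $a_0, \dots, a_4$ read off above.

```python
from math import factorial

def generalized_divided_differences(ts, derivs):
    """Table d[i][k] = f[t_i, ..., t_{i+k}], computed via (2.1.5.7a/b).

    ts     : virtual abscissae t_0 <= ... <= t_n
    derivs : derivs[t] = [f^(0)(t), f^(1)(t), ...] for each distinct abscissa
    """
    n = len(ts) - 1
    d = [[None] * (n + 1 - i) for i in range(n + 1)]
    for k in range(n + 1):
        for i in range(n + 1 - k):
            if ts[i] == ts[i + k]:                    # (2.1.5.7a): repeated abscissa
                d[i][k] = derivs[ts[i]][k] / factorial(k)
            else:                                     # (2.1.5.7b): usual recursion
                d[i][k] = (d[i + 1][k - 1] - d[i][k - 1]) / (ts[i + k] - ts[i])
    return d

def newton_eval(ts, coeffs, x):
    """Evaluate (2.1.5.8) with a_k = f[t_0, ..., t_k] by Horner-like nesting."""
    p = coeffs[-1]
    for k in range(len(coeffs) - 2, -1, -1):
        p = p * (x - ts[k]) + coeffs[k]
    return p

ts = [0.0, 0.0, 1.0, 1.0, 1.0]                        # Examples 2 and 3
derivs = {0.0: [-1.0, -2.0], 1.0: [0.0, 10.0, 40.0]}
d = generalized_divided_differences(ts, derivs)
print(d[0])                   # upper diagonal: the a_k of Example 3 (-1, -2, 3, 6, 5)
print(newton_eval(ts, d[0], 2.0))                     # P evaluated at x = 2, say
```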

The interpolation error incurred by Hermite interpolation can be estimated in the same fashion as for the usual interpolation by polynomials.

In particular, the proof of the following theorem is entirely analogous to the proof of Theorem (2.1.4.1):

(2.1.5.9) Theorem. Let the real function $f$ be $n + 1$ times differentiable on the interval $[a, b]$, and consider $m + 1$ support abscissae $x_i \in [a, b]$,

$$ x_0 < x_1 < \cdots < x_m. $$

If the polynomial $P(x)$ is of degree at most $n$, $\sum_{i=0}^{m} n_i = n + 1$, and satisfies the interpolation conditions

$$ P^{(k)}(x_i) = f^{(k)}(x_i), \qquad i = 0, 1, \dots, m, \quad k = 0, 1, \dots, n_i - 1, $$

then for every $\bar{x} \in [a, b]$ there exists $\bar{\xi} \in I[x_0, \dots, x_m, \bar{x}]$ such that

$$ f(\bar{x}) - P(\bar{x}) = \frac{\omega(\bar{x})\, f^{(n+1)}(\bar{\xi})}{(n + 1)!}, $$

where

$$ \omega(x) := (x - x_0)^{n_0} (x - x_1)^{n_1} \cdots (x - x_m)^{n_m}. $$

Hermite interpolation is frequently used to approximate a given real function $f$ by a piecewise polynomial function $\varphi$. Given a partition

$$ \Delta: \quad a = x_0 < x_1 < \cdots < x_m = b $$

of an interval $[a, b]$, the corresponding Hermite function space $H^{(\nu)}$ is defined as consisting of all functions $\varphi: [a, b] \to \mathbb{R}$ with the following properties:

(2.1.5.10) (a) $\varphi \in C^{\nu - 1}[a, b]$: the $(\nu - 1)$st derivative of $\varphi$ exists and is continuous on $[a, b]$.

(b) $\varphi|_{I_i} \in \Pi_{2\nu - 1}$: on each subinterval $I_i := [x_i, x_{i+1}]$, $i = 0, 1, \dots, m - 1$, $\varphi$ agrees with a polynomial of degree at most $2\nu - 1$.

Thus the function $\varphi$ consists of polynomial pieces of degree $2\nu - 1$ or less which are $\nu - 1$ times differentiable at the “knots” $x_i$. In order to approximate a given real function $f \in C^{\nu - 1}[a, b]$ by a function $\varphi \in H^{(\nu)}$, we choose the component polynomials $P_i = \varphi|_{I_i}$ of $\varphi$ so that $P_i \in \Pi_{2\nu - 1}$ and so that the Hermite interpolation conditions

$$ P_i^{(k)}(x_i) = f^{(k)}(x_i), \qquad P_i^{(k)}(x_{i+1}) = f^{(k)}(x_{i+1}), \qquad k = 0, 1, \dots, \nu - 1, $$

are satisfied.
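For $\nu = 2$ (piecewise cubic Hermite interpolation), each component polynomial solves a local Hermite problem with the two double abscissae $x_i, x_i, x_{i+1}, x_{i+1}$. A sketch of this construction (plain Python; names and test data are ours) assembles $\varphi$ from the values and first derivatives of $f$ at the knots, using the divided-difference form of this section on each $I_i$.

```python
import math
from bisect import bisect_right

def cubic_hermite(knots, fvals, dfvals):
    """Return phi in H^(2): on each I_i the cubic matching f and f' at both
    endpoints, via divided differences with nodes (x_i, x_i, x_{i+1}, x_{i+1})."""
    pieces = []
    for xi, xj, fi, fj, dfi, dfj in zip(knots, knots[1:], fvals, fvals[1:],
                                        dfvals, dfvals[1:]):
        a0 = fi                                        # f[x_i]
        a1 = dfi                                       # f[x_i, x_i] = f'(x_i)
        s = (fj - fi) / (xj - xi)                      # f[x_i, x_{i+1}]
        a2 = (s - dfi) / (xj - xi)                     # f[x_i, x_i, x_{i+1}]
        a3 = ((dfj - s) / (xj - xi) - a2) / (xj - xi)  # f[x_i, x_i, x_{i+1}, x_{i+1}]
        pieces.append((xi, xj, a0, a1, a2, a3))

    def phi(x):
        i = max(min(bisect_right(knots, x) - 1, len(pieces) - 1), 0)
        xi, xj, a0, a1, a2, a3 = pieces[i]
        # Newton form (2.1.5.8) on the virtual abscissae x_i, x_i, x_{i+1}
        return a0 + (x - xi) * (a1 + (x - xi) * (a2 + (x - xj) * a3))
    return phi

# Illustrative use (our choice, not from the text): f = sin on [0, pi], four knots.
knots = [0.0, math.pi / 3, 2 * math.pi / 3, math.pi]
phi = cubic_hermite(knots, [math.sin(t) for t in knots], [math.cos(t) for t in knots])
print(phi(1.0), math.sin(1.0))                         # interpolant vs. f
```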

Under the more stringent condition $f \in C^{2\nu}[a, b]$, Theorem (2.1.5.9) provides a bound to the interpolation error for $x \in I_i$, which arises if the component polynomial $P_i$ replaces $f$:

$$ (2.1.5.11) \qquad |f(x) - P_i(x)| \le \frac{|(x - x_i)(x - x_{i+1})|^{\nu}}{(2\nu)!} \max_{\xi \in I_i} |f^{(2\nu)}(\xi)| \le \frac{|x_{i+1} - x_i|^{2\nu}}{2^{2\nu} (2\nu)!} \max_{\xi \in I_i} |f^{(2\nu)}(\xi)|. $$

Combining these results for $i = 0, 1, \dots, m - 1$ gives, for the function $\varphi \in H^{(\nu)}$ which was defined earlier,

$$ (2.1.5.12) \qquad \|f - \varphi\|_\infty := \max_{x \in [a, b]} |f(x) - \varphi(x)| \le \frac{1}{2^{2\nu} (2\nu)!}\, \|f^{(2\nu)}\|_\infty\, \|\Delta\|^{2\nu}, $$

where

$$ \|\Delta\| = \max_{0 \le i \le m - 1} |x_{i+1} - x_i| $$

is the “fineness” of the partition.

The approximation error goes to zero with the $2\nu$-th power of the fineness $\|\Delta_j\|$ if we consider a sequence of partitions $\Delta_j$ of the interval $[a, b]$ with $\|\Delta_j\| \to 0$. Contrast this with the case of ordinary polynomial interpolation, where the approximation error does not necessarily go to zero as $\|\Delta_j\| \to 0$ [see Section 2.1.4].
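The $O(\|\Delta\|^{2\nu})$ behavior can be observed numerically. The sketch below (assuming SciPy; its `CubicHermiteSpline` constructs the piecewise cubic matching the given values and first derivatives at the knots, i.e., the $\nu = 2$ interpolant $\varphi$ described above; $f = \sin$ and the partition sizes are our choices) halves the fineness repeatedly and prints the maximum error, which should eventually drop by a factor of roughly $2^4 = 16$ per step.

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

f, df = np.sin, np.cos
a, b = 0.0, np.pi
xs = np.linspace(a, b, 5001)                   # dense grid for measuring the error
prev = None
for m in (2, 4, 8, 16, 32):
    knots = np.linspace(a, b, m + 1)           # uniform partition, fineness (b - a)/m
    phi = CubicHermiteSpline(knots, f(knots), df(knots))   # the nu = 2 interpolant
    err = np.max(np.abs(phi(xs) - f(xs)))
    ratio = prev / err if prev else float('nan')
    print(f"fineness = {(b - a) / m:.4f}   max error = {err:.3e}   ratio = {ratio:.1f}")
    prev = err
```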

Ciarlet, Schultz, and Varga (1967) were able to show also that the first $\nu$ derivatives of $\varphi$ are a good approximation to the corresponding derivatives of $f$:

$$ (2.1.5.13) \qquad |f^{(k)}(x) - P_i^{(k)}(x)| \le \frac{|(x - x_i)(x - x_{i+1})|^{\nu - k}}{k!\, (2\nu - 2k)!}\, (x_{i+1} - x_i)^k \max_{\xi \in I_i} |f^{(2\nu)}(\xi)| $$

for all $x \in I_i$, $i = 0, 1, \dots, m - 1$, $k = 0, 1, \dots, \nu$, and therefore

$$ (2.1.5.14) \qquad \|f^{(k)} - \varphi^{(k)}\|_\infty \le \frac{\|\Delta\|^{2\nu - k}}{2^{2\nu - 2k}\, k!\, (2\nu - 2k)!}\, \|f^{(2\nu)}\|_\infty $$

for $k = 0, 1, \dots, \nu$.
