2.4 Interpolation by Spline Functions
2.4.6 Multi-Resolution Methods and B-Splines
A = ⎡ B_1(ξ_1) … B_N(ξ_1) ⎤
    ⎢    ⋮          ⋮     ⎥
    ⎣ B_1(ξ_N) … B_N(ξ_N) ⎦

has a special structure: A is a band matrix, because by (2.4.4.5) the functions B_i(x) = B_{i,r,t}(x) have support [t_i, t_{i+r}], so that within the j-th row of A all elements B_i(ξ_j) with t_{i+r} < ξ_j or t_i > ξ_j are zero. Therefore, each row of A contains at most r elements different from 0, and these elements are in consecutive positions. The components B_i(ξ_j) of A can be computed by the recursion described previously. The system (2.4.5.6), and thereby the interpolation problem, is uniquely solvable if A is nonsingular. The nonsingularity of A can be checked by means of the following simple criterion due to Schoenberg and Whitney (1953), which is quoted without proof.
(2.4.5.7) Theorem. The matrix A = (B_i(ξ_j)) of (2.4.5.6) is nonsingular if and only if all its diagonal elements are nonzero, B_i(ξ_i) ≠ 0.
It is possible to show [see Karlin (1968)] that the matrix A is totally positive in the following sense: all r×r submatrices B of A of the form

B = (a_{i_p, j_q})_{p,q=1}^{r} with r ≥ 1, i_1 < i_2 < ··· < i_r, j_1 < j_2 < ··· < j_r,

have a nonnegative determinant, det(B) ≥ 0. As a consequence, solving (2.4.5.6) for nonsingular A by Gaussian elimination without pivoting is numerically stable [see de Boor and Pinkus (1977)]. Also the band structure of A can be exploited for additional savings.
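The banded structure and the no-pivoting remark can be illustrated with a small sketch (not from the text; the knots, interpolation sites, and order r = 2 are illustrative choices): build A = (B_i(ξ_j)) by the Cox–de Boor recursion, then solve A c = f by Gaussian elimination without pivoting, which is safe here because A is totally positive.

```python
# Sketch: banded B-spline collocation matrix on integer knots (hypothetical
# example data), solved WITHOUT pivoting -- stable since A is totally positive.

def bspline(i, r, t, x):
    """Cox-de Boor recursion for B_{i,r,t}(x), order r = degree + 1."""
    if r == 1:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = right = 0.0
    if t[i + r - 1] > t[i]:
        left = (x - t[i]) / (t[i + r - 1] - t[i]) * bspline(i, r - 1, t, x)
    if t[i + r] > t[i + 1]:
        right = (t[i + r] - x) / (t[i + r] - t[i + 1]) * bspline(i + 1, r - 1, t, x)
    return left + right

N, r = 6, 2
t = list(range(N + r))                 # knots t_i = i
xi = [i + 1.2 for i in range(N)]       # sites with B_i(xi_i) != 0 (Schoenberg-Whitney)
A = [[bspline(i, r, t, x) for i in range(N)] for x in xi]
f = [float(k * k) for k in range(N)]   # data to interpolate

# Gaussian elimination without pivoting, restricted to the band of width r.
M = [row[:] for row in A]
c = f[:]
for k in range(N):
    for i in range(k + 1, min(k + r, N)):
        m = M[i][k] / M[k][k]
        for j in range(k, min(k + r, N)):
            M[i][j] -= m * M[k][j]
        c[i] -= m * c[k]
for i in reversed(range(N)):           # back substitution within the band
    for j in range(i + 1, min(i + r, N)):
        c[i] -= M[i][j] * c[j]
    c[i] /= M[i][i]

# residual of the interpolation conditions sum_i c_i B_i(xi_j) = f_j
resid = max(abs(sum(ci * aji for ci, aji in zip(c, row)) - fj)
            for row, fj in zip(A, f))
```

Each row of A indeed has at most r consecutive nonzero entries, so both elimination and back substitution cost O(N r²).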
For further properties of B-splines, their applications, and algorithms the reader is referred to the literature, in particular to de Boor (1978), where one can also find numerous FORTRAN programs.
(S1) V_j ⊂ V_{j+1}, j ∈ ℤ,
(S2) f(x) ∈ V_j ⇔ f(2x) ∈ V_{j+1}, j ∈ ℤ,
(S3) f(x) ∈ V_j ⇔ f(x + 2^{−j}) ∈ V_j, j ∈ ℤ,
(S4) the union ⋃_{j∈ℤ} V_j is dense in L²(ℝ),
(S5) ⋂_{j∈ℤ} V_j = {0}.
Usually, such subspaces are generated by a function Φ(x) ∈ L²(ℝ) as follows: By translation and dilatation of Φ, we first define additional functions (we call them "derivates" of Φ)

Φ_{j,k}(x) := 2^{j/2} Φ(2^j x − k), j, k ∈ ℤ.

The space V_j is then defined as the closed linear subspace of L²(ℝ) that is generated by the functions Φ_{j,k}, k ∈ ℤ,

V_j := cl span{ Φ_{j,k} | k ∈ ℤ } = cl { Σ_{|k|≤n} c_k Φ_{j,k} | c_k ∈ ℂ, n ≥ 0 }.

Then (S2) and (S3) are clearly satisfied.
(2.4.6.1) Definition. The function Φ ∈ L²(ℝ) is called a scaling function if the associated spaces V_j satisfy conditions (S1)–(S5) and if, in addition, the functions Φ_{0,k}, k ∈ ℤ, form a Riesz basis of V_0, that is, if there are constants 0 < A ≤ B with

A Σ_{k∈ℤ} |c_k|² ≤ ‖ Σ_{k∈ℤ} c_k Φ_{0,k} ‖² ≤ B Σ_{k∈ℤ} |c_k|²

for all sequences (c_k)_{k∈ℤ} ∈ ℓ² (then also for each j ∈ ℤ, the functions Φ_{j,k}, k ∈ ℤ, will form a Riesz basis of V_j).
If Φ is a scaling function then, for all j ∈ ℤ, each function f ∈ V_j can be written as a convergent (in L²(ℝ)) series

(2.4.6.2) f = Σ_{k∈ℤ} c_{j,k} Φ_{j,k}

with uniquely determined coefficients c_{j,k}, k ∈ ℤ, with (c_{j,k})_{k∈ℤ} ∈ ℓ². Also, for a scaling function Φ, condition (S1) is equivalent to the condition that Φ satisfies a so-called two-scale relation

(2.4.6.3) Φ(x) = Σ_{k∈ℤ} h_k Φ_{1,k}(x) = √2 Σ_{k∈ℤ} h_k Φ(2x − k)

with coefficients (h_k) ∈ ℓ². Indeed, (2.4.6.3) is equivalent to Φ ∈ V_0 ⊂ V_1. But then also V_j ⊂ V_{j+1} for all j ∈ ℤ because of

Φ_{j,l}(x) = 2^{j/2} Φ(2^j x − l) = 2^{j/2} √2 Σ_{k∈ℤ} h_k Φ(2^{j+1} x − 2l − k)
           = Σ_{k∈ℤ} h_k Φ_{j+1,k+2l}(x).
Example. A particularly simple scaling function is the Haar function

(2.4.6.4) Φ(x) := 1 for 0 ≤ x < 1, 0 otherwise.

The associated two-scale relation is

Φ(x) = Φ(2x) + Φ(2x − 1).

See Figure 3 for an illustration of some of its derivates Φ_{j,k}.
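As a small numerical illustration (not from the text), both the two-scale relation of the Haar function and the L² normalization of its derivates can be checked directly:

```python
# Pointwise check of Phi(x) = Phi(2x) + Phi(2x-1) for the Haar function, and
# of ||Phi_{j,k}||_2 = 1 for the derivates Phi_{j,k}(x) = 2^{j/2} Phi(2^j x - k).

def phi(x):
    """Haar scaling function (2.4.6.4)."""
    return 1.0 if 0.0 <= x < 1.0 else 0.0

def phi_jk(j, k, x):
    """Derivate Phi_{j,k} of the Haar function."""
    return 2.0 ** (j / 2.0) * phi(2.0 ** j * x - k)

xs = [i / 16.0 for i in range(-32, 48)]
two_scale_err = max(abs(phi(x) - (phi(2 * x) + phi(2 * x - 1))) for x in xs)

# |Phi_{j,k}|^2 equals 2^j on an interval of length 2^{-j}; the Riemann sum on
# the dyadic grid above is exact for j = 3, k = 5.
norm_sq = sum(phi_jk(3, 5, x) ** 2 for x in xs) * (1.0 / 16.0)
```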
Fig. 3. Some derivates Φ_{j,k} of the Haar function (shown: Φ_{1,−3}, Φ_{0,0} = Φ, Φ_{1,0}, Φ_{1,4}, with values 1 and √2).
Scaling functions, and the associated spaces V_j they generate, play a central role in modern signal- and image-processing methods. For example, any function f = Σ_k c_{j,k} Φ_{j,k} ∈ V_j can be considered as a function with a finite resolution (e.g. a signal or, in two dimensions, a picture obtained by scanning) where only details of size 2^{−j} are resolved (this is illustrated most clearly by the space V_j belonging to the Haar function). A common application is to approximate a high-resolution function f, that is a function f ∈ V_j with j large, by a coarser function f̂ (say an f̂ ∈ V_k with k < j) without losing too much information. This is the purpose of multi-resolution methods, whose concepts we wish to explain in this section.
We first describe additional scaling functions based on splines. The Haar function Φ is the simplest B-spline of order r = 1 with respect to the special sequence t := (k)_{k∈ℤ} of all integers: Definition (2.4.4.3) shows immediately

B_{0,1,t}(x) = f_x^0[0, 1] = (1 − x)_+^0 − (0 − x)_+^0 = Φ(x).

Moreover, for all k ∈ ℤ, their translates are

B_{k,1,t}(x) = f_x^0[k, k+1] = B_{0,1,t}(x − k) = Φ(x − k) = Φ_{0,k}.

This suggests considering general B-splines

(2.4.6.5) M_r(x) := B_{0,r,t}(x) = r f_x^{r−1}[0, 1, …, r]

of arbitrary order r ≥ 1 as candidates for scaling functions, because they also generate all B-splines

B_{k,r,t}(x) = r f_x^{r−1}[k, k+1, …, k+r], k ∈ ℤ,

by translation, B_{k,r,t}(x) = B_{0,r,t}(x − k) = M_r(x − k). Advantages are that their smoothness increases with r since, by definition, M_r(x) = B_{0,r,t}(x) is r − 2 times continuously differentiable, and that they have compact support [see Theorem (2.4.4.5)]. It can be shown that all M_r, r ≥ 1, are in fact scaling functions in the sense of Definition (2.4.6.1); for example, their two-scale relation reads
M_r(x) = 2^{−r+1} Σ_{l=0}^{r} \binom{r}{l} M_r(2x − l), x ∈ ℝ.

For proofs we refer the reader to the literature [see Chui (1992)]. In the interest of simplicity, we only consider low-order instances of these functions and their properties.
Examples. For low order r one finds the following B-splines M_r := B_{0,r,t}: M_1(x) is the Haar function (2.4.6.4), and for r = 2, 3 one obtains

M_2(x) = x         for 0 ≤ x ≤ 1,
         2 − x     for 1 ≤ x ≤ 2,
         0         otherwise;

M_3(x) = (1/2) · x²               for 0 ≤ x ≤ 1,
         (1/2) · (−2x² + 6x − 3)  for 1 ≤ x ≤ 2,
         (1/2) · (3 − x)²         for 2 ≤ x ≤ 3,
         0                        otherwise.

The translate H(x) := M_2(x + 1) of M_2(x),

(2.4.6.6) H(x) = 1 − |x| for −1 ≤ x ≤ 1, 0 otherwise,

is known as the hat function.
These formulas are special cases of the following representation of M_r(x) for general r ≥ 1:

(2.4.6.7) M_r(x) = 1/(r−1)! · Σ_{l=0}^{r} (−1)^l \binom{r}{l} (x − l)_+^{r−1}.

This is readily verified by induction over r, taking the recursion (2.1.3.5) for divided differences (k = 1, 2, …, r),

f_x^{r−1}[i, i+1, …, i+k] = (1/k) ( f_x^{r−1}[i+1, …, i+k] − f_x^{r−1}[i, …, i+k−1] ),

and f_x^{r−1}(t) = (t − x)_+^{r−1} into account.
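The truncated-power representation (2.4.6.7) is easy to evaluate; the following sketch (illustrative, not from the text) checks it against the piecewise formula for M_2 and against the two-scale relation:

```python
# M_r via (2.4.6.7): M_r(x) = 1/(r-1)! sum_l (-1)^l binom(r,l) (x-l)_+^{r-1},
# checked against M_2 = hat on [0,2] and the two-scale relation
# M_r(x) = 2^{-r+1} sum_l binom(r,l) M_r(2x-l).

from math import comb, factorial

def tpow(t, p):
    """Truncated power t_+^p (with t_+^0 := 1 for t > 0, 0 otherwise)."""
    return t ** p if t > 0.0 else 0.0

def M(r, x):
    """B-spline M_r on integer knots, via formula (2.4.6.7)."""
    s = sum((-1) ** l * comb(r, l) * tpow(x - l, r - 1) for l in range(r + 1))
    return s / factorial(r - 1)

def hat(x):
    """M_2 from the piecewise formula in the text."""
    return x if 0.0 <= x <= 1.0 else (2.0 - x if 1.0 <= x <= 2.0 else 0.0)

xs = [i / 8.0 for i in range(-8, 40)]
err_hat = max(abs(M(2, x) - hat(x)) for x in xs)
err_two_scale = max(abs(M(r, x) - 2.0 ** (1 - r) *
                        sum(comb(r, l) * M(r, 2 * x - l) for l in range(r + 1)))
                    for r in (2, 3, 4) for x in xs)
```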
We now describe multi-resolution methods in more detail. As already stated, their aim is to approximate, as well as possible, a given "high-resolution" function v_{j+1} ∈ V_{j+1} by a function of lower resolution, say by a function v_j ∈ V_j with

‖v_{j+1} − v_j‖ = min_{v∈V_j} ‖v_{j+1} − v‖.

Since v_{j+1} − v_j is then orthogonal to V_j, and v_j ∈ V_j ⊂ V_{j+1}, the orthogonal complement W_j of V_j in V_{j+1} comes into play,

W_j := { w ∈ V_{j+1} | ⟨w, v⟩ = 0 for all v ∈ V_j }, j ∈ ℤ.

We write V_{j+1} = V_j ⊕ W_j to indicate that every v_{j+1} ∈ V_{j+1} has a unique representation as a sum v_{j+1} = v_j + w_j of two orthogonal functions v_j ∈ V_j and w_j ∈ W_j.
The spaces W_j are mutually orthogonal, W_j ⊥ W_k for j ≠ k (i.e. ⟨v, w⟩ = 0 for v ∈ W_j, w ∈ W_k). If, say, j < k, this follows from W_j ⊂ V_{j+1} ⊂ V_k and W_k ⊥ V_k. More generally, (S4) and (S5) then imply

L²(ℝ) = ⊕_{j∈ℤ} W_j = ··· ⊕ W_{−1} ⊕ W_0 ⊕ W_1 ⊕ ··· .
For a given v_{j+1} ∈ V_{j+1}, multi-resolution methods compute for m ≤ j the orthogonal decompositions v_{m+1} = v_m + w_m, v_m ∈ V_m, w_m ∈ W_m, according to the following scheme:

(2.4.6.8)
v_{j+1} → v_j → v_{j−1} → ···
      ↘       ↘        ↘
       w_j     w_{j−1}   ···
Then for m ≤ j, v_m ∈ V_m is also the best approximation of v_{j+1} by an element of V_m, that is,

‖v_{j+1} − v_m‖ = min_{v∈V_m} ‖v_{j+1} − v‖.

This is because the vectors w_k are mutually orthogonal and w_k ⊥ V_m for k ≥ m. Hence for arbitrary v ∈ V_m, v_m − v ∈ V_m and

v_{j+1} − v = w_j + w_{j−1} + ··· + w_m + (v_m − v),

so that

‖v_{j+1} − v‖² = ‖w_j‖² + ··· + ‖w_m‖² + ‖v_m − v‖² ≥ ‖v_{j+1} − v_m‖².

In order to compute the orthogonal decomposition v_{j+1} = v_j + w_j of an arbitrary v_{j+1} ∈ V_{j+1}, we need the dual function Φ̃ of the scaling function Φ. As will be shown later, Φ̃ is uniquely determined by the property

(2.4.6.9) ⟨Φ_{0,k}, Φ̃⟩ = ∫_{−∞}^{∞} Φ(x − k) Φ̃(x) dx = δ_{k,0}, k ∈ ℤ.
With the help of this dual function Φ̃, we are able to compute the coefficients c_k of the representation (2.4.6.2) of a function f ∈ V_0, f = Σ_{l∈ℤ} c_l Φ_{0,l}, as scalar products:

⟨f(x), Φ̃(x − k)⟩ = Σ_l c_l ∫_{−∞}^{∞} Φ(x − l) Φ̃(x − k) dx
                 = Σ_l c_l ∫_{−∞}^{∞} Φ(x − l + k) Φ̃(x) dx = c_k.
Also, for any j ∈ ℤ, the functions Φ̃_{j,k}(x) := 2^{j/2} Φ̃(2^j x − k), k ∈ ℤ, satisfy

(2.4.6.10) ⟨Φ_{j,k}, Φ̃_{j,l}⟩ = 2^j ∫_{−∞}^{∞} Φ(2^j x − k) Φ̃(2^j x − l) dx
                              = ∫_{−∞}^{∞} Φ(x − k + l) Φ̃(x) dx
                              = δ_{k−l,0}, k, l ∈ ℤ.
The dual function Φ̃ may thus be used to compute also the coefficients c_{j,l} of the series f = Σ_{k∈ℤ} c_{j,k} Φ_{j,k} representing a function f ∈ V_j:

(2.4.6.11) ⟨f, Φ̃_{j,l}⟩ = Σ_k c_{j,k} ⟨Φ_{j,k}, Φ̃_{j,l}⟩ = c_{j,l}.
The following theorem ensures the existence of dual functions:

(2.4.6.12) Theorem. To each scaling function Φ there exists exactly one dual function Φ̃ ∈ V_0.

Proof. Since the Φ_{0,k}, k ∈ ℤ, form a Riesz basis of V_0, the function Φ_{0,0} = Φ is not contained in the subspace

V := cl span{ Φ_{0,k} | |k| ≥ 1 }

of V_0. Therefore, there exists exactly one element g = Φ − u ≠ 0 with u ∈ V, so that

‖g‖ = min{ ‖Φ − v‖ | v ∈ V }.

Here u is the (unique!) orthogonal projection of Φ onto V, hence g ⊥ V, i.e. ⟨v, g⟩ = 0 for all v ∈ V, and in particular,

⟨Φ_{0,k}, g⟩ = 0 for all |k| ≥ 1,

and 0 < ‖g‖² = ⟨Φ − u, Φ − u⟩ = ⟨Φ − u, g⟩ = ⟨Φ, g⟩. The function Φ̃ := g/⟨Φ, g⟩ therefore has the properties of a dual function. ∎
Example. The Haar function (2.4.6.4) is self-dual, Φ̃ = Φ. This follows from the orthonormality property ⟨Φ, Φ_{0,k}⟩ = δ_{k,0}, k ∈ ℤ.
It is, furthermore, possible to compute the best approximation v_j ∈ V_j of a given v_{j+1} ∈ V_{j+1} using the dual function Φ̃. We assume that v_{j+1} ∈ V_{j+1} is given by its series representation v_{j+1} = Σ_{k∈ℤ} c_{j+1,k} Φ_{j+1,k}. Because of v_j ∈ V_j, the function v_j we are looking for has the form v_j = Σ_{k∈ℤ} c_{j,k} Φ_{j,k}. Its coefficients c_{j,k} have to be determined so that v_{j+1} − v_j ⊥ V_j,

⟨v_j, v⟩ = ⟨v_{j+1}, v⟩ for all v ∈ V_j.
Now, since Φ̃ ∈ V_0, all functions Φ̃_{j,k}, k ∈ ℤ, belong to V_j, so that by (2.4.6.10)

c_{j,l} = ⟨v_j, Φ̃_{j,l}⟩ = ⟨v_{j+1}, Φ̃_{j,l}⟩
        = Σ_{k∈ℤ} c_{j+1,k} ⟨Φ_{j+1,k}, Φ̃_{j,l}⟩
        = Σ_{k∈ℤ} c_{j+1,k} 2^{j+1/2} ∫_{−∞}^{∞} Φ(2^{j+1} x − k) Φ̃(2^j x − l) dx
        = Σ_{k∈ℤ} c_{j+1,k} √2 ∫_{−∞}^{∞} Φ(2x + 2l − k) Φ̃(x) dx.
This leads to the following method for computing the coefficients c_{j,k} from the coefficients c_{j+1,k}: First, we compute the numbers (they do not depend on j)

(2.4.6.13) γ_i := √2 ∫_{−∞}^{∞} Φ(2x + i) Φ̃(x) dx, i ∈ ℤ.

The coefficients c_{j,l} are then obtained by means of the formula

(2.4.6.14) c_{j,l} = Σ_{k∈ℤ} c_{j+1,k} γ_{2l−k}, l ∈ ℤ.
Example. Because of Φ̃ = Φ for the Haar function, the numbers γ_i are given by

γ_i = 1/√2 for i = 0, −1, and γ_i = 0 otherwise.

Since Φ has compact support, only finitely many γ_i are nonzero, so that also the sums (2.4.6.14) are finite in this case:

c_{j,l} = (1/√2) ( c_{j+1,2l} + c_{j+1,2l+1} ), l ∈ ℤ.
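In terms of function values, this Haar coarsening step is just pairwise averaging, since the value of v_j on a dyadic interval is 2^{j/2} c_{j,l}. A small sketch with made-up coefficients (illustrative, not from the text):

```python
# One Haar coarsening step c_{j+1} -> c_j via (2.4.6.14):
# c_{j,l} = (c_{j+1,2l} + c_{j+1,2l+1}) / sqrt(2).

from math import sqrt

def coarsen_haar(c_fine):
    """One step c_{j+1} -> c_j for the Haar scaling function."""
    return [(c_fine[2 * l] + c_fine[2 * l + 1]) / sqrt(2.0)
            for l in range(len(c_fine) // 2)]

c_fine = [1.0, 3.0, 2.0, 2.0, 0.0, 4.0, 1.0, 1.0]   # coefficients of some v_{j+1}
c_coarse = coarsen_haar(c_fine)

# function values (take j+1 = 0, so fine values are 2^0 c_{0,k} and coarse
# values are 2^{-1/2} c_{-1,l}): each coarse value is the mean of a fine pair
fine_vals = c_fine
coarse_vals = [2.0 ** (-0.5) * c for c in c_coarse]
```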
In general, the sums in (2.4.6.14) are infinite and thus have to be approximated by finite sums (truncation after finitely many terms). The efficiency of the method thus depends on how fast the numbers γ_i converge to 0 as |i| → ∞.
It is possible to iterate the method following the scheme (2.4.6.8). Note that the numbers γ_i have to be computed only once since they are independent of j. In this way we obtain the following multi-resolution algorithm for computing the coefficients c_{m,k}, k ∈ ℤ, of the series v_m = Σ_{k∈ℤ} c_{m,k} Φ_{m,k} for all m ≤ j:

(2.4.6.15) Given: c_{j+1,k}, k ∈ ℤ, and γ_i, i ∈ ℤ, see (2.4.6.13).
    For m = j, j−1, …:
        for l ∈ ℤ:
            c_{m,l} := Σ_{k∈ℤ} c_{m+1,k} γ_{2l−k}.
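A minimal sketch of (2.4.6.15) in code (illustrative; the γ sequence is assumed precomputed and truncated to its finitely many relevant nonzero terms, stored as a dict i → γ_i):

```python
# Multi-resolution algorithm (2.4.6.15): repeatedly map fine coefficients to
# coarse ones via c_{m,l} = sum_k c_{m+1,k} gamma_{2l-k}.

def coarsen(c_fine, gamma):
    """One step c_{m+1} -> c_m."""
    n = len(c_fine) // 2
    return [sum(ck * gamma.get(2 * l - k, 0.0) for k, ck in enumerate(c_fine))
            for l in range(n)]

def multires(c, gamma, levels):
    """Coefficient lists of v_j, v_{j-1}, ... starting from those of v_{j+1}."""
    out = []
    for _ in range(levels):
        c = coarsen(c, gamma)
        out.append(c)
    return out

# Haar case: gamma_0 = gamma_{-1} = 1/sqrt(2), all other gamma_i = 0
haar_gamma = {0: 2.0 ** -0.5, -1: 2.0 ** -0.5}
levels = multires([4.0, 0.0, 2.0, 2.0], haar_gamma, 2)
```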
We have seen that this algorithm is particularly simple for the Haar function as scaling function.
We now consider scaling functions that are given by higher-order B-splines, Φ(x) = Φ_r(x) := M_r(x), r > 1. For reasons of simplicity we treat only the case r = 2, which is already typical. Since the scaling function M_2(x) and the hat function H(x) = M_2(x + 1) generate the same spaces V_j, j ∈ ℤ, the investigation of Φ(x) := H(x) is sufficient.

First we show the property (S1), V_j ⊂ V_{j+1}, for the linear spaces generated by Φ(x) = H(x) and its derivates Φ_{j,k}(x) = 2^{j/2} Φ(2^j x − k), j, k ∈ ℤ. This follows from the two-scale relation of H(x),

H(x) = (1/2) ( H(2x + 1) + 2 H(2x) + H(2x − 1) ),

which is immediately verified using the definition (2.4.6.6) of H(x). We leave it to the reader to prove that H(x) also has the remaining properties required for a scaling function [see Exercise 32].
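The verification can also be done numerically on a grid (illustrative check, not from the text):

```python
# Pointwise check of the two-scale relation of the hat function
# H(x) = 1 - |x| on [-1, 1]:  H(x) = (H(2x+1) + 2 H(2x) + H(2x-1)) / 2.

def H(x):
    """Hat function (2.4.6.6)."""
    return max(0.0, 1.0 - abs(x))

xs = [i / 32.0 for i in range(-64, 65)]
two_scale_err = max(abs(H(x) - 0.5 * (H(2 * x + 1) + 2.0 * H(2 * x) + H(2 * x - 1)))
                    for x in xs)
```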
The functions f ∈ V_j have a simple structure: they are continuous and piecewise linear functions on ℝ with respect to the partition ∆_j of fineness 2^{−j} given by the knots

(2.4.6.16) ∆_j := { k·2^{−j} | k ∈ ℤ }.

Up to a common factor, the coefficients c_{j,k} of the series representation f = Σ_{k∈ℤ} c_{j,k} Φ_{j,k} are given by the values of f on ∆_j:

(2.4.6.17) c_{j,k} = 2^{−j/2} f(k·2^{−j}), k ∈ ℤ.
Unfortunately, the functions Φ_{0,k}(x) = H(x − k), k ∈ ℤ, do not form an orthogonal system of functions: A short direct calculation shows

(2.4.6.18) ∫_{−∞}^{∞} H(x − j) H(x) dx = 2/3 for j = 0, 1/6 for |j| = 1, 0 otherwise,

so that

⟨Φ_{0,k}, Φ_{0,l}⟩ = ∫_{−∞}^{∞} H(x − k + l) H(x) dx = 2/3 for k = l, 1/6 for |k − l| = 1, 0 otherwise.
As a consequence, the dual function H̃ will be different from H, but it can be calculated explicitly. According to the proof of Theorem (2.4.6.12), H̃ is a scalar multiple of the function

g(x) = H(x) − Σ_{|k|≥1} a_k H(x − k),

where the sequence (a_k)_{|k|≥1} is the unique solution of the equations

⟨g, Φ_{0,k}⟩ = ⟨H(x) − Σ_{|l|≥1} a_l H(x − l), H(x − k)⟩ = 0 for all |k| ≥ 1

that satisfies Σ_k |a_k|² < ∞. By (2.4.6.18), this leads for k ≥ 1 to the equations

(2.4.6.19) 4a_1 + a_2 = 1 (k = 1), a_{k−1} + 4a_k + a_{k+1} = 0 (k ≥ 2),

and for k ≤ −1 to

4a_{−1} + a_{−2} = 1 (k = −1), a_{k−1} + 4a_k + a_{k+1} = 0 (k ≤ −2).
It is easily seen that with (a_k) also (a_{−k}) is a solution of these equations, so that by the uniqueness of solutions a_k = a_{−k} for all |k| ≥ 1. Therefore it is sufficient to find the solutions a_k, k ≥ 1, of (2.4.6.19).

According to these equations, the sequence (a_k)_{k≥1} is a solution of the homogeneous linear difference equation

c_{k−1} + 4c_k + c_{k+1} = 0, k = 2, 3, … .

The sequence c_k := θ^k, k ≥ 1, is clearly a solution of this difference equation if θ is a zero of the polynomial x² + 4x + 1. This polynomial has the two distinct zeros

λ := −2 + √3 = −1/(2 + √3), µ := −2 − √3 = 1/λ,
with |λ| < 1 < |µ|. The general solution of the difference equation is an arbitrary linear combination of the special solutions (λ^k)_{k≥1} and (µ^k)_{k≥1}; that is, the desired solution a_k has the form a_k = Cλ^k + Dµ^k, k ≥ 1, with appropriately chosen constants C, D. The condition Σ_k |a_k|² < ∞ implies D = 0 because of |µ| > 1. The constant C has to be chosen such that also the first equation of (2.4.6.19), 4a_1 + a_2 = 1, is satisfied, that is,

1 = Cλ(4 + λ) = Cλ(2 + √3) = −C.

This implies a_k = a_{−k} = −λ^k = −(−1)^k (2 + √3)^{−k} for k ≥ 1, so that

g(x) = H(x) + Σ_{k≥1} (−1)^k (2 + √3)^{−k} ( H(x + k) + H(x − k) ).
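The closed form can be cross-checked by solving a truncated version of (2.4.6.19) directly. The following sketch (illustrative; the truncation length K is an arbitrary choice) uses the standard tridiagonal elimination:

```python
# Solve the truncated system
#   4 a_1 + a_2 = 1,  a_{k-1} + 4 a_k + a_{k+1} = 0  (2 <= k <= K, a_{K+1} := 0)
# by tridiagonal (Thomas) elimination and compare with the closed form
#   a_k = -(-1)^k (2 + sqrt(3))^{-k}.

from math import sqrt

K = 40
sub, diag, sup = [1.0] * K, [4.0] * K, [1.0] * K   # tridiagonal bands
rhs = [1.0] + [0.0] * (K - 1)
for i in range(1, K):                               # forward elimination
    m = sub[i] / diag[i - 1]
    diag[i] -= m * sup[i - 1]
    rhs[i] -= m * rhs[i - 1]
a = [0.0] * K                                       # back substitution
a[-1] = rhs[-1] / diag[-1]
for i in reversed(range(K - 1)):
    a[i] = (rhs[i] - sup[i] * a[i + 1]) / diag[i]

exact = [-(-1.0) ** k * (2.0 + sqrt(3.0)) ** (-k) for k in range(1, K + 1)]
max_err = max(abs(ai - ei) for ai, ei in zip(a[:10], exact[:10]))
```

The truncation error decays like |λ|^K, so the first coefficients agree to machine precision.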
The dual function H̃(x) = γ g(x) we are looking for is a scalar multiple of g, where γ is determined by the condition ⟨H, H̃⟩ = 1. Using (2.4.6.18) this leads to

1/γ = ⟨H, H⟩ − (2 + √3)^{−1} ( ⟨H(x), H(x−1)⟩ + ⟨H(x), H(x+1)⟩ )
    = 2/3 − 1/(3(2 + √3)) = 1/√3.
Therefore, the dual function H̃(x) is given by the infinite series

H̃(x) = Σ_{k∈ℤ} b_k H(x − k) with coefficients b_k := (−1)^k √3 (2 + √3)^{−|k|}, k ∈ ℤ.
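The duality property (2.4.6.9) can be verified numerically from these coefficients, using only the exact inner products (2.4.6.18) and a truncation of the series for H̃ (the truncation length K is an illustrative choice):

```python
# Check <H(x-k), H~(x)> = delta_{k,0} for H~ = sum_l b_l H(x-l) with
# b_l = (-1)^l sqrt(3) (2+sqrt(3))^{-|l|}, using
# <H(x-j), H(x)> = 2/3 (j=0), 1/6 (|j|=1), 0 otherwise.

from math import sqrt

K = 40
b = {k: (-1) ** k * sqrt(3.0) * (2.0 + sqrt(3.0)) ** (-abs(k))
     for k in range(-K, K + 1)}
hh = {0: 2.0 / 3.0, 1: 1.0 / 6.0, -1: 1.0 / 6.0}   # <H(x-j), H(x)>

def pairing(k):
    """<H(x-k), H~(x)> = sum_l b_l <H(x-k), H(x-l)>."""
    return sum(bl * hh.get(k - l, 0.0) for l, bl in b.items())

p0 = pairing(0)
off_diag = max(abs(pairing(k)) for k in range(1, 10))
```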
The computation of the numbers γ_i (2.4.6.13) for the multi-resolution method (2.4.6.15) leads to simple finite sums with known terms,

(2.4.6.20) γ_i = √2 ⟨H(2x + i), H̃(x)⟩
             = √2 Σ_{k∈ℤ} b_k ⟨H(2x + i), H(x − k)⟩
             = √2 Σ_{k∈ℤ} b_k ⟨H(2x + i + 2k), H(x)⟩
             = √2 Σ_{k: |2k+i|≤2} b_k ⟨H(2x + i + 2k), H(x)⟩,

because an elementary calculation shows

⟨H(2x + j), H(x)⟩ = 5/12 for j = 0, 1/4 for |j| = 1, 1/24 for |j| = 2, 0 otherwise.
Since 2 + √3 = 3.73…, the numbers b_k = O((2 + √3)^{−|k|}) and, consequently, the numbers γ_k converge rapidly enough to zero as |k| → ∞, so that the sums (2.4.6.14) of the multi-resolution method are well approximated by relatively short finite sums.
Example. For small i, one finds the following values of γ_i:

  i      γ_i
  0      0.96592 58262
  1      0.44828 77361
  2     −0.16408 46996
  3     −0.12011 83369
  4      0.04396 63628
  5      0.03218 56114
  6     −0.01178 07514
  7     −0.00862 41086
  8      0.00315 66428
  9      0.00231 08229
 10     −0.00084 58199
Incidentally, the symmetry relation γ_{−i} = γ_i holds for all i ∈ ℤ [see Exercise 33].
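The table can be reproduced from (2.4.6.20) with a few lines of code (a sketch; the truncation length of the b_k series is an illustrative choice):

```python
# gamma_i = sqrt(2) * sum_k b_k <H(2x+i+2k), H(x)> with
# <H(2x+j), H(x)> = 5/12 (j=0), 1/4 (|j|=1), 1/24 (|j|=2), 0 otherwise.

from math import sqrt

K = 60
b = {k: (-1) ** k * sqrt(3.0) * (2.0 + sqrt(3.0)) ** (-abs(k))
     for k in range(-K, K + 1)}
h2 = {0: 5.0 / 12.0, 1: 0.25, -1: 0.25, 2: 1.0 / 24.0, -2: 1.0 / 24.0}

def gamma(i):
    return sqrt(2.0) * sum(bk * h2.get(i + 2 * k, 0.0) for k, bk in b.items())

g0, g1, g2 = gamma(0), gamma(1), gamma(2)
```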
There is a simple interpretation of this multi-resolution method: A function f = Σ_{k∈ℤ} c_{j+1,k} Φ_{j+1,k} ∈ V_{j+1} is a continuous function which is piecewise linear with respect to the partition ∆_{j+1} = { k·2^{−(j+1)} | k ∈ ℤ } of ℝ with [see (2.4.6.17)]

c_{j+1,k} = 2^{−(j+1)/2} f(k·2^{−(j+1)}), k ∈ ℤ.

It is approximated optimally by a continuous function f̂ = Σ_{k∈ℤ} c_{j,k} Φ_{j,k} which is piecewise linear with respect to the coarser partition ∆_j = { l·2^{−j} | l ∈ ℤ } if we set for l ∈ ℤ

f̂(l·2^{−j}) := 2^{j/2} c_{j,l} = 2^{j/2} Σ_{k∈ℤ} c_{j+1,k} γ_{2l−k}
             = (1/√2) Σ_{k∈ℤ} γ_{2l−k} f(k·2^{−(j+1)}).
The numbers γ_i thus have the following nice interpretation: The continuous, piecewise linear (with respect to ∆_0 = { l | l ∈ ℤ }) function f̂(x) with f̂(l) := γ_{2l}/√2, l ∈ ℤ, is the best approximation in V_0 of the compressed hat function f(x) := H(2x) ∈ V_1. The translated function f_1(x) := H(2x − 1) ∈ V_1 is best approximated by f̂_1 ∈ V_0, where f̂_1(l) := γ_{2l−1}/√2, l ∈ ℤ [see Figure 4].
Fig. 4. The functions f and f̂.
We now return to general scaling functions. Within the framework of multi-resolution methods it is essential that the functions Φ_{j,k}, k ∈ ℤ, form a Riesz basis of the spaces V_j. It is of equal importance that also the orthogonal complements W_j of V_j in V_{j+1} have similarly simple Riesz bases: One can show [proofs can be found, e.g., in Daubechies (1992), Louis et al. (1994)] that to any scaling function Φ generating the linear spaces V_j there is a function Ψ ∈ W_0 with the following properties:

a) The functions Ψ_{0,k}(x) := Ψ(x − k), k ∈ ℤ, form a Riesz basis [see (2.4.6.1)] of W_0, and for each j ∈ ℤ its derivates Ψ_{j,k}(x) := 2^{j/2} Ψ(2^j x − k), k ∈ ℤ, form a Riesz basis of W_j.

b) The functions Ψ_{j,k}, j, k ∈ ℤ, form a Riesz basis of L²(ℝ).

Each function Ψ with these properties is called a wavelet belonging to the scaling function Φ (wavelets are not uniquely determined by Φ). Ψ is called an orthonormal wavelet if, in addition, the Ψ_{0,k}, k ∈ ℤ, form an orthonormal basis of W_0, ⟨Ψ_{0,k}, Ψ_{0,l}⟩ = δ_{k,l}. In this case an orthonormal basis of L²(ℝ) is given by the derivates Ψ_{j,k}, j, k ∈ ℤ, of Ψ,

⟨Ψ_{j,k}, Ψ_{l,m}⟩ = δ_{j,l} δ_{k,m}.
Example. To the Haar function Φ (2.4.6.4) belongs the Haar wavelet defined by

Ψ(x) := 1 for 0 ≤ x < 1/2, −1 for 1/2 ≤ x < 1, 0 otherwise.

Indeed, the relation Ψ(x) = Φ(2x) − Φ(2x − 1) first implies Ψ_{0,k} ∈ V_1 for all k ∈ ℤ; then the orthogonality

⟨Ψ_{0,k}, Φ_{0,l}⟩ = 0 for all k, l ∈ ℤ

gives Ψ_{0,k} ∈ W_0 for all k ∈ ℤ, and finally

Φ_{1,2k}(x) = (1/√2) ( Φ_{0,k}(x) + Ψ_{0,k}(x) ), Φ_{1,2k+1}(x) = (1/√2) ( Φ_{0,k}(x) − Ψ_{0,k}(x) ), k ∈ ℤ,

proves V_1 = V_0 ⊕ W_0. Also, since ⟨Ψ, Ψ_{0,k}⟩ = δ_{k,0}, the Haar wavelet is orthonormal.
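These relations yield one step of the Haar wavelet transform: the coefficients of v_{j+1} split into coarse coefficients (for the Φ_{j,l}) and detail coefficients (for the Ψ_{j,l}), and the step is exactly invertible. A sketch with made-up coefficients (illustrative, not from the text):

```python
# One Haar analysis/synthesis step, derived from
# Phi_{1,2l} = (Phi_{0,l} + Psi_{0,l})/sqrt(2),
# Phi_{1,2l+1} = (Phi_{0,l} - Psi_{0,l})/sqrt(2).

from math import sqrt

def haar_split(c):
    """c_{j+1} -> (coarse c_j, detail d_j)."""
    coarse = [(c[2 * l] + c[2 * l + 1]) / sqrt(2.0) for l in range(len(c) // 2)]
    detail = [(c[2 * l] - c[2 * l + 1]) / sqrt(2.0) for l in range(len(c) // 2)]
    return coarse, detail

def haar_merge(coarse, detail):
    """Exact inverse of haar_split."""
    c = []
    for cl, dl in zip(coarse, detail):
        c.append((cl + dl) / sqrt(2.0))
        c.append((cl - dl) / sqrt(2.0))
    return c

c1 = [5.0, 1.0, -2.0, 4.0, 0.0, 0.0, 3.0, 3.0]
coarse, detail = haar_split(c1)
c1_back = haar_merge(coarse, detail)
recon_err = max(abs(a - b) for a, b in zip(c1, c1_back))
```

Note that a locally constant signal (the last pair above) produces a zero detail coefficient, which is what makes such decompositions useful for compression.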
One can show that every orthonormal scaling function Φ also has an orthonormal wavelet, which can even be given explicitly by means of the two-scale relation of Φ [see, e.g., the literature cited above]. It is very difficult, however, to determine an associated wavelet for a non-orthonormal Φ. The situation is better for the scaling functions Φ = Φ_r := M_r given by B-splines. Here, explicit formulas are known for the special wavelets Ψ_r that have a minimal compact support (namely the interval [0, 2r−1]) among all wavelets belonging to M_r: they are given by finite sums of the form

Ψ_r(x) = Σ_{k=0}^{3r−2} q_k M_r(2x − k).

However, these wavelets are not orthonormal for r > 1, but their associated dual functions Ψ̃_r ∈ W_0 are still known explicitly, with the usual properties ⟨Ψ_r(x − k), Ψ̃_r(x)⟩ = δ_{k,0} for k ∈ ℤ [see Chui (1992) for a comprehensive account].
The example of the Haar function shows how simple the situation becomes if both the scaling function Φ and the wavelet Ψ are orthonormal and if both functions have compact support. A drawback of the Haar function is that it is not even continuous, whereas M_2 already is continuous, and the smoothness of the B-splines M_r with r ≥ 3 and of their wavelets Ψ_r even increases with r. Also, for all r ≥ 1, Φ_r = M_r has compact support and has a wavelet Ψ_r with compact support. But unfortunately neither Φ_r nor Ψ_r is orthonormal for r > 1.
Therefore the following deep result of Daubechies is very remarkable: She was the first to give explicit examples of scaling functions and associated wavelets that have an arbitrarily high order of differentiability, have compact support, and are orthonormal. For a detailed exposition of these important results and their applications, we refer the reader to the relevant special literature [see, e.g., Louis et al. (1994), Mallat (1998), and, in particular, Daubechies (1992)].
Exercises for Chapter 2
1. Let L_i(x) be the Lagrange polynomials (2.1.1.3) for pairwise different support abscissas x_0, …, x_n, and let c_i := L_i(0). Show that

(a) Σ_{i=0}^{n} c_i x_i^j = 1 for j = 0; 0 for j = 1, 2, …, n; (−1)^n x_0 x_1 ··· x_n for j = n+1;

(b) Σ_{i=0}^{n} L_i(x) ≡ 1.
2. Interpolate the function ln x by a quadratic polynomial at x = 10, 11, 12.

(a) Estimate the error committed for x = 11.1 when approximating ln x by the interpolating polynomial.

(b) How does the sign of the error depend on x?
3. Consider a function f which is twice continuously differentiable on the interval I = [−1, 1]. Interpolate the function by a linear polynomial through the support points (x_i, f(x_i)), i = 0, 1, with x_0, x_1 ∈ I. Verify that

α = (1/2) max_{ξ∈I} |f″(ξ)| · max_{x∈I} |(x − x_0)(x − x_1)|

is an upper bound for the maximal absolute interpolation error on the interval I. Which values x_0, x_1 minimize α? What is the connection between (x − x_0)(x − x_1) and cos(2 arccos x)?
4. Suppose a function f(x) is interpolated on the interval [a, b] by a polynomial P_n(x) whose degree does not exceed n. Suppose further that f is arbitrarily often differentiable on [a, b] and that there exists M such that |f^{(i)}(x)| ≤ M for i = 0, 1, … and any x ∈ [a, b]. Can it be shown, without additional hypotheses about the location of the support abscissas x_i ∈ [a, b], that P_n(x) converges uniformly on [a, b] to f(x) as n → ∞?
5. (a) The Bessel function of order zero,

J_0(x) = (1/π) ∫_0^π cos(x sin t) dt,

is to be tabulated at equidistant arguments x_i = x_0 + ih, i = 0, 1, 2, … . How small must the increment h be chosen so that the interpolation error remains below 10^{−6} if linear interpolation is used?

(b) What is the behavior of the maximal interpolation error

max_{0≤x≤1} |P_n(x) − J_0(x)|

as n → ∞, if P_n(x) interpolates J_0(x) at x = x_i^{(n)} := i/n, i = 0, 1, …, n?

Hint: It suffices to show that |J_0^{(k)}(x)| ≤ 1 for k = 0, 1, … .

(c) Compare the above result with the behavior of the error

max_{0≤x≤1} |S_{∆_n}(x) − J_0(x)|

as n → ∞, where S_{∆_n} is the interpolating spline function with knot set ∆_n = {x_i^{(n)}} and S′_{∆_n}(x) = J_0′(x) for x = 0, 1.
6. Interpolation on product spaces: Suppose every linear interpolation problem stated in terms of functions φ_0, φ_1, …, φ_n has a unique solution

Φ(x) = Σ_{i=0}^{n} α_i φ_i(x)

with Φ(x_k) = f_k, k = 0, …, n, for prescribed support arguments x_0, …, x_n with x_i ≠ x_j for i ≠ j. Show the following: If ψ_0, …, ψ_m is also a set of functions for which every linear interpolation problem has a unique solution, then for every choice of abscissas

x_0, x_1, …, x_n, x_i ≠ x_j for i ≠ j,
y_0, y_1, …, y_m, y_i ≠ y_j for i ≠ j,

and support ordinates

f_{ik}, i = 0, …, n, k = 0, …, m,

there exists a unique function of the form

Φ(x, y) = Σ_{ν=0}^{n} Σ_{µ=0}^{m} α_{νµ} φ_ν(x) ψ_µ(y)

with Φ(x_i, y_k) = f_{ik}, i = 0, …, n, k = 0, …, m.
7. Specialize the general result of Exercise 6 to interpolation by polynomials. Give the explicit form of the function Φ(x, y) in this case.
8. Given the abscissas

y_0, y_1, …, y_m, y_i ≠ y_j for i ≠ j,

and, for each k = 0, …, m, the values

x_0^{(k)}, x_1^{(k)}, …, x_{n_k}^{(k)}, x_i^{(k)} ≠ x_j^{(k)} for i ≠ j,

and support ordinates

f_{ik}, i = 0, …, n_k, k = 0, …, m,

suppose without loss of generality that the y_k are numbered in such a fashion that

n_0 ≥ n_1 ≥ ··· ≥ n_m.

Prove by induction over m that exactly one polynomial

P(x, y) ≡ Σ_{µ=0}^{m} Σ_{ν=0}^{n_µ} α_{νµ} x^ν y^µ

exists with

P(x_i^{(k)}, y_k) = f_{ik}, i = 0, …, n_k, k = 0, …, m.
9. Is it possible to solve the interpolation problem of Exercise 8 by other polynomials

P(x, y) = Σ_{µ=0}^{M} Σ_{ν=0}^{N_µ} α_{νµ} x^ν y^µ,

requiring only that the number of parameters α_{νµ} agrees with the number of support points, that is,

Σ_{µ=0}^{m} (n_µ + 1) = Σ_{µ=0}^{M} (N_µ + 1)?

Hint: Study a few simple examples.
10. Calculate the inverse and reciprocal differences for the support points

x_i:  0   1   −1   2   −2
f_i:  1   3   3/5  3   3/5

and use them to determine the rational expression Φ^{2,2}(x) whose numerator and denominator are quadratic polynomials and for which Φ^{2,2}(x_i) = f_i, first in continued-fraction form and then as the ratio of polynomials.
11. Let Φ^{m,n} be the rational function which solves the system S^{m,n} for given support points (x_k, f_k), k = 0, 1, …, m+n:

(a_0 + a_1 x_k + ··· + a_m x_k^m) − f_k (b_0 + b_1 x_k + ··· + b_n x_k^n) = 0, k = 0, 1, …, m+n.

Show that Φ^{m,n}(x) can be represented as follows by determinants:

Φ^{m,n}(x) = |f_k, x_k−x, …, (x_k−x)^m, (x_k−x) f_k, …, (x_k−x)^n f_k|_{k=0}^{m+n}
           / |1, x_k−x, …, (x_k−x)^m, (x_k−x) f_k, …, (x_k−x)^n f_k|_{k=0}^{m+n}.

Here the following abbreviation has been used:

|α_k, …, ζ_k|_{k=0}^{m+n} = det ⎡ α_0      ···  ζ_0      ⎤
                               ⎢ α_1      ···  ζ_1      ⎥
                               ⎢  ⋮             ⋮      ⎥
                               ⎣ α_{m+n}  ···  ζ_{m+n}  ⎦.
12. Generalize Theorem (2.3.1.12):

(a) For 2n+1 support abscissas x_k with

a ≤ x_0 < x_1 < ··· < x_{2n} < a + 2π

and support ordinates y_0, …, y_{2n}, there exists a unique trigonometric polynomial

T(x) = (1/2) a_0 + Σ_{j=1}^{n} (a_j cos jx + b_j sin jx)

with

T(x_k) = y_k for k = 0, 1, …, 2n.

(b) If y_0, …, y_{2n} are real numbers, then so are the coefficients a_j, b_j.

Hint: Reduce the interpolation by trigonometric polynomials in (a) to (complex) interpolation by polynomials using the transformation T(x) = Σ_{j=−n}^{n} c_j e^{ijx}. Then show c_{−j} = c̄_j to establish (b).
13. (a) Show that, for real x_1, …, x_{2n}, the function

t(x) = ∏_{k=1}^{2n} sin((x − x_k)/2)

is a trigonometric polynomial

(1/2) a_0 + Σ_{j=1}^{n} (a_j cos jx + b_j sin jx)

with real coefficients a_j, b_j.

Hint: Substitute sin φ = (1/2i)(e^{iφ} − e^{−iφ}).
(b) Prove, using (a), that the interpolating trigonometric polynomial with support abscissas x_k,

a ≤ x_0 < x_1 < ··· < x_{2n} < a + 2π,

and support ordinates y_0, …, y_{2n} is identical with

T(x) = Σ_{j=0}^{2n} y_j t_j(x), where t_j(x) := ∏_{k=0, k≠j}^{2n} sin((x − x_k)/2) / ∏_{k=0, k≠j}^{2n} sin((x_j − x_k)/2).
14. Show that for n+1 support abscissas x_k with

a ≤ x_0 < x_1 < ··· < x_n < a + π

and support ordinates y_0, …, y_n, a unique "cosine polynomial"

C(x) = Σ_{j=0}^{n} a_j cos jx

exists with C(x_k) = y_k, k = 0, 1, …, n.

Hint: See Exercise 12.
15. (a) Show that for any integer j

Σ_{k=0}^{2m} cos jx_k = (2m+1) h(j), Σ_{k=0}^{2m} sin jx_k = 0,

with

x_k := 2πk/(2m+1), k = 0, 1, …, 2m, and h(j) := 1 for j ≡ 0 mod 2m+1, 0 otherwise.

(b) Use (a) to derive for integers j, k the following orthogonality relations:

Σ_{i=0}^{2m} sin jx_i sin kx_i = ((2m+1)/2) [h(j−k) − h(j+k)],
Σ_{i=0}^{2m} cos jx_i cos kx_i = ((2m+1)/2) [h(j−k) + h(j+k)],
Σ_{i=0}^{2m} cos jx_i sin kx_i = 0.
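A numerical spot-check of these relations (not a proof; the value m = 5 is an arbitrary choice) can be run in a few lines:

```python
# Spot-check of the discrete orthogonality relations with x_k = 2*pi*k/(2m+1).

from math import cos, sin, pi

m = 5
n = 2 * m + 1
xk = [2.0 * pi * k / n for k in range(n)]

def h(j):
    return 1.0 if j % n == 0 else 0.0

err_a = max(max(abs(sum(cos(j * x) for x in xk) - n * h(j)),
                abs(sum(sin(j * x) for x in xk)))
            for j in range(-2 * n, 2 * n + 1))
err_b = max(abs(sum(sin(j * x) * sin(k * x) for x in xk)
                - n / 2.0 * (h(j - k) - h(j + k)))
            for j in range(4) for k in range(4))
```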
16. Suppose the 2π-periodic function f: ℝ → ℝ has an absolutely convergent Fourier series

f(x) = (1/2) a_0 + Σ_{j=1}^{∞} (a_j cos jx + b_j sin jx).

Let

Ψ(x) = (1/2) A_0 + Σ_{j=1}^{m} (A_j cos jx + B_j sin jx)

be the trigonometric polynomial with

Ψ(x_k) = f(x_k), x_k := 2πk/(2m+1), for k = 0, 1, …, 2m.

Show that

A_k = a_k + Σ_{p=1}^{∞} [a_{p(2m+1)+k} + a_{p(2m+1)−k}], 0 ≤ k ≤ m,
B_k = b_k + Σ_{p=1}^{∞} [b_{p(2m+1)+k} − b_{p(2m+1)−k}], 1 ≤ k ≤ m.
17. Formulate a Cooley–Tukey method in which the array β̃[ ] is initialized directly,

β̃[j] = f_j,

rather than in bit-reversed fashion.

Hint: Define and determine explicitly a map σ = σ(m, r, j) with the same replacement properties as (2.3.2.6) but with σ(0, r, 0) = r instead of (2.3.2.7).
18. Let N = 2^n. Consider the N-vectors

f := [f_0, …, f_{N−1}]^T, β := [β_0, …, β_{N−1}]^T.

(2.3.2.1) expresses a linear transformation between these two vectors, β = (1/N) T f, where T = [t_{jk}], t_{jk} := e^{−2πijk/N}.

(a) Show that T can be factored as follows:

T = QSP (D_{n−1} SP) ··· (D_1 SP),

where S is the N×N block-diagonal matrix built from 2×2 blocks,

S = diag( [1 1; 1 −1], …, [1 1; 1 −1] ).

The matrices D_l = diag(1, δ_1^{(l)}, 1, δ_3^{(l)}, …, 1, δ_{N−1}^{(l)}), l = 1, …, n−1, are diagonal matrices with

δ_r^{(l)} = exp(−2πi r̃ / 2^{n−l−1}), r̃ = ⌊r/2^l⌋, r odd.

Q is the matrix of the bit-reversal permutation (2.3.2.8), and P is the matrix of the following bit-cycling permutation ζ:

ζ(α_0 + α_1·2 + ··· + α_{n−1}·2^{n−1}) := α_{n−1} + α_0·2 + α_1·4 + ··· + α_{n−2}·2^{n−1}.

(b) Show that the Sande–Tukey method for fast Fourier transforms corresponds to multiplying the vector f from the left by the (sparse) matrices in the above factorization.