Analytic-numerical solutions with a priori error bounds for a class of strongly coupled mixed partial differential systems
L. Jódar*, E. Navarro, J. Camacho
Departamento de Matemática Aplicada, Universidad Politécnica de Valencia, Apartado 22.012, Camino de Vera 14, 46071 Valencia, Spain
Received 8 January 1998; received in revised form 29 October 1998
Abstract
This paper deals with the construction of analytic-numerical solutions with a priori error bounds for systems of the type u_t = Au_xx, u(0,t) + u_x(0,t) = 0, Bu(1,t) + Cu_x(1,t) = 0, 0 < x < 1, t > 0, u(x,0) = f(x). Here A, B, C are matrices for which no diagonalizability hypothesis is assumed. First an exact series solution is obtained after solving appropriate vector Sturm–Liouville-type problems. Given an admissible error ε and a bounded subdomain D, after appropriate truncation an approximate solution constructed in terms of the data and approximate eigenvalues is given so that the error is less than the prefixed accuracy ε, uniformly in D. © 1999 Elsevier Science B.V. All rights reserved.
Keywords: Coupled differential system; Coupled boundary conditions; Analytic-numerical solution; Vector Sturm–Liouville problem; A priori error bound; Moore–Penrose pseudoinverse
1. Introduction
Coupled partial differential systems with coupled boundary value conditions are frequent in quantum mechanical scattering problems [1,19,27,28], chemical physics [16,17,22], modelling of coupled thermoelastoplastic response of clays subjected to nuclear waste heat [13], and coupled diffusion problems [7,20,30]. The solution of these problems has motivated the study of vector and matrix Sturm–Liouville problems [3,4,12,18]. In this paper we study systems of the type
u_t(x,t) − Au_xx(x,t) = 0,  0 < x < 1,  t > 0,  (1)
u(0,t) + u_x(0,t) = 0,  t > 0,  (2)
Bu(1,t) + Cu_x(1,t) = 0,  t > 0,  (3)
u(x,0) = f(x),  0 ≤ x ≤ 1,  (4)
*Corresponding author.
E-mail address: [email protected] (L. Jodar).
where the unknown u = (u_1, …, u_m)^T and the datum f(x) = (f_1, …, f_m)^T are m-dimensional vectors and A, B, C are m×m complex matrices, elements of C^{m×m}. Mixed problems of the above type but with Dirichlet conditions u(0,t) = 0, u(1,t) = 0 instead of (2), (3) have been treated in [15,23]. Here we assume that A is a positive stable matrix,
Re(z) > 0 for all eigenvalues z of A,  (5)
and that the pencil B + ρC is regular, i.e., the determinant det(B + ρC) = |B + ρC| is not identically zero as a function of ρ. Conditions on the function f(x) will be determined below in order to specify existence and well-posedness conditions.
The organization of the paper is as follows. In Section 2, vector eigenvalue differential problems of the type
X''(x) + λ²X(x) = 0,  0 < x < 1,  λ ≥ 0,
X(0) + X'(0) = 0,
BA^j X(1) + CA^j X'(1) = 0,  0 ≤ j ≤ p−1,  (6)
are treated. Sufficient conditions for the existence of an appropriate sequence of eigenvalues and eigenfunctions, as well as some invariant properties of the problem, are studied. In Section 3 an exact series solution of problem (1)–(4) is obtained using the results of Section 2 and the separation of variables technique. Section 4 deals with the construction of an analytic-numerical solution of the problem with a prefixed accuracy in a bounded subdomain. The approximation is expressed in terms of the data and approximate eigenvalues of the underlying eigenvalue problem of the type (6).
Throughout this paper, the set of all eigenvalues of a matrix C in C^{m×m} is denoted by σ(C) and its 2-norm, denoted by ‖C‖, is defined by [11, p. 56]
‖C‖ = sup_{z≠0} ‖Cz‖₂ / ‖z‖₂,
where for a vector y in C^m, ‖y‖₂ is the usual Euclidean norm of y. Let us introduce the notation α(C) = max{Re(ω): ω ∈ σ(C)} and β(C) = min{Re(ω): ω ∈ σ(C)}. By [11, p. 556] it follows that
‖e^{tC}‖ ≤ e^{tα(C)} Σ_{k=0}^{m−1} ‖√m C‖^k t^k / k!,  t ≥ 0.  (7)
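Bound (7) is easy to check numerically. The following sketch (assuming NumPy; the matrix C and the time grid are arbitrary illustrative choices, and the matrix exponential is evaluated by a long Taylor sum) compares ‖e^{tC}‖₂ against the right-hand side of (7):

```python
import math
import numpy as np

def alpha(C):
    """alpha(C) = max{Re(w) : w an eigenvalue of C}."""
    return max(np.linalg.eigvals(C).real)

def expm_taylor(C, t, terms=100):
    """e^{tC} via a long Taylor sum (adequate for small, moderate-norm matrices)."""
    E = np.eye(C.shape[0], dtype=complex)
    term = np.eye(C.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ (t * C) / k
        E += term
    return E

rng = np.random.default_rng(0)
C = rng.standard_normal((3, 3))
m = C.shape[0]
checks = []
for t in (0.1, 0.5, 1.0):
    lhs = np.linalg.norm(expm_taylor(C, t), 2)
    s = np.linalg.norm(math.sqrt(m) * C, 2)          # ||sqrt(m) C||
    rhs = math.exp(t * alpha(C)) * sum(s**k * t**k / math.factorial(k) for k in range(m))
    checks.append(lhs <= rhs)
```

The inequality holds for every t ≥ 0, since the nilpotent part N of the Schur form of C satisfies ‖N‖₂ ≤ ‖C‖_F ≤ √m ‖C‖₂.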
If B is a matrix in C^{n×m} we denote by B† its Moore–Penrose pseudoinverse. An account of properties, examples and applications of this concept may be found in [5,26]. In particular, the kernel of B, denoted by Ker B, coincides with the image of the matrix I − B†B, denoted by Im(I − B†B). We say that a subspace E of C^m is invariant by the matrix A ∈ C^{m×m} if A(E) ⊂ E. The property A(Ker G) ⊂ Ker G is equivalent to the condition GA(I − G†G) = 0 since Ker G = Im(I − G†G), see [5]. The Moore–Penrose pseudoinverse of a matrix can be efficiently computed with the MATLAB package. The set of all real numbers will be denoted by R. The determinant of a matrix C ∈ C^{m×m} is denoted by |C|, and for a complex number z = a + ib we denote by z̄ = a − ib its conjugate. Finally we denote by C^D the Drazin inverse of the matrix C ∈ C^{m×m}. We recall that C^D can be computed as a polynomial in C and by [5, p. 129] it follows that
z ≠ 0,  z ∈ σ(C) if and only if z^{−1} ∈ σ(C^D).  (8)
If C is invertible then C^D = C^{−1}. An account of properties, examples and algorithms for the computation of the Drazin inverse may be found in Ch. 7 of [5].
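Both generalized inverses above are easy to experiment with numerically. The following sketch (assuming NumPy; the matrices are illustrative, not taken from the paper) computes the kernel projector I − G†G from the Moore–Penrose pseudoinverse, and the Drazin inverse via the classical identity C^D = C^ℓ (C^{2ℓ+1})† C^ℓ, valid for any ℓ not smaller than the index of C (ℓ = m is always a safe choice):

```python
import numpy as np

def drazin(C):
    """Drazin inverse via C^D = C^l (C^(2l+1))^+ C^l with l = m >= index(C)."""
    m = C.shape[0]
    Cl = np.linalg.matrix_power(C, m)
    return Cl @ np.linalg.pinv(np.linalg.matrix_power(C, 2 * m + 1)) @ Cl

# For an invertible C, the Drazin inverse reduces to the ordinary inverse.
C = np.array([[1.0, 1.0], [0.0, 2.0]])
CD = drazin(C)
ok_inverse = np.allclose(CD, np.linalg.inv(C))

# Kernel projector from the Moore-Penrose pseudoinverse: Ker G = Im(I - G^+ G).
G = np.array([[1.0, 2.0, 3.0], [2.0, 4.0, 6.0]])   # rank 1, so Ker G is 2-dimensional
P = np.eye(3) - np.linalg.pinv(G) @ G
ok_kernel = np.allclose(G @ P, 0)
```

Every column of P lies in Ker G, which is exactly the fact Ker G = Im(I − G†G) used repeatedly below.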
2. On a class of vector eigenvalue differential problems
Vector Sturm–Liouville systems of the form
−(P(x)y′)′ + Q(x)y = λW(x)y,  a ≤ x ≤ b,
A₁*y(a) + A₂*P(a)y′(a) = 0,
B₁*y(b) + B₂*P(b)y′(b) = 0,
where P, Q and W are symmetric m×m matrix functions of x with P and W positive definite for all x ∈ [a,b], y is an m-vector function of x, λ is a scalar parameter, and A₁, A₂, B₁ and B₂ are m×m matrices such that (A₁, A₂), (B₁, B₂) are full rank m×2m matrices, with A₁*A₂ − A₂*A₁ = 0, B₁*B₂ − B₂*B₁ = 0, have been recently treated in [3,4,12,18], and arise in a natural way in quantum mechanical applications. The recent literature on quantum mechanical scattering problems related to these problems includes [1,16,17,22]. In this section we consider vector eigenvalue differential problems of a different nature, in the sense that we admit more than two boundary value conditions and under different hypotheses. If p ≥ 1, we consider the vector problem
X''(x) + λ²X(x) = 0,  0 < x < 1,  λ ≥ 0,
X(0) + X'(0) = 0,
BA^j X(1) + CA^j X'(1) = 0,  0 ≤ j ≤ p−1,  (9)
where A, B, C are matrices in C^{m×m} such that the matrix pencil B + ρC is regular, i.e.,
|B + ρC| is not identically zero.  (10)
Under hypothesis (10), since the determinant |B + ρC| is a polynomial in ρ of degree at most m, the matrix B + ρC is singular for at most m different values of ρ. Hence
there exists a complex number ρ₀ such that B + ρ₀C is invertible.  (11)
Assume that
there exists an eigenvalue β₀ of the matrix (B + ρ₀C)^{−1}C  (12)
such that
(1 + ρ₀)β₀ ≠ 1 and β₀/(1 − β₀(1 + ρ₀)) is a real number.  (13)
The general solution of the vector equation X'' + λ²X = 0 is given by
X(x) = sin(λx)D + cos(λx)E,  D, E ∈ C^m,  λ > 0;  X(x) = D + Ex,  D, E ∈ C^m,  λ = 0.
The condition X(0) + X'(0) = 0 produces
X(x) = (sin(λx) − λ cos(λx))D,  λ > 0;  X(x) = D₀(x − 1),  λ = 0.  (14)
By imposing the boundary value conditions BA^j X(1) + CA^j X'(1) = 0, 0 ≤ j ≤ p−1, one gets that the vector D must satisfy
[(λ²C + B) sin(λ) + λ(C − B) cos(λ)]A^j D = 0,  0 ≤ j ≤ p−1,  λ > 0,  (15)
and
CD₀ = 0,  λ = 0.  (16)
In order to obtain nonzero solutions of problem (9), for λ > 0, the vector D must be nonzero. By (15) one gets the condition
H_λ = (λ²C + B) sin(λ) + λ cos(λ)(C − B) is singular.  (17)
Under hypothesis (11) we can write
H_λ = sin(λ)[(λ² − ρ₀)C + B + ρ₀C] + λ cos(λ)[(ρ₀ + 1)C − (B + ρ₀C)],  λ > 0,  (18)
(B + ρ₀C)^{−1}H_λ = sin(λ)[(λ² − ρ₀)(B + ρ₀C)^{−1}C + I] + λ cos(λ)[(ρ₀ + 1)(B + ρ₀C)^{−1}C − I].  (19)
Under hypothesis (12), taking values of λ > 0 satisfying
sin(λ)(1 + (λ² − ρ₀)β₀) + λ cos(λ)((1 + ρ₀)β₀ − 1) = 0,  (20)
by (19) and the spectral mapping theorem [8, p. 569], the matrix (B + ρ₀C)^{−1}H_λ is singular. Hence H_λ is singular for values of λ > 0 satisfying (20). Note that by (13), if λ > 0 satisfies Eq. (20) one gets sin(λ) ≠ 0, and Eq. (20) is equivalent to
λ cot(λ) = aλ² + 1 + a,  a = β₀/(1 − β₀(1 + ρ₀)),  λ > 0.  (21)
It is easy to show that Eq. (21) has an infinite sequence of solutions {λ_k} whose location depends on the parameter a in the following way:
Case 1: (a > 0)
kπ < λ_k < (2k + 1)π/2,  k ≥ 1.
Case 2: (a = 0)
λ₀ = 0 and kπ < λ_k < (2k + 1)π/2,  k ≥ 1.
Case 3: (−4/(4 + π²) < a < 0). Here we have infinitely many subcases. Let
J = (−4/(4 + π²), 0) = ⋃_{j≥0} J̇_j,  J̇_j = (−4/(4 + (2j + 1)²π²), −4/(4 + (2j + 3)²π²)).
Then, if
a = −4/(4 + (2j + 1)²π²),
one gets the sequence {λ_{k,j}}_{k≥1} of solutions of (21) satisfying
0 < λ_{1,j} < π/2,  kπ < λ_{k,j} < (2k + 1)π/2 for 2 ≤ k ≤ j,  λ_{j+1,j} = (2j + 1)π/2,  and
(2k + 1)π/2 < λ_{k,j} < (k + 1)π for k > j + 1.
If a ∈ J̇_j, j ≥ 0, then the solution sequence {λ_{k,j}}_{k≥1} of (21) is located as follows:
0 < λ_{1,j} < π/2,  kπ < λ_{k,j} < (2k + 1)π/2 for 2 ≤ k ≤ j + 1,  and
(2k + 1)π/2 < λ_{k,j} < (k + 1)π for k > j + 1.
Case 4: (a = −4/(4 + π²))
λ₁ = π/2,  (2k − 1)π/2 < λ_k < kπ,  k ≥ 2.
Case 5: (a < −4/(4 + π²))
(2k − 1)π/2 < λ_k < kπ,  k ≥ 1.
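Once the bracketing intervals above are known, the solutions of (21) are easy to approximate numerically. The following sketch (plain Python; the value a = 0.7 is an arbitrary illustration of Case 1) locates the first few roots of λ cot(λ) = aλ² + 1 + a by bisection on (kπ, (2k + 1)π/2), where the function λ cot(λ) − aλ² − 1 − a decreases from +∞ to a negative value:

```python
import math

def eigenvalue_roots(a, kmax):
    """Approximate the roots of lambda*cot(lambda) = a*lambda^2 + 1 + a.
    For a > 0 (Case 1) the k-th root lies in (k*pi, (2k+1)*pi/2)."""
    def f(lam):
        return lam / math.tan(lam) - a * lam**2 - 1 - a
    roots = []
    for k in range(1, kmax + 1):
        lo = k * math.pi + 1e-9                 # f(lo) > 0 (cot blows up to +inf)
        hi = (2 * k + 1) * math.pi / 2 - 1e-9   # f(hi) < 0 (cot vanishes)
        for _ in range(200):                    # plain bisection
            mid = 0.5 * (lo + hi)
            if f(mid) > 0:
                lo = mid
            else:
                hi = mid
        roots.append(0.5 * (lo + hi))
    return roots

lams = eigenvalue_roots(a=0.7, kmax=5)
residuals = [abs(l / math.tan(l) - 0.7 * l**2 - 1.7) for l in lams]
```

Bisection is used instead of a faster method because cot(λ) has poles at the bracket endpoints, where Newton-type iterations can escape the interval.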
The following lemma provides information about the eigenvalues β₀ of the matrix (B + ρ₀C)^{−1}C verifying (12).
Lemma 2.1. Let ρ₀ and β₀ be complex numbers satisfying conditions (10) and (11), respectively. Assume that β₀ satisfies one of the following conditions:
(i) β₀ = 0;
(ii) β₀ ≠ 0 and β₀ = 1/(α₀ + i Im(ρ₀)), where α₀ is a real eigenvalue of the matrix ((B + ρ₀C)^{−1}C)^D − i Im(ρ₀)I satisfying
α₀ ≠ 1 + Re(ρ₀).  (22)
Then 1 − β₀(1 + ρ₀) ≠ 0 and β₀/(1 − (1 + ρ₀)β₀) is a real number.
Proof. If β₀ = 0 the result is immediate. Under hypothesis (ii) it follows that
1/β₀ = α₀ + i Im(ρ₀),  (23)
and 1/β₀ ∈ σ(((B + ρ₀C)^{−1}C)^D). By (8) it follows that β₀ ∈ σ((B + ρ₀C)^{−1}C). By (22) one gets 1 − β₀(1 + ρ₀) = 1 − (ρ₀ + 1)/(α₀ + i Im(ρ₀)) ≠ 0 and by (23)
β₀/(1 − (ρ₀ + 1)β₀) − β̄₀/(1 − (ρ̄₀ + 1)β̄₀) = [1/β̄₀ − 1/β₀ − (ρ̄₀ − ρ₀)] / ([1/β₀ − (ρ₀ + 1)][1/β̄₀ − (ρ̄₀ + 1)]) = 0.
Remark 1. Note that if ρ₀ ∈ R then β₀ is any real eigenvalue of the matrix (B + ρ₀C)^{−1}C.
Let F(ρ₀, β₀) be the eigenvalue set of problem (9) and note that 0 ∈ F(ρ₀, β₀) if and only if λ = 0 is an eigenvalue, that is, if C is a singular matrix. If λ > 0 satisfies (21), by (18) one gets
λ cot(λ) − 1 = β₀(λ² − ρ₀ + (ρ₀ + 1)λ cot(λ)),  (24)
(1/sin(λ))H_λ = (λ² − ρ₀ + (ρ₀ + 1)λ cot(λ))C + (1 − λ cot(λ))(B + ρ₀C),
(1/sin(λ))H_λ = (λ² − ρ₀ + λ cot(λ)(1 + ρ₀))[C − β₀(B + ρ₀C)].  (25)
Note that if λ ∈ F(ρ₀, β₀) with λ > 0, then λ² − ρ₀ + (ρ₀ + 1)λ cot(λ) ≠ 0, because otherwise by (24), λ cot(λ) − 1 = 0, and substituting λ cot(λ) = 1 we would have λ² + 1 = 0, contradicting that λ > 0. Hence (25) can be written in the equivalent form
[1/((λ² − ρ₀ + λ cot(λ)(ρ₀ + 1)) sin(λ))] H_λ = C − β₀(B + ρ₀C).  (26)
Let G(ρ₀, β₀) be the matrix in C^{mp×m} defined by the block column
G(ρ₀, β₀) = [ C − β₀(B + ρ₀C) ; (C − β₀(B + ρ₀C))A ; … ; (C − β₀(B + ρ₀C))A^{p−1} ].  (27)
Condition (15) is equivalent to the condition
G(ρ₀, β₀)D = 0,  λ ∈ F(ρ₀, β₀).  (28)
Eq. (28) admits nonzero vector solutions D ∈ C^m if
rank G(ρ₀, β₀) < m.  (29)
Thus, if D ≠ 0 satisfies (28), the vector functions
X(x) = [sin(λx) − λ cos(λx)]D  (30)
are eigenfunctions of problem (9). If C is a singular matrix, then from (16), λ = 0 is also an eigenvalue of problem (9) and if CD₀ = 0, D₀ ≠ 0, the function
X₀(x) = (x − 1)D₀,  D₀ ∈ C^m,  CD₀ = 0,  (31)
is an eigenfunction of problem (9). Suppose that
(1 + ρ₀)β₀ = 1.  (32)
Substituting this condition into (20) one gets
(λ² + 1)β₀ sin(λ) = 0,  (33)
and since β₀ ≠ 0, by (33) it follows that
sin(λ) = 0,  that is,  λ = λ_k = kπ,  k ≥ 1.  (34)
By (17) and the spectral mapping theorem [8, p. 569], the matrix H_λ is singular if and only if C − B is singular, and Eq. (28) takes the form
(C − B)A^j D = 0,  0 ≤ j ≤ p−1,  λ > 0.  (35)
By (34) the positive eigenvalue set of problem (9) in this case is F(ρ₀, β₀) = {kπ: k ≥ 1} and the corresponding eigenfunctions are
X_k(x) = (sin(kπx) − kπ cos(kπx))D_k,
G(ρ₀, β₀)D_k = [ C − B ; (C − B)A ; … ; (C − B)A^{p−1} ]D_k = 0,  D_k ≠ 0.  (36)
Summarizing, the following result has been established:
Theorem 2.1. Let p ≥ 1 be an integer; assume that the pencil B + ρC is regular and let ρ₀ be defined by (11). Assume that β₀ is an eigenvalue of (B + ρ₀C)^{−1}C satisfying (13) and that the matrix G(ρ₀, β₀) defined by (27) satisfies (29). Then problem (9) admits a sequence of real non-negative eigenvalues F(ρ₀, β₀). If λ ∈ F(ρ₀, β₀) is an eigenvalue, the associated eigenfunction set is given by (14), where D is a nonzero m-dimensional vector lying in Ker G(ρ₀, β₀). The explicit expression for D is given by
D = (I − G(ρ₀, β₀)†G(ρ₀, β₀))S,  (37)
where S is a nonzero arbitrary vector in C^m.
Proof. The proof is a consequence of the previous comments and Theorem 2.3.2 of [26], which provides the general solution of G(ρ₀, β₀)D = 0 in the form (37).
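Formula (37) translates directly into a few lines of linear algebra. The following sketch (assuming NumPy; the matrices A, B, C, the values ρ₀ = 0, β₀ = 0 and p = m are hypothetical data, chosen so that β₀ = 0 is an eigenvalue of (B + ρ₀C)^{−1}C and Ker C is invariant by A) builds the block matrix G(ρ₀, β₀) of (27) and a nonzero D ∈ Ker G(ρ₀, β₀):

```python
import numpy as np

def kernel_vector(A, B, C, rho0, beta0, S):
    """Stack G(rho0, beta0) = [(C - beta0*(B + rho0*C)) A^j], j = 0..p-1 (here p = m),
    and project S onto Ker G via D = (I - G^+ G) S, as in (37)."""
    m = A.shape[0]
    N = C - beta0 * (B + rho0 * C)
    G = np.vstack([N @ np.linalg.matrix_power(A, j) for j in range(m)])
    D = (np.eye(m) - np.linalg.pinv(G) @ G) @ S
    return G, D

A = np.diag([1.0, 2.0, 3.0])
B = np.eye(3)
C = np.diag([1.0, 1.0, 0.0])        # singular: Ker C = span{e3}, invariant under A
G, D = kernel_vector(A, B, C, rho0=0.0, beta0=0.0, S=np.ones(3))
ok = np.allclose(G @ D, 0) and np.linalg.norm(D) > 0
```

For a generic S the projected vector D is nonzero exactly when (29) holds, i.e., when rank G(ρ₀, β₀) < m.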
The following result shows that the eigenvalues and eigenfunctions of problem (9) are independent of the chosen number ρ₀ verifying (11). Properties (13) or (32), and (29), are also invariant.
Theorem 2.2. Let ρ₀ ≠ ρ₁ be complex numbers such that B + ρ₀C and B + ρ₁C are invertible matrices in C^{m×m}; then the following properties hold:
(i) If β₀ ∈ σ((B + ρ₀C)^{−1}C), then 1 − β₀(ρ₀ − ρ₁) ≠ 0 and
β₁ = β₀/(1 − β₀(ρ₀ − ρ₁)) ∈ σ((B + ρ₁C)^{−1}C).
(ii) If ρ₀, ρ₁, β₀, β₁ are defined as in (i), then (ρ₀, β₀) satisfies (13) if and only if (ρ₁, β₁) satisfies (13). Furthermore, the eigenvalues of (9) are invariant when (ρ₁, β₁) replaces (ρ₀, β₀) in conditions (11)–(13).
(iii) Eigenfunctions of problem (9) corresponding to the pair (ρ₀, β₀) ∈ R² coincide with those associated to (ρ₁, β₁) by (i). Furthermore, Ker((B + ρ₀C)^{−1}C − β₀I) = Ker((B + ρ₁C)^{−1}C − β₁I) and Ker G(ρ₀, β₀) = Ker G(ρ₁, β₁).
Proof. (i) If β₀ = 0 ∈ σ((B + ρ₀C)^{−1}C), then the matrix C is singular and hence β₁ = 0 also belongs to σ((B + ρ₁C)^{−1}C). If β₀ ≠ 0, by the properties of determinants one gets 0 = |(B + ρ₀C)^{−1}C − β₀I|, and hence
0 = |C − β₀(B + ρ₀C)| = |C − β₀(B + ρ₁C + (ρ₀ − ρ₁)C)| = |[1 − β₀(ρ₀ − ρ₁)]C − β₀(B + ρ₁C)|.  (38)
Since B + ρ₁C is invertible, by the last equation it follows that 1 − β₀(ρ₀ − ρ₁) ≠ 0, because otherwise 0 = |−β₀(B + ρ₁C)|, contradicting the invertibility of B + ρ₁C. By (38) we can write
0 = |C − (β₀/(1 − β₀(ρ₀ − ρ₁)))(B + ρ₁C)| = |B + ρ₁C| · |(B + ρ₁C)^{−1}C − (β₀/(1 − β₀(ρ₀ − ρ₁)))I|.
Hence
β₀/(1 − β₀(ρ₀ − ρ₁)) = β₁ ∈ σ((B + ρ₁C)^{−1}C).
(ii) Note that if β₀ = 0 then β₁ = 0 and Eq. (20) is the same replacing ρ₀ by ρ₁. Thus F(ρ₀, 0) = F(ρ₁, 0). If β₀ ≠ 0, by part (i), β₁ ≠ 0 and
β₁/(1 − (ρ₁ + 1)β₁) = 1/(1/β₁ − (ρ₁ + 1)) = 1/(1/β₀ − (ρ₀ − ρ₁) − (ρ₁ + 1)) = 1/(1/β₀ − (ρ₀ + 1)) = β₀/(1 − β₀(ρ₀ + 1)),
with
(1 − ρ₁β₁)/(1 − (ρ₁ + 1)β₁) = (1/β₁ − ρ₁)/(1/β₁ − (ρ₁ + 1)) = (1/β₀ − ρ₀)/(1/β₀ − (ρ₀ + 1)) = (1 − ρ₀β₀)/(1 − (ρ₀ + 1)β₀).
Hence the coefficients of Eq. (20) are the same as the corresponding coefficients when ρ₀ and β₀ are replaced by ρ₁ and β₁, respectively. This proves that F(ρ₀, β₀) = F(ρ₁, β₁), and these eigenvalue sets of problem (9) are invariant when (ρ₁, β₁) replaces (ρ₀, β₀).
(iii) By Theorem 2.1 and parts (i) and (ii) of this theorem one gets that the eigenfunctions of problem (9) are invariant replacing (ρ₀, β₀) by (ρ₁, β₁). The vectors D appearing in (28) for F(ρ₀, β₀) are the same as those appearing for F(ρ₁, β₁). In order to prove this we show that Ker G(ρ₀, β₀) = Ker G(ρ₁, β₁). First we prove that Ker((B + ρ₀C)^{−1}C − β₀I) = Ker((B + ρ₁C)^{−1}C − β₁I). Let y ≠ 0 be a vector in C^m such that ((B + ρ₀C)^{−1}C − β₀I)y = 0. Hence
0 = [C − β₀(B + ρ₀C)]y = [C − β₀(B + ρ₁C + (ρ₀ − ρ₁)C)]y = [(1 − β₀(ρ₀ − ρ₁))C − β₀(B + ρ₁C)]y,
0 = [C − (β₀/(1 − β₀(ρ₀ − ρ₁)))(B + ρ₁C)]y = [C − β₁(B + ρ₁C)]y,
and therefore ((B + ρ₁C)^{−1}C − β₁I)y = 0. By the definition of G(ρ₀, β₀) given by (27) we have
C − β₀(B + ρ₀C) = (1 − β₀(ρ₀ − ρ₁))[C − β₁(B + ρ₁C)],
G(ρ₀, β₀) = (1 − β₀(ρ₀ − ρ₁))G(ρ₁, β₁).  (39)
Hence G(ρ₀, β₀)D = 0 if and only if G(ρ₁, β₁)D = 0, and the result is established.
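The invariance just proved rests on the purely algebraic identity (39), which holds for any matrices and can be checked numerically for generic data. In the following sketch (assuming NumPy; the random matrices are illustrative and β₀ is taken as any eigenvalue of (B + ρ₀C)^{−1}C, so the real-eigenvalue hypotheses of the theorem are not enforced), the two stacked matrices agree up to the scalar factor 1 − β₀(ρ₀ − ρ₁):

```python
import numpy as np

rng = np.random.default_rng(1)
m = 3
A = rng.standard_normal((m, m))
B = rng.standard_normal((m, m))
C = rng.standard_normal((m, m))

rho0, rho1 = 0.0, 0.5
beta0 = np.linalg.eigvals(np.linalg.inv(B + rho0 * C) @ C)[0]
beta1 = beta0 / (1 - beta0 * (rho0 - rho1))       # part (i) of Theorem 2.2

def G(rho, beta):
    """Block column G(rho, beta) of (27) with p = m."""
    N = C - beta * (B + rho * C)
    return np.vstack([N @ np.linalg.matrix_power(A, j) for j in range(m)])

# Identity (39): G(rho0, beta0) = (1 - beta0*(rho0 - rho1)) * G(rho1, beta1)
ok = np.allclose(G(rho0, beta0), (1 - beta0 * (rho0 - rho1)) * G(rho1, beta1))
```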
3. Construction of an exact series solution
Let us seek solutions v(x,t) of the boundary value problem (1)–(4) under hypotheses (11)–(13). The separation of variables technique suggests
v(x,t) = T(t)X(x),  T(t) ∈ C^{m×m},  X(x) ∈ C^m,  λ ≥ 0,  (40)
where
T′(t) + λ²AT(t) = 0,  t ≥ 0,  (41)
X′′(x) + λ²X(x) = 0,  0 < x < 1,
X(0) + X′(0) = 0,
BX(1) + CX′(1) = 0.  (42)
The solution of (41) satisfying T(0) = I is T(t) = exp(−λ²At). Although v(x,t) defined by (40) satisfies (1) and (2),
∂v/∂t − A ∂²v/∂x² = T′(t)X(x) − AT(t)X′′(x) = −λ²AT(t)X(x) + λ²AT(t)X(x) = 0,
v(0,t) + ∂v/∂x(0,t) = T(t)(X(0) + X′(0)) = 0,
condition (3) is not guaranteed because
Bv(1,t) + C ∂v/∂x(1,t) = BT(t)X(1) + CT(t)X′(1) = B exp(−λ²At)X(1) + C exp(−λ²At)X′(1)  (43)
and the last expression does not vanish in general because the matrices B and C do not commute with A. However, if X(x) satisfies (42) together with the conditions
BA^j X(1) + CA^j X′(1) = 0,  1 ≤ j ≤ p−1,  (44)
where p is the degree of the minimal polynomial of A, that is, problem (9) for this value of p, then we show now that v(x,t) defined by (40) satisfies (1)–(3). In fact, for each t ≥ 0, the matrix exponential T(t) = exp(−λ²At) can be expressed as a matrix polynomial of A [8, p. 557],
exp(−λ²At) = Σ_{j=0}^{p−1} b_j(t)A^j,  (45)
where b_j(t), 0 ≤ j ≤ p−1, are scalars. Under hypothesis (44), by (43) and (45) one gets
Bv(1,t) + C ∂v/∂x(1,t) = Σ_{j=0}^{p−1} b_j(t)[BA^j X(1) + CA^j X′(1)] = 0.  (46)
Assume the hypotheses and notation of Theorem 2.1, let F(ρ₀, β₀) = {λ_n}_{n≥1} be the eigenvalue set of problem (9), and consider the candidate series solution of problem (1)–(4) of the form
U(x,t) = X₀(x) + Σ_{n≥1} exp(−λ_n²At)X_n(x),  (47)
where the term X₀(x) appears only when 0 ∈ F(ρ₀, β₀), and where
X_n(x) = (sin(λ_n x) − λ_n cos(λ_n x))D_n,  X₀(x) = (x − 1)D₀,  (48)
with vectors D_n, D₀ in C^m so that the initial condition (4) holds true. By imposing (4) on (47) one gets that these vectors must satisfy
f(x) = X₀(x) + Σ_{n≥1} X_n(x),  0 ≤ x ≤ 1.  (49)
In order to guarantee well-posedness let us assume that the scalar Sturm–Liouville expansion
f_j(x) = d_{0,j}(x − 1) + Σ_{n≥1} d_{n,j}(sin(λ_n x) − λ_n cos(λ_n x))  (50)
is convergent to f_j(x) in [0,1] for each j with 1 ≤ j ≤ m; by [9,14] this holds if
f(x) is twice continuously differentiable in [0,1] with f(0) + f′(0) = 0, (1 − ρ₀β₀)f(1) + β₀f′(1) = 0.  (51)
The coefficients in (50) are given by
D_n = (∫₀¹ (sin(λ_n x) − λ_n cos(λ_n x))² dx)^{−1} ∫₀¹ (sin(λ_n x) − λ_n cos(λ_n x))f(x) dx,  n ≥ 1,  (52)
D₀ = 3 ∫₀¹ (x − 1)f(x) dx.  (53)
Now we study conditions so that the vectors D_n = (d_{n,1}, d_{n,2}, …, d_{n,m})^T, D₀ = (d_{0,1}, d_{0,2}, …, d_{0,m})^T defined by (52)–(53) satisfy (28) and (31), respectively. Assume that
(C − β₀(B + ρ₀C))f(x) = 0,  0 ≤ x ≤ 1,  (54)
and
Ker(C − β₀(B + ρ₀C)) is an invariant subspace of A.  (55)
Taking into account that under hypothesis (10) we always have real values ρ₀ satisfying (11), from Theorems 2.1 and 2.2, without losing generality we may assume ρ₀ ∈ R and that β₀ is a real eigenvalue of (B + ρ₀C)^{−1}C. Under hypotheses (54) and (55) one gets
G(ρ₀, β₀)D_n = 0,  λ_n ∈ F(ρ₀, β₀),  n ≥ 0.  (56)
Finally we prove that under hypothesis (5) the series (47) with coefficients defined by (52)–(53) is a solution of problem (1)–(4). By inequality (7) it is easy to prove that in any set
D(t₀) = {(x,t): 0 ≤ x ≤ 1, t ≥ t₀ > 0},
series (47), as well as those appearing after twice termwise partial differentiation with respect to x and once with respect to t, namely
Σ_{n≥1} λ_n² e^{−λ_n²At}X_n(x),  Σ_{n≥1} (−λ_n²)A e^{−λ_n²At}X_n(x),
are uniformly convergent in D(t₀). By the differentiation theorem of functional series [2, p. 403], the series defined by (47), (52), (53) is twice partially differentiable with respect to x, once with respect to t, and satisfies (1)–(4). Summarizing, by the convergence theorems of Sturm–Liouville series expansions [9,14], the following result has been established.
Theorem 3.1. With the hypotheses and the notation of Theorem 2.1, assume that f(x) satisfies (51) and (54), that A is a positive stable matrix, and that
[C − β₀(B + ρ₀C)]A{I − [C − β₀(B + ρ₀C)]†[C − β₀(B + ρ₀C)]} = 0.  (57)
Then U(x,t) defined by (47), (52), (53) is a solution of problem (1)–(4).
Proof. By the previous comments and the equivalence of conditions (57) and (55) the result is established.
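Condition (57) is just the pseudoinverse reformulation of the invariance (55), and it is cheap to test numerically. A sketch (assuming NumPy; the matrices are hypothetical, with N standing for C − β₀(B + ρ₀C) as in Section 2):

```python
import numpy as np

# Hypothetical data for which Ker N is invariant under A.
A = np.diag([1.0, 2.0, 3.0])
B = np.eye(3)
C = np.diag([1.0, 1.0, 0.0])
rho0, beta0 = 0.0, 0.0
N = C - beta0 * (B + rho0 * C)

P = np.eye(3) - np.linalg.pinv(N) @ N      # projector onto Ker N
ok_invariant = np.allclose(N @ A @ P, 0)   # condition (57) <=> A(Ker N) in Ker N

# A matrix that moves Ker N = span{e3} out of itself violates (57):
A2 = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 1.0], [0.0, 0.0, 1.0]])
ok2 = np.allclose(N @ A2 @ P, 0)           # False for this A2
```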
Now we construct a series solution of problem (1)–(4) under weaker hypotheses on the function f(x) appearing in (4). Assume that, apart from hypothesis (11),
the matrix (B + ρ₀C)^{−1}C has k different real eigenvalues β₀(1), β₀(2), …, β₀(k).  (58)
Let R and R_i be the matrices defined by
R_i = Π_{j=1, j≠i}^{k} [(B + ρ₀C)^{−1}C − β₀(j)I],  1 ≤ i ≤ k,
R = Π_{j=1}^{k} [(B + ρ₀C)^{−1}C − β₀(j)I].  (59)
If E = Ker R, then by the decomposition theorem [10, p. 536] we have
E = Ker[(B + ρ₀C)^{−1}C − β₀(1)I] ⊕ ··· ⊕ Ker[(B + ρ₀C)^{−1}C − β₀(k)I].  (60)
Note that the polynomials
Q_i(x) = Π_{j=1, j≠i}^{k} (x − β₀(j)),  1 ≤ i ≤ k,
are coprime, and by Bezout's theorem [10, p. 538] there exist numbers c₁, c₂, …, c_k such that
1 = Σ_{i=1}^{k} c_i Q_i(x) = Q(x).
Taking x = β₀(i) one gets that
c_i = (Π_{j=1, j≠i}^{k} (β₀(i) − β₀(j)))^{−1}.
Q(x) is the Lagrange interpolating polynomial and I = Σ_{i=1}^{k} c_i R_i. Hence one gets the decomposition
f(x) = Σ_{i=1}^{k} c_i R_i f(x) = Σ_{i=1}^{k} g_i(x),  g_i(x) = c_i R_i f(x).  (61)
If Rf(x) = 0, 0 ≤ x ≤ 1, then g_i(x) is the projection of f(x) on the subspace Ker((B + ρ₀C)^{−1}C − β₀(i)I) since
[(B + ρ₀C)^{−1}C − β₀(i)I]g_i(x) = c_i Rf(x).  (62)
Under the hypothesis
Rf(x) = 0,  0 ≤ x ≤ 1,  (63)
by (62) it follows that
[(B + ρ₀C)^{−1}C − β₀(i)I]g_i(x) = 0,  0 ≤ x ≤ 1.  (64)
Assume that g_i(x) defined by (61) satisfies
g_i(x) is twice continuously differentiable in [0,1] with
g_i(0) + g_i′(0) = 0,  (1 − ρ₀β₀(i))g_i(1) + β₀(i)g_i′(1) = 0,  1 ≤ i ≤ k,  (65)
and
[C − β₀(i)(B + ρ₀C)]A[I − [C − β₀(i)(B + ρ₀C)]†[C − β₀(i)(B + ρ₀C)]] = 0,  1 ≤ i ≤ k.  (66)
Under these conditions, (11) and the positive stability of the matrix A, a series solution u_i(x,t) of the problem
(P_i)  ∂u_i/∂t = A ∂²u_i/∂x²,  u_i(0,t) + ∂u_i/∂x(0,t) = 0,  Bu_i(1,t) + C ∂u_i/∂x(1,t) = 0,  u_i(x,0) = g_i(x),
is given by Theorem 3.1. By (61), the function
U(x,t) = Σ_{i=1}^{k} u_i(x,t)  (67)
is then a solution of problem (1)–(4). Summarizing, the following result has been established.
Theorem 3.2. Let A be a positive stable matrix; assume that the pencil B + ρC is regular and let ρ₀ be a real number satisfying (11). Suppose that the matrix (B + ρ₀C)^{−1}C has k different real eigenvalues β₀(1), β₀(2), …, β₀(k) such that condition (66) holds for 1 ≤ i ≤ k. Let R and R_i be the matrices defined by (59) and let E be the subspace defined by (60). Let f(x) be a twice continuously differentiable function on [0,1] satisfying (63) and
R_i f(0) + R_i f′(0) = 0,
(1 − ρ₀β₀(i))R_i f(1) + β₀(i)R_i f′(1) = 0,  1 ≤ i ≤ k.  (68)
Then conditions (65) hold true and problem (1)–(4) admits a solution given by (67), where u_i(x,t) is the solution of problem (P_i) constructed by Theorem 3.1.
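The Bezout–Lagrange construction of the projections g_i(x) = c_i R_i f(x) can be illustrated with a small numerical sketch (assuming NumPy; H stands for (B + ρ₀C)^{−1}C and the diagonal example with eigenvalues {2, 0} is hypothetical; here Rf = 0 automatically, since the minimal polynomial of H is x(x − 2) and hence R = 0):

```python
import numpy as np

H = np.diag([2.0, 2.0, 0.0])      # stands for (B + rho0*C)^{-1} C, k = 2 eigenvalues
betas = [2.0, 0.0]

# R_i = prod_{j != i} (H - beta_j I);  c_i = 1 / prod_{j != i} (beta_i - beta_j)
R, c = [], []
for i, bi in enumerate(betas):
    Ri, ci = np.eye(3), 1.0
    for j, bj in enumerate(betas):
        if j != i:
            Ri = Ri @ (H - bj * np.eye(3))
            ci *= (bi - bj)
    R.append(Ri)
    c.append(1.0 / ci)

# Bezout/Lagrange identity: the weighted R_i sum to the identity, giving f = sum c_i R_i f.
ok_identity = np.allclose(sum(ci * Ri for ci, Ri in zip(c, R)), np.eye(3))

# Each projection c_i R_i f lies in Ker(H - beta_i I), as in (64):
f = np.array([1.0, 2.0, 3.0])
ok_proj = all(np.allclose((H - bi * np.eye(3)) @ (ci * Ri @ f), 0)
              for bi, ci, Ri in zip(betas, c, R))
```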
Example 3.1. Consider problem (1)–(4) where
A =
Since B is invertible, the pencil B + ρC is regular and condition (11) is satisfied with ρ₀ = 0. Let
In this case we have that hypotheses (13) hold true for β₀(i), i = 1, 2, and
In this case we have
R₁† =
and thus the subspaces Ker(H − 2I) and Ker H are invariant by the matrix A. Note that A is positive stable because σ(A) = {1, 2}. Condition (68) in this case takes the form
4. Analytic-numerical solutions with a priori error bounds
The series solution provided in Section 3 presents some computational drawbacks. First, the series is infinite. Second, the eigenvalues are not exactly computable, in spite of well-known efficient algorithms, see [24,25,18]. Finally, the computation of matrix exponentials is not an easy task [21,29]. In this section we address the following question: given an admissible error ε > 0 and a domain D(t₀,t₁) = {(x,t): 0 ≤ x ≤ 1, t₀ ≤ t ≤ t₁}, t₀ > 0, how to construct an approximation avoiding the above inconveniences and whose error with respect to the exact solution is less than ε, uniformly in D(t₀,t₁). It is sufficient to develop the approach when the exact series solution is given by (67) with k = 1.
With the notation of Section 3, we have ‖D_n‖₂² = Σ_{j=1}^{m} |d_{n,j}|², and by Parseval's inequality [3, p. 223; 6] one gets
|d_{n,j}|² ≤ ∫₀¹ |f_j(x)|² dx,  n ≥ 0,  1 ≤ j ≤ m,  (71)
‖D_n‖₂² ≤ Σ_{j=1}^{m} ∫₀¹ |f_j(x)|² dx = ∫₀¹ ‖f(x)‖₂² dx = M,  n ≥ 0.  (72)
By (48) and (72) one gets
‖X_n(x)‖₂ ≤ (1 + λ_n)M^{1/2},  n ≥ 0.  (73)
By (7), for t₁ ≥ t ≥ t₀ it follows that
‖e^{−λ_n²At}‖₂ ≤ e^{−β(A)t₀λ_n²} Σ_{j=0}^{m−1} ((‖A‖t₁√m)^j / j!) λ_n^{2j}.  (74)
Let h_k and φ_k be the scalar functions defined for s > 0 by
h_k(s) = e^{−s²β(A)t₀} s^k,  φ_k(s) = (k + 2)ln(s) − s²β(A)t₀,  0 ≤ k ≤ 2m − 1.  (75)
Note that
φ_k′(s) < 0 for s > ((k + 2)/(2t₀β(A)))^{1/2} = s_k,  0 ≤ k ≤ 2m − 1.  (76)
Take s_k′ ≥ s_k such that
(k + 2)ln(s) − s²β(A)t₀ < 0,  s ≥ s_k′;  (77)
then by (77) it follows that
h_k(s) = s^k e^{−s²β(A)t₀} < s^{−2},  s ≥ s_k′,  0 ≤ k ≤ 2m − 1.  (78)
Since lim_{n→+∞} λ_n = +∞ and λ_n < λ_{n+1}, let n₀ be the first positive integer so that
λ_{n₀} ≥ s* = max{s_k′: 0 ≤ k ≤ 2m − 1}.  (79)
By (72)–(79) it follows that the tail of series (47) corresponding to n > n₁ can be made less than ε/3, uniformly in D(t₀,t₁), because in each of the five cases quoted in Section 2 the eigenvalues λ_n satisfy the growth estimate given in expression (84). Note that, depending on the five cases quoted in Section 2 and the interval where the eigenvalues are located, the constants appearing in (84) and (89) are always available. Let λ̃_n be an approximation of the eigenvalue λ_n. It is easy to show that
|sin(λ̃_n x) − λ̃_n cos(λ̃_n x) − sin(λ_n x) + λ_n cos(λ_n x)| ≤ |λ_n − λ̃_n|(2 + λ_n),  0 ≤ x ≤ 1,
and by the Cauchy–Schwarz inequality for integrals an analogous bound holds for the approximate coefficient vectors. Let us write Ũ(x,t) for the truncated series constructed from the approximate eigenvalues λ̃_n, 1 ≤ n ≤ n₁; then by (84), (86), (98) and (99) it follows that ‖u(x,t) − Ũ(x,t)‖₂ < 2ε/3 in D(t₀,t₁). Now,
take the first positive integer q₀ such that
(λ̃_{n₁}² t₁‖A‖)^{q+1} / (q + 1)! < ε / (6 n₁ e^{‖A‖t₁λ̃_{n₁}²} (1 + λ̃_{n₁})² (∫₀¹ ‖f(x)‖₂² dx)^{1/2});  (103)
then by (86), (101) and (103) it follows that ũ(x,t;n₁,q₀) defined by
ũ(x,t;n₁,q₀) = X₀(x) + Σ_{n=1}^{n₁} Σ_{k=0}^{q₀} ((−λ̃_n² t A)^k / k!) X̃_n(x),  if 0 ∈ F(ρ₀,β₀),
ũ(x,t;n₁,q₀) = Σ_{n=1}^{n₁} Σ_{k=0}^{q₀} ((−λ̃_n² t A)^k / k!) X̃_n(x),  if 0 ∉ F(ρ₀,β₀),  (104)
satisfies
‖Ũ(x,t) − ũ(x,t;n₁,q₀)‖₂ < ε/3,  t₀ ≤ t ≤ t₁,  0 ≤ x ≤ 1,  (105)
and by (85), (102) and (105) one concludes
‖u(x,t) − ũ(x,t;n₁,q₀)‖₂ < ε,  t₀ ≤ t ≤ t₁,  0 ≤ x ≤ 1.  (106)
Summarizing, the following result has been established.
Theorem 4.1. With the hypotheses and the notation of Theorem 3.1, and assuming that f ≠ 0, let the constants of the eigenvalue bounds be defined by (89). Let ε > 0, t₀ > 0, D(t₀,t₁) = {(x,t): 0 ≤ x ≤ 1, t₀ ≤ t ≤ t₁} and let n₁ be chosen so that (82) holds. Take the first positive integer q₀ satisfying (103). Then ũ(x,t;n₁,q₀) defined by (104) is an approximation of the exact series solution u(x,t) defined by (47), (52), (53) satisfying (106).
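The Taylor truncation in (104), with the order q₀ chosen from a factorial bound of type (103), can be sketched as follows (assuming NumPy; the matrix A, the values of λ, t₁ and the tolerance are illustrative, and choose_order uses the standard remainder bound ‖Σ_{k>q} X^k/k!‖ ≤ (‖X‖^{q+1}/(q+1)!) e^{‖X‖} rather than the exact constant of (103)):

```python
import math
import numpy as np

def truncated_exp(A, lam, t, q):
    """Partial Taylor sum sum_{k=0}^{q} (-lam^2 t A)^k / k!, the building block of (104)."""
    S = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, q + 1):
        term = term @ (-(lam**2) * t * A) / k
        S = S + term
    return S

def choose_order(a_norm, lam, t1, tol):
    """Smallest q with (lam^2 t1 ||A||)^(q+1)/(q+1)! * e^(lam^2 t1 ||A||) < tol."""
    x = lam**2 * t1 * a_norm
    q = 0
    while x**(q + 1) / math.factorial(q + 1) * math.exp(x) >= tol:
        q += 1
    return q

A = np.array([[1.0, 0.5], [0.0, 2.0]])       # positive stable: eigenvalues 1, 2
lam, t1, tol = 2.0, 0.5, 1e-10
q0 = choose_order(np.linalg.norm(A, 2), lam, t1, tol)
approx = truncated_exp(A, lam, t1, q0)
exact = truncated_exp(A, lam, t1, 200)       # reference: a very long partial sum
err = np.linalg.norm(exact - approx, 2)
```

Since the remainder bound is monotone in q, the loop in choose_order always terminates, and the resulting truncation error err stays below the prescribed tolerance.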
Acknowledgements
This work has been supported by Generalitat Valenciana grants GV-CCN-1005796, GV-97-CB-1263 and the Spanish D.G.I.C.Y.T grant PB96-1321-CO2-02.
References
[1] M.H. Alexander, D.E. Manolopoulos, A stable linear reference potential algorithm for solution of the quantum close-coupled equations in molecular scattering theory, J. Chem. Phys. 86 (1987) 2044–2050.
[2] T.M. Apostol, Mathematical Analysis, Addison-Wesley, Reading, MA, 1977.
[3] F.V. Atkinson, Discrete and Continuous Boundary Value Problems, Academic Press, New York, 1964.
[4] F.V. Atkinson, A.M. Krall, G.K. Leaf, A. Zettel, On the numerical computation of eigenvalues of Sturm–Liouville problems with matrix coefficients, Tech. Rep., Argonne National Laboratory, 1987.
[5] S.L. Campbell, C.D. Meyer Jr., Generalized Inverses of Linear Transformations, Pitman Pub. Co., London, 1979.
[6] E.A. Coddington, N. Levinson, Theory of Ordinary Differential Equations, McGraw-Hill, New York, 1967.
[7] J. Crank, The Mathematics of Diffusion, Second ed., Oxford Univ. Press, Oxford, 1995.
[8] N. Dunford, J. Schwartz, Linear Operators, Part I, Interscience, New York, 1957.
[11] G. Golub, C.F. Van Loan, Matrix Computations, The Johns Hopkins Univ. Press, Baltimore, 1989.
[12] L. Greenberg, A Prüfer Method for Calculating Eigenvalues of Self-Adjoint Systems of Ordinary Differential Equations, Parts 1 and 2, University of Maryland, Tech. Rep. TR91-24.
[13] T. Hueckel, M. Borsetto, A. Peano, Modelling of coupled thermo-elastoplastic-hydraulic response of clays subjected to nuclear waste heat, in: R.W. Lewis, E. Hinton, P. Bettess, B.A. Schrefler (Eds.), Numerical Methods in Transient and Coupled Problems, Wiley, New York, 1987, pp. 213–235.
[14] E.L. Ince, Ordinary Differential Equations, Dover, New York, 1927.
[15] L. Jodar, E. Ponsoda, Continuous numerical solution and error bounds for time dependent systems of partial differential equations: mixed problems, Comput. Math. Appl. 29 (8) (1995) 63–71.
[16] R.D. Levine, M. Shapiro, B. Johnson, Transition probabilities in molecular collisions: computational studies of rotational excitation, J. Chem. Phys. 52 (1) (1970) 1755–1766.
[17] J.V. Lill, T.G. Schmalz, J.C. Light, Imbedded matrix Green's functions in atomic and molecular scattering theory, J. Chem. Phys. 78 (7) (1983) 4456–4463.
[18] M. Marletta, Theory and Implementation of Algorithms for Sturm–Liouville Systems, Ph.D. Thesis, Royal Military College of Science, Cranfield, 1991.
[19] V.S. Melezhik, I.V. Puzynin, T.P. Puzynina, L.N. Somov, Numerical solution of a system of integro-differential equations arising in the quantum-mechanical three-body problem with Coulomb interaction, J. Comput. Phys. 54 (1984) 221–236.
[20] M.D. Mikhailov, M.N. Özişik, Unified Analysis and Solutions of Heat and Mass Diffusion, Wiley, New York, 1984.
[21] C.B. Moler, C.F. Van Loan, Nineteen dubious ways to compute the exponential of a matrix, SIAM Rev. 20 (1978) 801–836.
[22] F. Mrugala, D. Secrest, The generalized log-derivative method for inelastic and reactive collisions, J. Chem. Phys. 78 (10) (1983) 5954–5961.
[23] E. Navarro, E. Ponsoda, L. Jodar, A matrix approach to the analytic-numerical solution of mixed partial differential systems, Comput. Math. Appl. 30 (1) (1995) 99–109.
[24] J.D. Pryce, Numerical Solution of Sturm–Liouville Problems, Clarendon Press, Oxford, 1993.
[25] J.D. Pryce, M. Marletta, Automatic solution of Sturm–Liouville problems using the Pruess method, J. Comput. Appl. Math. 39 (1992) 57–78.
[26] C.R. Rao, S.K. Mitra, Generalized Inverse of Matrices and its Applications, Wiley, New York, 1971.
[27] W.T. Reid, Ordinary Differential Equations, Wiley, New York, 1971.
[28] M. Shapiro, G.G. Balint-Kurti, A new method for the exact calculation of vibrational-rotational energy levels of triatomic molecules, J. Chem. Phys. 71 (3) (1979) 1461–1469.
[29] R.B. Sidje, Expokit: a software package for computing matrix exponentials, ACM Trans. Math. Software 24 (1998) 130–156.