

1.4 Examples

EXAMPLE 1. This example follows up Example 4 of the previous section: given p > 0, q > 0, p ≫ q, determine the root

$$y = -p + \sqrt{p^2 + q}$$

with smallest absolute value of the quadratic equation

$$y^2 + 2py - q = 0.$$

Input data: p, q. Result: $y = \varphi(p, q) = -p + \sqrt{p^2 + q}$.

The problem was seen to be well conditioned for p > 0, q > 0. It was also shown that the relative input errors $\varepsilon_p$, $\varepsilon_q$ make the following contribution to the relative error of the result $y = \varphi(p, q)$:

$$\varepsilon_y = \frac{-p}{\sqrt{p^2 + q}}\,\varepsilon_p + \frac{p + \sqrt{p^2 + q}}{2\sqrt{p^2 + q}}\,\varepsilon_q.$$

Since

$$\left|\frac{-p}{\sqrt{p^2 + q}}\right| \le 1, \qquad \left|\frac{p + \sqrt{p^2 + q}}{2\sqrt{p^2 + q}}\right| \le 1,$$

the inherent error $\Delta^{(0)}y$ satisfies, to first order,

$$|\varepsilon_y^{(0)}| = \left|\frac{\Delta^{(0)}y}{y}\right| \le 3\,\mathrm{eps}.$$

We will now consider two algorithms for computing $y = \varphi(p, q)$.

Algorithm 1: $s := p^2$, $t := s + q$, $u := \sqrt{t}$, $y := -p + u$.

Obviously, p ≫ q causes cancellation when $y := -p + u$ is evaluated, and it must therefore be expected that the roundoff error

$$\Delta u := \varepsilon\sqrt{t} = \varepsilon\sqrt{p^2 + q},$$

generated during the floating-point calculation of the square root,

$$\mathrm{fl}(\sqrt{t}) = \sqrt{t}\,(1 + \varepsilon), \qquad |\varepsilon| \le \mathrm{eps},$$

will be greatly amplified. Indeed, the above error contributes the following term to the relative error of y:

$$\frac{\Delta u}{y} = \frac{\sqrt{p^2 + q}}{-p + \sqrt{p^2 + q}}\,\varepsilon = \frac{1}{q}\left(p\sqrt{p^2 + q} + p^2 + q\right)\varepsilon = k \cdot \varepsilon.$$

Since p, q > 0, the amplification factor k admits the following lower bound:

$$k > \frac{2p^2}{q} > 0,$$

which is large, since p ≫ q by hypothesis. Therefore, the proposed algorithm is not numerically stable, because the influence of rounding $\sqrt{p^2 + q}$ alone exceeds that of the inherent error $\varepsilon_y^{(0)}$ by an order of magnitude.

Algorithm 2: $s := p^2$, $t := s + q$, $u := \sqrt{t}$, $v := p + u$, $y := q/v$.

This algorithm does not cause cancellation when calculating $v := p + u$. The roundoff error $\Delta u = \varepsilon\sqrt{p^2 + q}$, which stems from rounding $\sqrt{p^2 + q}$, will be amplified according to the remainder map $\psi(u)$:

$$u \;\to\; p + u \;\to\; \frac{q}{p + u} =: \psi(u).$$

Thus it contributes the following term to the relative error of y:

$$\frac{1}{y}\,\frac{\partial\psi}{\partial u}\,\Delta u = \frac{-q}{y\,(p + u)^2}\,\Delta u = \frac{-q\sqrt{p^2 + q}}{\left(-p + \sqrt{p^2 + q}\right)\left(p + \sqrt{p^2 + q}\right)^2}\,\varepsilon = \frac{-\sqrt{p^2 + q}}{p + \sqrt{p^2 + q}}\,\varepsilon = k\varepsilon.$$

The amplification factor k remains small; indeed, $|k| < 1$, and Algorithm 2 is therefore numerically stable.

The following numerical results illustrate the difference between Algorithms 1 and 2. They were obtained using floating-point arithmetic with 40 binary mantissa places (about 13 decimal places), as will be the case in subsequent numerical examples.

p = 1000, q = 0.018 000 000 081.

Result y according to Algorithm 1: 0.900 030 136 108 · 10⁻⁵
Result y according to Algorithm 2: 0.899 999 999 999 · 10⁻⁵
Exact value of y:                  0.900 000 000 000 · 10⁻⁵
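The effect is easy to reproduce in any floating-point environment. Below is a minimal Python sketch of both algorithms (an illustration added here, not part of the original text); it runs in IEEE double precision, with a 53-bit rather than a 40-bit mantissa, so the digit counts differ from the table above, but Algorithm 1 still loses roughly half of the available digits to cancellation:

```python
import math

def root_alg1(p, q):
    # Algorithm 1: s := p^2, t := s + q, u := sqrt(t), y := -p + u.
    s = p * p
    t = s + q
    u = math.sqrt(t)
    return -p + u            # cancellation: u is very close to p when p >> q

def root_alg2(p, q):
    # Algorithm 2: s := p^2, t := s + q, u := sqrt(t), v := p + u, y := q/v.
    s = p * p
    t = s + q
    u = math.sqrt(t)
    v = p + u                # no cancellation: both summands are positive
    return q / v

p, q = 1000.0, 0.018000000081
print(root_alg1(p, q))       # accurate to only about half machine precision
print(root_alg2(p, q))       # agrees with the exact value 0.9e-5 to full precision
```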

EXAMPLE 2. For given fixed x and integer k, the value of cos kx may be computed recursively using, for m = 1, 2, …, k − 1, the formula

$$\cos(m + 1)x = 2 \cos x \cos mx - \cos(m - 1)x.$$

In this case, a trigonometric-function evaluation has to be carried out only once, to find c = cos x. Now let |x| ≠ 0 be a small number. The calculation of c causes a small roundoff error:

$$c = (1 + \varepsilon)\cos x, \qquad |\varepsilon| \le \mathrm{eps}.$$

How does this roundoff error affect the calculation of cos kx?

cos kx can be expressed in terms of c: $\cos kx = \cos(k \arccos c) =: f(c)$.

Since

$$\frac{df}{dc} = \frac{k \sin kx}{\sin x},$$

the error ε cos x of c causes, to first approximation, the absolute error

$$(1.4.1)\qquad \Delta\cos kx \doteq \varepsilon\,\frac{\cos x}{\sin x}\,k \sin kx = \varepsilon \cdot k \cot x \sin kx$$

in cos kx.

On the other hand, the inherent error $\Delta^{(0)}c_k$ (1.3.19) of the result $c_k := \cos kx$ is

$$\Delta^{(0)}c_k = \bigl[k\,|x \sin kx| + |\cos kx|\bigr]\,\mathrm{eps}.$$

Comparing this with (1.4.1) shows that Δ cos kx may be considerably larger than $\Delta^{(0)}c_k$ for small |x|, since $k \cot x \sin kx \approx (k \sin kx)/x$ exceeds the inherent term $k\,|x \sin kx|$ by a factor of order $1/x^2$; hence the algorithm is not numerically stable.
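For concreteness, here is a minimal Python sketch of this recursion (an added illustration, not from the original text); in IEEE double precision the loss of accuracy for small |x| is milder than in 40-bit arithmetic but still clearly visible:

```python
import math

def cos_kx(x, k):
    # Three-term recurrence: cos(m+1)x = 2 cos x * cos(mx) - cos((m-1)x), k >= 1.
    c = math.cos(x)                 # the single trigonometric evaluation
    c_prev, c_curr = 1.0, c         # cos(0*x), cos(1*x)
    for _ in range(k - 1):
        c_prev, c_curr = c_curr, 2.0 * c * c_curr - c_prev
    return c_curr

x, k = 0.001, 1000
approx, exact = cos_kx(x, k), math.cos(k * x)
print((approx - exact) / exact)     # relative error far above machine epsilon
```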

EXAMPLE 3. For given x and a "large" positive integer k, the numbers cos kx and sin kx are to be computed recursively using

$$\cos mx := \cos x \cos(m - 1)x - \sin x \sin(m - 1)x,$$
$$\sin mx := \sin x \cos(m - 1)x + \cos x \sin(m - 1)x, \qquad m = 1, 2, \ldots, k.$$

How do small errors $\varepsilon_c \cos x$, $\varepsilon_s \sin x$ in the calculation of cos x, sin x affect the final results cos kx, sin kx? Abbreviating $C_m := \cos mx$, $S_m := \sin mx$, $c := \cos x$, $s := \sin x$, and putting

$$U := \begin{pmatrix} c & -s \\ s & c \end{pmatrix},$$

we have

$$\begin{pmatrix} C_m \\ S_m \end{pmatrix} = U \begin{pmatrix} C_{m-1} \\ S_{m-1} \end{pmatrix}, \qquad m = 1, \ldots, k.$$

Here U is a unitary matrix, which corresponds to a rotation by the angle x. Repeated application of the formula above gives

$$\begin{pmatrix} C_k \\ S_k \end{pmatrix} = U^k \begin{pmatrix} 1 \\ 0 \end{pmatrix}.$$

Now

$$\frac{\partial U}{\partial c} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \qquad \frac{\partial U}{\partial s} = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} =: A,$$

and therefore

$$\frac{\partial U^k}{\partial c} = k\,U^{k-1}, \qquad \frac{\partial U^k}{\partial s} = k\,A\,U^{k-1},$$

because A commutes with U. Since U describes a rotation in $\mathbb{R}^2$ by the angle x,

$$\frac{\partial U^k}{\partial c} = k\begin{pmatrix} \cos(k-1)x & -\sin(k-1)x \\ \sin(k-1)x & \cos(k-1)x \end{pmatrix}, \qquad \frac{\partial U^k}{\partial s} = k\begin{pmatrix} -\sin(k-1)x & -\cos(k-1)x \\ \cos(k-1)x & -\sin(k-1)x \end{pmatrix}.$$

The relative errors $\varepsilon_c$, $\varepsilon_s$ of c = cos x, s = sin x effect the following absolute errors of cos kx, sin kx:

$$(1.4.2)\qquad \begin{pmatrix} \Delta C_k \\ \Delta S_k \end{pmatrix} \doteq \frac{\partial U^k}{\partial c}\begin{pmatrix} 1 \\ 0 \end{pmatrix}\varepsilon_c \cos x + \frac{\partial U^k}{\partial s}\begin{pmatrix} 1 \\ 0 \end{pmatrix}\varepsilon_s \sin x = \varepsilon_c\,k \cos x \begin{pmatrix} \cos(k-1)x \\ \sin(k-1)x \end{pmatrix} + \varepsilon_s\,k \sin x \begin{pmatrix} -\sin(k-1)x \\ \cos(k-1)x \end{pmatrix}.$$

The inherent errors $\Delta^{(0)}C_k$ and $\Delta^{(0)}S_k$ of $C_k = \cos kx$ and $S_k = \sin kx$, respectively, are given by

$$(1.4.3)\qquad \Delta^{(0)}C_k = \bigl[k\,|x \sin kx| + |\cos kx|\bigr]\,\mathrm{eps}, \qquad \Delta^{(0)}S_k = \bigl[k\,|x \cos kx| + |\sin kx|\bigr]\,\mathrm{eps}.$$

Comparison of (1.4.2) and (1.4.3) reveals that for big k and $|kx| \le 1$ the influence of the roundoff error $\varepsilon_c$ is considerably bigger than the inherent errors, while the roundoff error $\varepsilon_s$ is harmless. The algorithm is not numerically stable, albeit numerically more trustworthy than the algorithm of Example 2 as far as the computation of $C_k$ alone is concerned.
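A minimal Python sketch of the coupled recursion (again an added illustration; the variable names are ours):

```python
import math

def cos_sin_kx(x, k):
    # Rotation recurrence: (C_m, S_m) = U (C_{m-1}, S_{m-1}).
    c, s = math.cos(x), math.sin(x)    # each evaluated once
    cm, sm = 1.0, 0.0                  # C_0 = 1, S_0 = 0
    for _ in range(k):
        cm, sm = c * cm - s * sm, s * cm + c * sm
    return cm, sm

ck, sk = cos_sin_kx(0.001, 1000)
print(ck - math.cos(1.0), sk - math.sin(1.0))   # absolute errors of C_k, S_k
```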

EXAMPLE 4. For small |x|, the recursive calculation of

$$C_m = \cos mx, \quad S_m = \sin mx, \qquad m = 1, 2, \ldots,$$

based on

$$\cos(m + 1)x = \cos x \cos mx - \sin x \sin mx,$$
$$\sin(m + 1)x = \sin x \cos mx + \cos x \sin mx,$$

as in Example 3, may be further improved numerically. To this end, we express the differences $\Delta C_{m+1}$ and $\Delta S_{m+1}$ of subsequent cosine and sine values as follows:

$$\begin{aligned}
\Delta C_{m+1} &:= \cos(m + 1)x - \cos mx\\
&= 2(\cos x - 1)\cos mx - \sin x \sin mx - \cos x \cos mx + \cos mx\\
&= -4\left(\sin^2\frac{x}{2}\right)\cos mx + \bigl[\cos mx - \cos(m - 1)x\bigr],
\end{aligned}$$

$$\begin{aligned}
\Delta S_{m+1} &:= \sin(m + 1)x - \sin mx\\
&= 2(\cos x - 1)\sin mx + \sin x \cos mx - \cos x \sin mx + \sin mx\\
&= -4\left(\sin^2\frac{x}{2}\right)\sin mx + \bigl[\sin mx - \sin(m - 1)x\bigr].
\end{aligned}$$

This leads to a more elaborate recursive algorithm for computing $C_k$, $S_k$ in the case x > 0:

$$\Delta C_1 := -2\sin^2\frac{x}{2}, \qquad \Delta S_1 := \sin x, \qquad S_0 := 0, \qquad C_0 := 1,$$

and for m = 1, 2, …, k:

$$C_m := C_{m-1} + \Delta C_m, \qquad \Delta C_{m+1} := 2\,\Delta C_1 \cdot C_m + \Delta C_m,$$
$$S_m := S_{m-1} + \Delta S_m, \qquad \Delta S_{m+1} := 2\,\Delta C_1 \cdot S_m + \Delta S_m.$$

For the error analysis, note that $C_k$ and $S_k$ are functions of $s = \sin(x/2)$:

$$C_k = \cos(2k \arcsin s) =: \varphi_1(s), \qquad S_k = \sin(2k \arcsin s) =: \varphi_2(s).$$

An error $\Delta s = \varepsilon_s \sin(x/2)$ in the calculation of s therefore causes, to a first-order approximation, the following errors in $C_k$ and $S_k$:

$$\frac{\partial \varphi_1}{\partial s}\,\varepsilon_s \sin\frac{x}{2} = \frac{-2k \sin kx}{\cos(x/2)}\,\varepsilon_s \sin\frac{x}{2} = -2k \tan\frac{x}{2}\,\sin kx \cdot \varepsilon_s,$$

$$\frac{\partial \varphi_2}{\partial s}\,\varepsilon_s \sin\frac{x}{2} = 2k \tan\frac{x}{2}\,\cos kx \cdot \varepsilon_s.$$

Comparison with the inherent errors (1.4.3) shows these errors to be harmless for small |x|. The algorithm is then numerically stable, at least as far as the influence of the roundoff error $\varepsilon_s$ is concerned.
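A Python sketch of the difference form of the recursion (an added illustration; it follows the update rules derived above, keeping $2\,\Delta C_1 = -4\sin^2(x/2)$ as a constant):

```python
import math

def cos_sin_kx_diff(x, k):
    # Difference-form recursion of Example 4; assumes x > 0.
    h = math.sin(0.5 * x)
    two_dc1 = -4.0 * h * h        # 2*Delta C_1 = -4 sin^2(x/2)
    dc = 0.5 * two_dc1            # Delta C_1 = cos x - 1
    ds = math.sin(x)              # Delta S_1 = sin x - 0
    cm, sm = 1.0, 0.0             # C_0, S_0
    for _ in range(k):
        cm += dc                  # C_m = C_{m-1} + Delta C_m
        sm += ds                  # S_m = S_{m-1} + Delta S_m
        dc += two_dc1 * cm        # Delta C_{m+1} = -4 sin^2(x/2) C_m + Delta C_m
        ds += two_dc1 * sm        # Delta S_{m+1} = -4 sin^2(x/2) S_m + Delta S_m
    return cm, sm

print(cos_sin_kx_diff(0.001, 1000))   # close to (cos 1, sin 1)
```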

Again we illustrate our analytical considerations with some numerical results. Let x = 0.001, k = 1000.

Algorithm      Result for cos kx           Relative error
Example 2      0.540 302 121 124           −0.34 · 10⁻⁶
Example 3      0.540 302 305 776           −0.17 · 10⁻⁹
Example 4      0.540 302 305 865           −0.58 · 10⁻¹¹
Exact value    0.540 302 305 868 140 …

EXAMPLE 5. We will derive some results which will be useful for the analysis of algorithms for solving linear equations in Section 4.5. Given the quantities $c, a_1, \ldots, a_n, b_1, \ldots, b_{n-1}$ with $a_n \ne 0$, we want to find the solution $\beta_n$ of the linear equation

$$(1.4.4)\qquad c - a_1 b_1 - \cdots - a_{n-1} b_{n-1} - a_n \beta_n = 0.$$

Floating-point arithmetic yields the approximate solution

$$(1.4.5)\qquad b_n = \mathrm{fl}\!\left(\frac{c - a_1 b_1 - \cdots - a_{n-1} b_{n-1}}{a_n}\right)$$

as follows:

$$(1.4.6)\qquad \begin{aligned}
&s_0 := c;\\
&\text{for } j = 1, 2, \ldots, n - 1:\\
&\qquad s_j := \mathrm{fl}(s_{j-1} - a_j b_j) = \bigl(s_{j-1} - a_j b_j(1 + \mu_j)\bigr)(1 + \alpha_j);\\
&b_n := \mathrm{fl}(s_{n-1}/a_n) = (1 + \delta)\,s_{n-1}/a_n,
\end{aligned}$$

with $|\mu_j|, |\alpha_j|, |\delta| \le \mathrm{eps}$. If $a_n = 1$, as is frequently the case in applications, then $\delta = 0$, since $b_n := s_{n-1}$.
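In Python, the computation (1.4.6) is simply a scalar-product-like loop followed by one division (a sketch with hypothetical data, added for illustration; the rounding terms $\mu_j$, $\alpha_j$, $\delta$ are supplied by the hardware arithmetic itself):

```python
def solve_last_unknown(c, a, b):
    # Computes b_n = fl((c - a_1 b_1 - ... - a_{n-1} b_{n-1}) / a_n), cf. (1.4.6).
    # a has n entries, b has n - 1 entries.
    s = c                          # s_0 := c
    for aj, bj in zip(a[:-1], b):
        s = s - aj * bj            # s_j, rounded in each step
    return s / a[-1]               # b_n; exact (delta = 0) if a_n == 1

# Hypothetical data:
a = [0.1, 0.2, 0.3, 1.0]
b = [1.0 / 3.0, 2.0 / 3.0, 0.5]
c = 1.0
bn = solve_last_unknown(c, a, b)
# Residual r = c - a_1 b_1 - ... - a_n b_n, to be compared with (1.4.7)/(1.4.11):
r = c - sum(aj * bj for aj, bj in zip(a, b + [bn]))
print(bn, r)
```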

We will now describe two useful estimates for the residual

$$r := c - a_1 b_1 - \cdots - a_{n-1} b_{n-1} - a_n b_n.$$

From (1.4.6) follow the equations

$$s_0 - c = 0,$$
$$s_j - (s_{j-1} - a_j b_j) = \frac{\alpha_j}{1 + \alpha_j}\,s_j - a_j b_j \mu_j, \qquad j = 1, 2, \ldots, n - 1,$$
$$a_n b_n - s_{n-1} = \delta\,s_{n-1}.$$

Summing these equations yields

$$-r = \sum_{j=1}^{n-1}\left(\frac{\alpha_j}{1 + \alpha_j}\,s_j - a_j b_j \mu_j\right) + \delta\,s_{n-1},$$

and thereby the first of the promised estimates:

$$(1.4.7)\qquad |r| \le \frac{\mathrm{eps}}{1 - \mathrm{eps}}\Biggl[\delta'\,|s_{n-1}| + \sum_{j=1}^{n-1}\bigl(|s_j| + |a_j b_j|\bigr)\Biggr], \qquad \delta' := \begin{cases} 0 & \text{if } a_n = 1,\\ 1 & \text{otherwise.} \end{cases}$$

The second estimate is cruder than (1.4.7). From (1.4.6) we get

$$(1.4.8)\qquad b_n = \left[c\prod_{k=1}^{n-1}(1 + \alpha_k) - \sum_{j=1}^{n-1} a_j b_j (1 + \mu_j) \prod_{k=j}^{n-1}(1 + \alpha_k)\right]\frac{1 + \delta}{a_n},$$

which can be solved for c:

$$(1.4.9)\qquad c = \sum_{j=1}^{n-1} a_j b_j (1 + \mu_j) \prod_{k=1}^{j-1}(1 + \alpha_k)^{-1} + a_n b_n (1 + \delta)^{-1} \prod_{k=1}^{n-1}(1 + \alpha_k)^{-1}.$$

A simple induction argument over m shows that, provided $|\alpha_k| \le \mathrm{eps}$ and $m \cdot \mathrm{eps} < 1$, any product of the form

$$\prod_{k=1}^{m}(1 + \alpha_k)^{\pm 1} = 1 + \sigma \quad\text{satisfies}\quad |\sigma| \le \frac{m \cdot \mathrm{eps}}{1 - m \cdot \mathrm{eps}}.$$

In view of (1.4.9) this ensures the existence of quantities $\varepsilon_j$ with

$$(1.4.10)\qquad c = \sum_{j=1}^{n-1} a_j b_j (1 + j\,\varepsilon_j) + a_n b_n \bigl(1 + (n - 1 + \delta')\varepsilon_n\bigr), \qquad |\varepsilon_j| \le \frac{\mathrm{eps}}{1 - n \cdot \mathrm{eps}}, \qquad \delta' := \begin{cases} 0 & \text{if } a_n = 1,\\ 1 & \text{otherwise.} \end{cases}$$

For $r = c - a_1 b_1 - a_2 b_2 - \cdots - a_n b_n$ we have consequently

$$(1.4.11)\qquad |r| \le \frac{\mathrm{eps}}{1 - n \cdot \mathrm{eps}}\left[\sum_{j=1}^{n-1} j\,|a_j b_j| + (n - 1 + \delta')\,|a_n b_n|\right].$$

In particular, (1.4.8) reveals the numerical stability of our algorithm for computing $\beta_n$. The roundoff error $\alpha_m$ contributes the amount

$$(c - a_1 b_1 - a_2 b_2 - \cdots - a_m b_m)\,\frac{\alpha_m}{a_n}$$

to the absolute error in $\beta_n$.

This, however, is at most equal to

$$\left|\frac{c\,\varepsilon_c - a_1 b_1 \varepsilon_{a_1} - \cdots - a_m b_m \varepsilon_{a_m}}{a_n}\right| \le \left(|c| + \sum_{i=1}^{m} |a_i b_i|\right)\frac{\mathrm{eps}}{|a_n|},$$

which represents no more than the influence of the input errors $\varepsilon_c$ and $\varepsilon_{a_i}$ of c and $a_i$, $i = 1, \ldots, m$, respectively, provided $|\varepsilon_c|, |\varepsilon_{a_i}| \le \mathrm{eps}$. The remaining roundoff errors $\mu_k$ and $\delta$ are similarly shown to be harmless.

The numerical stability of the above algorithm is often shown by interpreting (1.4.10) in the sense of backward analysis: the computed approximate solution $b_n$ is the exact solution of the equation

$$c - \tilde{a}_1 b_1 - \cdots - \tilde{a}_{n-1} b_{n-1} - \tilde{a}_n b_n = 0,$$

whose coefficients

$$\tilde{a}_j := a_j(1 + j\,\varepsilon_j), \quad j = 1, \ldots, n - 1, \qquad \tilde{a}_n := a_n\bigl(1 + (n - 1 + \delta')\varepsilon_n\bigr),$$

have been changed only slightly from their original values $a_j$. This kind of analysis, however, involves the difficulty of having to define how large n can be so that errors of the form $n\varepsilon$, $|\varepsilon| \le \mathrm{eps}$, can still be considered to be of the same order of magnitude as the machine precision eps.

1.5 Interval Arithmetic; Statistical Roundoff
