
Contents lists available at ScienceDirect

Journal of Computational and Applied Mathematics

journal homepage: www.elsevier.com/locate/cam

Numerical approximation of a Tikhonov type regularizer by a discretized frozen steepest descent method

Santhosh George*, M. Sabari

Department of Mathematical and Computational Sciences, NIT Karnataka, Mangaluru-575 025, India

Article info

Article history:
Received 18 December 2015
Received in revised form 29 December 2016

MSC:
47A52
65J15

Keywords:
Nonlinear ill-posed problem
Steepest descent method
Balancing principle

Abstract

We present a frozen regularized steepest descent method and its finite dimensional realization for obtaining an approximate solution of the nonlinear ill-posed operator equation F(x) = y. The proposed method is a modified form of the method considered by Argyros et al. (2014). The balancing principle considered by Pereverzev and Schock (2005) is used for choosing the regularization parameter. The error estimate is derived under a general source condition and is of optimal order. A numerical example demonstrates the efficiency of the proposed method.

©2017 Elsevier B.V. All rights reserved.

1. Introduction

Inverse problems arise in many practical applications, such as inverse scattering, tomography, and parameter identification in partial differential equations (see [1–3]). They can be modeled as an operator equation

F(x) = y, (1.1)

where F : D(F) ⊆ X → Y is a nonlinear Fréchet differentiable operator between the Hilbert spaces X and Y.

Throughout this study, D(F), ⟨·, ·⟩ and ∥·∥ stand, respectively, for the domain of F, the inner product and the norm, which can always be identified from the context in which they appear. The Fréchet derivative of F is denoted by F′(·) and its adjoint by F′(·)*.

Further, we assume that Eq. (1.1) has a solution x̂ that does not depend continuously on the right-hand side data y. Problems in which the solution x̂ does not depend continuously on the right-hand data are called ill-posed problems. It is common practice to use iterative methods or iterative regularization methods for approximating x̂: for example, the Landweber method [4,5], the Levenberg–Marquardt method [6], Gauss–Newton [7,8], Conjugate Gradient [9], Newton-like methods [10,11] and TIGRA (the Tikhonov-gradient method) [12].

It is assumed further that we have only approximate data yδ ∈ Y with ∥y − yδ∥ ≤ δ. The steepest descent method was considered by Scherzer [13] and by Neubauer and Scherzer [14] for approximately solving (1.1). In general, the steepest descent method for (1.1), with yδ in place of y, can be written as

x_{k+1} = x_k + α_k s_k, (1.2)

* Corresponding author.
E-mail addresses: [email protected] (S. George), [email protected] (M. Sabari).

http://dx.doi.org/10.1016/j.cam.2017.09.022
0377-0427/© 2017 Elsevier B.V. All rights reserved.


where s_k is the search direction, taken as the negative gradient of the minimization functional involved, and α_k is the descent step length.

For solving Eq. (1.1) with yδ in place of y, method (1.2) was studied by Scherzer [13] with

s_k = −F′(x_k)*(F(x_k) − yδ)  and  α_k = ∥s_k∥² / ∥F′(x_k)s_k∥²,

and by Neubauer and Scherzer [14] (the minimal error method) with

s_k = −F′(x_k)*(F(x_k) − yδ)  and  α_k = ∥F(x_k) − yδ∥² / ∥s_k∥².

For a linear operator F, Gilyazov [15] studied (the α-process) method (1.2) with

s_k = −F*(Fx_k − yδ)  and  α_k = ⟨(F*F)^α s_k, s_k⟩ / ⟨(F*F)^α s_k, F*F s_k⟩.

Vasin [16] considered a regularized version of the steepest descent method, in which

s_k = −[F′(x_k)*(F(x_k) − yδ) + α(x_k − x_0)]  and  α_k = ∥s_k∥² / (∥F′(x_k)s_k∥² + α∥s_k∥²).

Here and below x_0 is the initial guess. Also, observe that the TIGRA method of Ramlau [12] is of the form (1.2) with

s_k = −[F′(x_k)*(F(x_k) − yδ) + α_k(x_k − x_0)]  and  α_k = β_k.

Note that, in all these methods, one has to compute the Fréchet derivative of F at each iterate x_k, as well as the step length α_k, in each iteration step, which is in general very expensive.
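To make this per-iteration cost concrete, the following minimal sketch (an illustration under assumptions, not the authors' code) runs Vasin's regularized steepest descent on a toy two dimensional operator; note that the Jacobian F′(x_k) and the step length α_k are recomputed at every iteration.

```python
# Illustrative sketch of Vasin's regularized steepest descent: the
# derivative F'(x_k) AND the step length alpha_k are recomputed at every
# iteration.  The operator F(x) = x + 0.1 x^2 (componentwise) and all
# constants below are assumptions chosen only for demonstration.
import numpy as np

def F(x):
    return x + 0.1 * x**2

def Fprime(x):
    # Jacobian of F at x (diagonal for this componentwise operator)
    return np.diag(1.0 + 0.2 * x)

def vasin_step(x, x0, y_delta, alpha):
    J = Fprime(x)                       # recomputed at every call
    s = -(J.T @ (F(x) - y_delta) + alpha * (x - x0))
    # alpha_k = ||s_k||^2 / (||F'(x_k) s_k||^2 + alpha ||s_k||^2)
    a_k = (s @ s) / (np.linalg.norm(J @ s)**2 + alpha * (s @ s))
    return x + a_k * s

x_hat = np.array([0.5, -0.3])           # "exact" solution of F(x) = y
y_delta = F(x_hat) + 1e-6               # slightly perturbed data
x0 = np.zeros(2)                        # initial guess
x = x0.copy()
for _ in range(200):
    x = vasin_step(x, x0, y_delta, alpha=1e-4)
```

For large-scale problems the Jacobian evaluation inside every step dominates the cost, which is precisely what the frozen variant below avoids.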

In the present study, we consider a modified form of (1.2), namely the frozen regularized steepest descent method, defined for each k = 0, 1, 2, . . . by

x_{k+1} = x_k − β[F′(x_0)*(F(x_k) − yδ) + α(x_k − x_0)], (1.3)

where x_0 is the initial point, β > 0 is a fixed parameter and α > 0 is the regularization parameter. Further, note that in method (1.3) we have frozen the Fréchet derivative at x_0 throughout the iteration; that is why we call (1.3) the frozen regularized steepest descent method. This is one of the advantages of the proposed method. Observe that (1.3) is of the form (1.2) with

s_k = −[F′(x_0)*(F(x_k) − yδ) + α(x_k − x_0)]  and  α_k = β  for each k = 1, 2, . . . .

Since α_k = β, one need not compute α_k in each step as in the earlier studies [13,14,16]. In other words, the computational work is reduced considerably in the proposed method (1.3).

Note that method (1.3) coincides with the method considered in [17] when β = 1, but our convergence analysis is different from that of [17] and is based on a property of the norm of a self-adjoint operator in Hilbert space (see Section 2).
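For contrast with the methods surveyed above, the frozen iteration (1.3) needs one derivative evaluation in total and a fixed step β. A minimal sketch under the same toy assumptions (operator and constants are illustrative, not the paper's example):

```python
# Sketch of the frozen regularized steepest descent iteration (1.3):
# F'(x_0) is evaluated ONCE and the step length beta is a fixed constant,
# so each iteration costs only one evaluation of F.  Toy operator and
# constants are assumptions for illustration.
import numpy as np

def F(x):
    return x + 0.1 * x**2

x_hat = np.array([0.5, -0.3])
y_delta = F(x_hat) + 1e-6
x0 = np.zeros(2)
alpha = 1e-4                        # regularization parameter
J0 = np.diag(1.0 + 0.2 * x0)        # frozen derivative F'(x_0)
M = np.linalg.norm(J0, 2)           # ||F'(x_0)||
beta = 1.0 / (M**2 + alpha)         # fixed step, in the spirit of (2.3)

x = x0.copy()
for _ in range(500):
    x = x - beta * (J0.T @ (F(x) - y_delta) + alpha * (x - x0))
```

The loop body contains no derivative evaluation and no line search, which is the computational saving claimed for (1.3).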

Moreover, the condition on the radius of the convergence ball in [17] is more restrictive than the condition in this study. The numerical experiments (see the comparison in Table 1) also show that the method considered in this paper provides a better error estimate than the method considered in [17]. We also consider the finite dimensional realization of method (1.3) in Section 3. The error analysis and the algorithm for implementing method (1.3) are given in Section 4. Finally, the numerical results are given in Section 5.

2. Convergence analysis of method (1.3)

Denote by B_r(x) and B̄_r(x) the open and closed balls in X, respectively, with center x ∈ X and radius r > 0. Throughout this paper we assume that the operator F satisfies the following assumptions.

Assumption 2.1.

(a) There exists a constant k_0 > 0 such that for every x ∈ D(F) and v ∈ X, there exists an element Φ(x, x_0, v) ∈ X satisfying

[F′(x) − F′(x_0)]v = F′(x_0)Φ(x, x_0, v),  ∥Φ(x, x_0, v)∥ ≤ k_0 ∥v∥ ∥x − x_0∥.

(b) ∥F′(x_0)∥ ≤ M.

Notice that in the literature the condition (a′), stronger than (a),

[F′(x) − F′(z)]v = F′(z)ξ(x, z, v),  ∥ξ(x, z, v)∥ ≤ K ∥v∥ ∥x − z∥,

is used for some ξ(x, z, v) ∈ X. However, k_0 ≤ K holds in general and K/k_0 can be arbitrarily large [18]. It is also worth noticing that (a′) implies (a) but not necessarily vice versa, and the element ξ is less accurate and more difficult to find than Φ (see the numerical example in [17]).

Assumption 2.2. There exists a continuous, strictly monotonically increasing function φ : (0, a] → (0, ∞) with a ≥ ∥F′(x_0)∥² satisfying:

(i) lim_{λ→0} φ(λ) = 0;

(ii) sup_{λ≥0} αφ(λ)/(λ + α) ≤ φ(α) for all α ∈ (0, a];

(iii) there exists v ∈ X with ∥v∥ ≤ 1 such that

x_0 − x̂ = φ(F′(x_0)*F′(x_0))v.

It is known that for α > 0,

F′(x_0)*(F(x) − yδ) + α(x − x_0) = 0 (2.1)

has a unique solution x_α^δ in B_r(x_0) provided 0 < r < 1/k_0 [17, Theorem 2] (see also [19, Section 3]). Also it is known (cf. [17, Theorem 4]) that if Assumptions 2.1 and 2.2 are satisfied, then

∥x_α^δ − x̂∥ ≤ (1/(1 − k_0 r)) (δ/√α + φ(α)). (2.2)

Let δ_0 > 0 and a_0 > 0 be some constants with δ_0² < a_0 and ∥x_0 − x̂∥ ≤ r. Let δ ∈ (0, δ_0] and α ∈ [δ², a_0]. Further, let β and q_{α,β} be parameters such that

β ≤ 1/(M² + a_0) (2.3)

and

q_{α,β} = 1 − αβ + (3βM²k_0/2) r. (2.4)

Remark 2.3.

1. Suppose 0 < r < 2α/(3M²k_0). Then we have

q_{α,β} = 1 − αβ + (3βM²k_0/2) r < 1 − αβ + (3βM²k_0/2) · (2α/(3M²k_0)) = 1 − αβ + αβ = 1,

i.e., q_{α,β} < 1 for all 0 < r < 2α/(3M²k_0).

2. Notice that if α → 0, then r → 0, and in this case x_0 = x̂. Further, in practice we choose α from a set {0 < α_0 < α_1 < · · · < α_N < 1} (see Section 4.2), and hence r > 0.

Hereafter, we assume that 0 < r < min{1/(2k_0), 2α/(3M²k_0)}.
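Remark 2.3 can also be checked numerically. In the sketch below, M, k_0, a_0 and the grid of α values are arbitrary assumptions, and q is the contraction factor from (2.4):

```python
# Numerical check of Remark 2.3: if 0 < r < 2*alpha/(3*M^2*k0) and
# beta <= 1/(M^2 + a0), then q = 1 - alpha*beta + 1.5*beta*M^2*k0*r
# from (2.4) lies strictly between 0 and 1.  All numbers are illustrative.
M, k0, a0 = 2.0, 0.5, 1.0
beta = 1.0 / (M**2 + a0)
for alpha in (1e-3, 1e-2, 1e-1):
    r = 0.9 * (2 * alpha / (3 * M**2 * k0))   # any radius below the bound
    q = 1 - alpha * beta + 1.5 * beta * M**2 * k0 * r
    assert 0 < q < 1
```

Substituting the bound for r exactly reproduces the cancellation 1 − αβ + αβ = 1 used in the remark, so any strictly smaller radius gives q < 1.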

Theorem 2.4. Let x_n be as in (1.3) and let 0 < r < min{1/(2k_0), 2α/(3M²k_0)}. Then for each δ ∈ (0, δ_0] and α ∈ [δ², a_0], the sequence {x_n} is in B_{2r}(x_0) and converges to x_α^δ as n → ∞. Further,

∥x_{n+1} − x_α^δ∥ ≤ q_{α,β}^{n+1} ∥x_0 − x_α^δ∥, (2.5)

where q_{α,β} is as in (2.4).

Proof. Clearly, x_0 ∈ B_{2r}(x_0). Let A_n := ∫_0^1 F′(x_α^δ + t(x_n − x_α^δ)) dt. Since x_α^δ ∈ B_r(x_0), A_0 is well defined. Assume that for some n > 0, x_n ∈ B_{2r}(x_0) and A_n is well defined. Then, since x_α^δ satisfies Eq. (2.1), we have

x_{n+1} − x_α^δ = x_n − x_α^δ − β[F′(x_0)*(F(x_n) − F(x_α^δ)) + α(x_n − x_α^δ)]
= x_n − x_α^δ − β[F′(x_0)*A_n + αI](x_n − x_α^δ)
= x_n − x_α^δ − β[F′(x_0)*(A_n − F′(x_0))](x_n − x_α^δ) − β[F′(x_0)*F′(x_0) + αI](x_n − x_α^δ)
= [I − β(F′(x_0)*F′(x_0) + αI)](x_n − x_α^δ) − β[F′(x_0)*(A_n − F′(x_0))](x_n − x_α^δ). (2.6)

Using Assumption 2.1, we have

x_{n+1} − x_α^δ = [I − β(F′(x_0)*F′(x_0) + αI)](x_n − x_α^δ) − βF′(x_0)*F′(x_0) ∫_0^1 Φ(x_α^δ + t(x_n − x_α^δ), x_0, x_n − x_α^δ) dt.

Now, since I − β(F′(x_0)*F′(x_0) + αI) is a positive self-adjoint operator,

∥I − β(F′(x_0)*F′(x_0) + αI)∥ = sup_{∥x∥=1} |⟨(I − β(F′(x_0)*F′(x_0) + αI))x, x⟩|
= sup_{∥x∥=1} |(1 − βα)⟨x, x⟩ − β⟨F′(x_0)*F′(x_0)x, x⟩| ≤ 1 − αβ. (2.7)

The last step follows from the relation

β|⟨F′(x_0)*F′(x_0)x, x⟩| ≤ β∥F′(x_0)∥² ≤ βM² ≤ M²/(M² + α) = 1 − α/(M² + α) ≤ 1 − βα.

Hence, by Assumption 2.1, we have

∥x_{n+1} − x_α^δ∥ ≤ (1 − αβ)∥x_n − x_α^δ∥ + βM²k_0 ∫_0^1 ((1 − t)∥x_α^δ − x_0∥ + t∥x_n − x_0∥) dt ∥x_n − x_α^δ∥
≤ (1 − αβ + β 3k_0M²r/2) ∥x_n − x_α^δ∥
= q_{α,β} ∥x_n − x_α^δ∥. (2.8)

Since q_{α,β} < 1 (see Remark 2.3), we have ∥x_{n+1} − x_α^δ∥ < ∥x_0 − x_α^δ∥ ≤ r and

∥x_{n+1} − x_0∥ ≤ ∥x_{n+1} − x_α^δ∥ + ∥x_0 − x_α^δ∥ ≤ 2r,

i.e., x_{n+1} ∈ B_{2r}(x_0). Also, for 0 ≤ t ≤ 1,

∥x_α^δ + t(x_{n+1} − x_α^δ) − x_0∥ = ∥(1 − t)(x_α^δ − x_0) + t(x_{n+1} − x_0)∥ < 2r.

Hence x_α^δ + t(x_{n+1} − x_α^δ) ∈ B_{2r}(x_0) and A_{n+1} is well defined. Thus, by induction, x_n is well defined and remains in B_{2r}(x_0) for each n = 0, 1, 2, . . . . By letting n → ∞ in (1.3), we obtain the convergence of x_n to x_α^δ. The estimate (2.5) now follows from (2.8). □

Remark 2.5.

1. If Assumption 2.1 is fulfilled only for all x ∈ B_r(x_0) ∩ Q ≠ ∅, where Q is a convex closed a priori set with x̂ ∈ Q, then we can modify method (1.3) as

x_{n+1}^δ = P_Q(T(x_n^δ))

to obtain the same estimate as in Theorem 2.4; here P_Q is the metric projection onto the set Q and T is the step operator in (1.3).

2. Instead of Assumption 2.1, if we use the following Lipschitz condition:

∥F′(x_1) − F′(x_2)∥ ≤ L_0 ∥x_1 − x_2∥, (2.9)

then from (2.6) and (2.7) one can prove that (2.5) holds with q̄_{α,β} := 1 − αβ + βML_0r instead of q_{α,β}, provided 0 < r < α/(ML_0).

3. Also, by using (2.9) instead of Assumption 2.1, one can prove that (2.1) has a unique solution if 0 < r < α/L_0, and

∥x_α^δ − x̂∥ ≤ (1/(1 − (L_0/α)r)) (δ/√α + φ(α)).

3. Finite dimensional realization of method (1.3)

For implementing method (1.3) one needs numerical calculations in finite dimensional spaces. One of the approaches in this regard is through discretization (see [1, page 63]); here the regularization is achieved by a finite dimensional approximation alone. Regularization of ill-posed problems by projection methods can be found in the literature, e.g., in [20–23]. This section is concerned with the finite dimensional realization of method (1.3). Precisely, our aim in this section is to obtain an approximation for x_α^δ in the finite dimensional subspace R(P_h) of X. Here {P_h}_{h>0} is a family of orthogonal projections of X onto R(P_h), the range of P_h.

For the results that follow, we impose the following conditions. Let

ε_h := ∥F′(x_0)(I − P_h)∥  and  b_h := ∥(I − P_h)x̂∥.

We assume that lim_{h→0} ε_h = 0 and lim_{h→0} b_h = 0. The above assumption is satisfied if P_h → I point-wise and if F′(·) is a compact operator. Further, we assume that there exist ε_0 > 0, b_0 > 0 and δ_0 > 0 such that ε_h < ε_0 and b_h < b_0.

We take the discretized version of (1.3) as

x_{k+1}^h = x_k^h − βP_h[F′(x_0)*(F(x_k^h) − yδ) + α(x_k^h − x_0^h)], (3.1)

where x_0^h := P_h x_0. Let (δ_0 + ε_0)² < ā_0.
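A sketch of the discretized iteration (3.1), with P_h taken (as an assumption for illustration) to be the orthogonal projection onto the first m coordinates of R^n and a toy componentwise operator F; the iterates then stay in R(P_h) by construction:

```python
# Sketch of the projected ("discretized") frozen iteration (3.1).
# P is the orthogonal projector P_h onto the first m coordinates; F and
# all constants are illustrative assumptions (noise-free data, delta = 0).
import numpy as np

n, m = 6, 4
P = np.zeros((n, n))
P[:m, :m] = np.eye(m)                  # orthogonal projector P_h

def F(x):
    return x + 0.05 * x**2

x_hat = np.linspace(0.1, 0.6, n)       # "exact" solution
y_delta = F(x_hat)                     # delta = 0 for simplicity
x0 = np.zeros(n)
J0 = np.diag(1.0 + 0.1 * x0)           # frozen derivative F'(x_0)
alpha, beta = 1e-4, 0.9

xh = P @ x0                            # x_0^h = P_h x_0
for _ in range(400):
    xh = xh - beta * (P @ (J0.T @ (F(xh) - y_delta) + alpha * (xh - P @ x0)))
```

The unprojected coordinates remain at zero, so the limit indeed lies in R(P_h); only the first m components approximate the corresponding components of x̂.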

Next we prove that, for α > 0,

P_hF′(x_0)*(F(P_hx) − yδ) + αP_h(x − x_0) = 0 (3.2)

has a unique solution x_α^h in B_r(x_0) ∩ R(P_h).

Theorem 3.1. Let x̂ be a solution of (1.1), let Assumption 2.1 be satisfied, and let F : D(F) ⊆ X → Y be Fréchet differentiable in a ball B_r(x_0) ∩ R(P_h) ⊆ D(F) with 0 < r < 1/(2k_0). Then (3.2) possesses a unique solution x_α^h in B_r(x_0) ∩ R(P_h).

Proof. For x ∈ B_r(x_0) ∩ R(P_h), let M_h = ∫_0^1 F′(x̂ + t(x − x̂)) dt. If P_hF′(x_0)*M_hP_h + αI is invertible, then

(P_hF′(x_0)*M_hP_h + αI)(x − P_hx̂) = αP_h(x_0 − x̂) + P_hF′(x_0)*(yδ − y) + P_hF′(x_0)*M_h(I − P_h)x̂ (3.3)

has a unique solution x_α^h ∈ R(P_h). Observe that

F(P_hx) − yδ = F(P_hx) − F(x̂) + y − yδ = M_h(P_hx − x̂) + y − yδ,

and hence

P_hF′(x_0)*(F(P_hx) − yδ) + αP_h(x − x_0)
= P_hF′(x_0)*(M_h(P_hx − x̂) + y − yδ) + αP_h(x − x_0)
= (P_hF′(x_0)*M_hP_h + αI)P_h(x − x̂) − αP_h(x_0 − x̂) − P_hF′(x_0)*M_h(I − P_h)x̂ − P_hF′(x_0)*(yδ − y).

Therefore, by (3.3), P_hF′(x_0)*(F(P_hx) − yδ) + αP_h(x − x_0) = 0 has a unique solution x_α^h. Clearly, x_α^h ∈ B_r(x_0) ∩ R(P_h). So it remains to show that P_hF′(x_0)*M_hP_h + αI is invertible for x ∈ B_r(x_0) ∩ R(P_h).

Note that by Assumption 2.1, we have

∥(P_hF′(x_0)*F′(x_0)P_h + αI)^{-1}P_hF′(x_0)*(M_h − F′(x_0))P_h∥
= sup_{∥v∥≤1} ∥(P_hF′(x_0)*F′(x_0)P_h + αI)^{-1}P_hF′(x_0)*(M_h − F′(x_0))P_hv∥
≤ sup_{∥v∥≤1} ∥(P_hF′(x_0)*F′(x_0)P_h + αI)^{-1}P_hF′(x_0)* ∫_0^1 (F′(x̂ + t(x − x̂)) − F′(x_0)) dt P_hv∥
≤ sup_{∥v∥≤1} ∥(P_hF′(x_0)*F′(x_0)P_h + αI)^{-1}P_hF′(x_0)*F′(x_0) ∫_0^1 Φ(x̂ + t(x − x̂), x_0, P_hv) dt∥
≤ sup_{∥v∥≤1} ∥(P_hF′(x_0)*F′(x_0)P_h + αI)^{-1}P_hF′(x_0)*F′(x_0)[P_h + I − P_h] ∫_0^1 Φ(x̂ + t(x − x̂), x_0, P_hv) dt∥
≤ (k_0 + k_0 ε_h/√α) ∫_0^1 ∥x̂ + t(x − x̂) − x_0∥ dt
≤ (k_0 + k_0 ε_h/√α) ∫_0^1 [(1 − t)∥x̂ − x_0∥ + t∥x − x_0∥] dt
≤ k_0 (1 + ε_h/√α)(r/2 + r/2) < 2k_0r < 1.

Therefore, I + (P_hF′(x_0)*F′(x_0)P_h + αI)^{-1}P_hF′(x_0)*(M_h − F′(x_0))P_h is invertible. Now from the relation

P_hF′(x_0)*M_hP_h + αI = (P_hF′(x_0)*F′(x_0)P_h + αI)[I + (P_hF′(x_0)*F′(x_0)P_h + αI)^{-1}P_hF′(x_0)*(M_h − F′(x_0))P_h]

it follows that P_hF′(x_0)*M_hP_h + αI is invertible. □

Theorem 3.2. Let x_n^h be as in (3.1) and let 0 < r < min{2α/(3M²k_0), 1/(2k_0)}. Then for each δ ∈ (0, δ_0], α ∈ ((δ + ε_h)², ā_0] and ε_h ≤ ε_0, the sequence {x_n^h} is in B_{2r}(x_0) ∩ R(P_h) and converges to x_α^h as n → ∞. Further,

∥x_{n+1}^h − x_α^h∥ ≤ q_{α,β}^{n+1} ∥P_hx_0 − x_α^h∥, (3.4)

where q_{α,β} is as in (2.4).

Proof. Since x_α^h satisfies Eq. (3.2), we have

x_{n+1}^h − x_α^h = x_n^h − x_α^h − β[P_hF′(x_0)*(F(x_n^h) − F(x_α^h)) + αP_h(x_n^h − x_α^h)]
= x_n^h − x_α^h − β[P_hF′(x_0)*A_n^h + αP_h](x_n^h − x_α^h)
= x_n^h − x_α^h − β[P_hF′(x_0)*(A_n^h − F′(x_0))](x_n^h − x_α^h) − β[P_hF′(x_0)*F′(x_0)P_h + αP_h](x_n^h − x_α^h)
= [I − β(P_hF′(x_0)*F′(x_0)P_h + αI)](x_n^h − x_α^h) − β[P_hF′(x_0)*(A_n^h − F′(x_0))](x_n^h − x_α^h),

where A_n^h := ∫_0^1 F′(x_α^h + t(x_n^h − x_α^h)) dt. Using Assumption 2.1, we have

x_{n+1}^h − x_α^h = [I − β(P_hF′(x_0)*F′(x_0)P_h + αI)](x_n^h − x_α^h) − β[P_hF′(x_0)*F′(x_0)] ∫_0^1 Φ(x_α^h + t(x_n^h − x_α^h), x_0, x_n^h − x_α^h) dt.

Now, since I − β(P_hF′(x_0)*F′(x_0)P_h + αI) is a positive self-adjoint operator, as in (2.7),

∥I − β(P_hF′(x_0)*F′(x_0)P_h + αI)∥ ≤ 1 − βα.

Hence,

∥x_{n+1}^h − x_α^h∥ ≤ (1 − αβ)∥x_n^h − x_α^h∥ + βM²k_0 ∫_0^1 ((1 − t)∥x_α^h − x_0∥ + t∥x_n^h − x_0∥) dt ∥x_n^h − x_α^h∥
≤ (1 − αβ + β 3k_0M²r/2) ∥x_n^h − x_α^h∥
= q_{α,β} ∥x_n^h − x_α^h∥.

The rest of the proof is analogous to the proof of Theorem 2.4. □

Remark 3.3. Instead of Assumption 2.1, if we use (2.9), then Theorem 3.1 holds with 0 < r < α/L_0, and Theorem 3.2 holds with q̄_{α,β} := 1 − αβ + βML_0r instead of q_{α,β}, provided 0 < r < α/(ML_0).

4. Error bounds under source conditions

Note that by (2.1), we have

P_hF′(x_0)*(F(x_α^δ) − yδ) + αP_h(x_α^δ − x_0) = 0. (4.1)

So, by (3.2) and (4.1), we obtain

P_hF′(x_0)*(F(x_α^h) − F(x_α^δ)) + αP_h(x_α^h − x_α^δ) = 0.

That is,

(P_hF′(x_0)*F′(x_0)P_h + αI)(x_α^h − P_hx_α^δ) = P_hF′(x_0)*(F′(x_0) − T)(x_α^h − x_α^δ) + P_hF′(x_0)*F′(x_0)(I − P_h)x_α^δ,

where T = ∫_0^1 F′(x_α^δ + t(x_α^h − x_α^δ)) dt.

So,

∥x_α^h − P_hx_α^δ∥ = ∥(P_hF′(x_0)*F′(x_0)P_h + αI)^{-1}[P_hF′(x_0)*(F′(x_0) − T)(x_α^h − x_α^δ) + P_hF′(x_0)*F′(x_0)(I − P_h)x_α^δ]∥
≤ ∥(P_hF′(x_0)*F′(x_0)P_h + αI)^{-1}P_hF′(x_0)* ∫_0^1 [F′(x_α^δ + t(x_α^h − x_α^δ)) − F′(x_0)] dt (x_α^h − x_α^δ)∥ + (ε_h/√α)∥x_α^δ∥
≤ ∥(P_hF′(x_0)*F′(x_0)P_h + αI)^{-1}P_hF′(x_0)*F′(x_0)[P_h + I − P_h] ∫_0^1 Φ(x_α^δ + t(x_α^h − x_α^δ), x_0, x_α^h − x_α^δ) dt∥ + (ε_h/√α)∥x_α^δ∥
≤ k_0 (1 + ε_h/√α) ∫_0^1 [(1 − t)∥x_α^δ − x_0∥ + t∥x_α^h − x_0∥] dt ∥x_α^h − x_α^δ∥ + (ε_h/√α)(∥x_α^δ − x_0∥ + ∥x_0∥)
≤ 2k_0r ∥x_α^h − x_α^δ∥ + (ε_h/√α)(r + ∥x_0∥)
≤ 2k_0r [∥x_α^h − P_hx_α^δ∥ + ∥(I − P_h)x_α^δ∥] + (ε_h/√α)(r + ∥x_0∥),

hence

∥x_α^h − P_hx_α^δ∥ ≤ (1/(1 − 2k_0r)) [2k_0r ∥(I − P_h)x_α^δ∥ + (ε_h/√α)(r + ∥x_0∥)]. (4.2)

Further, we observe that

∥P_hx_0 − x_α^h∥ = ∥P_h(x_0 − x_α^h)∥ ≤ ∥x_0 − x_α^h∥ ≤ r. (4.3)

Combining the estimates in (2.2), (4.2), (4.3) and Theorem 3.2, we obtain the following.

Theorem 4.1. Let the assumptions of Theorem 3.2 hold and let x_n^h be as in (3.1). Then

∥x_n^h − x̂∥ ≤ q_{α,β}^n r + (1/(1 − 2k_0r))[b_h + (ε_h/√α)(∥x_0∥ + r)] + (1/(1 − 2k_0r)²)(δ/√α + φ(α)). (4.4)

Further, if n_δ := min{n : q_{α,β}^n < (δ + ε_h)/√α} and b_h ≤ (δ + ε_h)/√α, then

∥x_{n_δ}^h − x̂∥ ≤ C ((δ + ε_h)/√α + φ(α)), (4.5)

where C := r + (1/(1 − 2k_0r))[1 + max{r + ∥x_0∥, 2}].
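The stopping index n_δ of Theorem 4.1 is straightforward to compute once q_{α,β}, δ, ε_h and α are fixed; the numbers below are arbitrary assumptions for illustration:

```python
# Computes n_delta = min{ n : q^n < (delta + eps_h)/sqrt(alpha) } from
# Theorem 4.1.  Requires 0 < q < 1 and (delta + eps_h)/sqrt(alpha) > 0 so
# the loop terminates; all numerical values are illustrative assumptions.
import math

def stopping_index(q, delta, eps_h, alpha):
    tol = (delta + eps_h) / math.sqrt(alpha)
    n = 0
    while q**n >= tol:
        n += 1
    return n

n_delta = stopping_index(q=0.9, delta=1e-3, eps_h=1e-3, alpha=1e-2)
```

This rule balances the geometrically decaying iteration error q^n against the data and discretization error (δ + ε_h)/√α, which is what yields the order in (4.5).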

Proof. By the triangle inequality, we have

∥x_n^h − x̂∥ ≤ ∥x_n^h − x_α^h∥ + ∥x_α^h − P_hx_α^δ∥ + ∥(I − P_h)x_α^δ∥ + ∥x_α^δ − x̂∥,

and the result follows from Theorem 3.2, (4.2), (2.2) and the definition of b_h. □
