On convergence of regularized modified Newton’s method for nonlinear ill-posed problems

Santhosh George

Abstract. In this paper we consider a regularized modified Newton's method for approximately solving the nonlinear ill-posed problem $F(x) = y$, where the right hand side is replaced by noisy data $y^\delta \in Y$ with $\|y - y^\delta\| \le \delta$ and $F : D(F) \subseteq X \to Y$ is a nonlinear operator between Hilbert spaces $X$ and $Y$. Under the assumption that the Fréchet derivative $F'$ of $F$ is Lipschitz continuous, a choice of the regularization parameter and a stopping rule based on a majorizing sequence are presented. We prove that under a general source condition on $x_0 - \hat{x}$, the error $\|\hat{x} - x_{k,\alpha}^\delta\|$ between the regularized approximation $x_{k,\alpha}^\delta$ ($x_{0,\alpha}^\delta := x_0$) and the solution $\hat{x}$ is of optimal order.

Keywords. Tikhonov regularization, regularized Newton's method, balancing principle.

2000 Mathematics Subject Classification. 65J20, 65J15, 47J06.

1 Introduction

In this paper we consider nonlinear ill-posed problems

$$F(x) = y, \qquad (1.1)$$

where $F : D(F) \subseteq X \to Y$ is a nonlinear operator with non-closed range $R(F)$, and $X, Y$ are infinite dimensional real Hilbert spaces with corresponding inner product $\langle \cdot, \cdot \rangle$ and norm $\|\cdot\|$, respectively. We assume throughout that

equation (1.1) has a solution $\hat{x}$ (not necessarily unique), which in general does not depend continuously on the right hand side data $y$;

only noisy data $y^\delta \in Y$ with
$$\|y - y^\delta\| \le \delta \qquad (1.2)$$
are available;

$F$ possesses a locally uniformly bounded Fréchet derivative $F'(\cdot)$ in a ball $B_r(\hat{x})$ of radius $r$ around $\hat{x} \in X$.

Since (1.1) is ill-posed, one has to regularize (1.1). Tikhonov regularization (cf. [6–10, 20]) is one of the most widely used regularization methods for solving linear and nonlinear ill-posed problems. In this method a regularized approximation $x_\alpha^\delta$ is obtained by solving the minimization problem
$$\min_{x \in D(F)} J_\alpha(x), \qquad J_\alpha(x) = \|F(x) - y^\delta\|^2 + \alpha\|x - x_0\|^2, \qquad (1.3)$$
with an initial guess $x_0 \in X$ and a properly chosen regularization parameter $\alpha > 0$. It is known [21] that if $x_\alpha^\delta$ is an interior point of $D(F)$, then the regularized approximation $x_\alpha^\delta$ satisfies the Euler equation
$$F'(x)^*(F(x) - y^\delta) + \alpha(x - x_0) = 0 \qquad (1.4)$$
of the Tikhonov functional $J_\alpha(x)$. Here and below $F'(x)^*$ is the adjoint of the Fréchet derivative $F'(x)$.

Convergence of a global minimizer of (1.3) was established in [7, 19, 21], and convergence rates under logarithmic source conditions on $x_0 - \hat{x}$ have been established in [15].

Many authors considered iterative methods like Landweber's method [4, 5, 11], the iteratively regularized Gauss–Newton method [3, 12, 13], etc., for solving (1.1).

In [15] the fixed point iteration
$$x_0 = \mathrm{Proj}_{D(F)}\,x_0, \qquad x_{k+1} = \mathrm{Proj}_{D(F)}\,\Phi(x_k), \qquad (1.5)$$
with $\Phi(x) = x - (F'(x)^* F'(x) + \alpha I)^{-1}[F'(x)^*(F(x) - y^\delta) + \alpha(x - x_0)]$, where $\mathrm{Proj}_{D(F)}$ is the projection (with respect to the norm on $X$) onto $D(F)$, has been considered, and it was proved that $x_k$ converges to the stationary point $x_\alpha^\delta$ of the Tikhonov functional $J_\alpha(x)$. However, no error estimate for $\|x_k - \hat{x}\|$ has been given in [15]. In [16], a general sequence $(z_\alpha^l)$ converging to the solution $x_\alpha^\delta$ of the Tikhonov functional $J_\alpha(x)$ was considered and an estimate for $\|z_\alpha^l - \hat{x}\|$ was obtained under the following assumption.

Assumption 1.1. There exists $v \in X$ such that
$$x_0 - \hat{x} = \varphi(F'(\hat{x})^* F'(\hat{x}))\,v, \qquad (1.6)$$
where $\varphi : [0, a] \to \mathbb{R}^+$, $a > \|F'(\hat{x})\|^2$, and the function $\Psi(\lambda) = \lambda\,\varphi(\lambda)^{1/2}$ is non-decreasing. Moreover $\varphi$ satisfies, for some constant $c > 0$,
$$c\,\frac{\alpha}{\varphi(\alpha)} \le \inf_{\alpha \le \lambda \le a} \frac{\lambda}{\varphi(\lambda)}, \qquad 0 < \alpha \le a.$$

In this paper we consider a modified form, called the regularized modified Newton's method, defined iteratively by
$$x_{n+1,\alpha}^\delta = x_{n,\alpha}^\delta - (F'(x_0)^* F'(x_0) + \alpha I)^{-1}\big[F'(x_0)^*(F(x_{n,\alpha}^\delta) - y^\delta) + \alpha(x_{n,\alpha}^\delta - x_0)\big], \qquad (1.7)$$
with $x_{0,\alpha}^\delta := x_0$, for solving (1.1). Note that the methods considered in [14, 19, 21] require the computation of the Fréchet derivative $F'(\cdot)$ at global minimizers $x_\alpha^\delta$ of the Tikhonov functional $J_\alpha(x)$, and the methods considered in [15] require the computation of the Fréchet derivative $F'(\cdot)$ at each iterate $x_{n,\alpha}^\delta$ converging to the global minimizer of the Tikhonov functional $J_\alpha(x)$.

Observe that the method (1.7) requires the computation of the Fréchet derivative $F'(\cdot)$ only at the one point $x_0$. This is one of the advantages of the method considered in this paper. Another advantage of our method is that the stopping rule in this paper is based on a majorizing sequence (see [1]) and hence does not depend on the method.
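To make the structure of (1.7) concrete, the following is a minimal sketch in a finite-dimensional setting ($X = \mathbb{R}^m$, $Y = \mathbb{R}^k$), assuming the operator is available as a function `F` and its Fréchet derivative at $x_0$ as a matrix `J0`; the names, the NumPy-based discretization and the fixed iteration count are illustrative assumptions, not part of the paper.

```python
import numpy as np

def regularized_modified_newton(F, J0, x0, y_delta, alpha, n_steps):
    """Illustrative sketch of iteration (1.7): the derivative is frozen at x0,
    so R_alpha(x0) = F'(x0)^T F'(x0) + alpha*I is assembled only once."""
    m = x0.size
    R = J0.T @ J0 + alpha * np.eye(m)          # R_alpha(x0)
    x = x0.copy()
    for _ in range(n_steps):
        # bracketed term in (1.7): F'(x0)^*(F(x_n) - y^delta) + alpha*(x_n - x0)
        g = J0.T @ (F(x) - y_delta) + alpha * (x - x0)
        x = x - np.linalg.solve(R, g)          # x_{n+1} = x_n - R_alpha(x0)^{-1} g
    return x
```

Because $F'(x_0)^* F'(x_0) + \alpha I$ never changes along the iteration, a single factorization of it can be reused for all steps; this is the practical payoff of evaluating the Fréchet derivative only at $x_0$.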

In Section 2 we provide some preparatory results and derive error bounds for $\|x_\alpha^\delta - \hat{x}\|$ under certain general source conditions which include the logarithmic source conditions considered in [15]. In Section 3 we consider the regularized modified Newton's method defined in (1.7) and prove that $x_{n,\alpha}^\delta$ converges to $x_\alpha^\delta$. The analysis of this section is based on a majorizing sequence. Also in this section, using an error estimate for $\|x_\alpha^\delta - \hat{x}\|$, we obtain an error estimate for $\|x_{n,\alpha}^\delta - \hat{x}\|$. In Section 4 we derive error bounds for $\|x_{n,\alpha}^\delta - \hat{x}\|$ by choosing the regularization parameter $\alpha$ by an a priori rule as well as by the balancing principle proposed by Pereverzev and Schock [18]. An algorithm for implementing the balancing principle and the stopping rule for the iteration is given in Section 5, and finally the paper ends with some concluding remarks in Section 6.

2 Preparatory results

Throughout this paper we assume that the operator $F$ satisfies the following assumptions.

Assumption 2.1. There exists $r > 0$ such that $B_r(\hat{x}) \subseteq D(F)$ and $F$ is Fréchet differentiable at all $x \in B_r(\hat{x})$.

Assumption 2.2. There exists a constant $k_0 > 0$ such that for every $x, u \in B_r(\hat{x})$ and $v \in X$, there exists an element $\Phi(x, u, v) \in X$ satisfying
$$[F'(x) - F'(u)]v = F'(u)\Phi(x, u, v), \qquad \|\Phi(x, u, v)\| \le k_0\|v\|\,\|x - u\|$$
for all $x, u \in B_r(\hat{x})$ and $v \in X$.


The next assumption on the source condition is based on a source function $\varphi$ and a property of the source function $\varphi$. We will use this assumption to obtain an error estimate for $\|\hat{x} - x_\alpha^\delta\|$.

Assumption 2.3. There exists a continuous, strictly monotonically increasing function $\varphi : (0, a] \to (0, \infty)$ with $a \ge \|F'(\hat{x})^* F'(\hat{x})\|$ satisfying

$\lim_{\lambda \to 0} \varphi(\lambda) = 0$;

$\sup_{\lambda \ge 0} \dfrac{\alpha\,\varphi(\lambda)}{\lambda + \alpha} \le c_\varphi\,\varphi(\alpha)$ for all $\alpha \in (0, a]$;

there exists $v \in X$ such that
$$x_0 - \hat{x} = \varphi(F'(\hat{x})^* F'(\hat{x}))\,v. \qquad (2.1)$$

Remark 2.4. The above assumption on the source function $\varphi$ includes the logarithmic source condition considered in [15].

We will be using the following theorems from [21] for our error analysis.

Theorem 2.5 (cf. [21], Theorem 2.7). Let $x_\alpha^\delta$ be the solution of the regularized problem (1.4) and $x_\alpha$ be a solution of (1.4) with $y^\delta$ replaced by the exact data $y$. Assume Assumption 2.2 with radius $r = \frac{\delta}{\sqrt{\alpha}} + 2\|x_0 - \hat{x}\|$. If $k_0\|x_0 - \hat{x}\| < 1$, then
$$\|x_\alpha^\delta - x_\alpha\| \le \frac{\delta}{\sqrt{\alpha}\,\sqrt{1 - k_0\|x_0 - \hat{x}\|}}. \qquad (2.2)$$

The following theorem is essentially a reformulation of Theorem 2.6 proved in [21]. For the sake of completeness, we supply its proof as well.

Theorem 2.6. Let $x_\alpha$ be a solution of (1.4) with $y^\delta$ replaced by the exact data $y$. Assume Assumption 2.2 and Assumption 2.3 with radius $r = 2\|x_0 - \hat{x}\|$. If $2k_0\|x_0 - \hat{x}\| < 1$, then
$$\|x_\alpha - \hat{x}\| \le \frac{c_\varphi\,\varphi(\alpha)\,\|v\|}{\sqrt{1 - 2k_0\|x_0 - \hat{x}\|}}. \qquad (2.3)$$


Proof. Note that $F'(x_\alpha)^*(F(x_\alpha) - y) + \alpha(x_\alpha - x_0) = 0$, so
$$\begin{aligned}
(F'(\hat{x})^* F'(\hat{x}) + \alpha I)(x_\alpha - \hat{x})
&= (F'(\hat{x})^* F'(\hat{x}) + \alpha I)(x_\alpha - \hat{x}) - F'(x_\alpha)^*(F(x_\alpha) - y) - \alpha(x_\alpha - x_0)\\
&= \alpha(x_0 - \hat{x}) + F'(\hat{x})^* F'(\hat{x})(x_\alpha - \hat{x}) - F'(x_\alpha)^*(F(x_\alpha) - y)\\
&= \alpha(x_0 - \hat{x}) + F'(\hat{x})^*\big[F'(\hat{x})(x_\alpha - \hat{x}) - (F(x_\alpha) - y)\big]
- \big[F'(x_\alpha) - F'(\hat{x})\big]^*(F(x_\alpha) - y)\\
&= \alpha(x_0 - \hat{x}) + F'(\hat{x})^*\big[F'(\hat{x})(x_\alpha - \hat{x}) - (F(x_\alpha) - F(\hat{x}))\big]
- \big[F'(x_\alpha) - F'(\hat{x})\big]^*(F(x_\alpha) - F(\hat{x})).
\end{aligned}$$

Thus
$$x_\alpha - \hat{x} = s_1 + s_2 + s_3, \qquad (2.4)$$
where
$$s_1 := \alpha(F'(\hat{x})^* F'(\hat{x}) + \alpha I)^{-1}(x_0 - \hat{x}),$$
$$s_2 := -(F'(\hat{x})^* F'(\hat{x}) + \alpha I)^{-1}\big[F'(x_\alpha) - F'(\hat{x})\big]^*(F(x_\alpha) - F(\hat{x})),$$
and
$$s_3 := (F'(\hat{x})^* F'(\hat{x}) + \alpha I)^{-1} F'(\hat{x})^*\big[F'(\hat{x})(x_\alpha - \hat{x}) - (F(x_\alpha) - F(\hat{x}))\big].$$
It follows from estimates (2.15) and (2.17) in [21] (also see estimate (A.7) in [14]) that
$$\|s_2\| \le k_0\|x_0 - \hat{x}\|\,\|x_\alpha - \hat{x}\|$$
and
$$\|s_3\| \le k_0\|x_0 - \hat{x}\|\,\|x_\alpha - \hat{x}\|.$$
Therefore by (2.4), to complete the proof it is enough to prove that $\|s_1\| \le c_\varphi\,\varphi(\alpha)\,\|v\|$. But by Assumption 2.3, we have
$$\|s_1\| = \|\alpha(F'(\hat{x})^* F'(\hat{x}) + \alpha I)^{-1}\varphi(F'(\hat{x})^* F'(\hat{x}))\,v\| \le c_\varphi\,\varphi(\alpha)\,\|v\|.$$
This completes the proof of the theorem.


3 Regularized modified Newton's method

In this section we prove that $x_{n,\alpha}^\delta$ converges to $x_\alpha^\delta$ and provide an error estimate for $\|x_{n,\alpha}^\delta - x_\alpha^\delta\|$. Our analysis is based on a majorizing sequence. Recall (see [1], Definition 1.3.11) that a nonnegative sequence $(t_n)$ is said to be a majorizing sequence of a sequence $(x_n)$ in $X$ if
$$\|x_{n+1} - x_n\| \le t_{n+1} - t_n, \qquad \forall n \ge 0.$$

We use the sequence $(t_n)$, $n \ge 0$, given by
$$t_0 = 0, \qquad t_1 = \eta, \qquad t_{n+1} = t_n + \frac{k_0\eta}{1 - \tilde{r}}(t_n - t_{n-1}), \qquad (3.1)$$
where $\tilde{r} \in [0, 1)$, as a majorizing sequence of the sequence $(x_{n,\alpha}^\delta)$. The following lemma is essentially a reformulation of a lemma in [8].

Lemma 3.1. Assume there exist nonnegative numbers $k_0$, $\eta$ and $\tilde{r} \in [0, 1)$ such that
$$\frac{k_0\eta}{1 - \tilde{r}} \le \tilde{r}. \qquad (3.2)$$
Then the sequence $(t_n)$ defined in (3.1) is increasing, bounded above by $t^{**} := \frac{\eta}{1 - \tilde{r}}$, and converges to some $t^*$ such that $0 < t^* \le \frac{\eta}{1 - \tilde{r}}$. Moreover, for $n \ge 0$,
$$0 \le t_{n+1} - t_n \le \tilde{r}(t_n - t_{n-1}) \le \tilde{r}^n\eta, \qquad (3.3)$$
and
$$t^* - t_n \le \frac{\tilde{r}^n\eta}{1 - \tilde{r}}. \qquad (3.4)$$

Proof. Since the result holds for $\eta = 0$, $k_0 = 0$ or $\tilde{r} = 0$, we assume that $k_0 \ne 0$, $\eta \ne 0$ and $\tilde{r} \ne 0$. Observe that $t_1 - t_0 = \eta \ge 0$; assume that $t_{i+1} - t_i \ge 0$ for all $i \le k$ for some $k$. Then
$$t_{k+2} - t_{k+1} = \frac{k_0\eta}{1 - \tilde{r}}(t_{k+1} - t_k) \ge 0,$$
so by induction $t_{n+1} - t_n \ge 0$ for all $n \ge 0$. Now since
$$\frac{k_0\eta}{1 - \tilde{r}} \le \tilde{r},$$
the estimate (3.3) follows from (3.1). Further observe that
$$t_{k+1} \le t_k + \tilde{r}(t_k - t_{k-1}) \le \eta + \tilde{r}\eta + \cdots + \tilde{r}^k\eta = \frac{1 - \tilde{r}^{k+1}}{1 - \tilde{r}}\,\eta < \frac{\eta}{1 - \tilde{r}}.$$


Hence the sequence $(t_n)$, $n \ge 0$, is bounded above by $\frac{\eta}{1 - \tilde{r}}$ and nondecreasing, so it converges to some $t^* \le \frac{\eta}{1 - \tilde{r}}$, and
$$t^* - t_n = \lim_{i \to \infty}(t_{n+i} - t_n) \le \lim_{i \to \infty}\sum_{j=0}^{i-1}(t_{n+1+j} - t_{n+j}) \le \frac{\tilde{r}^n\eta}{1 - \tilde{r}}.$$
That completes the proof of the lemma.
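As a small numerical illustration of Lemma 3.1 (with hypothetical parameter values), the sketch below generates the sequence (3.1), checks condition (3.2) first, and verifies that the terms stay below the bound $\eta/(1-\tilde{r})$.

```python
def majorizing_sequence(k0, eta, r_tilde, n_terms):
    """Generate t_0, ..., t_{n_terms} from (3.1):
    t_0 = 0, t_1 = eta, t_{n+1} = t_n + (k0*eta/(1 - r_tilde))*(t_n - t_{n-1})."""
    if k0 * eta / (1.0 - r_tilde) > r_tilde:
        raise ValueError("condition (3.2) violated: k0*eta/(1-r) must be <= r")
    t = [0.0, eta]
    q = k0 * eta / (1.0 - r_tilde)
    for _ in range(n_terms - 1):
        t.append(t[-1] + q * (t[-1] - t[-2]))
    return t

# Example: k0 = 0.4, eta = 0.5, r_tilde = 0.5 satisfies (3.2) since 0.4 <= 0.5;
# the terms stay below the bound t** = eta/(1 - r_tilde) = 1.0 from Lemma 3.1.
ts = majorizing_sequence(0.4, 0.5, 0.5, 10)
assert all(t <= 0.5 / (1 - 0.5) + 1e-12 for t in ts)
```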

To prove the convergence of the sequence $(x_{n,\alpha}^\delta)$ defined in (1.7) we introduce the following notation. Let $R_\alpha(x_0) := F'(x_0)^* F'(x_0) + \alpha I$ and
$$G(x) := x - R_\alpha(x_0)^{-1}\big[F'(x_0)^*(F(x) - y^\delta) + \alpha(x - x_0)\big]. \qquad (3.5)$$
Note that with the above notation, $G(x_{n,\alpha}^\delta) = x_{n+1,\alpha}^\delta$ and
$$\|R_\alpha(x_0)^{-1} F'(x_0)^* F'(x_0)\| \le 1. \qquad (3.6)$$
The following lemma, based on Assumption 2.2, will be used in due course.

Lemma 3.2. For $u, v \in B_r(x_0)$,
$$F(v) - F(u) - F'(x_0)(v - u) = F'(x_0)\int_0^1 \Phi(u + t(v - u), x_0, v - u)\,dt.$$

Proof. Using the fundamental theorem of integration, for $u, v \in B_r(x_0)$ we have
$$F(v) - F(u) = \int_0^1 F'(u + t(v - u))(v - u)\,dt,$$
so by Assumption 2.2 we have
$$F(v) - F(u) - F'(x_0)(v - u) = F'(x_0)\int_0^1 \Phi(u + t(v - u), x_0, v - u)\,dt.$$
This completes the proof of the lemma.

Hereafter we assume that
$$r \ge \max\Big\{\frac{\delta}{\sqrt{\alpha}} + 2\|x_0 - \hat{x}\|,\; t^*\Big\}.$$


Theorem 3.3. Let the assumptions in Lemma 3.1 with $\eta = \frac{\|F(x_0) - y^\delta\|}{\sqrt{\alpha}}$ and Assumption 2.2 be satisfied. Then the sequence $(x_{n,\alpha}^\delta)$ defined in (1.7) is well defined and $x_{n,\alpha}^\delta \in B_{t^*}(x_0)$ for all $n \ge 0$. Further, $(x_{n,\alpha}^\delta)$ is a Cauchy sequence in $B_{t^*}(x_0)$ and hence converges to some $x_\alpha^\delta \in \overline{B_{t^*}(x_0)} \subseteq \overline{B_{t^{**}}(x_0)}$, and
$$F'(x_0)^*(F(x_\alpha^\delta) - y^\delta) + \alpha(x_\alpha^\delta - x_0) = 0.$$

Moreover, the following estimates hold for all $n \ge 0$:
$$\|x_{n+1,\alpha}^\delta - x_{n,\alpha}^\delta\| \le t_{n+1} - t_n, \qquad (3.7)$$
and
$$\|x_{n,\alpha}^\delta - x_\alpha^\delta\| \le \frac{\tilde{r}^n\|F(x_0) - y^\delta\|}{\sqrt{\alpha}\,(1 - \tilde{r})}. \qquad (3.8)$$

Proof. Let $G$ be as in (3.5). Then for $u, v \in B_{t^*}(x_0)$,
$$\begin{aligned}
G(u) - G(v) &= u - v - R_\alpha(x_0)^{-1}\big[F'(x_0)^*(F(u) - y^\delta) + \alpha(u - x_0)\big]
+ R_\alpha(x_0)^{-1}\big[F'(x_0)^*(F(v) - y^\delta) + \alpha(v - x_0)\big]\\
&= R_\alpha(x_0)^{-1}\big[R_\alpha(x_0)(u - v) - F'(x_0)^*(F(u) - F(v))\big] + \alpha R_\alpha(x_0)^{-1}(v - u)\\
&= R_\alpha(x_0)^{-1}\big\{F'(x_0)^*\big[F'(x_0)(u - v) - (F(u) - F(v))\big] + \alpha(u - v)\big\} + \alpha R_\alpha(x_0)^{-1}(v - u)\\
&= R_\alpha(x_0)^{-1} F'(x_0)^*\big[F'(x_0)(u - v) - (F(u) - F(v))\big].
\end{aligned}$$
Thus by Lemma 3.2, Assumption 2.2 and (3.6) we have
$$\|G(u) - G(v)\| \le k_0 t^*\|u - v\|. \qquad (3.9)$$
Now we shall prove that the sequence $(t_n)$ defined in Lemma 3.1 is a majorizing sequence of the sequence $(x_{n,\alpha}^\delta)$ and $x_{n,\alpha}^\delta \in B_{t^*}(x_0)$ for all $n \ge 0$.

Note that $\|x_{1,\alpha}^\delta - x_0\| = \|R_\alpha(x_0)^{-1} F'(x_0)^*(F(x_0) - y^\delta)\| \le \frac{\|F(x_0) - y^\delta\|}{\sqrt{\alpha}} = \eta = t_1 - t_0$; assume that
$$\|x_{i+1,\alpha}^\delta - x_{i,\alpha}^\delta\| \le t_{i+1} - t_i, \qquad \forall i \le k, \qquad (3.10)$$
for some $k$. Then
$$\|x_{k+1,\alpha}^\delta - x_0\| \le \|x_{k+1,\alpha}^\delta - x_{k,\alpha}^\delta\| + \|x_{k,\alpha}^\delta - x_{k-1,\alpha}^\delta\| + \cdots + \|x_{1,\alpha}^\delta - x_0\|
\le t_{k+1} - t_k + t_k - t_{k-1} + \cdots + t_1 - t_0 = t_{k+1} \le t^*.$$


So $x_{i+1,\alpha}^\delta \in B_{t^*}(x_0)$ for all $i \le k$, and hence, by (3.9) and (3.10),
$$\|x_{k+2,\alpha}^\delta - x_{k+1,\alpha}^\delta\| \le k_0 t^*\|x_{k+1,\alpha}^\delta - x_{k,\alpha}^\delta\| \le \frac{k_0\eta}{1 - \tilde{r}}(t_{k+1} - t_k) = t_{k+2} - t_{k+1}.$$
Thus by induction $\|x_{n+1,\alpha}^\delta - x_{n,\alpha}^\delta\| \le t_{n+1} - t_n$ for all $n \ge 0$, and hence $(t_n)$, $n \ge 0$, is a majorizing sequence of the sequence $(x_{n,\alpha}^\delta)$. In particular $\|x_{n,\alpha}^\delta - x_0\| \le t_n \le t^*$, i.e., $x_{n,\alpha}^\delta \in B_{t^*}(x_0)$ for all $n \ge 0$. So $(x_{n,\alpha}^\delta)$, $n \ge 0$, is a Cauchy sequence and converges to some $x_\alpha^\delta \in \overline{B_{t^*}(x_0)}$, and
$$\|x_\alpha^\delta - x_{n,\alpha}^\delta\| \le t^* - t_n \le \frac{\tilde{r}^n\eta}{1 - \tilde{r}} = \frac{\tilde{r}^n\|F(x_0) - y^\delta\|}{\sqrt{\alpha}\,(1 - \tilde{r})}.$$
Now by letting $n \to \infty$ in (1.7) we obtain $F'(x_0)^*(F(x_\alpha^\delta) - y^\delta) + \alpha(x_\alpha^\delta - x_0) = 0$.

This completes the proof of the theorem.

4 Error bounds

Combining the estimates in Theorem 2.5, Theorem 2.6 and Theorem 3.3 we obtain the following.

Theorem 4.1. Let $x_{n,\alpha}^\delta$ be as in (1.7) and let the assumptions in Theorem 2.5, Theorem 2.6 and Theorem 3.3 be satisfied. Then
$$\|x_{n,\alpha}^\delta - \hat{x}\| \le \frac{\|F(x_0) - y^\delta\|}{1 - \tilde{r}}\,\frac{\tilde{r}^n}{\sqrt{\alpha}} + \frac{1}{\sqrt{1 - k_0\|x_0 - \hat{x}\|}}\,\frac{\delta}{\sqrt{\alpha}} + \frac{c_\varphi\|v\|}{\sqrt{1 - 2k_0\|x_0 - \hat{x}\|}}\,\varphi(\alpha). \qquad (4.1)$$

Let
$$n_\delta := \min\{n : \tilde{r}^n \le \delta\} \qquad (4.2)$$
and let
$$C := \max\left\{\frac{\|F(x_0) - y^\delta\|}{1 - \tilde{r}} + \frac{1}{\sqrt{1 - k_0\|x_0 - \hat{x}\|}},\;\; \frac{c_\varphi\|v\|}{\sqrt{1 - 2k_0\|x_0 - \hat{x}\|}}\right\}. \qquad (4.3)$$

Theorem 4.2. Let $x_{n,\alpha}^\delta$ be as in (1.7), $n_\delta$ be as in (4.2) and $C$ be as in (4.3). Let the assumptions in Theorem 4.1 be satisfied. Then
$$\|x_{n_\delta,\alpha}^\delta - \hat{x}\| \le C\left(\frac{\delta}{\sqrt{\alpha}} + \varphi(\alpha)\right). \qquad (4.4)$$
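For a concrete reading of (4.2): assuming $0 < \tilde{r} < 1$ and $\delta < 1$, and taking the minimum over integers $n \ge 0$, the stopping index admits the closed form
$$n_\delta = \left\lceil \frac{\ln \delta}{\ln \tilde{r}} \right\rceil,$$
so $n_\delta$ grows only logarithmically as the noise level $\delta$ decreases.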


4.1 A priori choice of the parameter

Note that, if the regularization parameter $\alpha_\delta := \alpha(\delta)$ satisfies
$$\sqrt{\alpha_\delta}\,\varphi(\alpha_\delta) = \delta, \qquad (4.5)$$
then the error $\frac{\delta}{\sqrt{\alpha}} + \varphi(\alpha)$ is of optimal order.

Let $\psi(\lambda) := \lambda\sqrt{\varphi^{-1}(\lambda)}$, $0 < \lambda \le a$. Then by (4.5), $\delta = \psi(\varphi(\alpha_\delta))$, so $\alpha_\delta = \varphi^{-1}(\psi^{-1}(\delta))$. Hence for the a priori choice $\alpha = \varphi^{-1}(\psi^{-1}(\delta))$ we have the following theorem.

Theorem 4.3. Let $\psi(\lambda) = \lambda\sqrt{\varphi^{-1}(\lambda)}$ for $0 < \lambda \le a$, and let the assumptions in Theorem 4.2 hold. For $\delta > 0$, let $\alpha = \varphi^{-1}(\psi^{-1}(\delta))$ and let $n_\delta$ be as in (4.2). Then
$$\|x_{n_\delta,\alpha}^\delta - \hat{x}\| = O(\psi^{-1}(\delta)).$$
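As an illustration of how the a priori choice can be realized numerically, the sketch below (hypothetical names, plain Python) solves $\sqrt{\alpha}\,\varphi(\alpha) = \delta$ by bisection for a user-supplied index function $\varphi$, which yields $\alpha_\delta = \varphi^{-1}(\psi^{-1}(\delta))$ without forming $\psi^{-1}$ explicitly.

```python
import math

def a_priori_alpha(phi, delta, a, tol=1e-12):
    """Solve sqrt(alpha)*phi(alpha) = delta for alpha in (0, a] by bisection.
    phi is assumed continuous and strictly increasing with phi(0+) = 0, so
    g(alpha) = sqrt(alpha)*phi(alpha) - delta is increasing and has one root."""
    lo, hi = 0.0, a
    if math.sqrt(hi) * phi(hi) < delta:
        return hi  # delta too large for this interval; clamp to a
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if math.sqrt(mid) * phi(mid) < delta:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example with the Hölder-type index function phi(t) = t^{1/2}:
# then sqrt(alpha)*phi(alpha) = alpha, so alpha_delta should equal delta.
alpha_delta = a_priori_alpha(lambda t: math.sqrt(t), delta=1e-3, a=1.0)
assert abs(alpha_delta - 1e-3) < 1e-9
```

In practice this presupposes that $\varphi$ is known, which is exactly the limitation discussed next.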

The disadvantage of the above a priori parameter choice is that, in practice, an index function $\varphi$ describing the solution smoothness is usually unknown. So one has to apply a posteriori rules which choose $\alpha$ from quantities that arise during the calculations. Well-known a posteriori rules that have been used for Tikhonov regularization of nonlinear ill-posed problems are the discrepancy principle [2, 22], the rules considered in [15, 21] and the balancing principle considered in [18]. In the next subsection we consider the balancing principle from [18] for choosing the regularization parameter $\alpha$ in (1.7).

4.2 An adaptive choice of the parameter

In this subsection, we will present the balancing principle studied in [16–18] for choosing the parameter $\alpha$ in (1.7).

In the balancing principle, the regularization parameter $\alpha$ is selected from some finite set
$$D_M(\alpha) := \{\alpha_i = \mu^i\alpha_0,\; i = 0, 1, \ldots, M\}, \qquad (4.6)$$
where $\mu > 1$ and $M$ is such that $1 \le \alpha_M$. We choose $\alpha_0 := \delta$, because we expect to have an accuracy of order at least $O(\sqrt{\delta})$ and, from Theorem 4.2, it follows that such an accuracy cannot be guaranteed for $\alpha < \delta$.

Let $x_i := x_{n_\delta,\alpha_i}^\delta$. Then by (3.8) and (4.2) we have
$$\|x_i - x_{\alpha_i}^\delta\| \le \frac{\|F(x_0) - y^\delta\|}{1 - \tilde{r}}\,\frac{\delta}{\sqrt{\alpha_i}}, \qquad \forall i = 0, 1, \ldots, M. \qquad (4.7)$$
The parameter choice strategy that we consider in this paper selects $\alpha = \alpha_i$ from $D_M(\alpha)$ and operates only with the corresponding $x_i$, $i = 0, 1, \ldots, M$.


Theorem 4.4. Assume that there exists $i \in \{0, 1, 2, \ldots, M\}$ such that $\varphi(\alpha_i) \le \frac{\delta}{\sqrt{\alpha_i}}$. Let the assumptions of Theorem 4.2 be satisfied, and let
$$l := \max\Big\{i : \varphi(\alpha_i) \le \frac{\delta}{\sqrt{\alpha_i}}\Big\} < M,$$
$$k := \max\Big\{i : \|x_i - x_j\| \le 4C\,\frac{\delta}{\sqrt{\alpha_j}},\; j = 0, 1, 2, \ldots, i\Big\}, \qquad (4.8)$$
where $C$ is as in (4.3). Then $l \le k$ and
$$\|\hat{x} - x_k\| \le c\,\psi^{-1}(\delta),$$
where $c = 6C\sqrt{\mu}$.

Proof. To see that $l \le k$, it is enough to show that, for each $i \in \{1, 2, \ldots, M\}$,
$$\varphi(\alpha_i) \le \frac{\delta}{\sqrt{\alpha_i}} \;\Longrightarrow\; \|x_i - x_j\| \le 4C\,\frac{\delta}{\sqrt{\alpha_j}}, \qquad \forall j = 0, 1, \ldots, i.$$
For $j \le i$, by (4.4) we have
$$\|x_i - x_j\| \le \|x_i - \hat{x}\| + \|\hat{x} - x_j\| \le C\Big(\varphi(\alpha_i) + \frac{\delta}{\sqrt{\alpha_i}}\Big) + C\Big(\varphi(\alpha_j) + \frac{\delta}{\sqrt{\alpha_j}}\Big) \le 4C\,\frac{\delta}{\sqrt{\alpha_j}}.$$
Thus the relation $l \le k$ is proved. Next we observe that
$$\|\hat{x} - x_k\| \le \|\hat{x} - x_l\| + \|x_l - x_k\| \le C\Big(\varphi(\alpha_l) + \frac{\delta}{\sqrt{\alpha_l}}\Big) + 4C\,\frac{\delta}{\sqrt{\alpha_l}} \le 6C\,\frac{\delta}{\sqrt{\alpha_l}}.$$
Now since $\alpha_\delta \le \alpha_{l+1} \le \mu\,\alpha_l$, it follows that
$$\frac{\delta}{\sqrt{\alpha_l}} \le \sqrt{\mu}\,\frac{\delta}{\sqrt{\alpha_\delta}} = \sqrt{\mu}\,\varphi(\alpha_\delta) = \sqrt{\mu}\,\psi^{-1}(\delta).$$
This completes the proof of the theorem.


5 Implementation of adaptive choice rule

In this section we provide an algorithm for the determination of a parameter fulfilling the balancing principle (4.8), and also provide a starting point for the iteration (1.7) approximating the unique solution $x_\alpha^\delta$ of (1.3). We choose the starting point $x_0$ such that $x_0 \in D(F)$ and $\|F(x_0) - y^\delta\| \le \frac{\sqrt{\delta}}{4k_0}$.

The choice of the stopping index $n_\delta$ involves the following steps:

Choose $\alpha_0 = \delta$ and $\mu > 1$.

Choose $\tilde{r} \in (0, 1)$ such that $\tilde{r} \ge \dfrac{1}{2}\Big(1 - \sqrt{1 - \dfrac{4k_0\|F(x_0) - y^\delta\|}{\sqrt{\delta}}}\Big)$.

Choose the parameter $\alpha_M = \mu^M\alpha_0$ big enough, but not too large.

Choose $n_\delta$ such that $n_\delta = \min\{n : \tilde{r}^n \le \delta\}$.

Finally, the adaptive algorithm associated with the choice of the parameter specified in Theorem 4.4 involves the following steps; an illustrative sketch in code is given after the listing.

5.1 Algorithm

1. Set $i = 0$.

2. Solve for $x_i := x_{n_\delta,\alpha_i}^\delta$ by using the iteration (1.7).

3. If $\|x_i - x_j\| > 4C\,\dfrac{\sqrt{\delta}}{\sqrt{\mu^{j}}}$ for some $j \le i$, then take $k = i - 1$ and stop.

4. Set $i = i + 1$ and return to Step 2.
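Under the same illustrative finite-dimensional assumptions as the sketch of (1.7) in Section 1, the adaptive procedure above can be outlined as follows; `solve_for_alpha` stands for running iteration (1.7) $n_\delta$ times with the given $\alpha_i$, and the constant `C` from (4.3) (or a rough upper bound for it) is assumed to be supplied. All names and default values are hypothetical.

```python
import math
import numpy as np

def balancing_principle(solve_for_alpha, delta, C, mu=1.5, M=30):
    """Adaptive choice of alpha following the steps of Section 5 (sketch only).
    solve_for_alpha(alpha_i) should return x_i = x^delta_{n_delta, alpha_i}."""
    alphas = [delta * mu**i for i in range(M + 1)]   # the grid D_M(alpha), alpha_0 = delta
    xs = []
    for i, alpha_i in enumerate(alphas):
        xs.append(solve_for_alpha(alpha_i))
        # stop as soon as the new iterate violates one of the 4*C*delta/sqrt(alpha_j) bounds
        for j in range(i):
            if np.linalg.norm(xs[i] - xs[j]) > 4.0 * C * delta / math.sqrt(alphas[j]):
                return alphas[i - 1], xs[i - 1]      # k = i - 1
    return alphas[-1], xs[-1]
```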

6 Concluding remarks

In this paper we have considered a regularized modified Newton's method for obtaining approximate solutions of a nonlinear operator equation $F(x) = y$ when the available data is $y^\delta$ in place of the exact data $y$. The procedure involves finding the fixed point of the function
$$G(x) = x - (F'(x_0)^* F'(x_0) + \alpha I)^{-1}\big[F'(x_0)^*(F(x) - y^\delta) + \alpha(x - x_0)\big]$$
in an iterative manner.

It should of course be mentioned that the method requires the computation of the Fréchet derivative $F'(\cdot)$ only at the one point $x_0$, unlike the method considered in [15].

Also it should be noted that the proof of the convergence result and the stopping rule are based on a majorizing sequence, which is independent of the method. For choosing the regularization parameter $\alpha$ we made use of the balancing principle suggested in [18].

In a future work, it is envisaged to investigate the method considered in [15] to obtain quadratic convergence of the iterates to the solution $x_\alpha^\delta$ of the Tikhonov functional $J_\alpha(x)$.

Acknowledgments. The author thanks National Institute of Technology Karnataka, India, for the financial support under seed money grant No. RGO/O.M/SEED GRANT/106/2009.

Bibliography

[1] I. K. Argyros, Convergence and Applications of Newton-type Iterations, Springer, 2008.

[2] A. Bakushinsky and A. Smirnova, On application of generalized discrepancy principle to iterative methods for nonlinear ill-posed problems, Numer. Funct. Anal. Optim. 26 (2005), 35–48.

[3] B. Blaschke, A. Neubauer and O. Scherzer, On convergence rates for the iteratively regularized Gauss–Newton method, IMA J. Numer. Anal. 17 (1997), 421–436.

[4] P. Deuflhard, H. W. Engl and O. Scherzer, A convergence analysis of iterative methods for the solution of nonlinear ill-posed problems under affinely invariant conditions, Inverse Probl. 14 (1998), 1081–1106.

[5] H. W. Engl, Regularization methods for the stable solution of inverse problems, Surv. Math. Ind. 3 (1993), 71–143.

[6] H. W. Engl, M. Hanke and A. Neubauer, Regularization of Inverse Problems, Kluwer, Dordrecht, 1996.

[7] H. W. Engl, K. Kunisch and A. Neubauer, Convergence rates for Tikhonov regularization of nonlinear ill-posed problems, Inverse Probl. 5 (1989), 523–540.

[8] S. George and A. I. Elmahdy, An analysis of Lavrentiev regularization for nonlinear ill-posed problems using an iterative regularization method, Int. J. Comput. Appl. Math. (2010), to appear.

[9] R. Gorenflo and B. Hofmann, On autoconvolution and regularization, Inverse Probl. 10 (1994), 353–373.

[10] C. W. Groetsch, The Theory of Tikhonov Regularization for Fredholm Equations of the First Kind, Pitman, Boston, MA, 1984.

[11] M. Hanke, A. Neubauer and O. Scherzer, A convergence analysis of the Landweber iteration for nonlinear ill-posed problems, Numer. Math. 72 (1995), 21–37.

[12] J. Qi-Nian, Error estimates of some Newton-type methods for solving nonlinear inverse problems in Hilbert scales, Inverse Probl. 16 (2000), 187–197.

[13] J. Qi-Nian, On the iteratively regularized Gauss–Newton method for solving nonlinear ill-posed problems, Math. Comput. 69 (2000), no. 232, 1603–1623.

[14] J. Qi-Nian and H. Zong-yi, On an a posteriori parameter choice strategy for Tikhonov regularization of nonlinear ill-posed problems, Numer. Math. 83 (1999), 139–159.

[15] B. Kaltenbacher, A note on logarithmic convergence rates for nonlinear Tikhonov regularization, J. Inv. Ill-Posed Probl. 16 (2008), 79–88.

[16] S. Lu, S. V. Pereverzev and R. Ramlau, An analysis of Tikhonov regularization for nonlinear ill-posed problems under a general smoothness assumption, Inverse Probl. 23 (2007), 217–230.

[17] P. Mathé and S. V. Pereverzev, Geometry of linear ill-posed problems in variable Hilbert scales, Inverse Probl. 19 (2003), 789–803.

[18] S. V. Pereverzev and E. Schock, On the adaptive selection of the parameter in regularization of ill-posed problems, SIAM J. Numer. Anal. 43 (2005), 2060–2076.

[19] T. I. Seidman and C. R. Vogel, Well-posedness and convergence of some regularization methods for nonlinear ill-posed problems, Inverse Probl. 5 (1989), 227–238.

[20] U. Tautenhahn, On the method of Lavrentiev regularization for nonlinear ill-posed problems, Inverse Probl. 18 (2002), 191–207.

[21] U. Tautenhahn and Q.-N. Jin, Tikhonov regularization and a posteriori rules for solving nonlinear ill-posed problems, Inverse Probl. 19 (2003), 1–21.

[22] A. N. Tikhonov, A. S. Leonov and A. G. Yagola, Nonlinear Ill-Posed Problems, Chapman & Hall, London, 1997.

Received December 7, 2009.

Author information

Santhosh George, Department of Mathematical and Computational Sciences, National Institute of Technology Karnataka, Surathkal, Mangalore 575025, India.

E-mail:[email protected]
