Improved semi-local convergence of the Newton-HSS method for solving large systems of equations
Ioannis K. Argyros a, Santhosh George b, Alberto Magreñán c
a Cameron University, USA
b NIT Karnataka, India
c Universidad de La Rioja, Spain
Article info
Article history:
Received 14 March 2019
Received in revised form 29 April 2019
Accepted 29 April 2019
Available online 18 May 2019
Keywords:
Newton-HSS method
Systems of nonlinear equations
Semi-local convergence
Abstract
The aim of this article is to present the correct version of the main result, Theorem 3.2, given in Guo and Duff (2011), concerning the semi-local convergence analysis of the Newton-HSS (NHSS) method for solving systems of nonlinear equations. Our analysis also includes the corrected upper bound on the initial point.
1. Introduction
Numerous problems in computational disciplines can be reduced to solving a system of nonlinear equations with n equations in n variables of the form
F(x) = 0 (1.1)
using mathematical modeling [1–4]. Here, F is a nonlinear, continuously differentiable mapping defined on a convex subset Ω of the n-dimensional complex linear space C^n, with values in C^n. The Jacobian matrix F′(x) is sparse, nonsymmetric and positive definite. Most methods for solving the equation F(x) = 0 are iterative, since a closed-form solution x∗ can be obtained only in special cases. In the rest of the paper, we use well-established and standard notation for these methods and results [3–7]. The most popular methods for generating a sequence approximating x∗ are undoubtedly the inexact Newton (IN) methods [1,2,5,6,8–14]:
Newton's method (NM), given for each k = 0, 1, 2, . . . by
x_0 ∈ Ω,  x_{k+1} = x_k − F′(x_k)^{−1} F(x_k),
is the most popular method for solving Eq. (1.1). In particular, one solves the system of linear equations
F′(x_k) r_k = −F(x_k)   (1.2)
to find
x_{k+1} = x_k + r_k.
System (1.2) is usually denoted in the more general form
Ax = b.   (1.3)
Two types of methods are used for solving system (1.3) [15]: splitting (stationary) iterative methods and the so-called Krylov subspace methods. When a Krylov subspace method is used to solve (1.2), its iteration is called the inner iteration, while the nonlinear (Newton) iteration producing x_k is called the outer iteration. Jacobi, Successive Over-Relaxation (SOR), Gauss–Seidel, Accelerated Over-Relaxation (AOR) and Krylov subspace methods are the most used inner iteration methods [3,4]; the most popular Krylov subspace methods are the Conjugate Gradient (CG) method and GMRES. If Krylov subspace methods are employed, then we say that we use Newton–Krylov subspace methods (NKSM); the Newton-CG and Newton-GMRES methods use CG and GMRES, respectively, as inner iterations. Recently, there has been increased interest in providing efficient splittings of A for linear as well as nonlinear systems. Iterative Hermitian/skew-Hermitian splitting (HSS) schemes for non-Hermitian positive definite linear systems were given by Bai et al. in [3,4]. In order to improve the robustness of HSS, several algorithms based on HSS were introduced. For weighted Toeplitz least squares problems, a preconditioning algorithm based on HSS was reported in [16]. In [17] Li et al. introduced an asymmetric Hermitian/skew-Hermitian splitting (AHSS) algorithm for large and sparse non-Hermitian positive definite systems of linear equations. Later, in [18,19] Li et al. presented a variation of the HSS algorithm called the Lopsided-HSS (LHSS) algorithm. Then, Bai et al. [20] presented the Picard-HSS as well as the nonlinear HSS-like algorithms to solve large-scale systems of weakly nonlinear equations. Moreover, Bai et al. in [4] applied Newton-HSS algorithms to systems with positive definite Jacobian matrices. Another variation of HSS for non-Hermitian positive definite systems was reported by Li et al. [18]. Furthermore, Zhu in [21] introduced another class of LHSS methods to solve large sparse systems. Returning to the solution of (1.3), split A = B − C and consider the methods
B x_m = C x_{m−1} + b,  m = 0, 1, 2, . . . .   (1.4)
Suppose that these inner iterations start from the initial point 0. Then we arrive at the inner–outer iteration
x_0 ∈ Ω,  x_{k+1} = x_k − (T_k^{l_k−1} + · · · + T_k + I) B_k^{−1} F(x_k),
T_k = B_k^{−1} C_k,   (1.5)
F′(x_k) = B_k − C_k,  k = 0, 1, . . . ,
where l_k is the number of inner iteration steps. Bai et al. [3,4] proved that this method converges unconditionally to the unique solution of the system of linear equations and has the same upper bound on the rate of convergence as the CG method. Numerical examples on two-dimensional nonlinear convection–diffusion equations showed that the Newton-HSS method outperforms the Newton-GCG, Newton-USOR and Newton-GMRES methods in terms of the number of iterations and CPU time [17–19]. In [6] the semi-local convergence of the Newton-HSS method was shown, which guarantees that the sequence generated by Algorithm NHSS converges to a solution of (1.1) under reasonable hypotheses. Therefore, any initial point can be tested for suitability by verifying the semi-local convergence criteria. Unfortunately, the criteria given in [6] are not correct and the proof of Theorem 3.2 there breaks down (see Remark 2.2); that is, there is no guarantee that Algorithm NHSS converges under those criteria. In view of the importance of NHSS, we revise the proof of Theorem 3.2 (see our Theorem 2.1) and provide the correct sufficient convergence criterion. Notice that the Newton-HSS method can also be combined with a backtracking strategy (Algorithm NHSSB), for which global convergence holds for two choices of the forcing terms [6].
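For concreteness, the splitting iteration (1.4) can be sketched in a few lines of Python before we turn to the algorithms themselves. The Jacobi-type choice B = diag(A) and the small test system below are purely illustrative assumptions; any splitting A = B − C with nonsingular B fits the same template.

```python
import numpy as np

def splitting_iteration(A, b, m_max=200, tol=1e-10):
    # Split A = B - C with B = diag(A) (a Jacobi-type choice, used here only
    # for illustration) and iterate B x_m = C x_{m-1} + b, starting from x_0 = 0.
    B = np.diag(np.diag(A))
    C = B - A                                   # so that A = B - C
    x = np.zeros_like(b)
    for _ in range(m_max):
        x = np.linalg.solve(B, C @ x + b)
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            break
    return x

# usage on a small, diagonally dominant test system (hypothetical data)
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
print(splitting_iteration(A, b))
```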
The algorithm for IN is as follows:
Algorithm IN [8]
• Given x_0 and a positive tolerance tol.
• For k = 0, 1, 2, . . . until ∥F(x_k)∥ ≤ tol ∥F(x_0)∥ do:
– For a given η_k ∈ (0, 1), find s_k such that
∥F(x_k) + F′(x_k) s_k∥ < η_k ∥F(x_k)∥.
– Set x_{k+1} = x_k + s_k.
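A minimal sketch of Algorithm IN, under illustrative assumptions, is given next. The damped Richardson inner loop is only a stand-in for whichever inner solver one prefers (it merely has to deliver a step s_k satisfying the residual condition above), and the 2×2 test system is hypothetical.

```python
import numpy as np

def inexact_newton(F, J, x0, eta=0.1, tol=1e-8, max_outer=50, max_inner=500):
    # Sketch of Algorithm IN: at each outer step k, find s_k with
    # ||F(x_k) + F'(x_k) s_k|| < eta_k ||F(x_k)||, then set x_{k+1} = x_k + s_k.
    x = np.asarray(x0, dtype=float)
    F0_norm = np.linalg.norm(F(x))
    for _ in range(max_outer):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol * F0_norm:
            break
        A = J(x)
        s = np.zeros_like(x)
        omega = 1.0 / np.linalg.norm(A, 2)      # damping for the inner iteration
        for _ in range(max_inner):              # inner (Richardson) iteration
            if np.linalg.norm(Fx + A @ s) < eta * np.linalg.norm(Fx):
                break
            s = s - omega * (Fx + A @ s)
        x = x + s
    return x

# usage on a small illustrative system F(x) = 0 with solution (1, 1)
F = lambda x: np.array([x[0] ** 2 + x[1] - 2.0, x[0] + x[1] ** 2 - 2.0])
J = lambda x: np.array([[2 * x[0], 1.0], [1.0, 2 * x[1]]])
print(inexact_newton(F, J, np.array([2.0, 0.5])))
```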
For large systems with a sparse, non-Hermitian and positive definite matrix A, the HSS iteration method for the linear system Ax = b is given by
Algorithm HSS [3]
• Given an initial guess x_0 and positive constants α and tol.
• Split A into its Hermitian part H and its skew-Hermitian part S:
H = (1/2)(A + A^∗) and S = (1/2)(A − A^∗).
• For ℓ = 0, 1, 2, . . . until ∥b − A x_ℓ∥ ≤ tol ∥b − A x_0∥, compute x_{ℓ+1} by
(αI + H) x_{ℓ+1/2} = (αI − S) x_ℓ + b,
(αI + S) x_{ℓ+1} = (αI − H) x_{ℓ+1/2} + b.
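A direct Python transcription of Algorithm HSS follows. The dense solves with numpy.linalg.solve stand in for the sparse or factored solvers one would use in practice, and the value of α and the small test matrix are illustrative choices only.

```python
import numpy as np

def hss(A, b, alpha, x0=None, tol=1e-8, max_iter=500):
    # Sketch of Algorithm HSS for Ax = b with non-Hermitian positive definite A:
    #   (alpha I + H) x_{l+1/2} = (alpha I - S) x_l       + b,
    #   (alpha I + S) x_{l+1}   = (alpha I - H) x_{l+1/2} + b,
    # where H and S are the Hermitian and skew-Hermitian parts of A.
    n = A.shape[0]
    I = np.eye(n, dtype=A.dtype)
    H = 0.5 * (A + A.conj().T)
    S = 0.5 * (A - A.conj().T)
    x = np.zeros(n, dtype=A.dtype) if x0 is None else np.asarray(x0, dtype=A.dtype)
    r0 = np.linalg.norm(b - A @ x)
    for _ in range(max_iter):
        if np.linalg.norm(b - A @ x) <= tol * r0:
            break
        x_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
        x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x_half + b)
    return x

# usage on a small non-symmetric, positive definite test matrix (hypothetical)
A = np.array([[3.0, 1.0], [-1.0, 2.0]])
b = np.array([1.0, 1.0])
print(hss(A, b, alpha=1.0))
```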
Next, we present the Newton-HSS algorithm for solving large systems of nonlinear equations with a positive definite Jacobian matrix:
Algorithm NHSS (the Newton-HSS method [4])
• Given an initial guess x_0, positive constants α and tol, and a positive integer sequence {ℓ_k}_{k=0}^∞.
• For k = 0, 1, 2, . . . until ∥F(x_k)∥ ≤ tol ∥F(x_0)∥ do:
– Set d_{k,0} := 0.
– For ℓ = 0, 1, 2, . . . , ℓ_k − 1, apply Algorithm HSS:
(αI + H(x_k)) d_{k,ℓ+1/2} = (αI − S(x_k)) d_{k,ℓ} − F(x_k),
(αI + S(x_k)) d_{k,ℓ+1} = (αI − H(x_k)) d_{k,ℓ+1/2} − F(x_k),   (1.6)
and obtain d_{k,ℓ_k} such that
∥F(x_k) + F′(x_k) d_{k,ℓ_k}∥ < η_k ∥F(x_k)∥ for some η_k ∈ [0, 1),   (1.7)
where
H(x_k) = (1/2)(F′(x_k) + F′(x_k)^∗) and S(x_k) = (1/2)(F′(x_k) − F′(x_k)^∗)   (1.8)
are the Hermitian and skew-Hermitian parts of the Jacobian matrix F′(x_k), respectively.
– Set
x_{k+1} = x_k + d_{k,ℓ_k}.   (1.9)
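The inner–outer structure of Algorithm NHSS can be sketched as follows. The fixed forcing term η, the cap l_max on the number of inner sweeps, the value of α and the small test system are illustrative assumptions only; dense linear algebra again stands in for the sparse solvers used in practice.

```python
import numpy as np

def newton_hss(F, J, x0, alpha, eta=0.1, l_max=10, tol=1e-8, max_outer=50):
    # Sketch of Algorithm NHSS: at each outer step run HSS sweeps (1.6) on
    # F'(x_k) d = -F(x_k), starting from d_{k,0} = 0, stop the inner loop once
    # ||F(x_k) + F'(x_k) d|| < eta ||F(x_k)|| (or after l_max sweeps), and set
    # x_{k+1} = x_k + d as in (1.9).
    x = np.asarray(x0, dtype=float)
    F0_norm = np.linalg.norm(F(x))
    for _ in range(max_outer):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol * F0_norm:
            break
        A = J(x)
        n = A.shape[0]
        I = np.eye(n)
        H = 0.5 * (A + A.conj().T)              # Hermitian part, cf. (1.8)
        S = 0.5 * (A - A.conj().T)              # skew-Hermitian part, cf. (1.8)
        d = np.zeros(n)
        for _ in range(l_max):                  # HSS inner sweeps, cf. (1.6)
            if np.linalg.norm(Fx + A @ d) < eta * np.linalg.norm(Fx):
                break
            d_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ d - Fx)
            d = np.linalg.solve(alpha * I + S, (alpha * I - H) @ d_half - Fx)
        x = x + d                               # outer Newton-HSS update (1.9)
    return x

# usage on the same small illustrative system as above
F = lambda x: np.array([x[0] ** 2 + x[1] - 2.0, x[0] + x[1] ** 2 - 2.0])
J = lambda x: np.array([[2 * x[0], 1.0], [1.0, 2 * x[1]]])
print(newton_hss(F, J, np.array([2.0, 0.5]), alpha=1.0))
```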
Note that η_k may vary from step to step, unlike the fixed positive constant used in [4]. Further observe that if d_{k,ℓ_k} in (1.9) is expressed in terms of d_{k,0} = 0, we get
d_{k,ℓ_k} = −(I − T_k^{ℓ_k})(I − T_k)^{−1} B_k^{−1} F(x_k),   (1.10)
where T_k := T(α, x_k), B_k := B(α, x_k) and
T(α, x) = B(α, x)^{−1} C(α, x),
B(α, x) = (1/(2α))(αI + H(x))(αI + S(x)),   (1.11)
C(α, x) = (1/(2α))(αI − H(x))(αI − S(x)).
Using the above expressions for T_k and d_{k,ℓ_k}, we can write the Newton-HSS iteration (1.9) as
x_{k+1} = x_k − (I − T_k^{ℓ_k}) F′(x_k)^{−1} F(x_k).   (1.12)
A Kantorovich-type semi-local convergence analysis was presented in [6] for NHSS. However, the semi-local convergence criterion (15) in [6] is not correct (see Remark 2.2). Consequently, Theorem 3.2 in [6] is not correct, nor are the subsequent results based on (15). Moreover, the upper bound function g_3 (to be defined later) on the norm of the initial point is not the best that can be used under the conditions given in [6]. In this article, we present the correct version of Theorem 3.2 in [6] by providing the correct convergence criterion corresponding to (15) in [6], as well as the correct bound function ḡ_3 (to be defined later). Moreover, a corrected comparison is given with the "g" functions appearing in earlier works [6].
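The algebraic identities behind (1.10)–(1.12), namely B(α, x) − C(α, x) = F′(x) and hence (I − T_k)^{−1} B_k^{−1} = F′(x_k)^{−1}, are easy to check numerically. In the sketch below, the random, diagonally shifted test matrix is only a hypothetical stand-in for a Jacobian.

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha = 5, 1.0

# hypothetical "Jacobian": a random matrix shifted to make it positive definite
A = rng.standard_normal((n, n)) + n * np.eye(n)
I = np.eye(n)
H = 0.5 * (A + A.conj().T)                            # Hermitian part
S = 0.5 * (A - A.conj().T)                            # skew-Hermitian part

B = (alpha * I + H) @ (alpha * I + S) / (2 * alpha)   # B(alpha, x), cf. (1.11)
C = (alpha * I - H) @ (alpha * I - S) / (2 * alpha)   # C(alpha, x), cf. (1.11)
T = np.linalg.solve(B, C)                             # T(alpha, x) = B^{-1} C

print(np.allclose(B - C, A))                          # splitting: B - C = F'(x)
print(np.allclose(np.linalg.inv(I - T) @ np.linalg.inv(B),
                  np.linalg.inv(A)))                  # (I - T)^{-1} B^{-1} = F'(x)^{-1}
```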
2. Semi-local convergence analysis
The semi-local convergence of NHSS is based on the following conditions (A). Let x_0 ∈ C^n and let F : Ω ⊂ C^n −→ C^n be G-differentiable on an open neighborhood Ω_0 ⊂ Ω on which F′(x) is continuous and positive definite. Suppose F′(x) = H(x) + S(x), where H(x) and S(x) are as in (1.8) with x_k = x.
(A1) There exist positive constants β, γ and δ such that
max{∥H(x_0)∥, ∥S(x_0)∥} ≤ β,  ∥F′(x_0)^{−1}∥ ≤ γ,  ∥F(x_0)∥ ≤ δ.   (2.1)
(A2) There exist nonnegative constants L_h and L_s such that, for all x, y ∈ U(x_0, r) ⊂ Ω_0,
∥H(x) − H(y)∥ ≤ L_h ∥x − y∥,
∥S(x) − S(y)∥ ≤ L_s ∥x − y∥.   (2.2)
In what follows, L := L_h + L_s.
Next, we present the corrected version of Theorem 3.2 in [6].
Theorem 2.1. Assume that conditions (A) hold with the constants satisfying
δγ^2 L ≤ ḡ_3(η),   (2.3)
where ḡ_3(t) := (1 − t)^2 / (2(2 + t + 2t^2 − t^3)), η = max{η_k} < 1, r = max{r_1, r_2} with
r_1 = ((α + β)/L) (√(1 + 2ατθ/((2γ + γτθ)(α + β)^2)) − 1),
r_2 = (b − √(b^2 − 2ac))/a,   (2.4)
a = γL(1 + η)/(1 + 2γ^2 δLη),  b = 1 − η,  c = 2γδ,
and with ℓ∗ = lim inf_{k→∞} ℓ_k satisfying ℓ∗ > ⌊ln η / ln((τ + 1)θ)⌋ (here ⌊·⌋ denotes the largest integer less than or equal to the corresponding real number), τ ∈ (0, (1 − θ)/θ) and
θ ≡ θ(α, x_0) = ∥T(α, x_0)∥ < 1.   (2.5)
Then, the iteration sequence {x_k}_{k=0}^∞ generated by Algorithm NHSS is well defined and converges to x∗, so that F(x∗) = 0.
Proof. Simply follow the proof of Theorem 3.2 in [6], but use the function ḡ_3 instead of the function g_3 defined in the following remark.
Remark 2.2. The corresponding result in [6] used the bound function
g_3(t) = (1 − t)/(2(1 + t^2)),   (2.6)
that is, instead of (2.3) they used
δγ^2 L ≤ g_3(η).   (2.7)
However, condition (2.7) does not necessarily imply b^2 − 2ac ≥ 0, which means that r_2 does not necessarily exist, and the proof of Theorem 3.2 in [6] breaks down. That is, there is no guarantee that Algorithm NHSS converges under (2.7). Notice that our condition (2.3) is equivalent to b^2 − 2ac ≥ 0. We also have that
ḡ_3(t) < g_3(t) for each t ≥ 0,   (2.8)
so (2.3) implies (2.7) but not necessarily vice versa. Hence, our version of Theorem 3.2 is correct, i.e., Algorithm NHSS converges under (2.3).
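A small numerical sketch of the checks behind Theorem 2.1 and Remark 2.2 is given below, with g_3 and ḡ_3 as in (2.6) and (2.3). All constant values are hypothetical and serve only to illustrate how the criterion would be verified in practice.

```python
import numpy as np

def g3(t):      # bound function used in [6], cf. (2.6)
    return (1 - t) / (2 * (1 + t ** 2))

def g3_bar(t):  # corrected bound function, cf. (2.3)
    return (1 - t) ** 2 / (2 * (2 + t + 2 * t ** 2 - t ** 3))

def radius(alpha, beta, gamma, delta, L, eta, tau, theta):
    # Sketch of the checks of Theorem 2.1: verify criterion (2.3) and, if it
    # holds, return r = max{r1, r2} from (2.4); otherwise return None.
    assert 0 < theta < 1 and 0 < tau < (1 - theta) / theta
    if delta * gamma ** 2 * L > g3_bar(eta):
        return None                                   # criterion (2.3) violated
    r1 = (alpha + beta) / L * (np.sqrt(
        1 + 2 * alpha * tau * theta /
        ((2 * gamma + gamma * tau * theta) * (alpha + beta) ** 2)) - 1)
    a = gamma * L * (1 + eta) / (1 + 2 * gamma ** 2 * delta * L * eta)
    b, c = 1 - eta, 2 * gamma * delta
    r2 = (b - np.sqrt(b ** 2 - 2 * a * c)) / a        # real, since (2.3) holds
    return max(r1, r2)

# hypothetical constants satisfying (2.3): the radius r exists
print(radius(alpha=1.0, beta=2.0, gamma=0.5, delta=0.1,
             L=1.0, eta=0.25, tau=0.5, theta=0.5))

# hypothetical constants satisfying (2.7) but violating (2.3) (cf. Remark 2.2):
# for these values b^2 - 2ac < 0, so r2 is not defined
eta, gamma, delta, L = 0.25, 1.0, 0.2, 1.0
print(delta * gamma ** 2 * L <= g3(eta),              # (2.7) holds
      delta * gamma ** 2 * L <= g3_bar(eta))          # (2.3) fails
```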
3. Function bounds
The semi-local convergence of inexact Newton methods was presented in [22] under the conditions
∥F′(x_0)^{−1} F(x_0)∥ ≤ β,
∥F′(x_0)^{−1} (F′(x) − F′(y))∥ ≤ γ ∥x − y∥,
∥F′(x_0)^{−1} (F(x_n) + F′(x_n) s_n)∥ ≤ η_n ∥F′(x_0)^{−1} F(x_n)∥,
and
βγ ≤ g_1(η),
where
g_1(η) = (√((4η + 5)^3) − (2η^3 + 14η + 11)) / ((1 + η)(1 − η)^2).
More recently, Shen and Li [7] replaced g_1(η) by g_2(η), where
g_2(η) = (1 − η)^2 / ((1 + η)(2(1 + η) − η(1 − η)^2)).
These bound functions are used to obtain semi-local convergence results for the Newton-HSS method. In Fig. 1, we show the graphs of the bound functions g_1, g_2 and ḡ_3. Clearly, our bound function ḡ_3 improves all the earlier results. Moreover, as noted before, the function g_3 cannot be used, since it is an incorrect bound function.
Fig. 1. Graphs of g_1(t) (yellow), g_2(t) (green) and ḡ_3(t) (red). (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
4. Conclusion
The corrected semi-local convergence analysis for the Newton-HSS method has been presented, which guarantees the convergence of the sequence {x_k} generated by Algorithm NHSS to a solution of Eq. (1.1). The initial point can now be chosen correctly, also when Algorithm NHSSB is combined with the Newton-HSS method. Numerical examples solving convection–diffusion equations can be found in [6], where the superiority of these methods over related inner iteration methods in terms of run time is shown.
Acknowledgments
This research was supported by the grant SENECA, Spain 20928/PI/18 and by Ministerio de Ciencia y Tecnología, Spain PGC2018-095896-B-C21.
References
[1] I.K. Argyros, F. Szidarovszky, The Theory and Applications of Iteration Methods, CRC Press, Boca Raton, Florida, USA, 1993.
[2] I.K. Argyros, Á.A. Magreñán, A Contemporary Study of Iterative Methods, Elsevier (Academic Press), New York, 2018.
[3] Z.Z. Bai, G.H. Golub, M.K. Ng, Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems, SIAM J. Matrix Anal. Appl. 24 (2003) 603–626.
[4] Z.Z. Bai, X.P. Guo, The Newton-HSS methods for systems of nonlinear equations with positive-definite Jacobian matrices, J. Comput. Math. 28 (2010) 235–260.
[5] I.K. Argyros, Local convergence of inexact Newton-like-iterative methods and applications, Comput. Math. Appl. 39 (2000) 69–75.
[6] X.P. Guo, I.S. Duff, Semi-local and global convergence of the Newton-HSS method for systems of nonlinear equations, Numer. Linear Algebra Appl. 18 (2011) 299–315.
[7] W.P. Shen, C. Li, Kantorovich-type convergence criterion for inexact Newton methods, Appl. Numer. Math. 59 (2009) 1599–1611.
[8] Z.-Z. Bai, A class of two-stage iterative methods for systems of weakly nonlinear equations, Numer. Algorithms 14 (1997) 295–319.
[9] Z.-Z. Bai, V. Migallón, J. Penadés, D.B. Szyld, Block and asynchronous two-stage methods for mildly nonlinear systems, Numer. Math. 82 (1997) 1–20.
[10] Z.-Z. Bai, S.-L. Zhang, A regularized conjugate gradient method for symmetric positive definite system of linear equations, J. Comput. Math. 20 (2002) 437–448.
[11] Z.-Z. Bai, X. Yang, On HSS-based iteration methods for weakly nonlinear systems, Appl. Numer. Math. 59 (2009) 2923–2936.
[12] R.S. Dembo, S.C. Eisenstat, T. Steihaug, Inexact Newton methods, SIAM J. Numer. Anal. 19 (1982) 400–408.
[13] S.C. Eisenstat, H.F. Walker, Choosing the forcing terms in an inexact Newton method, SIAM J. Sci. Comput. 17 (1996) 16–32.
[14] X.P. Guo, I.S. Duff, Semilocal and global convergence of the Newton-HSS method for systems of nonlinear equations, Numer. Linear Algebra Appl. 18 (2011) 299–315.
[15] J.M. Ortega, W.C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970.
[16] Z.-Z. Bai, G.H. Golub, J.-Y. Pan, Preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite linear systems, Numer. Math. 98 (2004) 1–32.
[17] L. Li, T.Z. Huang, X.P. Liu, Asymmetric Hermitian and skew-Hermitian splitting methods for positive definite linear systems, Comput. Math. Appl. 54 (2007) 147–159.
[18] L. Li, T.Z. Huang, X.P. Liu, Modified Hermitian and skew-Hermitian splitting methods for non-Hermitian positive-definite linear systems, Numer. Linear Algebra Appl. 14 (2007) 217–235.
[19] C.X. Li, S.L. Wu, A modified GHSS method for non-Hermitian positive definite linear systems, Japan J. Indust. Appl. Math. 29 (2012) 253–268.
[20] Z.-Z. Bai, X.P. Guo, The Newton-HSS methods for systems of nonlinear equations with positive-definite Jacobian matrices, J. Comput. Math. 28 (2010) 235–260.
[21] M.Z. Zhu, Modified iteration based on the asymmetric HSS for weakly nonlinear systems, J. Comput. Anal. Appl. 15 (2013) 188–195.
[22] X.P. Guo, On semilocal convergence of inexact Newton methods, J. Comput. Math. 25 (2007) 231–242.