ISSN: 1693-6930, DOI: 10.12928/TELKOMNIKA.v21i3.21685
A generalized Dai-Liao type CG-method with a new monotone line search for unconstrained optimization
Rana Z. Al-Kawaz1, Abbas Y. Al-Bayati2
1Department of Mathematics, College of Basic Education, University of Telafer, Tall 'Afar, Mosul, Iraq
2College of Basic Education, University of Telafer, Tall 'Afar, Mosul, Iraq
Article Info

Article history: Received Sep 07, 2022; Revised Oct 27, 2022; Accepted Nov 12, 2022

ABSTRACT

In this paper, we present two developments. The first generalizes the updated damped Dai-Liao formula in an optimized manner; the second suggests a new monotone line search formula for our algorithms. The new algorithms with the new monotone line search yield good numerical results when compared with the standard Dai-Liao (DL) and modified Dai-Liao (MDL) algorithms. Through several theorems, the new algorithms are shown to converge faster to the optimum point. These comparisons are drawn in the results section using the measures (Iter, Eval-F, Time), which give a comprehensive picture of the efficiency of the new algorithms against the basic algorithms used in the paper.

Keywords: Dai-Liao CG-method; Global convergence property; Large-scale; Monotone line search; Optimum point

This is an open access article under the CC BY-SA license.
Corresponding Author:
Rana Z. Al-Kawaz
Department of Mathematics, College of Basic Education, University of Telafer, Tall 'Afar, Mosul, Iraq
Email: [email protected]
1. INTRODUCTION
Let us define the function (1):

$F:\Omega\subseteq\mathbb{R}^n\rightarrow\mathbb{R}^n$ (1)

Then the problem that we discuss in this article is (2):

$F(x)=0,\quad x\in\Omega$ (2)

where the solution vector $x\in\mathbb{R}^n$ and the function $F$ is continuous and satisfies the monotonicity inequality, i.e.:

$[F(x)-F(y)]^{T}(x-y)\geq 0$ (3)
There are several ways to solve this problem, such as Newton's method and the conjugate gradient procedure [1], [2]. Applications of this type of method have continued to this day [3]-[5]. Here, we discuss monotone equations and their solution methods because of their importance in many practical applications; for more details see [6]. The projection technique is one of the important methods used to find the solution of (2). Solodov and Svaiter [7] gave their attention to large-scale non-linear equations. Recently, many researchers have published new articles on finding solutions to constrained or unconstrained monotone equations by various methods, so the topic has attracted considerable interest, as in [8]-[15].
The projection technique relies on acceleration using the monotonicity of $F$ and updates the new point using the iteration:

$z_k=x_k+\alpha_k d_k$ (4)

as a trial point, where the hyperplane is:

$H_k=\{x\in\mathbb{R}^n \mid F(z_k)^{T}(x-z_k)=0\}$ (5)

To apply the projection technique, we take the update of the new point $x_{k+1}$, as given in [6], to be the projection of $x_k$ onto the hyperplane $H_k$. So, it can be evaluated by (6):

$x_{k+1}=P_{\Omega}[x_k-\lambda_k F(z_k)]\quad\text{and}\quad \lambda_k=\frac{F(z_k)^{T}(x_k-z_k)}{\|F(z_k)\|^{2}}$ (6)
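To make the projection update concrete, Listing 1 gives a minimal Python sketch of one iteration of (4)-(6). It is an illustration under our own assumptions: the toy monotone map and the box form of $P_{\Omega}$ are illustrative choices, not taken from the paper.

Listing 1. A Python sketch of the projection step (4)-(6)
import numpy as np

def projection_step(F, x_k, d_k, alpha_k, lower, upper):
    # trial point z_k = x_k + alpha_k d_k, eq. (4)
    z_k = x_k + alpha_k * d_k
    Fz = F(z_k)
    # step lambda_k from eq. (6); the hyperplane H_k of (5) separates x_k from the solution set
    lam = Fz @ (x_k - z_k) / (Fz @ Fz)
    # projection of x_k - lambda_k F(z_k) onto a box Omega (illustrative choice of P_Omega)
    return np.clip(x_k - lam * Fz, lower, upper)

# toy usage with the monotone map F(x) = x + sin(x)
F = lambda x: x + np.sin(x)
x = np.ones(5)
print(projection_step(F, x, -F(x), 0.5, -10.0, 10.0))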
This document is laid out as follows: we present the suggested generalized damped Dai-Liao (GDDL) method in section 2. A novel monotone line search is proposed, and the penalty of the generalized Dai-Liao (DL) parameters is determined, in section 3. Section 4 proves global convergence. In section 5 we provide the results of our numerical experiments.
2. GENERALIZED DAMPED DAI-LIAO
The thinking of many researchers has been concerned with finding a suitable parameter for the conjugate gradient (CG) method and imposing conditions on it to make the search direction conjugate and to reach the minimum point of the function in fewer steps. Among these researchers, Dai and Liao provided an appropriate parameter that always keeps the search direction conjugate, governed by a parameter $t\in(0,1)$; for more details see [16].
$\beta_k^{DL}=\frac{g_{k+1}^{T}y_k}{d_k^{T}y_k}-t\,\frac{g_{k+1}^{T}s_k}{d_k^{T}y_k}$ (7)
Later, Abubakar et al. [17] modified (7) by using a new formula for $t$ and the projection technique in their formula. Fatemi [18] presented a generalized method for the Dai-Liao parameter and its derivation by applying the penalty function to the parameter to achieve a sufficient descent condition in the search direction. In this section we rely on the same techniques, with the addition of a damped quasi-Newton condition, as in the following derivation:
$m(d)=f_{k+1}+g_{k+1}^{T}d+\frac{1}{2}d^{T}B_{k+1}d$ (8)

With $\nabla m(\alpha_{k+1}d_{k+1})$, the gradient of the model at $x_{k+2}$, as an estimation of $g_{k+2}$, it is easy to see that:

$\nabla m(\alpha_{k+1}d_{k+1})=g_{k+1}+\alpha_{k+1}B_{k+1}d_{k+1}$ (9)

Unfortunately, $\alpha_{k+1}$ in (9) is not available at the current iteration, because $d_{k+1}$ is unknown. Thus, we modify (9) and set:

$g_{k+2}=g_{k+1}+t\,B_{k+1}d_{k+1}$ (10)

where $t>0$ is a suitable approximation of $\alpha_{k+1}$, and the search direction of the CG-method is:

$d_{k+1}=-g_{k+1}+\beta_k d_k$ (11)
To obtain an efficient nonlinear CG method, we introduce the following optimization problem based on the penalty function:

$\min_{\beta_k}\Big[d_{k+1}^{T}d_{k+1}+\sigma\sum_{i=1}^{m}\big[(g_{k+2}^{T}s_{k-i})^{2}+(g_{k+1}^{T}y_{k-i})^{2}\big]\Big]$ (12)

with $i=1,2,3,\dots,m$. Now, substituting (10) and (11) in (12), and using the projection technique, we obtain:

$\min_{\beta_k}\Big[-\|F_{k+1}\|^{2}+\beta_k F_{k+1}^{T}d_k+\sigma\sum_{i=1}^{m}\big[(F_{k+1}^{T}s_{k-i})^{2}+2t\,F_{k+1}^{T}s_{k-i}\,d_{k+1}^{T}B_{k+1}s_{k-i}+t^{2}(d_{k+1}^{T}B_{k+1}s_{k-i})^{2}+(F_{k+1}^{T}y_{k-i})^{2}-2\beta_k F_{k+1}^{T}y_{k-i}\,d_k^{T}y_{k-i}+(\beta_k d_k^{T}y_{k-i})^{2}\big]\Big]$ (13)
After some algebraic simplification, we get:

$\beta_k=\frac{1}{q}\Big[-F_{k+1}^{T}d_k+2\sigma\sum_{i=1}^{m}F_{k+1}^{T}y_{k-i}\,d_k^{T}y_{k-i}+2t^{2}\sigma\sum_{i=1}^{m}F_{k+1}^{T}B_{k+1}s_{k-i}\,d_k^{T}B_{k+1}s_{k-i}-2t\sigma\sum_{i=1}^{m}F_{k+1}^{T}s_{k-i}\,d_k^{T}B_{k+1}s_{k-i}\Big]$ (14)

where:

$q=2t^{2}\sigma\sum_{i=1}^{m}(d_k^{T}B_{k+1}s_{k-i})^{2}+2\sigma\sum_{i=1}^{m}(d_k^{T}y_{k-i})^{2}$
To get a new parameter, let us assume that the Hessian approximation $B_{k+1}$ satisfies the extended damped quasi-Newton (QN) equation; incorporating the projection technique, we get:

$B_{k+1}s_{k-i}=\frac{1}{\theta_k}\big(\theta_k y_{k-i}+(1-\theta_k)B_k s_{k-i}\big)=\frac{1}{\theta_k}y_{k-i}^{D}=\bar{y}_{k-i}^{D}$ (15)

where $\theta_k$ is the projection step:

$\theta_k=\begin{cases}1 & \text{if}\ s_{k-i}^{T}y_{k-i}\geq s_{k-i}^{T}B_k s_{k-i}\\ \dfrac{\sigma\,s_{k-i}^{T}B_k s_{k-i}}{s_{k-i}^{T}B_k s_{k-i}-s_{k-i}^{T}y_{k-i}} & \text{if}\ s_{k-i}^{T}y_{k-i}<s_{k-i}^{T}B_k s_{k-i}\end{cases}$ (16)
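The damped choice (15)-(16) is easy to implement. Listing 2 is a hedged Python sketch: B_s stands for the product $B_k s_{k-i}$, and the default $\sigma=0.8$ echoes Powell's classical damping constant; the helper names and the default value are our own assumptions.

Listing 2. A Python sketch of the damped quantities (15)-(16)
import numpy as np

def damped_theta(s, y, B_s, sigma=0.8):
    # theta_k of eq. (16); B_s is B_k s_{k-i}
    sTy, sTBs = s @ y, s @ B_s
    if sTy >= sTBs:
        return 1.0
    return sigma * sTBs / (sTBs - sTy)

def damped_ybar(s, y, B_s, sigma=0.8):
    # ybar^D_{k-i} of eq. (15)
    th = damped_theta(s, y, B_s, sigma)
    return (th * y + (1.0 - th) * B_s) / th

# toy usage with B_k = I, so that B_k s = s
s, y = np.array([1.0, 0.0]), np.array([0.1, 0.0])
print(damped_theta(s, y, s), damped_ybar(s, y, s))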
Then:

$\beta_k^{new}=\frac{-F_{k+1}^{T}d_k}{2\sigma\sum_{i=1}^{m}\big(t^{2}(d_k^{T}\bar{y}_{k-i}^{D})^{2}+(d_k^{T}y_{k-i})^{2}\big)}+\frac{\sum_{i=1}^{m}F_{k+1}^{T}y_{k-i}\,d_k^{T}y_{k-i}}{\sum_{i=1}^{m}\big(t^{2}(d_k^{T}\bar{y}_{k-i}^{D})^{2}+(d_k^{T}y_{k-i})^{2}\big)}+\frac{t^{2}\sum_{i=1}^{m}F_{k+1}^{T}\bar{y}_{k-i}^{D}\,d_k^{T}\bar{y}_{k-i}^{D}}{\sum_{i=1}^{m}\big(t^{2}(d_k^{T}\bar{y}_{k-i}^{D})^{2}+(d_k^{T}y_{k-i})^{2}\big)}-\frac{t\sum_{i=1}^{m}F_{k+1}^{T}s_{k-i}\,d_k^{T}\bar{y}_{k-i}^{D}}{\sum_{i=1}^{m}\big(t^{2}(d_k^{T}\bar{y}_{k-i}^{D})^{2}+(d_k^{T}y_{k-i})^{2}\big)}$ (17)

So, there are two possible cases for this parameter. Case I: if $s_{k-i}^{T}y_{k-i}\geq s_{k-i}^{T}B_k s_{k-i}$ then $\theta_k=1$ and $\bar{y}_{k-i}^{D}=\frac{y_{k-i}}{\theta_k}$.
$\beta_k^{new1}=\frac{-F_{k+1}^{T}d_k}{2\sigma\sum_{i=1}^{m}\big(\frac{t^{2}}{\theta_k^{2}}(d_k^{T}y_{k-i})^{2}+(d_k^{T}y_{k-i})^{2}\big)}+\frac{\sum_{i=1}^{m}F_{k+1}^{T}y_{k-i}\,d_k^{T}y_{k-i}}{\sum_{i=1}^{m}\big(\frac{t^{2}}{\theta_k^{2}}(d_k^{T}y_{k-i})^{2}+(d_k^{T}y_{k-i})^{2}\big)}+\frac{\frac{t^{2}}{\theta_k^{2}}\sum_{i=1}^{m}F_{k+1}^{T}y_{k-i}\,d_k^{T}y_{k-i}}{\sum_{i=1}^{m}\big(\frac{t^{2}}{\theta_k^{2}}(d_k^{T}y_{k-i})^{2}+(d_k^{T}y_{k-i})^{2}\big)}-\frac{\frac{t}{\theta_k}\sum_{i=1}^{m}F_{k+1}^{T}s_{k-i}\,d_k^{T}y_{k-i}}{\sum_{i=1}^{m}\big(\frac{t^{2}}{\theta_k^{2}}(d_k^{T}y_{k-i})^{2}+(d_k^{T}y_{k-i})^{2}\big)}$ (18)
Put $s_k=\mu_k d_k$. Then:

$\beta_k^{new1}=\frac{\sum_{i=1}^{m}F_{k+1}^{T}y_{k-i}\,d_k^{T}y_{k-i}}{\sum_{i=1}^{m}(d_k^{T}y_{k-i})^{2}}-\frac{\theta_k t}{t^{2}+1}\,\frac{\sum_{i=1}^{m}F_{k+1}^{T}s_{k-i}\,d_k^{T}y_{k-i}}{\sum_{i=1}^{m}(d_k^{T}y_{k-i})^{2}}-\frac{\mu_k F_{k+1}^{T}s_k}{2\sigma_1(t^{2}+1)\sum_{i=1}^{m}(d_k^{T}y_{k-i})^{2}}$ (19a)

We investigate the proposed method when $\sigma_1$ approaches infinity, because by making this coefficient larger we penalize violations of the conjugacy condition and the orthogonality property more severely, thereby forcing the minimizer of (12) closer to that of the linear CG method:
$\beta_k^{new1}=\frac{\sum_{i=1}^{m}F_{k+1}^{T}y_{k-i}\,d_k^{T}y_{k-i}}{\sum_{i=1}^{m}(d_k^{T}y_{k-i})^{2}}-\frac{\theta_k t}{t^{2}+1}\,\frac{\sum_{i=1}^{m}F_{k+1}^{T}s_{k-i}\,d_k^{T}y_{k-i}}{\sum_{i=1}^{m}(d_k^{T}y_{k-i})^{2}}$ (19b)

When compared to (7), we notice that the value is:

$t_k=\frac{\theta_k t}{t^{2}+1},\qquad t\in\Big(0,\frac{1}{2}\Big)$ (20)
Case II: if $s_{k-i}^{T}y_{k-i}<s_{k-i}^{T}B_k s_{k-i}$ then $\bar{y}_{k-i}^{D}=\frac{y_{k-i}^{D}}{\theta_k}$ from (15) and $s_k=\mu_k d_k$ by the projection technique, which converts the $\beta_k^{new}$ form to:

$\beta_k^{new2}=\Big[\frac{t^{2}\sum_{i=1}^{m}F_{k+1}^{T}y_{k-i}^{D}\,d_k^{T}y_{k-i}^{D}}{\sum_{i=1}^{m}\big(t^{2}(d_k^{T}y_{k-i}^{D})^{2}+\theta_k^{2}(d_k^{T}y_{k-i})^{2}\big)}-\frac{t\theta_k\sum_{i=1}^{m}F_{k+1}^{T}s_{k-i}\,d_k^{T}y_{k-i}^{D}}{\sum_{i=1}^{m}\big(t^{2}(d_k^{T}y_{k-i}^{D})^{2}+\theta_k^{2}(d_k^{T}y_{k-i})^{2}\big)}\Big]+\Big[\frac{\sum_{i=1}^{m}F_{k+1}^{T}y_{k-i}\,d_k^{T}y_{k-i}}{\sum_{i=1}^{m}\big(t^{2}(d_k^{T}y_{k-i}^{D})^{2}+\theta_k^{2}(d_k^{T}y_{k-i})^{2}\big)}-\frac{\mu_k F_{k+1}^{T}s_k}{2\sigma_2\sum_{i=1}^{m}\big(t^{2}(d_k^{T}y_{k-i}^{D})^{2}+\theta_k^{2}(d_k^{T}y_{k-i})^{2}\big)}\Big]$ (21)

Now, using algebraic simplifications, we obtain:
$\beta_k^{new2}=\frac{1}{q_k}\Big(\sum_{i=1}^{m}\big[t_1 F_{k+1}^{T}y_{k-i}-t_2 F_{k+1}^{T}s_{k-i}\big]+\sum_{i=1}^{m}\frac{d_k^{T}s_{k-i}}{d_k^{T}y_{k-i}}\sum_{i=1}^{m}\big[t_3 F_{k+1}^{T}y_{k-i}-t_4 F_{k+1}^{T}s_{k-i}\big]-\frac{\mu_k}{2\sigma_2}\sum_{i=1}^{m}d_k^{T}y_{k-i}\,F_{k+1}^{T}s_k\Big)$ (22a)

i.e.:

$q_k=\sum_{i=1}^{m}\big(t^{2}(d_k^{T}y_{k-i}^{D})^{2}+\theta_k^{2}(d_k^{T}y_{k-i})^{2}\big)$, $t_1=t^{2}\mu_k^{2}+\theta_k^{2}$ and $t_2=t\,\mu_k\theta_k-t^{2}\theta_k(1-\theta_k)$,
$t_3=t^{2}\theta_k(1-\theta_k)$ and $t_4=t\,\mu_k(1-\theta_k)-t^{2}(1-\theta_k)^{2}$
As we did with $\sigma_1$, letting $\sigma_2$ approach infinity, we use the parameter obtained in this limit:

$\beta_k^{new2}=\frac{1}{q_k}\Big(\sum_{i=1}^{m}\big[t_1 F_{k+1}^{T}y_{k-i}-t_2 F_{k+1}^{T}s_{k-i}\big]+\sum_{i=1}^{m}\frac{d_k^{T}s_{k-i}}{d_k^{T}y_{k-i}}\sum_{i=1}^{m}\big[t_3 F_{k+1}^{T}y_{k-i}-t_4 F_{k+1}^{T}s_{k-i}\big]\Big)$ (22b)

to obtain better results, as shown in section 5. We have another paper on this type of method, but without generalizing the original formula [17].
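To illustrate how the limiting parameter (19b), together with the weight (20), is evaluated over the last $m$ pairs, Listing 3 gives a small Python sketch; F1 denotes $F_{k+1}$, and the memory lists of $(s_{k-i},y_{k-i})$ pairs are our own illustrative structure, not code from the paper.

Listing 3. A Python sketch of the limiting parameter (19b)
import numpy as np

def beta_new1(F1, d, s_mem, y_mem, t, theta):
    # numerators and denominator of eq. (19b) over the stored pairs
    num_y = sum((F1 @ y) * (d @ y) for y in y_mem)
    num_s = sum((F1 @ s) * (d @ y) for s, y in zip(s_mem, y_mem))
    den = sum((d @ y) ** 2 for y in y_mem)
    t_k = theta * t / (t ** 2 + 1.0)   # the DL-type weight of eq. (20)
    return (num_y - t_k * num_s) / den

# toy usage with random data and m = 3 stored pairs
rng = np.random.default_rng(0)
F1, d = rng.normal(size=4), rng.normal(size=4)
s_mem = [rng.normal(size=4) for _ in range(3)]
y_mem = [rng.normal(size=4) for _ in range(3)]
print(beta_new1(F1, d, s_mem, y_mem, t=0.2, theta=1.0))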
3. NEW PENALTY PARAMETER
Due to the importance of conjugate gradient methods in this paper, we highlight them in the derivation of the new algorithms. Now we derive further coefficients of the penalty function. In this section, we focus on the two new parameters defined in (19) and (22), respectively; we then check and update the derived parameters by relying on the sufficient descent direction of the CG method:
3.1. Lemma 1
Assume that the solution sequence is generated by the method (19) with the monotone line search; then, for some positive scalars $\delta_1$ and $\delta_2$ satisfying $\delta_1+\delta_2<1$, we have:

$F_{k+1}^{T}d_{k+1}\leq-(1-\delta_1-\delta_2)\|F_{k+1}\|^{2}$ (23)

when:

$|\theta_k t-1|\leq\sqrt{\frac{2\delta_2(y_k^{T}s_k)}{\|s_k\|\sum_{i=1}^{m}\|s_{k-i}\|}}$ (24)

$\sigma_1=\frac{2\delta_1\|F_{k+1}\|^{2}}{m(t^{2}+1)\max_{i=1,\dots,m}\big((y_{k-i}-0.5\mu_k s_{k-i})^{T}F_{k+1}\big)^{2}},\qquad \mu_k\leq 1$ (25)

Proof: using (19a) in (11), we have:
$F_{k+1}^{T}d_{k+1}=-\|F_{k+1}\|^{2}+\frac{1}{\sum_{i=1}^{m}(y_{k-i}^{T}s_k)^{2}}\Big((F_{k+1}^{T}y_k)(y_k^{T}s_k)(F_{k+1}^{T}s_k)-\frac{\theta_k t}{t^{2}+1}(F_{k+1}^{T}s_k)(y_k^{T}s_k)(F_{k+1}^{T}s_k)\Big)-\frac{\mu_k}{2\sigma_1(t^{2}+1)}\,\frac{(F_{k+1}^{T}s_k)^{2}}{\sum_{i=1}^{m}(y_{k-i}^{T}s_k)^{2}}$ (26)

Since $\mu_k\leq 1$, this implies that:

$F_{k+1}^{T}d_{k+1}\leq-\|F_{k+1}\|^{2}+\frac{1}{\sum_{i=1}^{m}(y_{k-i}^{T}s_k)^{2}}\sum_{i=1}^{m}\big((y_{k-i}-0.5\mu_k s_{k-i})^{T}F_{k+1}\big)(y_{k-i}^{T}s_k)(F_{k+1}^{T}s_k)+\Big(\frac{1}{2}-\frac{\theta_k t}{t^{2}+1}\Big)\frac{1}{\sum_{i=1}^{m}(y_{k-i}^{T}s_k)^{2}}\sum_{i=1}^{m}(F_{k+1}^{T}s_{k-i})(y_{k-i}^{T}s_k)(F_{k+1}^{T}s_k)-\frac{\mu_k}{2\sigma_1(t^{2}+1)}\,\frac{(F_{k+1}^{T}s_k)^{2}}{\sum_{i=1}^{m}(y_{k-i}^{T}s_k)^{2}}$ (27)

Using the inequality $xy\leq\frac{t'}{4}x^{2}+\frac{1}{t'}y^{2}$, where $x$, $y$ and $t'$ are positive scalars, we have:

$F_{k+1}^{T}d_{k+1}\leq-\|F_{k+1}\|^{2}+\frac{t'}{4\sum_{i=1}^{m}(y_{k-i}^{T}s_k)^{2}}\sum_{i=1}^{m}\big((y_{k-i}-0.5\mu_k s_{k-i})^{T}F_{k+1}\big)^{2}(y_{k-i}^{T}s_k)^{2}+\frac{m}{t'\sum_{i=1}^{m}(y_{k-i}^{T}s_k)^{2}}(F_{k+1}^{T}s_k)^{2}-\frac{\mu_k}{2\sigma_1(t^{2}+1)\sum_{i=1}^{m}(y_{k-i}^{T}s_k)^{2}}(F_{k+1}^{T}s_k)^{2}+\frac{(\theta_k t-1)^{2}}{2y_k^{T}s_k(t^{2}+1)}\sum_{i=1}^{m}|s_{k-i}^{T}F_{k+1}||F_{k+1}^{T}s_k|$ (28)

Let $t'=2m\sigma_1(t^{2}+1)$; then:
$F_{k+1}^{T}d_{k+1}\leq-\|F_{k+1}\|^{2}+\frac{\sigma_1 m(t^{2}+1)}{2}\max_{i=1,\dots,m}\big((y_{k-i}-0.5\mu_k s_{k-i})^{T}F_{k+1}\big)^{2}+\frac{(\theta_k t-1)^{2}}{2y_k^{T}s_k(t^{2}+1)}\sum_{i=1}^{m}|s_{k-i}^{T}F_{k+1}||F_{k+1}^{T}s_k|$ (29)

The Cauchy-Schwarz inequality implies:

$F_{k+1}^{T}d_{k+1}\leq-\Big[1-\frac{\sigma_1 m(t^{2}+1)}{2\|F_{k+1}\|^{2}}\max_{i=1,\dots,m}\big((y_{k-i}-0.5\mu_k s_{k-i})^{T}F_{k+1}\big)^{2}-\frac{(\theta_k t-1)^{2}}{2y_k^{T}s_k(t^{2}+1)}\|s_k\|\sum_{i=1}^{m}\|s_{k-i}\|\Big]\|F_{k+1}\|^{2}$ (30)

$F_{k+1}^{T}d_{k+1}\leq-(1-\delta_1-\delta_2)\|F_{k+1}\|^{2}\leq-\tau_1\|F_{k+1}\|^{2}$ (31)

Since $t$ is an approximation of the step size, we use the updated formula:

$t=\begin{cases}t_k & \text{if}\ |\theta_k t_k-1|\leq\sqrt{\dfrac{2\delta_2(y_k^{T}s_k)}{\|s_k\|\sum_{i=1}^{m}\|s_{k-i}\|}}\\ 1+\sqrt{\dfrac{2\delta_2(y_k^{T}s_k)}{\|s_k\|\sum_{i=1}^{m}\|s_{k-i}\|}} & \text{otherwise}\end{cases}$ (32)

Hence, the proof is completed.
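Listing 4 sketches the safeguarded update (32) for the approximation $t$ in Python; delta2 is the scalar of lemma 1 and s_mem holds the stored $s_{k-i}$. The default value of delta2 is an assumption for illustration, and monotonicity of $F$ keeps $y_k^{T}s_k\geq 0$, so the square root is real.

Listing 4. A Python sketch of the safeguarded update (32)
import numpy as np

def update_t(t_k, theta, y, s, s_mem, delta2=0.1):
    # bound of eq. (24); y^T s >= 0 holds for a monotone F
    bound = np.sqrt(2.0 * delta2 * (y @ s) /
                    (np.linalg.norm(s) * sum(np.linalg.norm(si) for si in s_mem)))
    # keep t_k if it already satisfies (24), otherwise reset as in (32)
    return t_k if abs(theta * t_k - 1.0) <= bound else 1.0 + bound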
3.2. Lemma 2
Assume that the solution sequence is generated by the new method (22) with the monotone line search; then, for some positive scalars $\delta_3$, $\delta_4$, $\delta_5$ and $\delta_6$ satisfying $\delta_3+\delta_4+\delta_5+\delta_6<1$, we have:

$F_{k+1}^{T}d_{k+1}\leq-(1-\delta_3-\delta_4-\delta_5-\delta_6)\|F_{k+1}\|^{2}$ (33)

$|t_2-2|\leq\sqrt{\frac{2\delta_4\theta_k^{2}(y_k^{T}s_k)}{\|s_k\|\sum_{i=1}^{m}\|s_{k-i}\|}}\quad\text{and}\quad |t_4-2|\leq\sqrt{\frac{2\delta_6 t^{2}(1-\theta_k)^{2}(s_k^{T}s_k)}{\|s_k\|\sum_{i=1}^{m}\|s_{k-i}\|}}$ (34)

and:

$\sigma_{2a}=\frac{2\delta_3\|F_{k+1}\|^{2}}{m\max_{i=1,\dots,m}\big((t_1 y_{k-i}-0.5\mu_k s_{k-i})^{T}F_{k+1}\big)^{2}}\quad\text{and}\quad \sigma_{2b}=\frac{2\delta_5\|F_{k+1}\|^{2}}{m\max_{i=1,\dots,m}\big((t_3 y_{k-i}-0.5\mu_k s_{k-i})^{T}F_{k+1}\big)^{2}}$ (35)

where $\mu_k\leq 1$ is a scalar.
Proof: substituting (22a) in (11) and multiplying by $F_{k+1}$, we have:

$F_{k+1}^{T}d_{k+1}=-\|F_{k+1}\|^{2}+\frac{1}{q_k}\Big(\sum_{i=1}^{m}\big[t_1(F_{k+1}^{T}y_{k-i})(y_{k-i}^{T}s_k)(F_{k+1}^{T}s_k)-t_2(F_{k+1}^{T}s_{k-i})(y_{k-i}^{T}s_k)(F_{k+1}^{T}s_k)\big]+\sum_{i=1}^{m}\frac{d_k^{T}s_{k-i}}{d_k^{T}y_{k-i}}\sum_{i=1}^{m}\big[t_3(F_{k+1}^{T}y_{k-i})(y_{k-i}^{T}s_k)(F_{k+1}^{T}s_k)-t_4(F_{k+1}^{T}s_{k-i})(y_{k-i}^{T}s_k)(F_{k+1}^{T}s_k)\big]\Big)-\frac{\mu_k}{2\sigma_2 q_k}(F_{k+1}^{T}s_k)^{2}$ (36)

$F_{k+1}^{T}d_{k+1}=-\|F_{k+1}\|^{2}+\frac{1}{q_k}\sum_{i=1}^{m}\big((t_1 y_{k-i}-0.5\mu_k s_{k-i})^{T}F_{k+1}\big)(y_{k-i}^{T}s_k)(F_{k+1}^{T}s_k)+\frac{1}{q_k}\Big[\frac{1}{2}-t_2\Big]\sum_{i=1}^{m}(F_{k+1}^{T}s_{k-i})(y_{k-i}^{T}s_k)(F_{k+1}^{T}s_k)+\frac{1}{q_k}\sum_{i=1}^{m}\big((t_3 y_{k-i}-0.5\mu_k s_{k-i})^{T}F_{k+1}\big)(s_k^{T}s_k)(F_{k+1}^{T}s_k)+\frac{1}{q_k}\Big[\frac{1}{2}-t_4\Big]\sum_{i=1}^{m}(F_{k+1}^{T}s_{k-i})(s_k^{T}s_k)(F_{k+1}^{T}s_k)-\frac{\mu_k}{2\sigma_2 q_k}(F_{k+1}^{T}s_k)^{2}$ (37)

By following the same steps as in lemma 3.1, we get:

$F_{k+1}^{T}d_{k+1}\leq-\|F_{k+1}\|^{2}+\sigma_{2a}\sum_{i=1}^{m}\big((t_1 y_{k-i}-0.5\mu_k s_{k-i})^{T}F_{k+1}\big)^{2}+\frac{(t_2-2)^{2}}{2\theta_k^{2}y_k^{T}s_k}\sum_{i=1}^{m}|s_{k-i}^{T}F_{k+1}||F_{k+1}^{T}s_k|+\sigma_{2b}\sum_{i=1}^{m}\big((t_3 y_{k-i}-0.5\mu_k s_{k-i})^{T}F_{k+1}\big)^{2}+\frac{(t_4-2)^{2}}{2t^{2}(1-\theta_k)^{2}s_k^{T}s_k}\sum_{i=1}^{m}|s_{k-i}^{T}F_{k+1}||F_{k+1}^{T}s_k|$ (38)

The Cauchy-Schwarz inequality implies:

$F_{k+1}^{T}d_{k+1}\leq-\Big[1-\frac{\sigma_{2a}m}{2\|F_{k+1}\|^{2}}\max_{i=1,\dots,m}\big((t_1 y_{k-i}-0.5\mu_k s_{k-i})^{T}F_{k+1}\big)^{2}-\frac{(t_2-2)^{2}}{2\theta_k^{2}y_k^{T}s_k}\|s_k\|\sum_{i=1}^{m}\|s_{k-i}\|-\frac{\sigma_{2b}m}{2\|F_{k+1}\|^{2}}\max_{i=1,\dots,m}\big((t_3 y_{k-i}-0.5\mu_k s_{k-i})^{T}F_{k+1}\big)^{2}-\frac{(t_4-2)^{2}}{2t^{2}(1-\theta_k)^{2}s_k^{T}s_k}\|s_k\|\sum_{i=1}^{m}\|s_{k-i}\|\Big]\|F_{k+1}\|^{2}$ (39)

Then, $F_{k+1}^{T}d_{k+1}\leq-(1-\delta_3-\delta_4-\delta_5-\delta_6)\|F_{k+1}\|^{2}\leq-\tau_2\|F_{k+1}\|^{2}$.
The proof is completed. In the next paragraph, we describe Algorithm 1 (the modified Dai-Liao conjugate gradient (MDL-CG) algorithm) used for the numerical comparison, which is an update of the Dai-Liao algorithm:
Algorithm 1. MDL-CG [19]
Given $x_0\in\Omega$, $\rho,\sigma\in(0,1)$, stop test $\epsilon>0$; set $k=0$.
Step 1: evaluate $F(x_k)$ and test if $\|F(x_k)\|\leq\epsilon$; stop, else go to step 2.
Step 2: evaluate $d_k$ by (11), substituting $\beta_k^{MDL}=\frac{F_{k+1}^{T}y_k}{y_k^{T}s_k}-t_k\frac{F_{k+1}^{T}s_k}{y_k^{T}s_k}$ and $t_k=p\frac{\|y_k\|^{2}}{s_k^{T}y_k}-q\frac{s_k^{T}y_k}{\|s_k\|^{2}}$. If $d_k=0$, stop.
Step 3: compute $z_k=x_k+\alpha_k d_k$, with the step-size $\alpha_k=\rho^{m_k}$, where $m_k$ is the smallest non-negative integer $m$ such that:

$-F(x_k+\rho^{m}d_k)^{T}d_k>\sigma\rho^{m}\|F(x_k+\rho^{m}d_k)\|\|d_k\|^{2}$ (40)

Step 4: check if $z_k\in\Omega$ and $\|F(z_k)\|\leq\epsilon$; stop.
Step 5: let $k=k+1$ and go to step 1.
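Listing 5 gives a hedged backtracking reading of the line search (40): alpha is shrunk by rho until (40) holds. The default constants and the cap on backtracking steps are our own illustrative choices, not values from [19].

Listing 5. A Python sketch of the monotone line search (40)
import numpy as np

def mdl_line_search(F, x, d, rho=0.5, sigma=1e-4, max_back=50):
    alpha = 1.0
    for _ in range(max_back):
        Fz = F(x + alpha * d)
        # acceptance test of eq. (40)
        if -(Fz @ d) > sigma * alpha * np.linalg.norm(Fz) * (d @ d):
            return alpha
        alpha *= rho   # alpha = rho^m for the next m
    return alpha

# toy usage on the monotone map F(x) = x + sin(x)
F = lambda x: x + np.sin(x)
x = np.full(3, 2.0)
print(mdl_line_search(F, x, -F(x)))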
The new Algorithm 2 (GDDL-CG) is a generalization according to the value of $m$, and it is an update and damping of the Dai-Liao algorithm, as in the following steps:
Algorithm 2. GDDL-CG
Given $x_0\in\Omega$, $r,\rho,\sigma,\gamma\in(0,1)$, stop test $\epsilon>0$; set $k=0$.
Step 1: evaluate $F(x_k)$ and test if $\|F(x_k)\|\leq\epsilon$; stop, else go to step 2.
Step 2: when $y_k^{T}s_k\geq s_k^{T}s_k$, compute $\sigma_1$ from (25); if $\sigma_1\neq\infty$ take $\beta_k^{new1}$ from (19a), else from (19b).
Step 3: when $y_k^{T}s_k<s_k^{T}s_k$, compute $\sigma_2$ from (35); if $\sigma_2\neq\infty$ take $\beta_k^{new2}$ from (22a), else from (22b).
Step 4: compute $d_k$ by (11) and stop if $d_k=0$.
Step 5: set $z_k=x_k+\alpha_k d_k$, where $\alpha_k=\gamma\rho^{m_k}$ with $m_k$ the smallest non-negative integer $m$ such that:

$-F(x_k+\gamma\rho^{m}d_k)^{T}d_k>\sigma\gamma\rho^{m}\Big[r\|d_k\|^{2}+(1-r)\frac{\|d_k\|^{2}}{\|F_k\|^{2}}\Big]$ (41)

Step 6: if $z_k\in\Omega$ and $\|F(z_k)\|\leq\epsilon$ stop, else compute the point $x_{k+1}$ from (6).
Step 7: let $k=k+1$ and go to step 1.
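Listing 6 condenses the GDDL-CG loop into runnable Python for the simplest setting: memory $m=1$, case I with $\theta_k=1$ (so $\beta_k$ comes from (19b) with the weight (20)), and the new monotone line search (41). It is a sketch under these stated assumptions, not the authors' MATLAB implementation, and it omits the $\sigma_1$, $\sigma_2$ switching of steps 2 and 3.

Listing 6. A condensed Python sketch of Algorithm 2 (GDDL-CG)
import numpy as np

def gddl_cg(F, x0, eps=1e-6, r=0.5, sigma=1e-4, rho=0.5, gamma=1.0,
            t=0.2, max_iter=500):
    x = x0.astype(float)
    d = -F(x)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) <= eps or np.linalg.norm(d) == 0.0:
            return x                      # stop tests of steps 1 and 4
        alpha = gamma
        while True:                       # new monotone line search, eq. (41)
            Fz = F(x + alpha * d)
            rhs = sigma * alpha * (r * (d @ d) + (1 - r) * (d @ d) / (Fx @ Fx))
            if -(Fz @ d) > rhs or alpha < 1e-12:
                break
            alpha *= rho
        z = x + alpha * d
        Fz = F(z)
        if np.linalg.norm(Fz) <= eps:
            return z                      # step 6 stop test
        lam = Fz @ (x - z) / (Fz @ Fz)
        x_new = x - lam * Fz              # projection update, eq. (6)
        s, y = x_new - x, F(x_new) - Fx
        t_k = t / (t ** 2 + 1.0)          # eq. (20) with theta_k = 1
        beta = (F(x_new) @ y - t_k * (F(x_new) @ s)) / (d @ y)  # eq. (19b), m = 1
        d = -F(x_new) + beta * d          # CG direction, eq. (11)
        x = x_new
    return x

# toy monotone system: F(x) = x + sin(x) has its unique zero at the origin
F = lambda x: x + np.sin(x)
print(gddl_cg(F, np.full(5, 3.0)))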
4. GLOBAL CONVERGENCE
In the previous section, we prefaced the proof of the convergence condition by establishing the sufficient descent property through lemmas 3.1 and 3.2. In this section we first state a set of assumptions, then move on to the theorems, adding several lemmas on the step length of the new algorithm. We begin with the assumptions needed for the convergence proof:
4.1. Assumption
Suppose $F$ fulfills the following assumptions:
a) The solution set of (2) is non-empty.
b) The function $F$ is Lipschitz continuous, i.e.:

$\|F(x)-F(y)\|\leq L\|x-y\|,\quad\forall x,y\in\mathbb{R}^{n}$ (42)

c) $F$ satisfies:

$\langle F(x)-F(y),x-y\rangle\geq c\|x-y\|^{2},\quad\forall x,y\in\mathbb{R}^{n},\ c>0$ (43)

4.2. Lemma 1
Assume that $\bar{x}\in\mathbb{R}^{n}$ satisfies $F(\bar{x})=0$ and that $\{x_k\}$ is generated by the new algorithm GDDL-CG. If lemmas 3.1 and 3.2 hold, then $\|x_{k+1}-\bar{x}\|^{2}\leq\|x_k-\bar{x}\|^{2}-\|x_{k+1}-x_k\|^{2}$, and:

$\sum_{k=0}^{\infty}\|x_{k+1}-x_k\|<\infty$ (44)
4.3. Lemma 2
Suppose $\{x_k\}$ is generated by the new algorithm GDDL-CG; then:

$\lim_{k\to\infty}\alpha_k\|d_k\|=0$ (45)

Proof: since the sequence $\{\|x_k-\bar{x}\|\}$ is non-increasing, $\{x_k\}$ is bounded and $\lim_{k\to\infty}\|x_{k+1}-x_k\|=0$. From (3) and (6), using the line search, we have:

$\|x_{k+1}-x_k\|=\frac{|F(z_k)^{T}(x_k-z_k)|}{\|F(z_k)\|^{2}}\|F(z_k)\|=\frac{|\alpha_k F(z_k)^{T}d_k|}{\|F(z_k)\|}\geq\alpha_k\|d_k\|\geq 0$ (46)

Then the proof is completed.
4.4. Theorem
Let $\{x_k\}$ and $\{z_k\}$ be the sequences generated by the new algorithm GDDL-CG; then:

$\liminf_{k\to\infty}\|F(x_k)\|=0$ (47)

Proof: case I: if $\liminf_{k\to\infty}\|d_k\|=0$, then $\liminf_{k\to\infty}\|F(x_k)\|=0$. The sequence $\{x_k\}$ has some accumulation point $\bar{x}$ such that $F(\bar{x})=0$; hence $\{\|x_k-\bar{x}\|\}$ converges and $\{x_k\}$ converges to $\bar{x}$. Case II: if $\liminf_{k\to\infty}\|d_k\|>0$, then $\liminf_{k\to\infty}\|F(x_k)\|>0$; hence $\lim_{k\to\infty}\alpha_k=0$. Using the line search:

$-F(x_k+\gamma\rho^{m_k}d_k)^{T}d_k>\sigma\gamma\rho^{m_k}\Big[r\|d_k\|^{2}+(1-r)\frac{\|d_k\|^{2}}{\|F_k\|^{2}}\Big]$ (48)

and the boundedness of $\{x_k\}$, $\{d_k\}$ yields:

$-F(\bar{x})^{T}\bar{d}\leq 0$ (49)

From (23) and (33) we get:

$-F(\bar{x})^{T}\bar{d}\geq\tau_i\|F(\bar{x})\|^{2}>0$ (50)

Now (49) and (50) indicate a contradiction for $i=1,2$. So, $\liminf_{k\to\infty}\|F(x_k)\|>0$ does not hold and the proof is complete.
5. NUMERICAL PERFORMANCE
In this section, we present our numerical results that explain the importance of the new algorithm GDDL-CG compared to the MDL-CG algorithm [19] using Matlab R2018b program in a laptop calculator with its CoreTMi5 specifications. The program finds the results on several non-derivative functions through several two initial points. In Table 1, we review the details of the initial points used to compare algorithms as shown in Table 1.