



ISSN: 1693-6930, DOI: 10.12928/TELKOMNIKA.v21i3.21685  545

A generalized Dai-Liao type CG-method with a new monotone line search for unconstrained optimization

Rana Z. Al-Kawaz1, Abbas Y. Al-Bayati2

1Department of Mathematics, College of Basic Education, University of Telafer, Tall 'Afar, Mosul, Iraq

2College of Basic Education, University of Telafer, Tall 'Afar, Mosul, Iraq

Article Info

Article history: Received Sep 07, 2022; Revised Oct 27, 2022; Accepted Nov 12, 2022

ABSTRACT

In this paper, we present two developments. The first generalizes the damped Dai-Liao formula in an optimized manner; the second proposes a new monotone line-search formula for our new algorithms. Combined with the new monotone line search, the new algorithms yield good numerical results when compared with the standard Dai-Liao (DL) and modified Dai-Liao (MDL) algorithms. Several theorems show that the new algorithms converge faster to the optimum point. The comparisons are reported in the results section using the measures (Iter, Eval-F, Time), which together give a comprehensive picture of the efficiency of the new algorithms relative to the basic algorithms used in the paper.

Keywords:

Dai-Liao CG-method
Global convergence property
Large-scale
Monotone line search
Optimum point

This is an open access article under the CC BY-SA license.

Corresponding Author:

Rana Z. Al-Kawaz

Department of Mathematics, College of Basic Education, University of Telafer, Tall 'Afar, Mosul, Iraq

Email: [email protected]

1. INTRODUCTION

Let us define the function (1):

F : \Omega \subset \mathbb{R}^n \to \mathbb{R}^n \quad (1)

The problem we discuss in this article is then (2):

F(x) = 0, \quad x \in \Omega \quad (2)

where the solution vector x \in \mathbb{R}^n and the function F is continuous and satisfies the monotonicity condition, i.e.:

[F(x) - F(y)]^T (x - y) \ge 0 \quad (3)

There are several ways to solve this problem, such as Newton's method and the conjugate gradient procedure [1], [2]. Applications of this type of method continue to appear to this day [3]-[5]. Here we discuss monotone equations and their solution methods because of their importance in many practical applications; for more details see [6]. The projection technique is one of the important methods used to find a solution of (2). Solodov and Svaiter [7] devoted their attention to large-scale nonlinear equations. Recently, many researchers have published new articles on solving constrained or unconstrained monotone equations by various methods, so the topic has attracted considerable interest, as in [8]-[15].
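To make the monotonicity condition (3) concrete, the following sketch (not from the paper; the mapping F(x) = Ax with a symmetric positive definite A is our own illustrative choice) checks the condition numerically over sampled point pairs:

```python
import numpy as np

def is_monotone(F, xs, tol=1e-12):
    """Check [F(x) - F(y)]^T (x - y) >= 0 for all pairs drawn from xs."""
    for x in xs:
        for y in xs:
            if np.dot(F(x) - F(y), x - y) < -tol:
                return False
    return True

# Hypothetical monotone mapping: F(x) = A x with A symmetric positive definite,
# so (F(x) - F(y))^T (x - y) = (x - y)^T A (x - y) >= 0.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
F = lambda x: A @ x

rng = np.random.default_rng(0)
points = [rng.standard_normal(2) for _ in range(20)]
print(is_monotone(F, points))  # True for this positive definite A
```

Any mapping failing this pairwise check cannot satisfy (3), so it falls outside the class of problems treated here.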

The projection technique relies on acceleration using the monotonicity of F and updates the new point through the iteration:

z_k = x_k + \alpha_k d_k \quad (4)

as a trial iterate, with the hyperplane:

H_k = \{ x \in \mathbb{R}^n \mid F(z_k)^T (x - z_k) = 0 \} \quad (5)

To apply the projection technique, the new point x_{k+1} is taken, as in [6], to be the projection of x_k onto the hyperplane H_k, so it can be evaluated by (6):

x_{k+1} = P_\Omega\left[ x_k - \xi_k F(z_k) \right], \qquad \xi_k = \frac{F(z_k)^T (x_k - z_k)}{\|F(z_k)\|^2} \quad (6)

This document is laid out as follows: we present the suggested generalized damped Dai-Liao (GDDL) method in section 2; a novel monotone line search is proposed and the penalty parameters of the generalized Dai-Liao (DL) method are determined in section 3; section 4 proves global convergence; and in section 5 we provide the results of our numerical experiments.

2. GENERALIZED DAMPED DAI-LIAO

Many researchers have been concerned with finding a suitable parameter for the conjugate gradient (CG) method and imposing conditions on it so that the search direction is conjugate and reaches the minimum point of the function faster and in a limited number of steps. One of these researchers was Dai-Liao, who provided an appropriate parameter involving a constant \upsilon \in (0,1); for more details see [16]:

\beta_k^{DL} = \frac{g_{k+1}^T y_k}{d_k^T y_k} - \upsilon \frac{g_{k+1}^T s_k}{d_k^T y_k} \quad (7)

Later, Abubakar et al. [17] modified (7) by using a new formula for t together with the projection technique. Fatemi [18] presented a generalized method for the Dai-Liao parameter, deriving it by applying a penalty function so as to achieve the sufficient descent condition on the search direction. In this section we rely on the same techniques, with the addition of a damped quasi-Newton condition, in the following derivation:

π‘ž(𝑑) = π‘“π‘˜+1+ π‘”π‘˜+1𝑇 𝑑 +1

2π‘‘π‘‡π΅π‘˜+1𝑑 (8)

With π›»π‘ž(π›Όπ‘˜+1π‘‘π‘˜+1), the gradient of the model in π‘₯π‘˜+2, as an estimation of π‘”π‘˜+2. It is easy to see that:

π›»π‘ž(π›Όπ‘˜+1π‘‘π‘˜+1) = π‘”π‘˜+1+ π›Όπ‘˜+1π΅π‘˜+1𝑑 (9)

Unfortunately, π›Όπ‘˜+1 in (4) is not available in the current iteration, because π‘‘π‘˜+1 is unknown. Thus, we modified (4) and set.

π‘”π‘˜+2= π‘”π‘˜+1+ 𝑑 π΅π‘˜+1𝑑 (10)

Where 𝑑 > 0 is a suitable approximation of π›Όπ‘˜+1 . If the search direction of the CG-method.

π‘‘π‘˜+1= βˆ’π‘”π‘˜+1+ π›½π‘˜π‘‘π‘˜ (11)

To obtain an efficient nonlinear CG method, we introduce the following optimization problem based on the penalty function:

\min_{\beta_k} \left[ g_{k+1}^T d_{k+1} + P \sum_{i=1}^{m} \left( (g_{k+2}^T s_{k-i})^2 + (d_{k+1}^T y_{k-i})^2 \right) \right] \quad (12)

with m = 1, 2, 3, \dots, n. Now, substituting (10) and (11) in (12) and using the projection technique, we obtain:

\min_{\beta_k} \Big[ -\|F_{k+1}\|^2 + \beta_k g_{k+1}^T d_k + P \sum_{i=1}^{m} \big( (F_{k+1}^T s_{k-i})^2 + 2t\, F_{k+1}^T s_{k-i}\, d_{k+1}^T B_{k+1} s_{k-i} + t^2 (d_{k+1}^T B_{k+1} s_{k-i})^2 + (F_{k+1}^T y_{k-i})^2 - 2 \beta_k F_{k+1}^T y_{k-i}\, d_k^T y_{k-i} + (\beta_k d_k^T y_{k-i})^2 \big) \Big] \quad (13)


After some algebraic simplification, we get:

\beta_k = \frac{1}{\varphi} \Big[ -F_{k+1}^T d_k + 2P \sum_{i=1}^{m} F_{k+1}^T y_{k-i}\, d_k^T y_{k-i} + 2t^2 P \sum_{i=1}^{m} F_{k+1}^T B_{k+1} s_{k-i}\, d_k^T B_{k+1} s_{k-i} - 2tP \sum_{i=1}^{m} F_{k+1}^T s_{k-i}\, d_k^T B_{k+1} s_{k-i} \Big] \quad (14)

where:

\varphi = 2t^2 P \sum_{i=1}^{m} (d_k^T B_{k+1} s_{k-i})^2 + 2P \sum_{i=1}^{m} (d_k^T y_{k-i})^2

To get the new parameter, let us assume that the Hessian approximation B_{k+1} satisfies the extended damped quasi-Newton (QN) equation; incorporating the projection technique, we get:

B_{k+1} s_{k-i} = \frac{1}{\xi_k} \left( \tau_k y_{k-i} + (1 - \tau_k) B_k s_{k-i} \right) = \frac{1}{\xi_k} y_{k-i}^D = \bar{y}_{k-i}^D \quad (15)

where \xi_k is the projection step and:

\tau_k = \begin{cases} 1 & \text{if } s_{k-i}^T y_{k-i} \ge s_{k-i}^T B_k s_{k-i} \\[1ex] \dfrac{\eta\, s_{k-i}^T B_k s_{k-i}}{s_{k-i}^T B_k s_{k-i} - s_{k-i}^T y_{k-i}} & \text{if } s_{k-i}^T y_{k-i} < s_{k-i}^T B_k s_{k-i} \end{cases} \quad (16)
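The damped vector of (15)-(16) can be sketched as follows; B, s, y and η are illustrative choices, and the function returns the bracketed quantity τ_k y + (1 − τ_k)B s of (15) before division by ξ_k:

```python
import numpy as np

def damped_y(B, s, y, eta=0.9):
    """Damped quasi-Newton modification following (15)-(16):
    returns tau_k and y^D = tau_k * y + (1 - tau_k) * B s."""
    sBs = s @ B @ s
    sy = s @ y
    if sy >= sBs:
        tau = 1.0                      # no damping needed, eq. (16), first case
    else:
        tau = eta * sBs / (sBs - sy)   # damping factor, eq. (16), second case
    return tau, tau * y + (1.0 - tau) * (B @ s)

B = np.eye(2)
s = np.array([1.0, 0.0])
y_good = np.array([2.0, 0.0])   # s^T y = 2 >= s^T B s = 1  -> tau = 1
y_bad = np.array([-0.5, 0.0])   # s^T y = -0.5 < 1          -> damping activates
tau1, yD1 = damped_y(B, s, y_good)
tau2, yD2 = damped_y(B, s, y_bad)
print(tau1, tau2)
```

In the damped case above, s^T y^D = 0.1 > 0 even though s^T y < 0, which is exactly the curvature repair the damping is designed to provide.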

Then, with the common denominator D' = \sum_{i=1}^{m} \left( t^2 (d_k^T \bar{y}_{k-i}^D)^2 + (d_k^T y_{k-i})^2 \right):

\beta_k^{new} = \frac{-F_{k+1}^T d_k}{2P\, D'} + \frac{\sum_{i=1}^{m} F_{k+1}^T y_{k-i}\, d_k^T y_{k-i}}{D'} + \frac{t^2 \sum_{i=1}^{m} F_{k+1}^T \bar{y}_{k-i}^D\, d_k^T \bar{y}_{k-i}^D}{D'} - \frac{t \sum_{i=1}^{m} F_{k+1}^T s_{k-i}\, d_k^T \bar{y}_{k-i}^D}{D'} \quad (17)

So, there are two possible scenarios for this parameter. Case I: if s_{k-i}^T y_{k-i} \ge s_{k-i}^T B_k s_{k-i}, then \tau_k = 1 and \bar{y}_{k-i}^D = y_{k-i}/\xi_k, so, with D = \left( \frac{t^2}{\xi_k^2} + 1 \right) \sum_{i=1}^{m} (d_k^T y_{k-i})^2:

\beta_k^{new1} = \frac{-F_{k+1}^T d_k}{2P\, D} + \frac{\sum_{i=1}^{m} F_{k+1}^T y_{k-i}\, d_k^T y_{k-i}}{D} + \frac{t^2}{\xi_k^2} \cdot \frac{\sum_{i=1}^{m} F_{k+1}^T y_{k-i}\, d_k^T y_{k-i}}{D} - \frac{t}{\xi_k} \cdot \frac{\sum_{i=1}^{m} F_{k+1}^T s_{k-i}\, d_k^T y_{k-i}}{D} \quad (18)

Put π‘ π‘˜= πœ‰π‘˜π‘‘π‘˜.

π›½π‘˜π‘›π‘’π‘€1=βˆ‘π‘šπ‘–=1πΉπ‘˜+1𝑇 π‘¦π‘˜βˆ’π‘–π‘‘π‘˜π‘‡π‘¦π‘˜βˆ’π‘–

βˆ‘π‘šπ‘–=1(π‘‘π‘˜π‘‡π‘¦π‘˜βˆ’π‘–)2 βˆ’(π‘‘πœ‰2π‘˜π‘‘

+1)

βˆ‘π‘šπ‘–=1πΉπ‘˜+1𝑇 π‘ π‘˜βˆ’π‘–π‘‘π‘˜π‘‡π‘¦π‘˜βˆ’π‘–

βˆ‘π‘šπ‘–=1(π‘‘π‘˜π‘‡π‘¦π‘˜βˆ’π‘–)2 βˆ’ πœ‰π‘˜πΉπ‘˜+1𝑇 π‘ π‘˜

2𝑃1(𝑑2+1) βˆ‘π‘šπ‘–=1(π‘‘π‘˜π‘‡π‘¦π‘˜βˆ’π‘–)2 (19a) To investigate the proposed method when 𝑃1 approaches infinity, because by making this coefficient larger, we penalize the conjugacy condition and the orthogonality property violations more severely, thereby forcing the minimizer of (10) closer to that of the linear CG method.

π›½π‘˜π‘›π‘’π‘€1=βˆ‘π‘šπ‘–=1πΉπ‘˜+1𝑇 π‘¦π‘˜βˆ’π‘–π‘‘π‘˜π‘‡π‘¦π‘˜βˆ’π‘–

βˆ‘π‘šπ‘–=1(π‘‘π‘˜π‘‡π‘¦π‘˜βˆ’π‘–)2 βˆ’ πœ‰π‘˜π‘‘

(𝑑2+1)

βˆ‘π‘šπ‘–=1πΉπ‘˜+1𝑇 π‘ π‘˜βˆ’π‘–π‘‘π‘˜π‘‡π‘¦π‘˜βˆ’π‘–

βˆ‘π‘šπ‘–=1(π‘‘π‘˜π‘‡π‘¦π‘˜βˆ’π‘–)2 (19b)

When compared with (7), we notice that the corresponding value is:

\upsilon_k = \frac{\xi_k t}{t^2 + 1}, \qquad \upsilon \in \left( 0, \tfrac{1}{2} \right) \quad (20)

Case II: if s_{k-i}^T y_{k-i} < s_{k-i}^T B_k s_{k-i}, then \bar{y}_{k-i}^D = y_{k-i}^D / \xi_k from (15), and with s_k = \xi_k d_k by the projection technique, the form of \beta_k^{new} converts to:


π›½π‘˜π‘›π‘’π‘€2= [ 𝑑2βˆ‘π‘šπ‘–=1πΉπ‘˜+1𝑇 π‘¦π‘˜βˆ’π‘–π· π‘‘π‘˜π‘‡π‘¦π‘˜βˆ’π‘–π·

βˆ‘π‘šπ‘–=1(𝑑2(π‘‘π‘˜π‘‡π‘¦π‘˜βˆ’π‘–π· )2+πœ‰π‘˜2(π‘‘π‘˜π‘‡π‘¦π‘˜βˆ’π‘–)2)βˆ’ π‘‘πœ‰π‘˜ βˆ‘π‘šπ‘–=1πΉπ‘˜+1𝑇 π‘ π‘˜βˆ’π‘–π‘‘π‘˜π‘‡π‘¦π‘˜βˆ’π‘–π·

βˆ‘π‘šπ‘–=1(𝑑2(π‘‘π‘˜π‘‡π‘¦π‘˜βˆ’π‘–π· )2+πœ‰π‘˜2(π‘‘π‘˜π‘‡π‘¦π‘˜βˆ’π‘–)2)] + [ βˆ‘π‘šπ‘–=1πΉπ‘˜+1𝑇 π‘¦π‘˜βˆ’π‘–π‘‘π‘˜π‘‡π‘¦π‘˜βˆ’π‘–

βˆ‘π‘šπ‘–=1(𝑑2(π‘‘π‘˜π‘‡π‘¦π‘˜βˆ’π‘–π· )2+πœ‰π‘˜2(π‘‘π‘˜π‘‡π‘¦π‘˜βˆ’π‘–)2)βˆ’

πœ‰π‘˜ πΉπ‘˜+1𝑇 π‘ π‘˜

2𝑃2βˆ‘π‘šπ‘–=1(𝑑2(π‘‘π‘˜π‘‡π‘¦π‘˜βˆ’π‘–π· )2+πœ‰π‘˜2(π‘‘π‘˜π‘‡π‘¦π‘˜βˆ’π‘–)2)] (21) Now using algebraic simplifications, we obtain:

π›½π‘˜π‘›π‘’π‘€2= 1

πœ‘π‘‘(βˆ‘π‘šπ‘–=1[𝑑1πΉπ‘˜+1𝑇 π‘¦π‘˜βˆ’π‘–βˆ’ 𝑑2πΉπ‘˜+1𝑇 π‘ π‘˜βˆ’π‘–]+ βˆ‘ π‘‘π‘˜π‘‡π‘ π‘˜βˆ’π‘–

π‘‘π‘˜π‘‡π‘¦π‘˜βˆ’π‘– π‘š

𝑖=1 βˆ‘π‘šπ‘–=1[𝑑3πΉπ‘˜+1𝑇 π‘¦π‘˜βˆ’π‘–βˆ’ 𝑑4πΉπ‘˜+1𝑇 π‘ π‘˜βˆ’π‘–]βˆ’

πœ‰π‘˜

2𝑃2βˆ‘π‘šπ‘–=1π‘‘π‘˜π‘‡π‘¦π‘˜βˆ’π‘–πΉπ‘˜+1𝑇 π‘ π‘˜) (22a)

i.e.,

πœ‘π‘‘= βˆ‘π‘šπ‘–=1(𝑑2(π‘‘π‘˜π‘‡π‘¦π‘˜βˆ’π‘–π· )2+ πœ‰π‘˜2(π‘‘π‘˜π‘‡π‘¦π‘˜βˆ’π‘–)2) 𝑑1= 𝑑2πœπ‘˜2+ πœ‰π‘˜2 and 𝑑2= 𝑑 πœ‰π‘˜πœπ‘˜βˆ’ 𝑑2πœπ‘˜(1 βˆ’ πœπ‘˜)

𝑑3= 𝑑2πœπ‘˜(1 βˆ’ πœπ‘˜) and 𝑑4= 𝑑 πœ‰π‘˜(1 βˆ’ πœπ‘˜) βˆ’ 𝑑2(1 βˆ’ πœπ‘˜)2)

As with P_1, letting P_2 approach infinity removes the penalized term, and we use the resulting limiting parameter:

\beta_k^{new2} = \frac{1}{\varphi_t} \left( \sum_{i=1}^{m} \left[ t_1 F_{k+1}^T y_{k-i} - t_2 F_{k+1}^T s_{k-i} \right] + \sum_{i=1}^{m} \frac{d_k^T s_{k-i}}{d_k^T y_{k-i}} \sum_{i=1}^{m} \left[ t_3 F_{k+1}^T y_{k-i} - t_4 F_{k+1}^T s_{k-i} \right] \right) \quad (22b)

This yields better results, as shown in section 5. We have another paper on this type of method, but without generalizing the original formula [17].
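The limiting parameter (22b) with the scalars t_1 through t_4 can be sketched as below; all inputs, including the stand-in y^D history, are illustrative assumptions rather than values from the paper:

```python
import numpy as np

def beta_new2_limit(F_next, d, s_hist, y_hist, yD_hist, xi, t, tau):
    """Limiting parameter (22b): the penalized term of (22a) is dropped
    as P2 -> infinity. t1..t4 follow the scalar definitions after (22a)."""
    t1 = t**2 * tau**2 + xi**2
    t2 = t * xi * tau - t**2 * tau * (1.0 - tau)
    t3 = t**2 * tau * (1.0 - tau)
    t4 = t * xi * (1.0 - tau) - t**2 * (1.0 - tau)**2
    phi = sum(t**2 * np.dot(d, yD)**2 + xi**2 * np.dot(d, y)**2
              for y, yD in zip(y_hist, yD_hist))
    term1 = sum(t1 * np.dot(F_next, y) - t2 * np.dot(F_next, s)
                for s, y in zip(s_hist, y_hist))
    ratio = sum(np.dot(d, s) / np.dot(d, y) for s, y in zip(s_hist, y_hist))
    term2 = ratio * sum(t3 * np.dot(F_next, y) - t4 * np.dot(F_next, s)
                        for s, y in zip(s_hist, y_hist))
    return (term1 + term2) / phi

rng = np.random.default_rng(2)
F_next, d = rng.standard_normal(3), rng.standard_normal(3)
s_hist = [rng.standard_normal(3)]
y_hist = [rng.standard_normal(3)]
yD_hist = [0.6 * y for y in y_hist]   # stand-in for y^D (illustrative only)
print(beta_new2_limit(F_next, d, s_hist, y_hist, yD_hist, xi=0.5, t=1.0, tau=0.6))
```

Note that with τ_k = 1 the scalars t_3 and t_4 vanish, so case II smoothly degenerates toward the case I parameter.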

3. NEW PENALTY PARAMETER

Because of the importance of conjugate gradient methods in this paper, we highlight them in the derivation of the new algorithms. We now derive the coefficients of the penalty function. In this section we focus on the two new parameters defined in (19) and (22), respectively; we then check and update the derived parameters by relying on the sufficient descent direction of the CG method.

3.1. Lemma 1

Assume that the sequence of solutions is generated by the method (19) with the monotone line search; then, for positive scalars \delta_1 and \delta_2 satisfying \delta_1 + \delta_2 < 1, we have:

F_{k+1}^T d_{k+1} \le -(1 - \delta_1 - \delta_2) \|F_{k+1}\|^2 \quad (23)

when:

|\xi_k t - 1| \le \sqrt{ \frac{2 \delta_2\, (y_k^T s_k)}{\|s_k\| \sum_{i=1}^{m} \|s_{k-i}\|} } \quad (24)

P_1 = \frac{2 \delta_1 \|F_{k+1}\|^2}{m (t^2 + 1) \max_{i=1,\dots,m} \left( (y_{k-i} - 0.5 \lambda_i s_{k-i})^T F_{k+1} \right)^2}, \qquad \lambda_i \le 1 \quad (25)

Proof: substituting (19a) into the search direction (11), we obtain:

πΉπ‘˜+1𝑇 π‘‘π‘˜+1= βˆ’β€–πΉπ‘˜+1β€–2+ 1

βˆ‘π‘šπ‘–=1(π‘¦π‘˜βˆ’π‘–π‘‡ π‘ π‘˜)2((πΉπ‘˜+1𝑇 π‘¦π‘˜)(π‘¦π‘˜π‘‡π‘ π‘˜)(πΉπ‘˜+1𝑇 π‘ π‘˜) βˆ’ πœ‰π‘˜π‘‘

(𝑑2+1)(πΉπ‘˜+1𝑇 π‘ π‘˜)(π‘¦π‘˜π‘‡π‘ π‘˜)(πΉπ‘˜+1𝑇 π‘ π‘˜)) βˆ’

πœ‰π‘˜ 2𝑃1(𝑑2+1)

(πΉπ‘˜+1𝑇 π‘ π‘˜)2

βˆ‘π‘šπ‘–=1(π‘¦π‘˜βˆ’π‘–π‘‡ π‘ π‘˜)2 (26)

Since πœ†π‘˜ ≀ 1, implies that:


πΉπ‘˜+1𝑇 π‘‘π‘˜+1≀ βˆ’β€–πΉπ‘˜+1β€–2+ 1

βˆ‘π‘šπ‘–=1(π‘¦π‘˜βˆ’π‘–π‘‡ π‘ π‘˜)2βˆ‘π‘šπ‘–=1(((π‘¦π‘˜βˆ’π‘–βˆ’ 0.5πœ†π‘–π‘ π‘˜βˆ’π‘–)π‘‡πΉπ‘˜+1)(π‘¦π‘˜βˆ’π‘–π‘‡ π‘ π‘˜)(πΉπ‘˜+1𝑇 π‘ π‘˜))+ (1

2βˆ’ πœ‰π‘˜π‘‘

(𝑑2+1)) 1

βˆ‘π‘šπ‘–=1(π‘¦π‘˜βˆ’π‘–π‘‡ π‘ π‘˜)2βˆ‘π‘šπ‘–=1(πΉπ‘˜+1𝑇 π‘ π‘˜βˆ’π‘–)(π‘¦π‘˜βˆ’π‘–π‘‡ π‘ π‘˜)(πΉπ‘˜+1𝑇 π‘ π‘˜)βˆ’ πœ‰π‘˜

2𝑃1(𝑑2+1)

(πΉπ‘˜+1𝑇 π‘ π‘˜)2

βˆ‘π‘šπ‘–=1(π‘¦π‘˜βˆ’π‘–π‘‡ π‘ π‘˜)2 (27) Using this inequality π‘₯𝑦 ≀𝑑′

4π‘₯2+1

𝑑′𝑦2, where π‘₯, 𝑦 and 𝑑′ are positive scalars, we have:

πΉπ‘˜+1𝑇 π‘‘π‘˜+1≀ βˆ’β€–πΉπ‘˜+1β€–2+ 𝑑′

4 βˆ‘π‘šπ‘–=1(π‘¦π‘˜βˆ’π‘–π‘‡ π‘ π‘˜)2βˆ‘π‘šπ‘–=1((π‘¦π‘˜βˆ’π‘–βˆ’ 0.5πœ†π‘–π‘ π‘˜βˆ’π‘–)π‘‡πΉπ‘˜+1)2(π‘¦π‘˜βˆ’π‘–π‘‡ π‘ π‘˜)2+

π‘š

βˆ‘π‘šπ‘–=1(π‘¦π‘˜βˆ’π‘–π‘‡ π‘ π‘˜)2𝑑′(πΉπ‘˜+1𝑇 π‘ π‘˜)2βˆ’ πœ‰π‘˜

2𝑃1βˆ‘π‘šπ‘–=1(π‘¦π‘˜βˆ’π‘–π‘‡ π‘ π‘˜)2(𝑑2+1)(πΉπ‘˜+1𝑇 π‘ π‘˜)2+ (πœ‰π‘˜π‘‘βˆ’1)2

2π‘¦π‘˜π‘‡π‘ π‘˜(𝑑2+1)βˆ‘π‘šπ‘–=1|π‘ π‘˜βˆ’π‘–π‘‡ πΉπ‘˜+1||πΉπ‘˜+1𝑇 π‘ π‘˜| (28) Let 𝑑′= 2π‘šπ‘ƒ1(𝑑2+ 1) ,

πΉπ‘˜+1𝑇 π‘‘π‘˜+1≀ βˆ’β€–π‘”π‘˜+1β€–2+𝑃1π‘š(𝑑2+1)

2 π‘šπ‘Žπ‘₯

𝑖=1,…,π‘š((π‘¦π‘˜βˆ’π‘–βˆ’ 0.5πœ†π‘–π‘ π‘˜βˆ’π‘–)π‘‡πΉπ‘˜+1)2+

(πœ‰π‘˜π‘‘βˆ’1)2

2π‘¦π‘˜π‘‡π‘ π‘˜(𝑑2+1)βˆ‘π‘šπ‘–=1|π‘ π‘˜βˆ’π‘–π‘‡ πΉπ‘˜+1||πΉπ‘˜+1𝑇 π‘ π‘˜| (29)

The Cauchy-Schwarz inequality then implies:

F_{k+1}^T d_{k+1} \le -\left[ 1 - \frac{m P_1 (t^2 + 1)}{2 \|F_{k+1}\|^2} \max_{i=1,\dots,m} ((y_{k-i} - 0.5 \lambda_i s_{k-i})^T F_{k+1})^2 - \frac{(\xi_k t - 1)^2}{2 y_k^T s_k (t^2 + 1)} \|s_k\| \sum_{i=1}^{m} \|s_{k-i}\| \right] \|F_{k+1}\|^2 \quad (30)

F_{k+1}^T d_{k+1} \le -(1 - \delta_1 - \delta_2) \|F_{k+1}\|^2 \le -\rho_1 \|F_{k+1}\|^2 \quad (31)

Since t is an approximation of the step size, we use the updated formula:

t = \begin{cases} \xi_k & \text{if } |\xi_k t - 1| \le \sqrt{ \dfrac{2 \delta_2 (y_k^T s_k)}{\|s_k\| \sum_{i=1}^{m} \|s_{k-i}\|} } \\[2ex] 1 + \sqrt{ \dfrac{2 \delta_2 (y_k^T s_k)}{\|s_k\| \sum_{i=1}^{m} \|s_{k-i}\|} } & \text{otherwise} \end{cases} \quad (32)

Hence, the proof is completed.
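The safeguard (32) on the step-size approximation t can be sketched as follows; δ_2 and the vectors are illustrative, and the "otherwise" branch resets t onto the admissible boundary implied by (24):

```python
import numpy as np

def update_t(t, xi, s, y, s_hist, delta2=0.25):
    """Step-size-approximation safeguard following (32): keep t (set to xi)
    when |xi*t - 1| lies within the bound of (24), otherwise reset it."""
    bound = np.sqrt(2.0 * delta2 * np.dot(y, s) /
                    (np.linalg.norm(s) * sum(np.linalg.norm(si) for si in s_hist)))
    if abs(xi * t - 1.0) <= bound:
        return xi          # accept: t stays tied to the projection step
    return 1.0 + bound     # otherwise reset onto the admissible boundary

# Illustrative vectors (assumptions, not data from the paper):
s = np.array([0.2, 0.1])
y = np.array([0.3, 0.2])
s_hist = [s]
print(update_t(t=1.0, xi=1.0, s=s, y=y, s_hist=s_hist))
```

The check requires y_k^T s_k > 0, which the monotonicity of F together with the damping of (16) is designed to guarantee.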

3.2. Lemma 2

Assume that the solution sequence is generated by the new method (22) with the monotone line search; then, for positive scalars \delta_3, \delta_4, \delta_5 and \delta_6 satisfying \delta_3 + \delta_4 + \delta_5 + \delta_6 < 1, we have:

F_{k+1}^T d_{k+1} \le -(1 - \delta_3 - \delta_4 - \delta_5 - \delta_6) \|F_{k+1}\|^2 \quad (33)

when:

|t_2 - 2| \le \sqrt{ \frac{2 \delta_4 \xi_k^2 (y_k^T s_k)}{\|s_k\| \sum_{i=1}^{m} \|s_{k-i}\|} } \quad \text{and} \quad |t_4 - 2| \le \sqrt{ 2 \delta_6 t^2 (1 - \tau_k)^2 } \quad (34)

and:

P_{2a} = \frac{2 \delta_3 \|F_{k+1}\|^2}{m \max_{i=1,\dots,m} ((t_1 y_{k-i} - 0.5 \lambda_i s_{k-i})^T F_{k+1})^2} \quad \text{and} \quad P_{2b} = \frac{2 \delta_5 \|F_{k+1}\|^2}{m \max_{i=1,\dots,m} ((t_3 y_{k-i} - 0.5 \lambda_i s_{k-i})^T F_{k+1})^2} \quad (35)

where \lambda_i \le 1 is a scalar.

Proof: substituting (22a) into (11) and multiplying by F_{k+1}, we get:

F_{k+1}^T d_{k+1} = -\|F_{k+1}\|^2 + \frac{1}{\varphi_t} \left( \sum_{i=1}^{m} \left[ t_1 (F_{k+1}^T y_{k-i})(y_{k-i}^T s_k)(F_{k+1}^T s_k) - t_2 (F_{k+1}^T s_{k-i})(y_{k-i}^T s_k)(F_{k+1}^T s_k) \right] + \sum_{i=1}^{m} \frac{d_k^T s_{k-i}}{d_k^T y_{k-i}} \sum_{i=1}^{m} \left[ t_3 (F_{k+1}^T y_{k-i})(y_{k-i}^T s_k)(F_{k+1}^T s_k) - t_4 (F_{k+1}^T s_{k-i})(y_{k-i}^T s_k)(F_{k+1}^T s_k) \right] \right) - \frac{\xi_k}{2 P_2 \varphi_t} (F_{k+1}^T s_k)^2 \quad (36)


πΉπ‘˜+1𝑇 π‘‘π‘˜+1= βˆ’β€–πΉπ‘˜+1β€–2+ 1

πœ‘π‘‘βˆ‘π‘šπ‘–=1((𝑑1π‘¦π‘˜βˆ’π‘–βˆ’ 0.5πœ†π‘–π‘ π‘˜βˆ’π‘–)π‘‡πΉπ‘˜+1)(π‘¦π‘˜βˆ’π‘–π‘‡ π‘ π‘˜)(πΉπ‘˜+1𝑇 π‘ π‘˜)+ 1

πœ‘π‘‘[1

2βˆ’ 𝑑2] βˆ‘π‘šπ‘–=1(πΉπ‘˜+1𝑇 π‘ π‘˜βˆ’π‘–)(π‘¦π‘˜βˆ’π‘–π‘‡ π‘ π‘˜)(πΉπ‘˜+1𝑇 π‘ π‘˜)+ 1

πœ‘π‘‘βˆ‘π‘šπ‘–=1((𝑑3π‘¦π‘˜βˆ’π‘–βˆ’ 0.5πœ†π‘–π‘ π‘˜βˆ’π‘–)π‘‡πΉπ‘˜+1)(π‘ π‘˜π‘‡π‘ π‘˜)(πΉπ‘˜+1𝑇 π‘ π‘˜)+

1 πœ‘π‘‘[1

2βˆ’ 𝑑4] βˆ‘π‘šπ‘–=1(πΉπ‘˜+1𝑇 π‘ π‘˜βˆ’π‘–)(π‘ π‘˜π‘‡π‘ π‘˜)(πΉπ‘˜+1𝑇 π‘ π‘˜)βˆ’ πœ‰π‘˜

2𝑃2πœ‘π‘‘(πΉπ‘˜+1𝑇 π‘ π‘˜)2 (37)

Following the same steps as in lemma 3.1, we get:

F_{k+1}^T d_{k+1} \le -\|F_{k+1}\|^2 + P_{2a} \sum_{i=1}^{m} ((t_1 y_{k-i} - 0.5 \lambda_i s_{k-i})^T F_{k+1})^2 + \frac{(t_2 - 2)^2}{2 \xi_k^2 y_k^T s_k} \sum_{i=1}^{m} |s_{k-i}^T F_{k+1}| \, |F_{k+1}^T s_k| + P_{2b} \sum_{i=1}^{m} ((t_3 y_{k-i} - 0.5 \lambda_i s_{k-i})^T F_{k+1})^2 + \frac{(t_4 - 2)^2}{2 t^2 (1 - \tau_k)^2 s_k^T s_k} \sum_{i=1}^{m} |s_{k-i}^T F_{k+1}| \, |F_{k+1}^T s_k| \quad (38)

The Cauchy-Schwarz inequality then implies:

πΉπ‘˜+1𝑇 π‘‘π‘˜+1≀ βˆ’ [

1 βˆ’ π‘šπ‘ƒ2π‘Ž max

𝑖=1,…,π‘š(𝑑1π‘¦π‘˜βˆ’π‘–βˆ’ 0.5πœ†π‘–π‘ π‘˜βˆ’π‘–)2βˆ’ (𝑑2βˆ’2)2

2πœ‰π‘˜2π‘¦π‘˜π‘‡π‘ π‘˜β€–π‘ π‘˜β€– βˆ‘π‘šπ‘–=1β€–π‘ π‘˜βˆ’π‘–β€–βˆ’ π‘šπ‘ƒ2𝑏 max

𝑖=1,…,π‘š(𝑑3π‘¦π‘˜βˆ’π‘–βˆ’ 0.5πœ†π‘–π‘ π‘˜βˆ’π‘–)2βˆ’ (𝑑4βˆ’2)2

2𝑑2(1βˆ’πœπ‘˜)2π‘ π‘˜π‘‡π‘ π‘˜β€–π‘ π‘˜β€– βˆ‘π‘šπ‘–=1β€–π‘ π‘˜βˆ’π‘–β€–] β€–πΉπ‘˜+1β€–2 (39) Then, πΉπ‘˜+1𝑇 π‘‘π‘˜+1≀ βˆ’(1 βˆ’ 𝛿3βˆ’ 𝛿4βˆ’ 𝛿5βˆ’ 𝛿6)β€–πΉπ‘˜+1β€–2≀ βˆ’πœŒ2β€–πΉπ‘˜+1β€–2

The proof is completed. In the next paragraph we talk about the Algorithm 1 (minimum description length conjugate gradient (MDL-CG)) used for numerical comparison which is an update of the Dai-Liao algorithm:

Algorithm 1. MDL-CG [19]
Given x_0 \in \Omega, r, \sigma \in (0,1), stopping tolerance \epsilon > 0; set k = 0.
Step 1: evaluate F(x_k); if \|F(x_k)\| \le \epsilon stop, else go to step 2.
Step 2: evaluate d_k by (11) with:

\beta_k^{MDL} = \frac{F_{k+1}^T y_k}{y_k^T s_k} - \nu_k \frac{F_{k+1}^T s_k}{y_k^T s_k}, \qquad \nu_k = \frac{p \|y_k\|^2}{s_k^T y_k} - \frac{q\, s_k^T y_k}{\|s_k\|^2}

If d_k = 0, stop.
Step 3: compute z_k = x_k + \alpha_k d_k, with the step size \alpha_k = r^{m_k}, where m_k is the smallest nonnegative integer m such that:

-F(x_k + r^m d_k)^T d_k > \sigma r^m \|F(x_k + r^m d_k)\| \|d_k\|^2 \quad (40)

Step 4: if z_k \in \Omega and \|F(z_k)\| \le \epsilon, stop; otherwise compute x_{k+1} from (6).
Step 5: set k = k + 1 and go to step 1.
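Step 3 of MDL-CG is a derivative-free backtracking search. A self-contained sketch of condition (40) follows, with an illustrative monotone test mapping (our own choice, not from the paper):

```python
import numpy as np

def line_search_mdl(F, x, d, r=0.5, sigma=1e-4, max_m=50):
    """Backtracking search for the smallest m with alpha = r^m satisfying (40):
    -F(x + alpha d)^T d > sigma * alpha * ||F(x + alpha d)|| * ||d||^2."""
    dd = np.dot(d, d)
    for m in range(max_m):
        alpha = r ** m
        Fz = F(x + alpha * d)
        if -np.dot(Fz, d) > sigma * alpha * np.linalg.norm(Fz) * dd:
            return alpha
    return None  # line search failed within max_m trials

# Toy monotone mapping F(x) = A x with A positive definite.
F = lambda v: np.array([[3.0, 0.0], [0.0, 1.0]]) @ v
x = np.array([1.0, 1.0])
d = -F(x)                        # descent-type direction
alpha = line_search_mdl(F, x, d)
print(alpha)
```

Because the acceptance test only evaluates F, no gradients of a merit function are ever needed, which matches the derivative-free setting of the paper.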

The new Algorithm 2 (GDDL-CG) is a generalization according to the value of m, and it is an update and damping of the Dai-Liao algorithm, as in the following steps:

Algorithm 2. GDDL-CG
Given x_0 \in \Omega, r, \sigma, \mu, \gamma \in (0,1), stopping tolerance \epsilon > 0; set k = 0.
Step 1: evaluate F(x_k); if \|F(x_k)\| \le \epsilon stop, else go to step 2.
Step 2: when y_k^T s_k \ge s_k^T s_k, compute P_1 from (25); if P_1 \ne \infty take \beta_k^{new1} from (19a), else from (19b).
Step 3: when y_k^T s_k < s_k^T s_k, compute P_2 from (35); if P_2 \ne \infty take \beta_k^{new2} from (22a), else from (22b).
Step 4: compute d_k by (11) and stop if d_k = 0.
Step 5: set z_k = x_k + \alpha_k d_k, where \alpha_k = \gamma r^{m_k} with m_k the smallest nonnegative integer m such that:

-F(x_k + \gamma r^m d_k)^T d_k > \sigma \gamma r^m \left[ \mu \|d_k\|^2 + (1 - \mu) \frac{\|d_k\|^2}{\|F_k\|^2} \right] \quad (41)

Step 6: if z_k \in \Omega and \|F(z_k)\| \le \epsilon stop, else compute the point x_{k+1} from (6).
Step 7: set k = k + 1 and go to step 1.
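The new monotone line search (41) can be sketched the same way; the exact form of the bracket mixing ‖d_k‖² and ‖d_k‖²/‖F_k‖² through μ is our reading of the source, so treat it as an assumption:

```python
import numpy as np

def line_search_gddl(F, x, d, Fk_norm, r=0.5, sigma=1e-4, mu=0.5, gamma=1.0, max_m=50):
    """Backtracking search for condition (41); the bracket combining ||d||^2
    and ||d||^2 / ||F_k||^2 via mu is an assumed reconstruction."""
    dd = np.dot(d, d)
    rhs_factor = mu * dd + (1.0 - mu) * dd / Fk_norm**2
    for m in range(max_m):
        alpha = gamma * r ** m
        Fz = F(x + alpha * d)
        if -np.dot(Fz, d) > sigma * alpha * rhs_factor:
            return alpha
    return None  # line search failed within max_m trials

# Toy monotone mapping (illustrative assumption).
F = lambda v: np.array([[3.0, 0.0], [0.0, 1.0]]) @ v
x = np.array([1.0, 1.0])
d = -F(x)
alpha = line_search_gddl(F, x, d, Fk_norm=np.linalg.norm(F(x)))
print(alpha)
```

The μ-weighted right-hand side lets the acceptance threshold adapt to the current residual size, which is the claimed advantage of the new search over the fixed threshold in (40).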

4. GLOBAL CONVERGENCE

In the previous section, we prefaced the proof of convergence by establishing the sufficient descent property through lemmas 3.1 and 3.2. We begin this section with a set of assumptions, then move on to the theorems, adding several lemmas concerning the step length of the new algorithm. The assumptions needed to begin the convergence proof are as follows:


4.1. Assumption

Suppose 𝐹 fulfills the following assumptions:

a) The solution set of (2) is non-empty.

b) The function 𝐹 is Lipschitz continuous, i.e.,

‖𝐹(π‘₯) βˆ’ 𝐹(𝑦)β€– ≀ 𝐿‖π‘₯ βˆ’ 𝑦‖ , βˆ€ π‘₯, 𝑦 ∈ ℝ𝑛 (42)

c) 𝐹 satisfies,

〈𝐹(π‘₯) βˆ’ 𝐹(𝑦), π‘₯ βˆ’ 𝑦βŒͺ β‰₯ 𝑐‖π‘₯ βˆ’ 𝑦‖2 , βˆ€ π‘₯, 𝑦 ∈ ℝ𝑛, c>0 (43) 4.2. Lemma 1

Assume (π‘₯Μ… ∈ ℝ𝑛) satisfy 𝐹(π‘₯Μ…) = 0 and {π‘₯} is generated by the new algorithm GDDL-CG.

If lemmas 3.1 and 3.2, hold, then β€–π‘₯π‘˜+1βˆ’ π‘₯Μ…β€–2≀ β€–π‘₯π‘˜βˆ’ π‘₯Μ…β€–2βˆ’ β€–π‘₯π‘˜+1βˆ’ π‘₯π‘˜β€–2 , and

βˆ‘βˆžπ‘˜=0β€–π‘₯π‘˜+1βˆ’ π‘₯π‘˜β€–< ∞ (44)

4.3. Lemma 2

Suppose {π‘₯} is generated by the new algorithm GDDL-CG then:

π‘˜β†’βˆžlimπ›Όπ‘˜β€–π‘‘π‘˜β€– = 0 (45)

Proof: since the sequence {β€–π‘₯π‘˜βˆ’ π‘₯Μ…β€–} is not increasing; {π‘₯π‘˜} is bounded, and π‘™π‘–π‘š

π‘˜β†’βˆžβ€–π‘₯π‘˜+1βˆ’ π‘₯π‘˜β€– = 0. From (3) using a line search, we have:

β€–π‘₯π‘˜+1βˆ’ π‘₯π‘˜β€– =|𝐹(π‘§π‘˜)𝑇(π‘₯βˆ’π‘§π‘˜)|

‖𝐹(π‘§π‘˜)β€–2 ‖𝐹(π‘§π‘˜)β€– =|π›Όπ‘˜πΉ(π‘§π‘˜)π‘‡π‘‘π‘˜|

‖𝐹(π‘§π‘˜)β€– β‰₯ π›Όπ‘˜β€–π‘‘π‘˜β€– β‰₯ 0 (46) Then the proof is completed.”

4.4. Theorem

Let {π‘₯π‘˜} and {π‘§π‘˜} be the sequences generated by the new algorithm GDDL-CG then:

π‘™π‘–π‘š 𝑖𝑛𝑓

π‘˜β†’βˆž

‖𝐹(π‘₯π‘˜)β€– = 0 (47)

Proof: case I: if π‘™π‘–π‘š 𝑖𝑛𝑓 β€–π‘‘π‘˜β€–

π‘˜β†’βˆž

= 0, then π‘™π‘–π‘š 𝑖𝑛𝑓

π‘˜β†’βˆž

‖𝐹(π‘₯π‘˜)β€– = 0. The sequence {π‘₯π‘˜} has some accumulation point π‘₯Μ… such that 𝐹(π‘₯Μ…) = 0. Hence, {β€–π‘₯π‘˜βˆ’ π‘₯Μ…β€–} converges to π‘₯Μ…. Case II: if π‘™π‘–π‘š 𝑖𝑛𝑓 β€–π‘‘π‘˜β€–

π‘˜β†’βˆž

> 0, then π‘™π‘–π‘š 𝑖𝑛𝑓

π‘˜β†’βˆž

‖𝐹(π‘₯π‘˜)β€– > 0. Hence π‘™π‘–π‘š

π‘˜β†’βˆžπ›Όπ‘˜= 0. Using the line search.

βˆ’πΉ(π‘₯π‘˜+ π›Ύπ‘Ÿπ‘šπ‘˜π‘‘π‘˜)π‘‡π‘‘π‘˜> πœŽπ›Ύπ‘Ÿπ‘šπ‘˜[πœ‡β€–π‘‘π‘˜β€–2+ (1 βˆ’ πœ‡)β€–π‘‘π‘˜β€–2

β€–πΉπ‘˜β€–2] (48)

And the boundedness of {π‘₯π‘˜}, {π‘‘π‘˜}, yields

βˆ’πΉ(π‘₯Μ…)π‘‡π‘‘ΜŒ ≀ 0 (49)

From (17) and (23) we get:

βˆ’πΉ(π‘₯Μ…)π‘‡π‘‘ΜŒ β‰₯ πœŒπ‘– ‖𝐹(π‘₯Μ…)β€–2> 0 (50)

The (31) and (32) indicates a contradiction for 𝑖 = 1, 2. So, π‘™π‘–π‘š 𝑖𝑛𝑓

π‘˜β†’βˆž

‖𝐹(π‘₯π‘˜)β€– > 0 does not hold and the proof is complete.

5. NUMERICAL PERFORMANCE

In this section, we present numerical results that demonstrate the merit of the new GDDL-CG algorithm compared with the MDL-CG algorithm [19], using MATLAB R2018b on a laptop with an Intel Core i5 processor. The program evaluates the algorithms on several derivative-free test functions, each from two initial points. Table 1 reviews the details of the initial points used to compare the algorithms.
