general, does not converge. To overcome these drawbacks, an extragradient method, consisting of two projections, was introduced by Korpelevich [14] to solve convex optimization and saddle point problems. Since then, several authors have improved the efficiency of this method in many ways; see, for example, [3, 18] and the references therein.
In 2010, Zykina and Melenchuk [31] proposed a two-step extragradient method with a suitable fixed step size for variational inequality problems in Euclidean spaces. In this method, three projections onto the feasible set need to be computed at each iteration. In 2015, Nguyen et al. [23] introduced a more general process for solving a variational inequality problem in a real Euclidean space by making use of two sequences of suitably varying step sizes.
In 2018, Anh and Hieu [2] proposed a Mann multi-step proximal-like algorithm, which combines the multi-step proximal-like method with a Mann-like iteration. They obtained a weak convergence result in a real Hilbert space when the bifunction is Lipschitz-type continuous and pseudomonotone.
On the other hand, an inertial-type algorithm was first proposed by Polyak [25]
as an acceleration process for solving a smooth convex minimisation problem. An inertial-type algorithm is a two-step iterative method in which the next iterate is defined by making use of the previous two iterates. It is well known that incorporating an inertial term in an algorithm can speed up the rate of convergence of the generated sequence. Consequently, much research interest has recently been devoted to inertial-type algorithms (see e.g. [5, 6, 13, 16, 21] and the references therein).
Motivated and inspired by the recent interest in inertial-type algorithms and by the work in [2, 23], we propose an algorithm based on proximal-point steps and an inertial extrapolation step for solving pseudomonotone and Lipschitz-type continuous equilibrium problems in a real Hilbert space. Under appropriate conditions on the bifunction, involving pseudomonotonicity and Lipschitz-type continuity, a weak convergence theorem for the proposed algorithm is established. In addition, we present a numerical experiment to illustrate the performance of our algorithm.
2. Preliminaries
Let C be a nonempty closed convex subset of H. We begin with some concepts of monotonicity of a bifunction; see [22] for more details. A bifunction f : H × H → R is said to be:
(i) pseudomonotone on C if
f(x, y) ≥ 0 ⇒ f(y, x) ≤ 0, ∀x, y ∈ C,
(ii) Lipschitz-type continuous on H, if there exist positive numbers c1 and c2 such that
f(x, y) + f(y, z) ≥ f(x, z) − c1∥x − y∥² − c2∥y − z∥², ∀x, y, z ∈ H.
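For example (a standard illustration, not taken from the source): if f(x, y) = ⟨F(x), y − x⟩ for an operator F : H → H that is Lipschitz continuous with constant L > 0, then f(x, y) + f(y, z) − f(x, z) = ⟨F(y) − F(x), z − y⟩ ≥ −L∥x − y∥∥y − z∥ ≥ −(L/2)∥x − y∥² − (L/2)∥y − z∥², so f is Lipschitz-type continuous with c1 = c2 = L/2.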
Assumption 2.1. In what follows, we impose the following assumptions on the convex set C and the bifunction f : H × H → R:
(A1) f is pseudomonotone on C and f(x, x) = 0 for all x ∈ C,
(A2) f is Lipschitz-type continuous on H,
(A3) f(x, ·) is convex and subdifferentiable on C for every fixed x ∈ C,
(A4) the solution set EP(f, C) is nonempty,
(A5) f(·, y) is weakly sequentially upper semicontinuous on C for every fixed y ∈ C, i.e., lim sup_{n→∞} f(xn, y) ≤ f(x, y) for each sequence {xn} ⊂ C converging weakly to x.
Recall that the metric projection PC : H → C is defined by PC x = argmin{∥y − x∥ : y ∈ C}.
Since C is nonempty, closed and convex, PC x exists and is unique. It is also known that PC has the following characteristic properties.
Lemma 2.2 ([28]). Let PC : H → C be the metric projection from H onto C. Then
(i) for all x ∈ C, y ∈ H,
∥x − PC y∥² + ∥PC y − y∥² ≤ ∥x − y∥²,
(ii) z = PC x if and only if
⟨x − z, z − y⟩ ≥ 0, ∀y ∈ C.
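As an aside (our illustration, not part of the source): when C is a box, as in the numerical experiments of Section 4, the metric projection PC has a closed form, namely a componentwise clip to the bounds. A minimal sketch:

```python
import numpy as np

def project_box(x, lower, upper):
    """Metric projection of x onto the box C = {y : lower <= y <= upper} (componentwise clip)."""
    return np.clip(x, lower, upper)
```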
Let g : C → R be a function. The subdifferential of g at x is defined by
∂g(x) = {w ∈ H | g(y) − g(x) ≥ ⟨w, y − x⟩, ∀y ∈ C}.
A function φ : H → R is called weakly lower semicontinuous at x ∈ H if, for any sequence {xn} in H converging weakly to x, we have φ(x) ≤ lim inf_{n→∞} φ(xn). It is well known that the functional φ(x) := ∥x∥² is convex and weakly lower semicontinuous.
Recall the proximal mapping of a proper, convex and lower semicontinuous function g : C → R with a parameter λ > 0:
prox_{λg}(x) = argmin{ λg(y) + (1/2)∥x − y∥² : y ∈ C }, x ∈ H.
Lemma 2.3 ([17]). Assume φn ∈ [0, ∞) and δn ∈ [0, ∞) satisfy:
(i) φn+1 − φn ≤ θn(φn − φn−1) + δn,
(ii) ∑_{n=1}^∞ δn < ∞,
(iii) {θn} ⊂ [0, θ], where θ ∈ (0, 1).
Then the sequence {φn} is convergent and ∑_{n=1}^∞ [φn − φn−1]+ < ∞, where [t]+ = max{t, 0} (for any t ∈ R).
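To make the proximal mapping defined above concrete, the following sketch (ours, not the authors' code) evaluates prox_{λg}(x) numerically when C is a box, using scipy's bound-constrained L-BFGS-B solver as a generic stand-in; it assumes g is convex and smooth enough for gradient-based minimization.

```python
import numpy as np
from scipy.optimize import minimize

def prox(g, x, lam, lower, upper):
    """Numerically evaluate prox_{lam*g}(x) = argmin_{y in C} lam*g(y) + 0.5*||x - y||^2,
    where C = [lower, upper] is a box (an assumption of this sketch)."""
    objective = lambda y: lam * g(y) + 0.5 * np.sum((x - y) ** 2)
    result = minimize(objective, x0=np.clip(x, lower, upper),
                      bounds=list(zip(lower, upper)), method="L-BFGS-B")
    return result.x

# Tiny usage example: g(y) = ||y||^2 on the box [0, 5]^3; for this separable quadratic
# objective the exact solution is x/2 clipped to the box, i.e. approximately [1, 0, 0.25].
g = lambda y: float(np.sum(y ** 2))
print(prox(g, np.array([2.0, -1.0, 0.5]), lam=0.5, lower=np.zeros(3), upper=5.0 * np.ones(3)))
```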
3. An inertial multi-step proximal-like algorithm
In this section, we propose a weakly convergent algorithm, called the Inertial Multi-step Proximal-like Algorithm (IMPA). The IMPA is designed as follows:
Algorithm 3.1 ([IMPA]). Initialization. Choose a sequence {ϵn} ⊂ [0, +∞) satisfying ∑_{n=0}^∞ ϵn < ∞. Select arbitrary points x1, x0 ∈ C and the control parameters λn > 0, ρn > 0 and θ ∈ (0, 1). Set n := 1.
Step 1. Given the iterates xn−1 and xn, n ≥ 1, choose θn such that 0 ≤ θn ≤ θ̄n, where
(3.1)    θ̄n = min{ θ, ϵn / max{∥xn − xn−1∥, ∥xn − xn−1∥²} }  if xn ≠ xn−1,  and  θ̄n = θ  otherwise.
Step 2. Compute wn = xn + θn(xn − xn−1), and then compute
yn = prox_{λn f(wn,·)}(wn),   zn = prox_{ρn f(yn,·)}(yn),   xn+1 = prox_{ρn f(zn,·)}(wn).
Set n := n + 1 and go back to Step 1.
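The following sketch of one IMPA iteration is ours, not the authors' implementation; it assumes C is a box and solves each proximal subproblem numerically with scipy (the names prox_bifunction and impa_step, and the solver choice, are illustrative).

```python
import numpy as np
from scipy.optimize import minimize

def prox_bifunction(f, anchor, base, step, lower, upper):
    """argmin over the box [lower, upper] of  step * f(anchor, y) + 0.5 * ||base - y||^2,
    i.e. a numerical stand-in for prox_{step * f(anchor, .)}(base)."""
    objective = lambda y: step * f(anchor, y) + 0.5 * np.sum((base - y) ** 2)
    result = minimize(objective, x0=np.clip(base, lower, upper),
                      bounds=list(zip(lower, upper)), method="L-BFGS-B")
    return result.x

def impa_step(f, x_prev, x_curr, lam, rho, theta, eps_n, lower, upper):
    """One iteration of Algorithm 3.1 (IMPA), taking theta_n equal to the upper bound in (3.1)."""
    # Step 1: inertial parameter theta_n from (3.1).
    diff = np.linalg.norm(x_curr - x_prev)
    theta_n = theta if diff == 0 else min(theta, eps_n / max(diff, diff ** 2))
    # Step 2: inertial extrapolation and the three proximal steps.
    w = x_curr + theta_n * (x_curr - x_prev)
    y = prox_bifunction(f, w, w, lam, lower, upper)       # y_n = prox_{lam_n f(w_n, .)}(w_n)
    z = prox_bifunction(f, y, y, rho, lower, upper)       # z_n = prox_{rho_n f(y_n, .)}(y_n)
    x_next = prox_bifunction(f, z, w, rho, lower, upper)  # x_{n+1} = prox_{rho_n f(z_n, .)}(w_n)
    return x_next
```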
Remark 3.2. We remark here that Step 1 in Algorithm 3.1 is easily implemented in numerical computation, since the value of ∥xn − xn−1∥ is a priori known before choosing θn. Furthermore, by (3.1) we have θn∥xn − xn−1∥ ≤ ϵn and θn∥xn − xn−1∥² ≤ ϵn whenever xn ≠ xn−1, so the assumption ∑_{n=0}^∞ ϵn < ∞ yields ∑_{n=0}^∞ θn∥xn − xn−1∥ < ∞ and ∑_{n=0}^∞ θn∥xn − xn−1∥² < ∞.
In order to prove the convergence of Algorithm 3.1, we need the following lemmas.
Lemma 3.3. For each n ∈ N and every y ∈ C, the iterates yn, zn, xn+1 generated by Algorithm 3.1 satisfy the following inequalities:
(a) λn f(wn, y) − λn f(wn, yn) ≥ ⟨wn − yn, y − yn⟩,
(b) ρn f(yn, y) − ρn f(yn, zn) ≥ ⟨yn − zn, y − zn⟩,
(c) ρn f(zn, y) − ρn f(zn, xn+1) ≥ ⟨wn − xn+1, y − xn+1⟩,
where λn ≥ 0 and ρn ≥ 0.
Proof. We only prove inequality (c); the proofs of (a) and (b) are similar. Let n ∈ N. Since xn+1 = prox_{ρn f(zn,·)}(wn) minimizes the convex function y ↦ ρn f(zn, y) + (1/2)∥wn − y∥² over C, we have wn − xn+1 − ρn ωn ∈ NC(xn+1), where ωn ∈ ∂2f(zn, xn+1) ≡ ∂f(zn, ·)(xn+1) and NC(xn+1) denotes the normal cone of C at xn+1, namely
NC(xn+1) = {z ∈ H | ⟨z, y − xn+1⟩ ≤ 0, ∀y ∈ C}.
This implies that
⟨wn − xn+1, y − xn+1⟩ ≤ ⟨ρn ωn, y − xn+1⟩, ∀y ∈ C.
Since ωn ∈ ∂2f(zn, xn+1), we obtain
f(zn, y) ≥ f(zn, xn+1) + ⟨ωn, y − xn+1⟩, ∀y ∈ C.
Therefore, we have
ρn f(zn, y) ≥ ρn f(zn, xn+1) + ⟨wn − xn+1, y − xn+1⟩, ∀y ∈ C.
□
Lemma 3.4. For every x∗ ∈ EP(f, C) and for all n ∈ N, the iterates zn and xn+1 generated by Algorithm 3.1 satisfy the following inequality:
∥xn+1 − x∗∥² ≤ ∥wn − x∗∥² − ∥wn − xn+1∥² − 2ρn f(zn, xn+1).
Proof. Substituting x∗ for y in Lemma 3.3 (c) and using the equality
∥wn − x∗∥² = ∥wn − xn+1∥² + ∥xn+1 − x∗∥² + 2⟨wn − xn+1, xn+1 − x∗⟩,
we obtain that
∥wn − x∗∥² − ∥wn − xn+1∥² − ∥xn+1 − x∗∥² ≥ 2ρn f(zn, xn+1) − 2ρn f(zn, x∗).
Since f(x∗, zn) ≥ 0 and f is pseudomonotone, we have f(zn, x∗) ≤ 0, and thus
∥xn+1 − x∗∥² ≤ ∥wn − x∗∥² − ∥wn − xn+1∥² − 2ρn f(zn, xn+1). □
In the next two theorems, we prove that for every x∗ ∈ EP(f, C), the sequence {∥xn − x∗∥} is nonincreasing.
Theorem 3.5. Suppose that for some n ∈ N, the following inequality holds:
f(wn, xn+1) − f(wn, yn) ≤ 0.
Then, for every x∗ ∈ EP(f, C), we have
(3.2)    ∥xn+1 − x∗∥² ≤ ∥wn − x∗∥² − (1 − 2ρnc2)∥xn+1 − zn∥² − (1 − 2ρnc1)∥zn − yn∥² − ∥yn − wn∥².
Proof. Since f is Lipschitz-type continuous, we have that
f(zn, xn+1) ≥ f(yn, xn+1) − f(yn, zn) − c1∥zn − yn∥² − c2∥xn+1 − zn∥².
So, it follows from Lemma 3.4 that
(3.3)    ∥xn+1 − x∗∥² ≤ ∥wn − x∗∥² − ∥wn − xn+1∥² − 2ρn[f(yn, xn+1) − f(yn, zn)] + 2ρnc1∥zn − yn∥² + 2ρnc2∥xn+1 − zn∥².
Substituting xn+1 for y in Lemma 3.3 (b), we can write
(3.4)    ρn[f(yn, xn+1) − f(yn, zn)] ≥ ⟨zn − yn, zn − xn+1⟩.
So, from (3.3) and (3.4), we obtain that
(3.5)    ∥xn+1 − x∗∥² ≤ ∥wn − x∗∥² − ∥wn − xn+1∥² − 2⟨zn − yn, zn − xn+1⟩ + 2ρnc1∥zn − yn∥² + 2ρnc2∥xn+1 − zn∥².
Using successively the equalities
∥xn+1 − wn∥² = ∥xn+1 − zn∥² + ∥zn − wn∥² + 2⟨xn+1 − zn, zn − wn⟩
and
⟨zn − yn, zn − xn+1⟩ = ⟨zn − wn, zn − xn+1⟩ + ⟨wn − yn, zn − xn+1⟩,
we deduce from (3.5) that
(3.6)    ∥xn+1 − x∗∥² ≤ ∥wn − x∗∥² − (1 − 2ρnc2)∥xn+1 − zn∥² − ∥zn − wn∥² − 2⟨wn − yn, zn − xn+1⟩ + 2ρnc1∥zn − yn∥².
Observing that ∥zn − wn∥² = ∥zn − yn∥² + ∥yn − wn∥² + 2⟨zn − yn, yn − wn⟩, we obtain from (3.6) that
(3.7)    ∥xn+1 − x∗∥² ≤ ∥wn − x∗∥² − (1 − 2ρnc2)∥xn+1 − zn∥² − (1 − 2ρnc1)∥zn − yn∥² − ∥yn − wn∥² − 2⟨yn − wn, xn+1 − yn⟩.
Substituting xn+1 for y in Lemma 3.3 (a) and using the assumption, we can write
(3.8)    ⟨wn − yn, xn+1 − yn⟩ ≤ λn[f(wn, xn+1) − f(wn, yn)] ≤ 0.
So, combining (3.7) and (3.8), we obtain the desired inequality. □
Theorem 3.6. Suppose that for some n ∈ N, the following inequalities hold:
f(wn, xn+1) − f(wn, yn) > 0 and 0 < λn ≤ ρn. Then, for every x∗ ∈ EP(f, C), we have
(3.9)    ∥xn+1 − x∗∥² ≤ ∥wn − x∗∥² − (1 − 6ρnc1)∥wn − yn∥² − (1 − 4ρnc2)∥xn+1 − yn∥² − (2 − 4ρnc1 − 6ρnc2)∥zn − yn∥².
Proof. It follows from Lemma 3.4 that
∥xn+1 − x∗∥² ≤ ∥wn − x∗∥² − ∥wn − xn+1∥² − 2ρn f(zn, xn+1).
Consequently, rewriting ∥wn − xn+1∥² in terms of ∥wn − yn∥² and ∥yn − xn+1∥², we deduce the following inequality:
(3.10)    ∥xn+1 − x∗∥² ≤ ∥wn − x∗∥² − ∥wn − yn∥² − ∥yn − xn+1∥² − 2⟨wn − yn, yn − xn+1⟩ − 2ρn f(zn, xn+1).
Substituting xn+1 for y in Lemma 3.3 (a) gives
⟨wn − yn, xn+1 − yn⟩ ≤ λn f(wn, xn+1) − λn f(wn, yn).
Using this inequality in (3.10), we can deduce
(3.11)    ∥xn+1 − x∗∥² ≤ ∥wn − x∗∥² − ∥wn − yn∥² − ∥yn − xn+1∥² + 2λn f(wn, xn+1) − 2λn f(wn, yn) − 2ρn f(zn, xn+1).
Substituting yn for y in Lemma 3.3 (b) and remembering that f(yn, yn) = 0, we can write
∥yn − zn∥² ≤ ρn f(yn, yn) − ρn f(yn, zn) = −ρn f(yn, zn)
and thus,
(3.12)    0 ≤ −∥yn − zn∥² − ρn f(yn, zn).
Multiplying (3.12) by 2 and adding to (3.11), we get
(3.13)    ∥xn+1 − x∗∥² ≤ ∥wn − x∗∥² − ∥wn − yn∥² − ∥yn − xn+1∥² + 2λn f(wn, xn+1) − 2λn f(wn, yn) − 2ρn f(zn, xn+1) − 2ρn f(yn, zn) − 2∥zn − yn∥².
Since, by assumption, f(wn, xn+1) − f(wn, yn) > 0 and 0 < λn ≤ ρn, we deduce from the previous inequality that
(3.14)    ∥xn+1 − x∗∥² ≤ ∥wn − x∗∥² − ∥wn − yn∥² − ∥yn − xn+1∥² + 2ρn f(wn, xn+1) − 2ρn f(wn, yn) − 2ρn f(zn, xn+1) − 2ρn f(yn, zn) − 2∥zn − yn∥².
Now, using the Lipschitz-type continuity of f, first with x = wn, y = zn, z = xn+1, and then with x = wn, y = yn, z = zn, we obtain that
(3.15)    f(wn, xn+1) − f(zn, xn+1) − f(wn, yn) − f(yn, zn)
              ≤ f(wn, zn) + c1∥wn − zn∥² + c2∥zn − xn+1∥² − f(wn, yn) − f(yn, zn)
              = f(wn, zn) − f(wn, yn) − f(yn, zn) + c1∥wn − zn∥² + c2∥zn − xn+1∥²
              ≤ c1∥wn − yn∥² + c2∥yn − zn∥² + c1∥wn − zn∥² + c2∥zn − xn+1∥².
This implies, from (3.14), that
(3.16)    ∥xn+1 − x∗∥² ≤ ∥wn − x∗∥² − (1 − 2ρnc1)∥wn − yn∥² − ∥yn − xn+1∥² − (2 − 2ρnc2)∥yn − zn∥² + 2ρnc1∥wn − zn∥² + 2ρnc2∥xn+1 − zn∥².
Using twice the inequality (a + b)² ≤ 2(a² + b²), valid for every a, b ∈ R, we get
(3.17)    ∥wn − zn∥² ≤ 2∥wn − yn∥² + 2∥yn − zn∥²
and
(3.18)    ∥xn+1 − zn∥² ≤ 2∥xn+1 − yn∥² + 2∥yn − zn∥².
From (3.16), (3.17) and (3.18), we have
∥xn+1 − x∗∥² ≤ ∥wn − x∗∥² − (1 − 2ρnc1)∥wn − yn∥² − ∥yn − xn+1∥² − (2 − 2ρnc2)∥yn − zn∥² + 4ρnc1∥wn − yn∥² + 4ρnc1∥yn − zn∥² + 4ρnc2∥xn+1 − yn∥² + 4ρnc2∥yn − zn∥².
Hence
∥xn+1 − x∗∥² ≤ ∥wn − x∗∥² − (1 − 6ρnc1)∥wn − yn∥² − (1 − 4ρnc2)∥xn+1 − yn∥² − (2 − 4ρnc1 − 6ρnc2)∥yn − zn∥². □
Let x∗ ∈ EP(f, C). Our aim is now to use inequalities (3.2) and (3.9) to give sufficient conditions on the sequences {λn} and {ρn} under which, for all n,
(3.19)    ∥xn+1 − x∗∥² ≤ ∥wn − x∗∥² − b∥wn − yn∥² − d∥yn − zn∥²,
where b and d are positive numbers. First we consider the case where λn = 0 for all n.
Proposition 3.7. Let x∗ ∈ EP(f, C) and suppose that for all n, λn = 0 and
0 < ρn ≤ ρ∗ < min{ 1/(2c1), 1/(2c2) }.
Then, there exist b > 0 and d > 0 such that inequality (3.19) holds for all n.
Proof. Since λn = 0 for all n, we have that yn = wn for all n. So, from Theorem 3.5, we can write that for all n,
(3.20)    ∥xn+1 − x∗∥² ≤ ∥wn − x∗∥² − (1 − 2ρnc2)∥xn+1 − zn∥² − (1 − 2ρnc1)∥zn − yn∥².
On the other hand, by assumption, we have that
1 − 2ρnc1 ≥ 1 − 2ρ∗c1 > 0 and 1 − 2ρnc2 ≥ 1 − 2ρ∗c2 > 0.
Consequently, since yn = wn, we conclude from (3.20) that (3.19) holds with, for instance, b = 1 and d = 1 − 2ρ∗c1. □
Using Theorems 3.5 and 3.6, we obtain a similar result when λn > 0 for all n.
Proposition 3.8. Let x∗ ∈ EP(f, C) and suppose that for all n,
0 < λn < ρn ≤ ρ∗ < min{ 1/(6c1), 1/(4c2), 1/(2c1 + 3c2) }.
Then, there exist b > 0 and d > 0 such that inequality (3.19) holds for all n.
Proof. We consider two cases. When f(wn, xn+1) − f(wn, yn) ≤ 0, it follows from Theorem 3.5 that (3.2) is satisfied. Hence, since
ρ∗ < 1/(6c1) < 1/(2c1) and ρ∗ < 1/(4c2) < 1/(2c2),
we obtain immediately that (3.19) is satisfied with 0 < b ≤ 1 and 0 < d ≤ 1 − 2ρ∗c1. On the other hand, when f(wn, xn+1) − f(wn, yn) > 0, it follows from Theorem 3.6 that (3.9) is satisfied. Hence, since
ρ∗ < min{ 1/(6c1), 1/(4c2), 1/(2c1 + 3c2) },
we have that (3.19) is satisfied with
0 < b ≤ 1 − 6ρ∗c1 and 0 < d ≤ 2 − 4ρ∗c1 − 6ρ∗c2.
Consequently, whatever the sign of f(wn, xn+1) − f(wn, yn), (3.19) is satisfied with b = 1 − 6ρ∗c1 and d = min{1 − 2ρ∗c1, 2 − 4ρ∗c1 − 6ρ∗c2}. □
Finally, we are now in a position to establish the convergence of the sequence generated by Algorithm 3.1.
Theorem 3.9 (Weak convergence theorem). Suppose that Conditions (A1)-(A5) hold,
0 < ρn ≤ ρ∗ < min{ 1/(6c1), 1/(4c2), 1/(2c1 + 3c2) }
and 0 ≤ λn ≤ ρn for all n. In addition, suppose that lim inf_{n→∞} ρn > 0. Then, the sequence {xn} generated by Algorithm 3.1 converges weakly to a point p ∈ EP(f, C). Moreover, p = lim_{n→∞} P_{EP(f,C)}(xn).
Proof. It follows from (3.19) and the definition of wn that
∥xn+1 − x∗∥² ≤ ∥wn − x∗∥² ≤ ∥xn − x∗∥² + θn(∥xn − x∗∥² − ∥xn−1 − x∗∥²) + 2θn∥xn − xn−1∥².
Since ∑_{n=1}^∞ θn∥xn − xn−1∥² < ∞, we deduce from Lemma 2.3 that {∥xn − x∗∥} is convergent. Thus, {xn} is bounded, and so is {wn}.
Besides, we obtain
∥wn − x∗∥² = ∥xn + θn(xn − xn−1) − x∗∥² ≤ (∥xn − x∗∥ + θn∥xn − xn−1∥)²
 = ∥xn − x∗∥² + 2θn∥xn − x∗∥∥xn − xn−1∥ + θn²∥xn − xn−1∥²
 ≤ ∥xn − x∗∥² + 2θn∥xn − x∗∥∥xn − xn−1∥ + θn∥xn − xn−1∥²
 ≤ ∥xn − x∗∥² + 3Mθn∥xn − xn−1∥,
where M = sup_{n∈N}{∥xn − x∗∥, ∥xn − xn−1∥}. Thus, by (3.19),
0 ≤ b∥wn − yn∥² + d∥yn − zn∥² ≤ ∥wn − x∗∥² − ∥xn+1 − x∗∥²
 ≤ ∥xn − x∗∥² + 3Mθn∥xn − xn−1∥ − ∥xn+1 − x∗∥².
Since ∑_{n=1}^∞ θn∥xn − xn−1∥ < ∞ and since b > 0 and d > 0, we obtain that ∥wn − yn∥ → 0 and ∥zn − yn∥ → 0 as n → ∞. Let us consider
∥xn − yn∥ ≤ ∥xn − wn∥ + ∥wn − yn∥ ≤ θn∥xn − xn−1∥ + ∥wn − yn∥.
By the assumption that ∑_{n=1}^∞ θn∥xn − xn−1∥ < ∞, we get that ∥xn − yn∥ → 0 as n → ∞.
Now, let p be a weak cluster point of {xn}. Passing to a subsequence if necessary, we may assume that xn ⇀ p as n → ∞. Since C is convex and closed, C is weakly closed, hence p ∈ C. Moreover, from ∥wn − yn∥ → 0, ∥zn − yn∥ → 0 and ∥xn − yn∥ → 0, we also obtain wn, yn, zn ⇀ p as n → ∞. By Lemma 3.3 (c), we have that
(3.21)    ρn f(zn, y) ≥ ρn f(zn, xn+1) + ⟨wn − xn+1, y − xn+1⟩, ∀y ∈ C.
Using the Lipschitz-type continuity of f, we get
(3.22)    f(zn, xn+1) ≥ f(yn, xn+1) − f(yn, zn) − c1∥yn − zn∥² − c2∥zn − xn+1∥².
Also, substituting xn+1 for y in Lemma 3.3 (b), we have
(3.23)    ρn[f(yn, xn+1) − f(yn, zn)] ≥ ⟨yn − zn, xn+1 − zn⟩.
Combining (3.21), (3.22) and (3.23), and then dividing both sides of the resulting inequality by ρn > 0, we get, for all y ∈ C and n ≥ 0,
(3.24)    f(zn, y) ≥ (1/ρn)⟨wn − xn+1, y − xn+1⟩ + (1/ρn)⟨yn − zn, xn+1 − zn⟩ − c1∥yn − zn∥² − c2∥zn − xn+1∥².
Thus, passing to the limit in (3.24) and using ∥wn − yn∥ → 0, ∥zn − yn∥ → 0, ∥xn − yn∥ → 0, lim inf_{n→∞} ρn > 0, (A5) and zn ⇀ p, we get f(p, y) ≥ 0 for all y ∈ C, that is, p ∈ EP(f, C). Now, using again relation (3.19) with x∗ = p, we obtain
∥xn+1 − p∥² ≤ ∥wn − p∥² − b∥wn − yn∥² − d∥yn − zn∥² ≤ ∥wn − p∥²
 ≤ ∥xn − p∥² + 3Mθn∥xn − xn−1∥.
Thus, as shown in the first part of the proof, lim_{n→∞}∥xn − q∥ exists for every q ∈ EP(f, C), and every weak cluster point of {xn} belongs to EP(f, C); since H satisfies Opial's condition, {xn} has exactly one weak cluster point, and hence the whole sequence {xn} converges weakly to p as n → ∞. Moreover, p = lim_{n→∞} P_{EP(f,C)}(xn). Theorem 3.9 is proved. □
4. Computational experiments
In this section, we apply Algorithm 3.1 to solve an equilibrium model arising from Nash-Cournot oligopolistic equilibrium problems of electricity markets. This model has been investigated in several research papers (see e.g. [7, 26]). To test the algorithm, we take the example in [26]. In this example, there are nc companies, and each company i may possess Ii generating units. Let x denote the vector whose entry xi stands for the power generated by unit i. Following [7, 26], we suppose that the price p is a decreasing affine function of σ = ∑_{i=1}^{ng} xi, where ng is the number of all generating units, that is, p(x) = 378.4 − 2∑_{i=1}^{ng} xi = p(σ). Then the profit made by company i is given by fi(x) = p(σ)∑_{j∈Ii} xj − ∑_{j∈Ii} cj(xj), where cj(xj) is the cost of generating xj. As in [26], we suppose that the cost cj(xj) is given by
cj(xj) := max{c_j^0(xj), c_j^1(xj)}
with
c_j^0(xj) := (α_j^0/2) xj² + β_j^0 xj + γ_j^0,
c_j^1(xj) := α_j^1 xj + (β_j^1/(β_j^1 + 1)) (γ_j^1)^{−1/β_j^1} xj^{(β_j^1+1)/β_j^1},
where α_j^k, β_j^k, γ_j^k (k = 0, 1) are given parameters. Let x_j^min and x_j^max be the lower and upper bounds for the power generated by unit j. Then the strategy set of the model takes the form C := {x = (x1, ..., xng)^T : x_j^min ≤ xj ≤ x_j^max ∀j}.
Let us introduce the vectors q^i := (q_{i1}, ..., q_{i,ng})^T, i = 1, ..., nc, with q_{ij} := 1 if j ∈ Ii and q_{ij} := 0 otherwise, and then define
A := 2 ∑_{i=1}^{nc} (1 − q^i)(q^i)^T,   B := 2 ∑_{i=1}^{nc} q^i(q^i)^T,
a := −378.4 ∑_{i=1}^{nc} q^i,   c(x) := ∑_{j=1}^{ng} cj(xj),
where 1 denotes the all-ones vector in R^{ng}.
Then the oligopolistic equilibrium model under consideration can be formulated as the following EP (see [26], Lemma 7):
find x∗ ∈ C such that f(x∗, y) ≥ 0 for all y ∈ C, where
f(x, y) := ((A + (3/2)B)x + (1/2)By + a)^T (y − x) + c(y) − c(x).
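As an illustration (our sketch, not code from [26] or the authors), the problem data can be assembled from Tables 1 and 2 as follows. The outer-product form of A follows the reconstruction above (with 1 read as the all-ones vector) and should be checked against [26]; x is assumed to lie in the box C.

```python
import numpy as np

# Unit ownership (Table 1): company 1 owns unit 1, company 2 owns units 2-3, company 3 owns units 4-6.
units_of_company = [[0], [1, 2], [3, 4, 5]]           # 0-based unit indices
x_min = np.zeros(6)
x_max = np.array([80.0, 80.0, 50.0, 55.0, 30.0, 40.0])

# Cost parameters (Table 2).
alpha0 = np.array([0.0400, 0.0350, 0.1250, 0.0116, 0.0500, 0.0500])
beta0  = np.array([2.00, 1.75, 1.00, 3.25, 3.00, 3.00])
gamma0 = np.zeros(6)
alpha1 = np.array([2.0000, 1.7500, 1.0000, 3.2500, 3.0000, 3.0000])
beta1  = np.ones(6)
gamma1 = np.array([25.0000, 28.5714, 8.0000, 86.2069, 20.0000, 20.0000])

def cost(x):
    """c(x) = sum_j max{c_j^0(x_j), c_j^1(x_j)} for the piecewise cost model above."""
    c0 = 0.5 * alpha0 * x**2 + beta0 * x + gamma0
    c1 = alpha1 * x + (beta1 / (beta1 + 1.0)) * gamma1**(-1.0 / beta1) * x**((beta1 + 1.0) / beta1)
    return float(np.sum(np.maximum(c0, c1)))

# Incidence vectors q^i and the data (A, B, a) of the bifunction.
q = np.zeros((3, 6))
for i, units in enumerate(units_of_company):
    q[i, units] = 1.0
ones = np.ones(6)
A = 2.0 * sum(np.outer(ones - q[i], q[i]) for i in range(3))
B = 2.0 * sum(np.outer(q[i], q[i]) for i in range(3))
a = -378.4 * q.sum(axis=0)

def f(x, y):
    """Equilibrium bifunction f(x, y) = ((A + 1.5 B) x + 0.5 B y + a)^T (y - x) + c(y) - c(x)."""
    return float(((A + 1.5 * B) @ x + 0.5 * B @ y + a) @ (y - x) + cost(y) - cost(x))
```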
We test Algorithm 3.1 on this problem, which corresponds to the first model in [7], where three companies (nc = 3) are considered and the parameters are given in Table 1 and Table 2. To terminate the algorithms, we use the stopping criterion ∥xn+1 − xn∥ ≤ ϵ with a tolerance ϵ = 10^{−3}. The computational results are reported in Table 3 for some starting points and regularization parameters.
Table 1. The lower and upper bounds for the power generation of the generating units and companies.
Com.  Gen.  x_g^min  x_g^max  x_c^min  x_c^max
1 1 0 80 0 80
2 2 0 80 0 130
2 3 0 50 0 130
3 4 0 55 0 125
3 5 0 30 0 125
3 6 0 40 0 125
Table 2. The parameters of the generating unit cost functions.
Gen.  α_j^0   β_j^0  γ_j^0  α_j^1   β_j^1   γ_j^1
1     0.0400  2.00   0.00   2.0000  1.0000  25.0000
2     0.0350  1.75   0.00   1.7500  1.0000  28.5714
3     0.1250  1.00   0.00   1.0000  1.0000  8.0000
4     0.0116  3.25   0.00   3.2500  1.0000  86.2069
5     0.0500  3.00   0.00   3.0000  1.0000  20.0000
6     0.0500  3.00   0.00   3.0000  1.0000  20.0000
Example 4.1. In this experiment, we compare Algorithm 3.1 with the General Extragradient Algorithm (GEA) of Nguyen et al. [23] and with Algorithm 1 of Anh and Hieu [2]. We choose the same starting points x0 = (0, 0, 0, 0, 0, 0)^T, x1 = (30, 20, 10, 15, 10, 10)^T and the parameters ρn = 1/(6.01 c1) and λn = 0.8ρn for every algorithm. We choose αn = 1/(n + 1) for the Anh and Hieu Algorithm. For Algorithm 3.1, we choose ϵn = 1/n^{1.1}, θ ∈ [0, 1) and θn such that (3.1) is satisfied. Table 3 shows the results in this case.
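For reproducibility, here is a sketch of the parameter settings of Example 4.1 in code form (the numerical value of the Lipschitz-type constant c1 of f is not given in the excerpt, so a placeholder is used; it would have to be supplied for an actual run):

```python
import numpy as np

c1 = 1.0                          # placeholder: the Lipschitz-type constant of f for this problem
rho = 1.0 / (6.01 * c1)           # rho_n = 1/(6.01 c1), constant in n
lam = 0.8 * rho                   # lambda_n = 0.8 rho_n
theta = 0.9                       # one of the tested inertial parameters
eps = lambda n: 1.0 / n**1.1      # eps_n = 1/n^{1.1}
tol = 1e-3                        # stop when ||x_{n+1} - x_n|| <= tol

x0 = np.zeros(6)                                        # starting point x_0
x1 = np.array([30.0, 20.0, 10.0, 15.0, 10.0, 10.0])     # starting point x_1
```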
Table 3. Comparison: Algorithm 3.1 with different θ, Anh and Hieu Algorithm [2], and Nguyen et al. Algorithm [23].
Algorithm                                   θ     Iter(n)   CPU (s)
Algorithm 3.1                               0.9   237       107.5469
                                            0.7   390       195.6719
                                            0.5   714       386.3438
                                            0.3   1047      569.5156
                                            0.1   1376      796.8125
Anh and Hieu Algorithm [2, Algorithm 1]     0     2408      952.8281
Nguyen et al. Algorithm [23]                0     2401      1.0276×10³
[Figure 1 here: ∥xn+1 − xn∥ (logarithmic scale) versus the number of iterations for IMPA with θ = 0.9, 0.7, 0.5, 0.3, 0.1, MPA and GEA.]
Figure 1. Comparison of Algorithm 3.1 (IMPA) with different θ, the Anh and Hieu Algorithm [2] (MPA), and the Nguyen et al. Algorithm [23] (GEA).