
2.5 Review of some optimization problems

2.5.1 Variational inequality and fixed point problems

The following are properties of the resolvent (see [21]):
\[
\langle Q_\mu^B x - x, \, J(\bar{x} - Q_\mu^B x) \rangle \geq 0, \qquad x \in X, \ \bar{x} \in B^{-1}(0); \tag{2.4.10}
\]
in particular, if $X$ is a real Hilbert space, then
\[
\langle J_\mu^B x - x, \, \bar{x} - J_\mu^B x \rangle \geq 0, \qquad x \in X, \ \bar{x} \in B^{-1}(0),
\]
where $J_\mu^B = (I + \mu B)^{-1}$ is the general resolvent and $B^{-1}(0) = \{z \in X : 0 \in Bz\}$ is the set of null points of $B$. Also, we know that $B^{-1}(0)$ is closed and convex (see [237]).
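The Hilbert space inequality above follows directly from the monotonicity of $B$; a short sketch of the verification: since $J_\mu^B x = (I + \mu B)^{-1} x$, we have $\frac{1}{\mu}(x - J_\mu^B x) \in B(J_\mu^B x)$, while $0 \in B(\bar{x})$ for every $\bar{x} \in B^{-1}(0)$. Applying the monotonicity of $B$ to these two points of its graph gives
\[
0 \leq \Big\langle \tfrac{1}{\mu}\big(x - J_\mu^B x\big) - 0, \, J_\mu^B x - \bar{x} \Big\rangle = \tfrac{1}{\mu}\,\big\langle J_\mu^B x - x, \, \bar{x} - J_\mu^B x \big\rangle,
\]
and multiplying through by $\mu > 0$ yields the stated inequality.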

Lemma 2.4.21. [44] Let $B : H \to 2^H$ be a maximal monotone mapping and $A : H \to H$ be a Lipschitz continuous and monotone mapping. Then the mapping $A + B$ is maximal monotone.

To approximate the solutions of the VIP (1.2.1) under suitable conditions, two common methods are used, namely the projection method and the regularization method. To use these methods, a certain level of monotonicity is required of the cost operator. In this study, we focus on approximating the solution of the VIP (1.2.1) using the projection method. Several authors have proposed and studied projection-type algorithms for approximating the solutions of the VIP (1.2.1) (see [9, 156] and the references therein).

In a real Hilbert space $H$, the following fixed point characterization describes the solution set of the VIP (1.2.1): for $\lambda > 0$, a point $x$ is a solution of the VIP if and only if
\[
x = P_C(x - \lambda A x),
\]
where $P_C$ is the metric projection of $H$ onto $C$. The simplest algorithm for solving the VIP (1.2.1) is the gradient projection method given by

Algorithm 2.5.1.

\[
x_{n+1} = P_C(x_n - \lambda A x_n), \qquad n \geq 1,
\]
where $\lambda > 0$. This method involves only one projection onto the feasible set $C$ per iteration.
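As a concrete illustration, the following minimal sketch runs this iteration on a toy problem in which $C$ is a box in $\mathbb{R}^d$ (so that $P_C$ is coordinatewise clipping) and $A$ is an affine monotone map; the operator, the constraint set and the step size are illustrative assumptions rather than part of the reviewed results.

\begin{verbatim}
import numpy as np

# Gradient projection sketch (Algorithm 2.5.1) on a toy problem:
# A(x) = M x + q with M positive definite (so A is strongly monotone),
# C = [lo, hi]^d a box, hence P_C is coordinatewise clipping.
def gradient_projection(M, q, lo, hi, lam=0.1, n_iters=500):
    x = np.zeros(q.size)
    proj_C = lambda z: np.clip(z, lo, hi)   # metric projection onto the box C
    for _ in range(n_iters):
        x = proj_C(x - lam * (M @ x + q))   # x_{n+1} = P_C(x_n - lam * A x_n)
    return x

# Illustrative 3-dimensional example.
M = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.0],
              [0.0, 0.0, 3.0]])
q = np.array([-1.0, 0.5, -2.0])
print(gradient_projection(M, q, lo=0.0, hi=1.0))
\end{verbatim}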

Weak convergence of this method is obtained only under rather strict conditions; in particular, it is only effective for solving the VIP (1.2.1) when $A$ is either strongly monotone or inverse strongly monotone. To circumvent this limitation, Korpelevich [155] proposed an extragradient method for solving the VIP (1.2.1) in Euclidean spaces when $A$ is monotone and $L$-Lipschitz continuous. The extragradient method is defined as follows:

Algorithm 2.5.2.

\[
\begin{cases}
y_n = P_C(x_n - \lambda A x_n), & n \geq 1,\\
x_{n+1} = P_C(x_n - \lambda A y_n),
\end{cases} \tag{2.5.2}
\]

where $\lambda \in \left(0, \frac{1}{L}\right)$ and $P_C$ is the metric projection from $H$ onto $C$. If the solution set $VI(C, A)$ is nonempty, then the sequence $\{x_n\}$ generated by (2.5.2) converges to an element of $VI(C, A)$. The extragradient method involves two projections onto the feasible set $C$ per iteration. Computing the projection onto an arbitrary closed convex set can be computationally expensive, which could be a barrier to the implementation of the extragradient method and its variants. To improve on the extragradient method, authors have tried to minimize the number of evaluations of the projection map $P_C$ per iteration. Censor et al. [58] initiated a study in this direction and proposed a new method called the subgradient extragradient method. The authors replaced the second projection onto $C$ in Algorithm (2.5.2) by a projection onto a specific constructible half-space, arriving at the following algorithm:

Algorithm 2.5.3.
\[
\begin{cases}
x_1 \in H,\\
y_n = P_C(x_n - \lambda_n A x_n),\\
T_n = \{ w \in H : \langle x_n - \lambda_n A x_n - y_n, \, w - y_n \rangle \leq 0 \},\\
x_{n+1} = P_{T_n}(x_n - \lambda_n A y_n), \qquad n \geq 1,
\end{cases} \tag{2.5.3}
\]

where $H$ is a real Hilbert space. Algorithm (2.5.3) is less computationally expensive and more efficient than Algorithm (2.5.2). The authors proved that Algorithm (2.5.3) converges weakly to an element of $VI(C, A) \neq \emptyset$. To improve on Algorithm (2.5.3), Tseng [253] proposed and studied a forward-backward method (also known as Tseng's extragradient method) which requires only one projection per iteration. The proposed method is presented as follows:

Algorithm 2.5.4.

\[
\begin{cases}
y_n = P_C(x_n - \lambda A x_n),\\
x_{n+1} = y_n - \lambda (A y_n - A x_n), & \forall\, n \geq 0,
\end{cases} \tag{2.5.4}
\]

where $A$ is monotone and $L$-Lipschitz continuous, and $\lambda \in \left(0, \frac{1}{L}\right)$. The author proved that the sequence $\{x_n\}$ generated by Algorithm (2.5.4) converges weakly to an element of the solution set of the VIP (1.2.1) under the assumption that $VI(C, A) \neq \emptyset$.
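To make the difference in projection counts concrete, the following sketch implements one iteration of each of the updates (2.5.2), (2.5.3) and (2.5.4) in the same toy setting as the earlier sketch (a box constraint set, so $P_C$ is coordinatewise clipping, and an affine monotone operator); the projection onto the half-space $T_n$ is written out in closed form. The operator, the set and the step size are illustrative assumptions, not taken from the cited works.

\begin{verbatim}
import numpy as np

# One iteration of each update for A(x) = M x + q (monotone, Lipschitz with
# L = ||M||_2) and C = [lo, hi]^d, so P_C is coordinatewise clipping.

def proj_box(z, lo, hi):
    return np.clip(z, lo, hi)

def proj_halfspace(z, a, b):
    # Projection onto {w : <a, w> <= b}; if z already satisfies it, return z.
    viol = a @ z - b
    return z if viol <= 0 else z - (viol / (a @ a)) * a

def extragradient_step(x, A, lo, hi, lam):
    # Korpelevich (2.5.2): two projections onto C per iteration.
    y = proj_box(x - lam * A(x), lo, hi)
    return proj_box(x - lam * A(y), lo, hi)

def subgradient_extragradient_step(x, A, lo, hi, lam):
    # Censor et al. (2.5.3): second projection onto the half-space
    # T = {w : <x - lam*A(x) - y, w - y> <= 0}, available in closed form.
    y = proj_box(x - lam * A(x), lo, hi)
    a = x - lam * A(x) - y
    return proj_halfspace(x - lam * A(y), a, a @ y)

def tseng_step(x, A, lo, hi, lam):
    # Tseng (2.5.4): a single projection onto C per iteration.
    y = proj_box(x - lam * A(x), lo, hi)
    return y - lam * (A(y) - A(x))

# Illustrative data: the same 3-dimensional affine operator as before.
M = np.array([[2.0, 0.5, 0.0], [0.5, 1.0, 0.0], [0.0, 0.0, 3.0]])
q = np.array([-1.0, 0.5, -2.0])
A = lambda x: M @ x + q
lam = 0.9 / np.linalg.norm(M, 2)   # step size strictly below 1/L
x = np.zeros(3)
for _ in range(200):
    x = tseng_step(x, A, 0.0, 1.0, lam)
print(x)
\end{verbatim}

Note that the subgradient extragradient step still evaluates $P_C$ once but replaces the second projection by the inexpensive closed-form half-space projection, while Tseng's step avoids the second projection altogether.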

The step sizes of all the algorithms above require prior knowledge of the Lipschitz constant of the monotone operator, which is often difficult to calculate or estimate. Recently, Yang and Liu [266] proposed and studied a Tseng-type extragradient method combined with the Moudafi viscosity scheme, which does not require prior knowledge of the Lipschitz constant of the monotone operator. The authors proved a strong convergence result for the proposed algorithm. Very recently, Shehu and Iyiola [223] proposed an algorithm which combines the viscosity method and the subgradient extragradient method for solving the VIP. They proved that the sequence generated by their algorithm converges strongly to a point in the solution set under appropriate conditions.
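A typical self-adaptive rule used in such Lipschitz-constant-free methods updates the step size from quantities already computed in the iteration; the sketch below shows one common variant, given purely as an illustration and not necessarily the exact rule used in [266] or [223].

\begin{verbatim}
import numpy as np

def adaptive_lambda(lam, mu, x, y, Ax, Ay):
    # A common self-adaptive rule: shrink the step only when the local
    # curvature estimate ||Ax - Ay|| / ||x - y|| exceeds mu / lam;
    # no global Lipschitz constant is needed.
    diff_x = np.linalg.norm(x - y)
    diff_A = np.linalg.norm(Ax - Ay)
    if diff_A > 0:
        return min(mu * diff_x / diff_A, lam)
    return lam
\end{verbatim}

The resulting step size sequence is non-increasing and stops shrinking once it falls below $\mu$ divided by the local Lipschitz estimate, so no a priori knowledge of $L$ is needed.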

Motivated by Tseng's method and the importance of studying VIPs and FPPs, Yin et al. [270] proposed and studied a Tseng-type algorithm in which $A : H \to H$ is quasimonotone, Lipschitz continuous and sequentially weakly continuous, and the mapping $T$ is pseudocontractive. The authors were only able to obtain weak convergence of their method (see Appendix 3.5.17) to some point $x \in VI(C, A) \cap F(T)$.

Chidume and Nnakwe [75] extended the study of the subgradient extragradient method from the framework of a real Hilbert space to the framework of a 2-uniformly convex and uniformly smooth Banach space. The authors proposed and studied the following method for solving the VIP (1.2.1):

Algorithm 2.5.5.
\[
\begin{cases}
x_0 \in X,\\
y_n = \Pi_C J^{-1}(J x_n - \tau A x_n),\\
T_n = \{ w \in X : \langle w - y_n, \, J x_n - \tau A x_n - J y_n \rangle \leq 0 \},\\
x_{n+1} = \Pi_{T_n} J^{-1}(J x_n - \tau A y_n),
\end{cases} \tag{2.5.5}
\]

where $\Pi_C : X \to C$ is the generalized projection of the Banach space $X$ onto $C$, $J : X \to 2^{X^*}$ is the normalized duality mapping and $\tau > 0$. Using this method, the authors obtained a weak convergence result.

Recently, Cai et al. [53] proposed an algorithm for solving variational inequalities involving a monotone and Lipschitz continuous mapping in Banach spaces. The proposed algorithm is presented as follows:

Algorithm 2.5.6.

Step 0: Let $u \in X$ be a given starting point. Set $n = 1$.

Step 1: Given the current iterate $x_n$, compute
\[
y_n = \Pi_C J^{-1}(J x_n - \lambda_n A x_n).
\]
If $x_n = y_n$, stop. Else, construct the set
\[
T_n := \{ z \in X : \langle J x_n - \lambda_n A x_n - J y_n, \, z - y_n \rangle \leq 0 \},
\]
compute
\[
z_n = \Pi_{T_n} J^{-1}(J x_n - \lambda_n A y_n),
\]
and update the next iterate via
\[
x_{n+1} = J^{-1}\big(\alpha_n J u + (1 - \alpha_n) J z_n\big). \tag{2.5.6}
\]
Step 2: Set $n := n + 1$ and go to Step 1.

where $X$ is a 2-uniformly convex Banach space and $X^*$ is the dual of $X$. Under certain conditions, the authors obtained a strong convergence result for the proposed algorithm.
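To illustrate the structure of such anchored schemes, the sketch below specializes the update to a real Hilbert space, where the duality mapping $J$ reduces to the identity and the generalized projection $\Pi_C$ reduces to the metric projection $P_C$; the anchoring parameters $\alpha_n$, the fixed step size and the stopping rule are illustrative assumptions.

\begin{verbatim}
import numpy as np

def anchored_subgradient_extragradient(A, proj_C, u, x1, lam, n_iters=500):
    # Hilbert-space specialization of an anchored (Halpern-type)
    # subgradient extragradient scheme:
    # y_n = P_C(x_n - lam*A(x_n))
    # T_n = {z : <x_n - lam*A(x_n) - y_n, z - y_n> <= 0}
    # z_n = P_{T_n}(x_n - lam*A(y_n))
    # x_{n+1} = a_n*u + (1 - a_n)*z_n
    x = x1
    for n in range(1, n_iters + 1):
        a_n = 1.0 / (n + 1)              # illustrative anchoring parameters
        Ax = A(x)
        y = proj_C(x - lam * Ax)
        v = x - lam * Ax - y             # normal vector of the half-space T_n
        w = x - lam * A(y)
        viol = v @ (w - y)
        z = w - (viol / (v @ v)) * v if viol > 0 else w
        x = a_n * u + (1.0 - a_n) * z
    return x
\end{verbatim}

In the Banach-space setting of Algorithm 2.5.6, the duality mapping $J$, its inverse and the generalized projections $\Pi_C$, $\Pi_{T_n}$ must instead be computed explicitly, which is where the 2-uniform convexity and uniform smoothness assumptions enter.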

Recently, Tan et al. [240] proposed an inertial subgradient extragradient method with a new Armijo-type step size strategy for solving the VIP (1.2.1) in real Hilbert spaces when the underlying operator is pseudomonotone and uniformly continuous. The proposed algorithm is presented as follows:

Algorithm 2.5.7.

Initialization: Given $\lambda_1 > 0$, $\theta > 0$, $\delta > 0$, $\ell \in (0,1)$ and $\eta \in (0,1)$. Let $x_0, x_1 \in H$ be arbitrary. Set $n := 1$.

Iterative Steps: Given the iterates $x_{n-1}$ and $x_n$ for each $n \geq 1$, calculate $x_{n+1}$ as follows:

Step 1: Compute
\[
w_n = x_n + \theta_n (x_n - x_{n-1}),
\]
where
\[
\theta_n :=
\begin{cases}
\min\left\{ \theta, \, \dfrac{\tau_n}{\|x_n - x_{n-1}\|} \right\}, & \text{if } x_n \neq x_{n-1},\\[2mm]
\theta, & \text{otherwise}.
\end{cases} \tag{2.5.7}
\]
Step 2: Compute

\[
y_n = P_C(w_n - \lambda_n A w_n).
\]

If $w_n = y_n$ or $A y_n = 0$, then stop; $y_n$ is a solution of the VIP. Otherwise, go to Step 3.

Step 3: Compute
\[
z_n = P_{T_n}(w_n - \lambda_n A y_n),
\]
where
\[
T_n := \{ x \in H : \langle w_n - \lambda_n A w_n - y_n, \, x - y_n \rangle \leq 0 \},
\]
$\lambda_n := \delta \ell^{m_n}$, and $m_n$ is the smallest nonnegative integer $m$ satisfying
\[
\delta \ell^{m} \langle A y_n - A w_n, \, y_n - z_n \rangle \leq \frac{\eta}{2} \left[ \|w_n - y_n\|^2 + \|y_n - z_n\|^2 \right].
\]
Step 4: Calculate
\[
x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n) z_n.
\]
Set $n := n + 1$ and go to Step 1.

Under some mild conditions, the authors obtained a strong convergence result for the proposed algorithm.
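For concreteness, a sketch of one pass through Steps 1--4 of Algorithm 2.5.7 is given below, in the same finite-dimensional Hilbert space setting used in the earlier sketches; the reading of the Armijo search (recomputing the candidates $y_n$, $z_n$ for each trial exponent $m$), the contraction $f$ and the sequences $\alpha_n$, $\tau_n$ are illustrative assumptions rather than the authors' exact formulation.

\begin{verbatim}
import numpy as np

def tan_type_step(x_prev, x, A, proj_C, f, alpha_n, tau_n,
                  theta=0.5, delta=1.0, ell=0.5, eta=0.9, max_m=50):
    # One pass through Steps 1-4 (one natural reading of the Armijo search:
    # for each trial exponent m, recompute y, z and test the inequality).
    # Step 1: inertial extrapolation.
    dx = np.linalg.norm(x - x_prev)
    theta_n = min(theta, tau_n / dx) if dx > 0 else theta
    w = x + theta_n * (x - x_prev)
    Aw = A(w)
    # Steps 2-3: Armijo-type search for lambda_n with the
    # subgradient extragradient update.
    for m in range(max_m):
        lam = delta * ell**m
        y = proj_C(w - lam * Aw)
        if np.allclose(w, y) or np.allclose(A(y), 0):
            return y                     # stopping rule: y solves the VIP
        v = w - lam * Aw - y             # normal vector of the half-space T_n
        u = w - lam * A(y)
        viol = v @ (u - y)
        z = u - (viol / (v @ v)) * v if viol > 0 else u
        lhs = lam * (A(y) - Aw) @ (y - z)
        rhs = 0.5 * eta * (np.linalg.norm(w - y)**2 + np.linalg.norm(y - z)**2)
        if lhs <= rhs:
            break
    # Step 4: viscosity step with a contraction f.
    return alpha_n * f(x) + (1.0 - alpha_n) * z
\end{verbatim}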