
4.3 On split variational inequality problem beyond monotonicity

4.3.1 Proposed methods

In this section, we present our proposed methods and discuss their features. We begin with the following assumptions under which our strong convergence results are obtained.

Assumption 4.3.1. Suppose that the following conditions hold:

(a) The feasible sets $C$ and $Q$ are nonempty, closed and convex subsets of the real Hilbert spaces $H_1$ and $H_2$, respectively.

(b) $A: H_1 \to H_1$ and $F: H_2 \to H_2$ are pseudomonotone, sequentially weakly continuous and Lipschitz continuous with Lipschitz constants $L_1$ and $L_2$, respectively.

(c) $T: H_1 \to H_2$ is a bounded linear operator and the solution set $\Gamma := \{z \in VI(A,C) : Tz \in VI(F,Q)\}$ is nonempty, where $VI(A,C)$ is the solution set of the classical VIP (1.2.4).

(d) $\{\delta_n\}_{n=1}^{\infty}$ and $\{\tau_n\}_{n=1}^{\infty}$ are positive sequences satisfying the following conditions:
$$\delta_n \in (0,1), \quad \lim_{n\to\infty} \delta_n = 0, \quad \sum_{n=1}^{\infty} \delta_n = \infty \quad \text{and} \quad \lim_{n\to\infty} \frac{\tau_n}{\delta_n} = 0.$$

(e) $\{\theta_n\} \subset (a, 1-\delta_n)$ for some $a > 0$.
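Concrete sequences satisfying conditions (d) and (e) are easy to construct. The following Python snippet checks one illustrative choice not taken from the text: $\delta_n = 1/(n+1)$, $\tau_n = 1/(n+1)^2$ and $\theta_n = (1-\delta_n)/2$ with $a = 1/5$.

```python
# Hypothetical sequences satisfying Assumption 4.3.1 (d)-(e):
#   delta_n = 1/(n+1) lies in (0,1), tends to 0, and its series diverges;
#   tau_n = 1/(n+1)^2 gives tau_n/delta_n = 1/(n+1) -> 0;
#   theta_n = (1 - delta_n)/2 lies in (a, 1 - delta_n) with a = 1/5.

def delta(n):
    return 1.0 / (n + 1)

def tau(n):
    return 1.0 / (n + 1) ** 2

def theta(n):
    return 0.5 * (1.0 - delta(n))

# sanity checks over a range of indices
for n in range(1, 10_000):
    assert 0.0 < delta(n) < 1.0
    assert 0.2 < theta(n) < 1.0 - delta(n)
    assert abs(tau(n) / delta(n) - delta(n)) < 1e-12  # ratio is 1/(n+1) -> 0
```

Any other choice works as long as $\sum \delta_n$ diverges while $\tau_n$ vanishes strictly faster than $\delta_n$.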

We present the following method for solving the SVIP (1.2.4)-(1.2.5) when L1 and L2 are known.

Algorithm 4.3.2. Modified projection and contraction method with fixed stepsize.

Step 0: Choose sequences $\{\delta_n\}_{n=1}^{\infty}$, $\{\theta_n\}_{n=1}^{\infty}$ and $\{\tau_n\}_{n=1}^{\infty}$ such that the conditions from Assumption 4.3.1 (d)-(e) hold, and let $\eta \ge 0$, $\gamma_i \in (0,2)$, $i = 1,2$, $\mu \in \left(0, \frac{1}{L_1}\right)$, $\lambda \in \left(0, \frac{1}{L_2}\right)$, $\alpha \ge 3$ and $x_0, x_1 \in H_1$ be given arbitrarily. Set $n := 1$.

Step 1: Given the iterates $x_{n-1}$ and $x_n$ ($n \ge 1$), choose $\alpha_n$ such that $0 \le \alpha_n \le \bar{\alpha}_n$, where
$$\bar{\alpha}_n := \begin{cases} \min\left\{\dfrac{n-1}{n+\alpha-1}, \dfrac{\tau_n}{\|x_n - x_{n-1}\|}\right\}, & \text{if } x_n \neq x_{n-1},\\[2mm] \dfrac{n-1}{n+\alpha-1}, & \text{otherwise}. \end{cases} \qquad (4.3.1)$$

Step 2: Compute
$$w_n = x_n + \alpha_n(x_n - x_{n-1}), \quad y_n = P_Q(Tw_n - \lambda F Tw_n), \quad z_n = Tw_n - \gamma_2\beta_n r_n,$$
where $r_n := Tw_n - y_n - \lambda(F Tw_n - F y_n)$ and $\beta_n := \dfrac{\langle Tw_n - y_n, r_n\rangle}{\|r_n\|^2}$ if $r_n \neq 0$; otherwise, $\beta_n = 0$.

Step 3: Compute
$$b_n = w_n + \eta_n T^*(z_n - Tw_n),$$
where the stepsize $\eta_n$ is chosen such that, for small enough $\epsilon > 0$,
$$\eta_n \in \left[\epsilon, \frac{\|Tw_n - z_n\|^2}{\|T^*(Tw_n - z_n)\|^2} - \epsilon\right]$$
if $z_n \neq Tw_n$; otherwise, $\eta_n = \eta$.

Step 4: Compute
$$u_n = P_C(b_n - \mu A b_n), \quad t_n = b_n - \gamma_1\gamma_n v_n,$$
where $v_n := b_n - u_n - \mu(Ab_n - Au_n)$ and $\gamma_n := \dfrac{\langle b_n - u_n, v_n\rangle}{\|v_n\|^2}$ if $v_n \neq 0$; otherwise, $\gamma_n = 0$.

Step 5: Compute
$$x_{n+1} = (1 - \theta_n - \delta_n)b_n + \theta_n t_n.$$
Set $n := n+1$ and go back to Step 1.
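To make the flow of Steps 0-5 concrete, here is a hedged numerical sketch of Algorithm 4.3.2 in Python on a toy SVIP: $H_1 = \mathbb{R}^3$, $H_2 = \mathbb{R}^2$, $T$ a random matrix, $A = F$ the identity (pseudomonotone and $1$-Lipschitz, so $L_1 = L_2 = 1$), and $C$, $Q$ unit balls, so that $0 \in \Gamma$ is the minimum-norm solution. All concrete parameter values are illustrative assumptions, not prescribed by the text.

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((2, 3))        # bounded linear operator T : H1 -> H2

def proj_ball(x):
    """Projection onto the closed unit ball (plays both P_C and P_Q)."""
    nx = np.linalg.norm(x)
    return x if nx <= 1.0 else x / nx

A = F = lambda v: v                    # identity: pseudomonotone, 1-Lipschitz
mu, lam = 0.5, 0.5                     # mu in (0, 1/L1), lam in (0, 1/L2)
g1, g2 = 1.5, 1.5                      # gamma_1, gamma_2 in (0, 2)
alpha, eta, eps = 3.0, 0.5, 1e-8

x_prev = rng.standard_normal(3)
x = rng.standard_normal(3)
for n in range(1, 200):
    delta_n, tau_n = 1.0 / (n + 1), 1.0 / (n + 1) ** 2
    theta_n = 0.5 * (1.0 - delta_n)
    # Step 1: inertial parameter, bounded by alpha_bar_n of (4.3.1)
    d = np.linalg.norm(x - x_prev)
    a_n = (n - 1) / (n + alpha - 1) if d == 0 else min((n - 1) / (n + alpha - 1), tau_n / d)
    # Step 2: projection-contraction step for the VIP in H2
    w = x + a_n * (x - x_prev)
    Tw = T @ w
    y = proj_ball(Tw - lam * F(Tw))
    r = Tw - y - lam * (F(Tw) - F(y))
    beta = 0.0 if np.allclose(r, 0) else (Tw - y) @ r / (r @ r)
    z = Tw - g2 * beta * r
    # Step 3: stepsize eta_n chosen without knowing ||T||; T.T plays T*
    diff = Tw - z
    Ts = T.T @ diff
    eta_n = eta if np.allclose(diff, 0) else (diff @ diff) / (Ts @ Ts) - eps
    b = w + eta_n * (T.T @ (z - Tw))
    # Step 4: projection-contraction step for the VIP in H1
    u = proj_ball(b - mu * A(b))
    v = b - u - mu * (A(b) - A(u))
    gam = 0.0 if np.allclose(v, 0) else (b - u) @ v / (v @ v)
    t = b - g1 * gam * v
    # Step 5: convex combination; the missing delta_n weight pulls the
    # iterates toward 0, the minimum-norm solution of this toy problem
    x_prev, x = x, (1.0 - theta_n - delta_n) * b + theta_n * t

print(np.linalg.norm(x))               # should approach 0, the solution
```

The sketch mirrors the algorithm step by step; only the feasible sets, operators and parameter values are toy choices made so that every projection has a closed form.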

In the situation where L1 and L2 are not available, we present the following method with adaptive stepsize for solving the SVIP (1.2.4)-(1.2.5).

Algorithm 4.3.3. Modified projection and contraction method with adaptive stepsize strategy.

Step 0: Choose sequences $\{\delta_n\}_{n=1}^{\infty}$, $\{\theta_n\}_{n=1}^{\infty}$ and $\{\tau_n\}_{n=1}^{\infty}$ such that the conditions from Assumption 4.3.1 (d)-(e) hold, and let $\eta \ge 0$, $\gamma_i \in (0,2)$, $a_i \in (0,1)$, $i = 1,2$, $\lambda_1 > 0$, $\mu_1 > 0$, $\alpha \ge 3$ and $x_0, x_1 \in H_1$ be given arbitrarily. Set $n := 1$.

Step 1: Given the iterates $x_{n-1}$ and $x_n$ for each $n \ge 1$, choose $\alpha_n$ such that $0 \le \alpha_n \le \bar{\alpha}_n$, where
$$\bar{\alpha}_n := \begin{cases} \min\left\{\dfrac{n-1}{n+\alpha-1}, \dfrac{\tau_n}{\|x_n - x_{n-1}\|}\right\}, & \text{if } x_n \neq x_{n-1},\\[2mm] \dfrac{n-1}{n+\alpha-1}, & \text{otherwise}. \end{cases} \qquad (4.3.2)$$

Step 2: Compute

$$w_n = x_n + \alpha_n(x_n - x_{n-1}), \quad y_n = P_Q(Tw_n - \lambda_n F Tw_n), \quad z_n = Tw_n - \gamma_2\beta_n r_n,$$
where $r_n := Tw_n - y_n - \lambda_n(F Tw_n - F y_n)$, $\beta_n := \dfrac{\langle Tw_n - y_n, r_n\rangle}{\|r_n\|^2}$ if $r_n \neq 0$; otherwise, $\beta_n = 0$; and
$$\lambda_{n+1} = \begin{cases} \min\left\{\dfrac{a_2\|Tw_n - y_n\|}{\|F Tw_n - F y_n\|}, \lambda_n\right\}, & \text{if } F Tw_n \neq F y_n,\\[2mm] \lambda_n, & \text{otherwise}. \end{cases} \qquad (4.3.3)$$

Step 3: Compute
$$b_n = w_n + \eta_n T^*(z_n - Tw_n),$$
where the stepsize $\eta_n$ is chosen such that, for small enough $\epsilon > 0$,
$$\eta_n \in \left[\epsilon, \frac{\|Tw_n - z_n\|^2}{\|T^*(Tw_n - z_n)\|^2} - \epsilon\right]$$
if $z_n \neq Tw_n$; otherwise, $\eta_n = \eta$.

Step 4: Compute
$$u_n = P_C(b_n - \mu_n A b_n), \quad t_n = b_n - \gamma_1\gamma_n v_n,$$
where $v_n := b_n - u_n - \mu_n(Ab_n - Au_n)$, $\gamma_n := \dfrac{\langle b_n - u_n, v_n\rangle}{\|v_n\|^2}$ if $v_n \neq 0$; otherwise, $\gamma_n = 0$; and
$$\mu_{n+1} = \begin{cases} \min\left\{\dfrac{a_1\|b_n - u_n\|}{\|Ab_n - Au_n\|}, \mu_n\right\}, & \text{if } Ab_n \neq Au_n,\\[2mm] \mu_n, & \text{otherwise}. \end{cases} \qquad (4.3.4)$$

Step 5: Compute

$$x_{n+1} = (1 - \theta_n - \delta_n)b_n + \theta_n t_n.$$
Set $n := n+1$ and go back to Step 1.
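The adaptive rules (4.3.3) and (4.3.4) can be isolated in a few lines. The sketch below (function names are ours) shows that each update needs only two norms already computed in Steps 2 and 4, and that the generated stepsize is non-increasing without any knowledge of $L_1$ or $L_2$.

```python
import numpy as np

def next_lambda(lam_n, a2, FTw, Fy, Tw, y):
    """lambda_{n+1} as in (4.3.3): needs no knowledge of L2."""
    den = np.linalg.norm(FTw - Fy)
    if den == 0:                       # F(Tw_n) = F(y_n): keep the stepsize
        return lam_n
    return min(a2 * np.linalg.norm(Tw - y) / den, lam_n)

def next_mu(mu_n, a1, Ab, Au, b, u):
    """mu_{n+1} as in (4.3.4): needs no knowledge of L1."""
    den = np.linalg.norm(Ab - Au)
    if den == 0:
        return mu_n
    return min(a1 * np.linalg.norm(b - u) / den, mu_n)

# With F = 2*identity (Lipschitz constant L2 = 2), the rule can only shrink
# lambda down to a2 * ||Tw - y|| / ||2Tw - 2y|| = a2/2 = a2/L2, never below.
Tw, y = np.array([1.0, 0.0]), np.array([0.0, 0.0])
lam = next_lambda(1.0, 0.5, 2 * Tw, 2 * y, Tw, y)
print(lam)  # 0.25
```

When the operator is $L$-Lipschitz, the ratio in the `min` is bounded below by $a_i/L$, so the sequence of stepsizes stabilizes at a positive value.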

We now highlight some of the features of our proposed methods.

Remark 4.3.1.

• Observe that Algorithm 4.3.2 and Algorithm 4.3.3 can be viewed as modified projection and contraction methods involving one projection onto $C$ per iteration for solving the classical VIP in $H_1$, together with another projection and contraction method involving one projection onto $Q$ per iteration under a bounded linear operator $T$ for solving another VIP in another space $H_2$, with no extra projections onto half-spaces or feasible sets, unlike the method in [206] (see Appendix 4.3.16), where extra projections onto half-spaces are required. In fact, an interesting feature of the projection and contraction methods used here is that $r_n$ of Step 2 in Algorithms 4.3.2 and 4.3.3 can be described as a weighted average of $(Tw_n - y_n - \lambda F Tw_n)$ and a hypothetical $(T\tilde{w}_n - \tilde{y}_n - \lambda F T\tilde{w}_n)$ in $H_2$, where $T\tilde{w}_n = Tw_n - \lambda F Tw_n$ and $\tilde{y}_n = y_n - \lambda F y_n$. We have a similar description for $v_n$ of Step 4. This is very similar to Heun's method, or the modified Euler method, from numerical methods for solving ordinary differential equations (see [230, page 328] for details). Furthermore, we can see that

$$\beta_n\|r_n\|^2 = \langle Tw_n - y_n, r_n\rangle, \quad \forall n \ge 1 \qquad (4.3.5)$$
holds for both $r_n = 0$ and $r_n \neq 0$. Similarly, we have that

$$\gamma_n\|v_n\|^2 = \langle b_n - u_n, v_n\rangle, \quad \forall n \ge 1 \qquad (4.3.6)$$
holds for both $v_n = 0$ and $v_n \neq 0$.

• Another notable advantage of Algorithm 4.3.2 and Algorithm 4.3.3 for solving the SVIP (1.2.4)-(1.2.5) is that the monotonicity assumption on the operators $A$ and $F$, usually imposed in many other works (see, for example, [61, 63, 121, 131, 145, 190, 213, 243, 244]) to guarantee convergence, is dispensed with, and no extra projections are required in this setting, unlike in [206].

• The stepsizes $\{\lambda_n\}$ and $\{\mu_n\}$ given by (4.3.3) and (4.3.4), respectively, are generated at each iteration by some simple computations. Thus, Algorithm 4.3.3 is easily implemented without prior knowledge of the Lipschitz constants $L_1$ and $L_2$.

• Step 1 of our methods is also easily implemented, since the value of $\|x_n - x_{n-1}\|$ is known before $\alpha_n$ is chosen. In our numerical analysis (Section 4.3.3), we check the sensitivity of $\alpha$ in order to find, numerically, the optimal choice of $\alpha$ with respect to the convergence speed of our proposed methods.

• Step 5 of both algorithms guarantees strong convergence to a minimum-norm solution of the problem.

• Unlike in [61, 63, 121], Algorithm 4.3.2 and Algorithm 4.3.3 do not require any product space formulation, thereby avoiding the potential difficulties caused by a product space reformulation.
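As a small illustration of the point about Step 1 above, the bound $\bar{\alpha}_n$ of (4.3.1) is computable from quantities available before the update. The sketch below (function name and test values are illustrative, not from the text) evaluates it at the cost of one norm.

```python
import numpy as np

def alpha_bar(n, alpha, tau_n, x_n, x_prev):
    """Upper bound alpha_bar_n for the inertial parameter, as in (4.3.1)."""
    base = (n - 1) / (n + alpha - 1)
    d = np.linalg.norm(x_n - x_prev)   # known before alpha_n is chosen
    return base if d == 0 else min(base, tau_n / d)

x_prev = np.zeros(2)
x_n = np.array([3.0, 4.0])             # ||x_n - x_prev|| = 5
print(alpha_bar(10, 3.0, 0.5, x_n, x_prev))  # min(9/12, 0.5/5) = 0.1
```

Any $\alpha_n \in [0, \bar{\alpha}_n]$ is admissible; the sensitivity study mentioned above then varies $\alpha$ and compares convergence speed.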

Remark 4.3.2. The choice of the stepsize $\eta_n$ in Step 3 of Algorithms 4.3.2 and 4.3.3 does not require prior knowledge of the operator norm $\|T\|$. Furthermore, the value of $\eta$ does not influence the algorithms; it is introduced only for clarity. We show in the following lemma that $\eta_n$ is well-defined (see also [275, Remark 2.3, Lemma 2.3] and [274, Lemma 3.3]).

Lemma 4.3.3. The stepsize $\eta_n$ given in Step 3 of Algorithms 4.3.2 and 4.3.3 is well-defined.

Proof. Let $z \in \Gamma$. That is, $z \in VI(A,C)$ and $Tz \in VI(F,Q)$. Then, by the Cauchy-Schwarz inequality and Lemma 2.1.1, we obtain

$$\begin{aligned} \|T^*(Tw_n - z_n)\|\|w_n - z\| &\ge \langle T^*(Tw_n - z_n), w_n - z\rangle\\ &= \langle Tw_n - z_n, Tw_n - Tz\rangle\\ &= \tfrac{1}{2}\left[\|Tw_n - z_n\|^2 + \|Tw_n - Tz\|^2 - \|z_n - Tz\|^2\right]. \end{aligned} \qquad (4.3.7)$$
Since $z_n = Tw_n - \gamma_2\beta_n r_n$, we obtain that

$$\begin{aligned} \|z_n - Tz\|^2 &= \|Tw_n - Tz\|^2 + \gamma_2^2\beta_n^2\|r_n\|^2 - 2\gamma_2\beta_n\langle Tw_n - Tz, r_n\rangle\\ &= \|Tw_n - Tz\|^2 + \gamma_2^2\beta_n^2\|r_n\|^2 - 2\gamma_2\beta_n\langle Tw_n - y_n, r_n\rangle - 2\gamma_2\beta_n\langle y_n - Tz, r_n\rangle. \end{aligned} \qquad (4.3.8)$$
Since $y_n = P_Q(Tw_n - \lambda F Tw_n)$ and $Tz \in Q$, we obtain from the characteristic property of $P_Q$ that
$$\langle Tw_n - \lambda F Tw_n - y_n, y_n - Tz\rangle \ge 0. \qquad (4.3.9)$$
Also, since $Tz \in VI(F,Q)$ and $y_n \in Q$, we have that $\langle FTz, y_n - Tz\rangle \ge 0$ (see Inequality (1.2.1)), which, by the pseudomonotonicity of $F$ and $\lambda > 0$, implies
$$\langle \lambda F y_n, y_n - Tz\rangle \ge 0. \qquad (4.3.10)$$

Adding (4.3.9) and (4.3.10), we obtain
$$\langle Tw_n - y_n - \lambda(F Tw_n - F y_n), y_n - Tz\rangle \ge 0.$$
That is,
$$\langle r_n, y_n - Tz\rangle \ge 0 \qquad (4.3.11)$$
(which remains true if $\lambda$ is replaced with $\lambda_n$ as in Algorithm 4.3.3).

On the other hand, we have from the Lipschitz continuity of $F$ and $\lambda \in \left(0, \frac{1}{L_2}\right)$ that
$$\begin{aligned} \langle Tw_n - y_n, r_n\rangle &= \langle Tw_n - y_n, Tw_n - y_n - \lambda(F Tw_n - F y_n)\rangle\\ &= \|Tw_n - y_n\|^2 - \langle Tw_n - y_n, \lambda(F Tw_n - F y_n)\rangle\\ &\ge \|Tw_n - y_n\|^2 - \lambda\|Tw_n - y_n\|\|F Tw_n - F y_n\|\\ &\ge (1 - \lambda L_2)\|Tw_n - y_n\|^2 \ge 0, \end{aligned} \qquad (4.3.12)$$
which, by the definition of $\beta_n$, implies that $\beta_n \ge 0$ for all $n \ge 1$ (this remains true even if we replace $\lambda$ with $\lambda_n$ as in Algorithm 4.3.3). Thus, using (4.3.11) and (4.3.5) in (4.3.8), and noting that $\gamma_2 \in (0,2)$, we obtain

$$\begin{aligned} \|z_n - Tz\|^2 &\le \|Tw_n - Tz\|^2 + \gamma_2^2\beta_n^2\|r_n\|^2 - 2\gamma_2\beta_n\langle Tw_n - y_n, r_n\rangle\\ &= \|Tw_n - Tz\|^2 + \gamma_2^2\beta_n^2\|r_n\|^2 - 2\gamma_2\beta_n\cdot\beta_n\|r_n\|^2\\ &= \|Tw_n - Tz\|^2 - \gamma_2(2-\gamma_2)\beta_n^2\|r_n\|^2\\ &\le \|Tw_n - Tz\|^2. \end{aligned} \qquad (4.3.13)$$

Substituting (4.3.13) into (4.3.7), we obtain that
$$\|T^*(Tw_n - z_n)\|\|w_n - z\| \ge \frac{1}{2}\|Tw_n - z_n\|^2. \qquad (4.3.14)$$
Now, for $z_n \neq Tw_n$, we have that $\|Tw_n - z_n\| > 0$. This, together with (4.3.14), implies that $\|T^*(Tw_n - z_n)\|\|w_n - z\| > 0$. Hence, $\|T^*(Tw_n - z_n)\| \neq 0$. Therefore, $\eta_n$ is well-defined.

We also make the following observation regarding $\eta_n$. Note from Step 3 of Algorithms 4.3.2 and 4.3.3 that
$$\eta_n\|T^*(Tw_n - z_n)\|^2 \le \|Tw_n - z_n\|^2 - \epsilon\|T^*(Tw_n - z_n)\|^2, \qquad (4.3.15)$$
which implies that
$$\eta_n^2\|T^*(Tw_n - z_n)\|^2 - \eta_n\|Tw_n - z_n\|^2 \le -\epsilon\eta_n\|T^*(Tw_n - z_n)\|^2. \qquad (4.3.16)$$
Note also that we can replace the choice of $\eta_n$ in Step 3 of Algorithm 4.3.2 and Algorithm 4.3.3 with the following: for small enough $\epsilon > 0$,

$$\eta_n \in \left[\epsilon, \frac{\|Tw_n - z_n\|^2}{1 + \|T^*(Tw_n - z_n)\|^2} - \epsilon\right]. \qquad (4.3.17)$$

Then, we can see that
$$\eta_n \le \frac{\|Tw_n - z_n\|^2 - \epsilon\left(1 + \|T^*(Tw_n - z_n)\|^2\right)}{1 + \|T^*(Tw_n - z_n)\|^2},$$
which implies that
$$\eta_n + \eta_n\|T^*(Tw_n - z_n)\|^2 \le \|Tw_n - z_n\|^2 - \epsilon - \epsilon\|T^*(Tw_n - z_n)\|^2.$$
This further gives (4.3.15) and consequently (4.3.16).

As we shall see in our convergence analysis, (4.3.15) and (4.3.16) play a crucial role in our proofs. Therefore, one can choose either the stepsize in Step 3 or that in (4.3.17) to ensure the convergence of Algorithms 4.3.2 and 4.3.3.
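A quick numerical sanity check of the two admissible stepsize choices: both the Step 3 rule and (4.3.17) yield (4.3.15), while (4.3.17) has the practical advantage that its denominator never vanishes. The matrix and vector below are arbitrary illustrative data, with `T.T` playing the adjoint $T^*$.

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((2, 3))        # stands in for the operator T
diff = rng.standard_normal(2)          # stands in for Tw_n - z_n (nonzero)
Ts = T.T @ diff                        # T*(Tw_n - z_n)
eps = 1e-6

eta_step3 = (diff @ diff) / (Ts @ Ts) - eps        # upper end of Step 3 range
eta_4317 = (diff @ diff) / (1 + Ts @ Ts) - eps     # upper end of (4.3.17)

for eta_n in (eta_step3, eta_4317):
    # inequality (4.3.15): eta_n*||T* diff||^2 <= ||diff||^2 - eps*||T* diff||^2
    assert eta_n * (Ts @ Ts) <= (diff @ diff) - eps * (Ts @ Ts) + 1e-12
print("both stepsize choices satisfy (4.3.15)")
```

The "+1" in the denominator of (4.3.17) makes that interval well-defined even when $T^*(Tw_n - z_n) = 0$, at the cost of a slightly smaller admissible stepsize.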