DOI 10.1007/s11075-017-0338-5 ORIGINAL PAPER

Parallel extragradient algorithms for multiple set split equilibrium problems in Hilbert spaces

Do Sang Kim1·Bui Van Dinh2

Received: 29 August 2016 / Accepted: 1 May 2017

© Springer Science+Business Media New York 2017

Abstract In this paper, we introduce an extension of the multiple set split variational inequality problem (Censor et al., Numer. Algor. 59, 301–323, 2012) to the multiple set split equilibrium problem (MSSEP) and propose two new parallel extragradient algorithms for solving MSSEP when the equilibrium bifunctions are Lipschitz-type continuous and pseudo-monotone with respect to their solution sets. By combining the extragradient method with cutting techniques, we obtain algorithms for these problems without using any product space. Under certain conditions on the parameters, the iteration sequences generated by the proposed algorithms are proved to be weakly and strongly convergent to a solution of MSSEP. An application to multiple set split variational inequality problems, a numerical example, and preliminary computational results are also provided.

Keywords Multiple set split equilibrium problem · Pseudo-monotonicity · Extragradient method · Parallel algorithm · Weak and strong convergence

Mathematics Subject Classification (2010) 90C99 · 68W10 · 65K10 · 65K15 · 47J25

Do Sang Kim (dskim@pknu.ac.kr)
Bui Van Dinh (vandinhb@gmail.com)

1 Department of Applied Mathematics, Pukyong National University, Busan, Korea

2 Faculty of Information Technology, Le Quy Don Technical University, Hanoi, Vietnam


1 Introduction

In a very interesting paper, Censor et al. [8] introduced the following split variational inequality problem (SVIP):

Find $x^* \in C$ such that $\langle F(x^*), y - x^* \rangle \ge 0$ for all $y \in C$,
and such that $u^* = Ax^* \in Q$ solves $\langle G(u^*), v - u^* \rangle \ge 0$ for all $v \in Q$,

where $C$ and $Q$ are nonempty, closed and convex subsets of the real Hilbert spaces $H_1$ and $H_2$, respectively, $A : H_1 \to H_2$ is a bounded linear operator, and $F : H_1 \to H_1$, $G : H_2 \to H_2$ are given operators. It was pointed out in [8] that many important real-world problems can be formulated as an SVIP, such as the split minimization problem (SMP), the split zeroes problem (SZP), and the split feasibility problem (SFP), which have been used to study signal processing, medical image reconstruction, intensity-modulated radiation therapy, sensor networks, and data compression; see [3,5,6,10,12,13,15] and the references quoted therein.

To find a solution of SVIP, Censor et al. [8] proposed two methods. The first reformulates SVIP as a constrained variational inequality problem (CVIP) in a product space and solves the CVIP when the mappings $F$ and $G$ are monotone and Lipschitz continuous. The second solves SVIP without using a product space, when $F$ and $G$ are inverse strongly monotone mappings satisfying certain additional conditions.

Moudafi [29] (see also He [21]) introduced an extension of SVIP to the split equilibrium problem (SEP), which can be stated as follows:

Find $x^* \in C$ such that $f(x^*, y) \ge 0$ for all $y \in C$,
and such that $u^* = Ax^* \in Q$ solves $g(u^*, v) \ge 0$ for all $v \in Q$,

where $f : H_1 \times H_1 \to \mathbb{R}$ and $g : H_2 \times H_2 \to \mathbb{R}$ are bifunctions such that $f(x, x) = 0$ for all $x \in C$ and $g(u, u) = 0$ for all $u \in Q$.

To obtain a solution of SEP without using a product space, He [21] suggested the proximal method (see [31,37]) and introduced an iterative method generating a sequence $\{x^k\}$ by

$$
\begin{cases}
x^0 \in C;\quad \{\rho_k\} \subset (0, +\infty);\quad \mu > 0,\\[2pt]
f(y^k, y) + \dfrac{1}{\rho_k}\langle y - y^k, y^k - x^k\rangle \ge 0, \quad \forall y \in C,\\[2pt]
g(u^k, v) + \dfrac{1}{\rho_k}\langle v - u^k, u^k - Ay^k\rangle \ge 0, \quad \forall v \in Q,\\[2pt]
x^{k+1} = P_C\big(y^k + \mu A^*(u^k - Ay^k)\big), \quad \forall k \ge 0,
\end{cases}
$$

where $A^*$ is the adjoint operator of $A$.

Under suitable conditions on the parameters, the author showed that $\{x^k\}$ and $\{y^k\}$ converge weakly to a solution of SEP provided that $f$ and $g$ are monotone bifunctions on $C$ and $Q$, respectively. Since then, many solution methods have been proposed for SEP and related problems when $f$ is monotone or pseudo-monotone and $g$ is monotone; see, for example, [16,17,19,25–27,34,38].


Another extension of SVIP is the multiple set split variational inequality problem (MSSVIP), which is formulated as follows (see [8, Section 6.1]):

$$
\begin{cases}
\text{find } x^* \in C := \cap_{i=1}^{N} C_i \text{ such that } \langle F_i(x^*), y - x^*\rangle \ge 0 \text{ for all } y \in C_i\\
\text{and for all } i = 1, 2, \dots, N, \text{ and such that}\\
\text{the point } u^* = Ax^* \in Q := \cap_{j=1}^{M} Q_j \text{ solves } \langle G_j(u^*), v - u^*\rangle \ge 0 \text{ for all } v \in Q_j\\
\text{and for all } j = 1, 2, \dots, M,
\end{cases}
$$

where, as before, $A : H_1 \to H_2$ is a bounded linear operator, $F_i : H_1 \to H_1$, $i = 1, 2, \dots, N$, and $G_j : H_2 \to H_2$, $j = 1, 2, \dots, M$, are mappings, and $C_i \subset H_1$, $i = 1, 2, \dots, N$, and $Q_j \subset H_2$, $j = 1, 2, \dots, M$, are nonempty, closed, and convex subsets.

To solve MSSVIP, Censor et al. [8, Section 6.1] proposed reformulating MSSVIP as an SVIP in a certain product space and applying their first method to the resulting SVIP. The iteration sequence generated by their method was proved to converge weakly to a solution of MSSVIP when $F_i$, $i = 1, 2, \dots, N$, and $G_j$, $j = 1, 2, \dots, M$, are inverse strongly monotone mappings satisfying some additional conditions.

In this paper, motivated by the results mentioned above, we introduce an extension of MSSVIP to the multiple set split equilibrium problem (MSSEP) and propose two parallel extragradient methods for MSSEP. We use the extragradient algorithm for solving equilibrium problems in the corresponding spaces to design the weak convergence algorithm; we then combine this algorithm with the hybrid cutting technique (see [9,41]) to obtain the strong convergence algorithm for MSSEP. By using extragradient methods, we not only handle the case where MSSEP is pseudo-monotone but also solve MSSEP directly without using any product space.

The rest of the paper is organized as follows. The next section presents some preliminary results. Section 3 is devoted to introducing MSSEP and parallel extragradient algorithms for solving it. The last section presents a numerical example to demonstrate the proposed algorithms.

2 Preliminaries

We assume that $H_1$ and $H_2$ are real Hilbert spaces with inner product and associated norm denoted by $\langle \cdot, \cdot \rangle$ and $\|\cdot\|$, respectively, whereas $H$ refers to either of these spaces. We denote strong convergence by '$\to$' and weak convergence by '$\rightharpoonup$' in $H$. Let $C$ be a nonempty, closed and convex subset of $H$. By $P_C$ we denote the metric projection operator onto $C$, that is,

$$P_C(x) \in C : \quad \|x - P_C(x)\| \le \|x - y\|, \quad \forall y \in C.$$
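For intuition, the metric projection has a closed form for simple sets. A minimal sketch, assuming box and Euclidean-ball constraint sets chosen purely for illustration (they are not from the paper):

```python
import numpy as np

def proj_box(x, lo, hi):
    """Metric projection onto the box [lo, hi]: componentwise clipping."""
    return np.clip(x, lo, hi)

def proj_ball(x, center, radius):
    """Metric projection onto the closed ball B(center, radius)."""
    d = x - center
    n = np.linalg.norm(d)
    return x.copy() if n <= radius else center + radius * d / n

x = np.array([3.0, -1.0])
p = proj_box(x, np.zeros(2), np.ones(2))   # P_C(x) for C = [0,1]^2
# Defining property: P_C(x) is the closest point of C to x.
for y in [np.zeros(2), np.ones(2), np.array([0.5, 0.5])]:
    assert np.linalg.norm(x - p) <= np.linalg.norm(x - y) + 1e-12
```

Both projections are used later whenever the constraint sets are boxes or balls; for general closed convex sets the projection is the solution of a strongly convex program.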

The following well-known results will be used in the sequel.

Lemma 2.1 Suppose that $C$ is a nonempty, closed and convex subset of $H$. Then $P_C$ has the following properties:

(a) $P_C(x)$ is a singleton and well defined for every $x$;
(b) $z = P_C(x)$ if and only if $\langle x - z, y - z\rangle \le 0$, $\forall y \in C$;
(c) $\|y - P_C(x)\|^2 \le \|x - y\|^2 - \|x - P_C(x)\|^2$, $\forall y \in C$;
(d) $\|P_C(x) - P_C(y)\|^2 \le \langle P_C(x) - P_C(y), x - y\rangle$, $\forall x, y \in H$;
(e) $\|P_C(x) - P_C(y)\|^2 \le \|x - y\|^2 - \|x - P_C(x) - y + P_C(y)\|^2$, $\forall x, y \in H$.

Definition 2.1 [4,28,32] Let $\varphi : H \times H \to \mathbb{R}$ be a bifunction, and let $C$ be a nonempty, closed and convex subset of $H$ with $\emptyset \ne S \subseteq C$. The bifunction $\varphi$ is said to be:

(a) monotone on $C$ if $\varphi(x, y) + \varphi(y, x) \le 0$, $\forall x, y \in C$;
(b) pseudo-monotone on $C$ if $\forall x, y \in C$, $\varphi(x, y) \ge 0 \Longrightarrow \varphi(y, x) \le 0$;
(c) pseudo-monotone on $C$ with respect to $S$ if $\forall x \in S$, $\forall y \in C$, $\varphi(x, y) \ge 0 \Longrightarrow \varphi(y, x) \le 0$;
(d) Lipschitz-type continuous on $C$ if there exist positive constants $L_1$ and $L_2$ such that

$$\varphi(x, y) + \varphi(y, z) \ge \varphi(x, z) - L_1\|x - y\|^2 - L_2\|y - z\|^2, \quad \forall x, y, z \in C.$$

From Definition 2.1, we have the following.

Remark 2.1 (i) It is clear that (a) $\Longrightarrow$ (b) $\Longrightarrow$ (c), $\forall S \subseteq C$.

(ii) If $\varphi(x, y) = \langle \Phi(x), y - x\rangle$ for a mapping $\Phi : H \to H$, then the notions of monotonicity of the bifunction $\varphi$ collapse to the corresponding notions of monotonicity of the mapping $\Phi$. In addition, if the mapping $\Phi$ is $L$-Lipschitz on $C$, i.e., $\|\Phi(x) - \Phi(y)\| \le L\|x - y\|$, $\forall x, y \in C$, then $\varphi$ is also Lipschitz-type continuous on $C$ (see [28,42]), for example, with constants $L_1 = \frac{L}{2\epsilon}$, $L_2 = \frac{\epsilon L}{2}$, for any $\epsilon > 0$.

(iii) If $\varphi_1$ and $\varphi_2$ are Lipschitz-type continuous on $C$ with constants $L_1^1, L_2^1$ and $L_1^2, L_2^2$, respectively, then $\varphi_1$ and $\varphi_2$ are also Lipschitz-type continuous on $C$ with common constants $L_1, L_2$; for instance, take $L_1 = \max\{L_1^1, L_1^2\}$, $L_2 = \max\{L_2^1, L_2^2\}$.

Lemma 2.2 (Opial's condition) [33] For any sequence $\{x^k\} \subset H$ with $x^k \rightharpoonup x$, the inequality

$$\liminf_{k \to +\infty} \|x^k - x\| < \liminf_{k \to +\infty} \|x^k - y\|$$

holds for each $y \in H$ with $y \ne x$.

3 Main results

Now, given a bounded linear operator $A : H_1 \to H_2$, nonempty, closed, and convex subsets $C_i \subset H_1$, $Q_j \subset H_2$, for all $i = 1, 2, \dots, N$, $j = 1, 2, \dots, M$, and bifunctions $f_i : H_1 \times H_1 \to \mathbb{R}$ and $g_j : H_2 \times H_2 \to \mathbb{R}$ such that $f_i(x, x) = g_j(u, u) = 0$ for every $x \in C_i$, $u \in Q_j$, and for every $i = 1, 2, \dots, N$, $j = 1, 2, \dots, M$, respectively, the multiple set split equilibrium problem (MSSEP) is stated as follows:

$$
\begin{cases}
\text{find } x^* \in C := \cap_{i=1}^{N} C_i \text{ such that } f_i(x^*, y) \ge 0 \text{ for all } y \in C_i\\
\text{and for all } i = 1, 2, \dots, N, \text{ and such that}\\
\text{the point } u^* = Ax^* \in Q := \cap_{j=1}^{M} Q_j \text{ solves } g_j(u^*, v) \ge 0 \text{ for all } v \in Q_j\\
\text{and for all } j = 1, 2, \dots, M.
\end{cases}
$$

Since the variational inequality problem can be considered as a special case of the equilibrium problem [4,32], MSSVIP is a particular case of MSSEP; for instance, take $f_i(x, y) = \langle F_i(x), y - x\rangle$, $i = 1, 2, \dots, N$, and $g_j(u, v) = \langle G_j(u), v - u\rangle$, $j = 1, 2, \dots, M$. Similarly, the split common fixed point problem [11,23,30] and the split common null point problem [7,40] can be formulated as MSSEP. Furthermore, if $Q_j = H_2$ and $g_j(u, v) = 0$, $\forall u, v \in H_2$, for all $j = 1, 2, \dots, M$, then MSSEP becomes the problem of finding a common element of the sets of solutions of equilibrium problems; see, for example, [35,39] and the references quoted therein.

By $\mathrm{Sol}(C_i, f_i)$ we denote the solution set of the equilibrium problem $\mathrm{EP}(C_i, f_i)$, i.e.,

$$\mathrm{Sol}(C_i, f_i) = \{\bar{x} \in C_i : f_i(\bar{x}, y) \ge 0,\ \forall y \in C_i\},$$

for all $i = 1, 2, \dots, N$, and $\mathrm{Sol}(Q_j, g_j)$ stands for the solution set of the equilibrium problem $\mathrm{EP}(Q_j, g_j)$, for all $j = 1, \dots, M$.

Now, let $K$ be a nonempty, closed and convex subset of $H$ and let $\varphi : H \times H \to \mathbb{R}$ be a bifunction such that $\varphi(x, x) = 0$, $\forall x \in K$. In order to find a solution of MSSEP, we make use of the following blanket assumptions.

Assumptions A

(A1) $\varphi$ is pseudo-monotone on $K$ with respect to $\mathrm{Sol}(K, \varphi)$;
(A2) $\varphi(x, \cdot)$ is convex and subdifferentiable on $K$, for all $x \in K$;
(A3) $\varphi$ is weakly continuous on $K \times K$ in the sense that if $x, y \in K$ and $\{x^k\}, \{y^k\} \subset K$ converge weakly to $x$ and $y$, respectively, then $\varphi(x^k, y^k) \to \varphi(x, y)$ as $k \to +\infty$;
(A4) $\varphi$ is Lipschitz-type continuous on $K$ with constants $L_1 > 0$ and $L_2 > 0$.

The extragradient algorithm was first proposed by Korpelevich [24] (see also [20]) for finding saddle points and solving other problems; recently, many authors have successfully applied this algorithm to equilibrium problems and related problems [18,42]. One advantage of the extragradient algorithm for solving equilibrium problems is that it applies not only to pseudo-monotone equilibrium problems, but also requires solving only two strongly convex programs at each iteration. We now present a parallel extragradient algorithm for MSSEP.


Algorithm 1 (Parallel extragradient algorithm for MSSEP)

Initialization. Pick $x^0 \in C = \cap_{i=1}^{N} C_i$, choose constants $0 < \rho \le \bar{\rho} < \min\{\frac{1}{2L_1}, \frac{1}{2L_2}\}$ and $0 < \alpha \le \bar{\alpha} \le 1$; for each $i = 1, 2, \dots, N$, $j = 1, 2, \dots, M$, choose parameters $\{\rho_i^k\}, \{r_j^k\} \subset [\rho, \bar{\rho}]$ and $\{\alpha_i^k\}, \{\beta_j^k\} \subset [\alpha, \bar{\alpha}]$ with $\sum_{i=1}^{N} \alpha_i^k = \sum_{j=1}^{M} \beta_j^k = 1$; pick $\mu \in (0, \frac{1}{\|A\|^2})$.

Iteration $k$ ($k = 0, 1, 2, \dots$). Having $x^k$, do the following steps:

Step 1. Solve $N$ strongly convex programs in parallel:
$$
\begin{cases}
y_i^k = \arg\min\{f_i(x^k, y) + \frac{1}{2\rho_i^k}\|y - x^k\|^2 : y \in C_i\}\\
z_i^k = \arg\min\{f_i(y_i^k, y) + \frac{1}{2\rho_i^k}\|y - x^k\|^2 : y \in C_i\},
\end{cases}
\quad i = 1, 2, \dots, N.
$$

Step 2. Set $\bar{z}^k = \sum_{i=1}^{N} \alpha_i^k z_i^k$ and compute $\hat{v}^k = A\bar{z}^k$.

Step 3. Solve the following $M$ strongly convex programs in parallel:
$$
\begin{cases}
v_j^k = \arg\min\{g_j(\hat{v}^k, v) + \frac{1}{2r_j^k}\|v - \hat{v}^k\|^2 : v \in Q_j\}\\
u_j^k = \arg\min\{g_j(v_j^k, v) + \frac{1}{2r_j^k}\|v - \hat{v}^k\|^2 : v \in Q_j\},
\end{cases}
\quad j = 1, 2, \dots, M.
$$

Step 4. Compute $\bar{u}^k = \sum_{j=1}^{M} \beta_j^k u_j^k$.

Step 5. Take $x^{k+1} = P_C(\bar{z}^k + \mu A^*(\bar{u}^k - \hat{v}^k))$, and go to Iteration $k$ with $k$ replaced by $k + 1$.
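To make the scheme concrete, the following sketch runs Algorithm 1 on a small illustrative instance in $\mathbb{R}^2$ where the bifunctions come from affine mappings (the variational inequality special case), so each strongly convex subproblem in Steps 1 and 3 reduces to a projection. All sets, mappings, and parameter values below are hypothetical choices for demonstration, not the paper's numerical example:

```python
import numpy as np

# Illustrative data: F_i(x) = x - a and G_1(u) = u - A a, so the induced
# bifunctions are strongly monotone and Omega = {a} by construction.
a = np.array([0.5, 0.5])
A = np.array([[1.0, 0.0], [0.0, 2.0]])        # bounded linear operator, ||A||^2 = 4
boxes_C = [(np.zeros(2), np.full(2, 2.0)),    # C_1 = [0,2]^2
           (np.full(2, -1.0), np.ones(2))]    # C_2 = [-1,1]^2
boxes_Q = [(np.zeros(2), np.array([2.0, 3.0]))]  # Q_1
F = [lambda x: x - a] * 2
G = [lambda u: u - A @ a]

proj = lambda x, box: np.clip(x, box[0], box[1])
lo_C = np.max([b[0] for b in boxes_C], axis=0)   # C = intersection of the boxes,
hi_C = np.min([b[1] for b in boxes_C], axis=0)   # itself a box

rho, mu = 0.5, 0.2            # rho_i^k = r_j^k = 0.5, mu = 0.2 < 1/||A||^2
alpha, beta = [0.5, 0.5], [1.0]

x = proj(np.array([2.0, -1.0]), (lo_C, hi_C))    # x^0 in C
for k in range(500):
    # Step 1 (parallel over i): two projections per set C_i.
    ys = [proj(x - rho * F[i](x), boxes_C[i]) for i in range(2)]
    zs = [proj(x - rho * F[i](ys[i]), boxes_C[i]) for i in range(2)]
    z_bar = sum(al * z for al, z in zip(alpha, zs))          # Step 2
    v_hat = A @ z_bar
    # Step 3 (parallel over j).
    vs = [proj(v_hat - rho * G[j](v_hat), boxes_Q[j]) for j in range(1)]
    us = [proj(v_hat - rho * G[j](vs[j]), boxes_Q[j]) for j in range(1)]
    u_bar = sum(be * u for be, u in zip(beta, us))           # Step 4
    x = proj(z_bar + mu * A.T @ (u_bar - v_hat), (lo_C, hi_C))  # Step 5

print(x)   # the iterates approach the split solution a
```

On this strongly monotone instance the iterates contract toward $a$; the theorem below only guarantees weak convergence under the stated pseudo-monotonicity assumptions.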

Remark 3.1 (i) At iteration $k$, if $y_i^k = x^k$, then $x^k$ is a solution of $\mathrm{EP}(C_i, f_i)$. Similarly, if $v_j^k = \hat{v}^k$, then $\hat{v}^k$ is a solution of $\mathrm{EP}(Q_j, g_j)$.

(ii) At each iteration $k$, the parameters $\{\alpha_i^k\}$ can be chosen based on the relative positions of $z_i^k$ and $x^k$, $i = 1, \dots, N$. Similarly, the parameters $\{\beta_j^k\}$ can be chosen using the distance between $u_j^k$ and $\hat{v}^k$, $j = 1, 2, \dots, M$ (see [1,22]).

(iii) We may assume without loss of generality that the bifunctions $f_i$, $i = 1, 2, \dots, N$, and $g_j$, $j = 1, 2, \dots, M$, satisfy assumption (A4) with the same Lipschitz-type constants $L_1$ and $L_2$.

Before proving the convergence of Algorithm 1, let us recall the following result from [2,42].

Lemma 3.1 ([2,42]) Suppose that $f_i$, $i = 1, 2, \dots, N$, satisfy assumptions (A1), (A2), (A4) and that $\cap_{i=1}^{N} \mathrm{Sol}(C_i, f_i)$ is nonempty. Then, for each $i = 1, 2, \dots, N$, we have:

(i) $\rho_i^k[f_i(x^k, y) - f_i(x^k, y_i^k)] \ge \langle y_i^k - x^k, y_i^k - y\rangle$, $\forall y \in C_i$;
(ii) $\|z_i^k - x^*\|^2 \le \|x^k - x^*\|^2 - (1 - 2\rho_i^k L_1)\|x^k - y_i^k\|^2 - (1 - 2\rho_i^k L_2)\|y_i^k - z_i^k\|^2$, $\forall x^* \in \cap_{i=1}^{N} \mathrm{Sol}(C_i, f_i)$, $\forall k$.

We are now in a position to prove the convergence of Algorithm 1.

Theorem 3.1 Let the bifunctions $f_i$, $g_j$ satisfy Assumptions A on $C_i$ and $Q_j$, respectively, for all $i = 1, 2, \dots, N$, $j = 1, 2, \dots, M$, and let $A : H_1 \to H_2$ be a bounded linear operator with adjoint $A^*$. If $\Omega = \{x^* \in \cap_{i=1}^{N} \mathrm{Sol}(C_i, f_i) : Ax^* \in \cap_{j=1}^{M} \mathrm{Sol}(Q_j, g_j)\} \ne \emptyset$, then the sequences $\{x^k\}, \{y_i^k\}, \{z_i^k\}$, $i = 1, 2, \dots, N$, converge weakly to an element $x^* \in \Omega$, and $\{v_j^k\}, \{u_j^k\}$, $j = 1, 2, \dots, M$, converge weakly to $Ax^* \in \cap_{j=1}^{M} \mathrm{Sol}(Q_j, g_j)$.

Proof Let $p \in \Omega$, so $p \in \cap_{i=1}^{N} \mathrm{Sol}(C_i, f_i)$ and $Ap \in \cap_{j=1}^{M} \mathrm{Sol}(Q_j, g_j)$. For each $i = 1, 2, \dots, N$, by Lemma 3.1 we have

$$\|z_i^k - p\|^2 \le \|x^k - p\|^2 - (1 - 2\rho_i^k L_1)\|x^k - y_i^k\|^2 - (1 - 2\rho_i^k L_2)\|y_i^k - z_i^k\|^2.$$

Combining this with Step 2, one has

$$
\begin{aligned}
\|\bar{z}^k - p\|^2 &= \Big\|\sum_{i=1}^{N} \alpha_i^k z_i^k - p\Big\|^2 = \Big\|\sum_{i=1}^{N} \alpha_i^k (z_i^k - p)\Big\|^2\\
&= \sum_{i=1}^{N} \alpha_i^k \|z_i^k - p\|^2 - \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_i^k \alpha_j^k \|z_i^k - z_j^k\|^2\\
&\le \|x^k - p\|^2 - \sum_{i=1}^{N} \alpha_i^k (1 - 2\rho_i^k L_1)\|x^k - y_i^k\|^2 - \sum_{i=1}^{N} \alpha_i^k (1 - 2\rho_i^k L_2)\|y_i^k - z_i^k\|^2. \quad (3.1)
\end{aligned}
$$

Similarly, assertion (ii) in Lemma 3.1 implies that
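The second equality used in (3.1) is the standard convex-combination identity in a Hilbert space, stated here for completeness (it follows by expanding the squared norms):

```latex
\Big\| \sum_{i=1}^{N} \alpha_i w_i \Big\|^2
  = \sum_{i=1}^{N} \alpha_i \|w_i\|^2
  - \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_i \alpha_j \|w_i - w_j\|^2,
\qquad \alpha_i \ge 0,\ \sum_{i=1}^{N} \alpha_i = 1,
```

applied with $w_i = z_i^k - p$ (and, below, with $w_j = u_j^k - Ap$).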

$$\|u_j^k - Ap\|^2 \le \|A\bar{z}^k - Ap\|^2 - (1 - 2r_j^k L_1)\|A\bar{z}^k - v_j^k\|^2 - (1 - 2r_j^k L_2)\|v_j^k - u_j^k\|^2, \quad j = 1, 2, \dots, M.$$

Thus,

$$
\begin{aligned}
\|\bar{u}^k - Ap\|^2 &= \Big\|\sum_{j=1}^{M} \beta_j^k (u_j^k - Ap)\Big\|^2\\
&= \sum_{j=1}^{M} \beta_j^k \|u_j^k - Ap\|^2 - \frac{1}{2} \sum_{i=1}^{M} \sum_{j=1}^{M} \beta_i^k \beta_j^k \|u_i^k - u_j^k\|^2\\
&\le \|A\bar{z}^k - Ap\|^2 - \sum_{j=1}^{M} \beta_j^k (1 - 2r_j^k L_1)\|A\bar{z}^k - v_j^k\|^2 - \sum_{j=1}^{M} \beta_j^k (1 - 2r_j^k L_2)\|v_j^k - u_j^k\|^2. \quad (3.2)
\end{aligned}
$$

We have

$$
\begin{aligned}
2\langle A(\bar{z}^k - p), \bar{u}^k - A\bar{z}^k\rangle &= 2\langle A(\bar{z}^k - p) + \bar{u}^k - A\bar{z}^k - (\bar{u}^k - A\bar{z}^k), \bar{u}^k - A\bar{z}^k\rangle\\
&= 2\langle \bar{u}^k - Ap, \bar{u}^k - A\bar{z}^k\rangle - 2\|\bar{u}^k - A\bar{z}^k\|^2\\
&= \|\bar{u}^k - Ap\|^2 + \|\bar{u}^k - A\bar{z}^k\|^2 - \|A\bar{z}^k - Ap\|^2 - 2\|\bar{u}^k - A\bar{z}^k\|^2\\
&= \|\bar{u}^k - Ap\|^2 - \|\bar{u}^k - A\bar{z}^k\|^2 - \|A\bar{z}^k - Ap\|^2. \quad (3.3)
\end{aligned}
$$


By the definition of $x^{k+1}$, we have

$$
\begin{aligned}
\|x^{k+1} - p\|^2 &= \|P_C(\bar{z}^k + \mu A^*(\bar{u}^k - A\bar{z}^k)) - P_C(p)\|^2\\
&\le \|(\bar{z}^k - p) + \mu A^*(\bar{u}^k - A\bar{z}^k)\|^2\\
&= \|\bar{z}^k - p\|^2 + \|\mu A^*(\bar{u}^k - A\bar{z}^k)\|^2 + 2\mu\langle \bar{z}^k - p, A^*(\bar{u}^k - A\bar{z}^k)\rangle\\
&\le \|\bar{z}^k - p\|^2 + \mu^2\|A\|^2\|\bar{u}^k - A\bar{z}^k\|^2 + 2\mu\langle A(\bar{z}^k - p), \bar{u}^k - A\bar{z}^k\rangle.
\end{aligned}
$$

Using (3.3), we get

$$
\begin{aligned}
\|x^{k+1} - p\|^2 &\le \|\bar{z}^k - p\|^2 + \mu^2\|A\|^2\|\bar{u}^k - A\bar{z}^k\|^2 + \mu\big[\|\bar{u}^k - Ap\|^2 - \|\bar{u}^k - A\bar{z}^k\|^2 - \|A\bar{z}^k - Ap\|^2\big]\\
&= \|\bar{z}^k - p\|^2 - \mu(1 - \mu\|A\|^2)\|\bar{u}^k - A\bar{z}^k\|^2 + \mu\big[\|\bar{u}^k - Ap\|^2 - \|A\bar{z}^k - Ap\|^2\big]. \quad (3.4)
\end{aligned}
$$

In combination with (3.1) and (3.2), (3.4) becomes

$$
\begin{aligned}
\|x^{k+1} - p\|^2 \le \|x^k - p\|^2 &- \sum_{i=1}^{N} \alpha_i^k\big[(1 - 2\rho_i^k L_1)\|x^k - y_i^k\|^2 + (1 - 2\rho_i^k L_2)\|y_i^k - z_i^k\|^2\big]\\
&- \mu(1 - \mu\|A\|^2)\|\bar{u}^k - A\bar{z}^k\|^2\\
&- \mu\sum_{j=1}^{M} \beta_j^k\big[(1 - 2r_j^k L_1)\|A\bar{z}^k - v_j^k\|^2 + (1 - 2r_j^k L_2)\|v_j^k - u_j^k\|^2\big]. \quad (3.5)
\end{aligned}
$$

Since $0 < \mu < \frac{1}{\|A\|^2}$, $\{\rho_i^k\}, \{r_j^k\} \subset [\rho, \bar{\rho}] \subset (0, \min\{\frac{1}{2L_1}, \frac{1}{2L_2}\})$, and $\{\alpha_i^k\}, \{\beta_j^k\} \subset [\alpha, \bar{\alpha}] \subset (0, 1]$, inequality (3.5) implies that $\{\|x^k - p\|^2\}$ is a nonincreasing sequence.

So

$$\lim_{k \to \infty} \|x^k - p\|^2 = a(p). \quad (3.6)$$

From (3.5) and (3.6), we get

$$\lim_{k \to +\infty} \|x^k - y_i^k\| = \lim_{k \to +\infty} \|y_i^k - z_i^k\| = 0, \quad i = 1, 2, \dots, N, \quad (3.7)$$

$$\lim_{k \to +\infty} \|A\bar{z}^k - v_j^k\| = \lim_{k \to +\infty} \|v_j^k - u_j^k\| = 0, \quad j = 1, 2, \dots, M. \quad (3.8)$$

Because $\lim_{k \to +\infty} \|x^k - p\| = a(p)$, the sequence $\{x^k\}$ is bounded, so there exists a subsequence $\{x^{k_m}\}$ of $\{x^k\}$ such that $x^{k_m}$ converges weakly to some $x^*$ as $m \to +\infty$. Recall that $x^k \in C = \cap_{i=1}^{N} C_i$, $\forall k$, and that the $C_i$ are closed and convex sets for all $i = 1, 2, \dots, N$.

Then $C$ is also closed and convex, hence weakly closed, and therefore $x^* \in C$, i.e., $x^* \in C_i$ for all $i = 1, 2, \dots, N$. From (3.7), we also get that $\{y_i^{k_m}\}, \{z_i^{k_m}\}$ converge weakly to $x^*$, for all $i = 1, 2, \dots, N$. Hence $\{\bar{z}^{k_m}\}$ converges weakly to $x^*$, and consequently $\{A\bar{z}^{k_m}\}$ converges weakly to $Ax^*$. Together with (3.8), we obtain that $\{v_j^{k_m}\}, \{u_j^{k_m}\}$ converge weakly to $Ax^*$ for all $j = 1, 2, \dots, M$ as $m \to +\infty$. Since $\{v_j^k\} \subset Q_j$ and $Q_j$ is closed and convex, $Ax^* \in Q_j$, $\forall j$.


For each $i = 1, 2, \dots, N$ and $j = 1, 2, \dots, M$, assertion (i) in Lemma 3.1 implies that

$$\rho_i^{k_m}[f_i(x^{k_m}, y) - f_i(x^{k_m}, y_i^{k_m})] \ge \langle y_i^{k_m} - x^{k_m}, y_i^{k_m} - y\rangle, \quad \forall y \in C_i,$$

and

$$r_j^{k_m}[g_j(A\bar{z}^{k_m}, v) - g_j(A\bar{z}^{k_m}, v_j^{k_m})] \ge \langle v_j^{k_m} - A\bar{z}^{k_m}, v_j^{k_m} - v\rangle, \quad \forall v \in Q_j.$$

Since

$$\langle y_i^{k_m} - x^{k_m}, y_i^{k_m} - y\rangle \ge -\|y_i^{k_m} - x^{k_m}\|\,\|y_i^{k_m} - y\| \quad \text{and} \quad \langle v_j^{k_m} - A\bar{z}^{k_m}, v_j^{k_m} - v\rangle \ge -\|v_j^{k_m} - A\bar{z}^{k_m}\|\,\|v_j^{k_m} - v\|,$$

it follows from the last inequalities that

$$\rho_i^{k_m}\big[f_i(x^{k_m}, y) - f_i(x^{k_m}, y_i^{k_m})\big] \ge -\|y_i^{k_m} - x^{k_m}\|\,\|y_i^{k_m} - y\|,$$

and

$$r_j^{k_m}\big[g_j(A\bar{z}^{k_m}, v) - g_j(A\bar{z}^{k_m}, v_j^{k_m})\big] \ge -\|v_j^{k_m} - A\bar{z}^{k_m}\|\,\|v_j^{k_m} - v\|.$$

Hence,

$$f_i(x^{k_m}, y) - f_i(x^{k_m}, y_i^{k_m}) + \frac{1}{\rho_i^{k_m}}\|y_i^{k_m} - x^{k_m}\|\,\|y_i^{k_m} - y\| \ge 0,$$

and

$$g_j(A\bar{z}^{k_m}, v) - g_j(A\bar{z}^{k_m}, v_j^{k_m}) + \frac{1}{r_j^{k_m}}\|v_j^{k_m} - A\bar{z}^{k_m}\|\,\|v_j^{k_m} - v\| \ge 0.$$

Letting $m \to +\infty$, in combination with (3.7), (3.8) and the continuity of $f_i$, $g_j$, yields

$$f_i(x^*, y) - f_i(x^*, x^*) \ge 0, \quad \forall y \in C_i, \quad i = 1, 2, \dots, N,$$

and

$$g_j(Ax^*, v) - g_j(Ax^*, Ax^*) \ge 0, \quad \forall v \in Q_j, \quad j = 1, 2, \dots, M,$$

which means that $x^* \in \mathrm{Sol}(C_i, f_i)$ for all $i = 1, 2, \dots, N$, and $Ax^* \in \mathrm{Sol}(Q_j, g_j)$ for all $j = 1, 2, \dots, M$; that is, $x^* \in \Omega$.

Finally, we prove that the whole sequence $\{x^k\}$ converges weakly to $x^*$. Indeed, if there exists a subsequence $\{x^{k_n}\}$ of $\{x^k\}$ such that $x^{k_n} \rightharpoonup \tilde{x}$ with $\tilde{x} \ne x^*$, then $\tilde{x} \in \Omega$ as well. By Opial's condition,

$$
\begin{aligned}
\liminf_{m \to +\infty} \|x^{k_m} - x^*\| &< \liminf_{m \to +\infty} \|x^{k_m} - \tilde{x}\| = \liminf_{n \to +\infty} \|x^{k_n} - \tilde{x}\|\\
&< \liminf_{n \to +\infty} \|x^{k_n} - x^*\| = \liminf_{m \to +\infty} \|x^{k_m} - x^*\|.
\end{aligned}
$$

This is a contradiction. Hence, $\{x^k\}$ converges weakly to $x^*$.

Together with (3.7), we also get $y_i^k \rightharpoonup x^*$ and $z_i^k \rightharpoonup x^*$, for all $i = 1, 2, \dots, N$. Therefore $\bar{z}^k \rightharpoonup x^*$ and $A\bar{z}^k \rightharpoonup Ax^*$. Combining with (3.8), it is immediate that $\{v_j^k\}, \{u_j^k\}$ converge weakly to $Ax^*$ for all $j = 1, 2, \dots, M$. $\square$

When $N = M = 1$, with $C_1 = C$ and $Q_1 = Q$, we get the following corollary.


Corollary 3.1 Suppose that $f, g$ are bifunctions satisfying Assumptions A on $C$ and $Q$, respectively, and that $A : H_1 \to H_2$ is a bounded linear operator with adjoint $A^*$. Take $x^0 \in C$; $\{\rho_k\}, \{r_k\} \subset [\rho, \bar{\rho}]$ for some $\rho, \bar{\rho}$ such that $0 < \rho \le \bar{\rho} < \min\{\frac{1}{2L_1}, \frac{1}{2L_2}\}$; $0 < \mu < \frac{1}{\|A\|^2}$; and consider the sequences $\{x^k\}$, $\{y^k\}$, $\{z^k\}$, $\{v^k\}$, $\{u^k\}$ defined by

$$
\begin{cases}
y^k = \arg\min\big\{\rho_k f(x^k, y) + \frac{1}{2}\|y - x^k\|^2 : y \in C\big\},\\
z^k = \arg\min\big\{\rho_k f(y^k, y) + \frac{1}{2}\|y - x^k\|^2 : y \in C\big\},\\
v^k = \arg\min\big\{r_k g(Az^k, v) + \frac{1}{2}\|v - Az^k\|^2 : v \in Q\big\},\\
u^k = \arg\min\big\{r_k g(v^k, v) + \frac{1}{2}\|v - Az^k\|^2 : v \in Q\big\},\\
x^{k+1} = P_C(z^k + \mu A^*(u^k - Az^k)).
\end{cases}
$$

If $\Omega = \{x^* \in \mathrm{Sol}(C, f) : Ax^* \in \mathrm{Sol}(Q, g)\} \ne \emptyset$, then the sequences $\{x^k\}, \{y^k\}, \{z^k\}$ converge weakly to an element $x^* \in \Omega$, and $\{v^k\}, \{u^k\}$ converge weakly to $Ax^* \in \mathrm{Sol}(Q, g)$.

An important case of MSSEP is the multiple set split variational inequality problem MSSVIP. In this case, we have the following corollary, which not only solves MSSVIP without using any product space but also handles the pseudo-monotone case.

Corollary 3.2 Suppose that $F_i$, $G_j$ are mappings satisfying Assumptions A on $C_i$ and $Q_j$, respectively, for all $i = 1, 2, \dots, N$, $j = 1, 2, \dots, M$, and let $A : H_1 \to H_2$ be a bounded linear operator with adjoint $A^*$. Take $x^0 \in C = \cap_{i=1}^{N} C_i$; $\{\rho_i^k\}, \{r_j^k\} \subset [\rho, \bar{\rho}]$ for some $\rho, \bar{\rho}$ such that $0 < \rho \le \bar{\rho} < \min\{\frac{1}{2L_1}, \frac{1}{2L_2}\}$; $\{\alpha_i^k\}, \{\beta_j^k\} \subset [\alpha, \bar{\alpha}] \subset (0, 1]$ for all $i = 1, 2, \dots, N$, $j = 1, 2, \dots, M$, with $\sum_{i=1}^{N} \alpha_i^k = \sum_{j=1}^{M} \beta_j^k = 1$; and $0 < \mu < \frac{1}{\|A\|^2}$. Consider the sequences $\{x^k\}, \{y_i^k\}, \{z_i^k\}$, $i = 1, 2, \dots, N$, and $\{v_j^k\}, \{u_j^k\}$, $j = 1, 2, \dots, M$, defined by

$$
\begin{cases}
y_i^k = P_{C_i}(x^k - \rho_i^k F_i(x^k)),\\
z_i^k = P_{C_i}(x^k - \rho_i^k F_i(y_i^k)), \quad i = 1, 2, \dots, N,\\
\bar{z}^k = \sum_{i=1}^{N} \alpha_i^k z_i^k,\\
\hat{v}^k = A\bar{z}^k,\\
v_j^k = P_{Q_j}(\hat{v}^k - r_j^k G_j(\hat{v}^k)),\\
u_j^k = P_{Q_j}(\hat{v}^k - r_j^k G_j(v_j^k)), \quad j = 1, 2, \dots, M,\\
\bar{u}^k = \sum_{j=1}^{M} \beta_j^k u_j^k,\\
x^{k+1} = P_C(\bar{z}^k + \mu A^*(\bar{u}^k - \hat{v}^k)).
\end{cases}
$$

If $\Omega = \{x^* \in \cap_{i=1}^{N} \mathrm{Sol}(C_i, F_i) : Ax^* \in \cap_{j=1}^{M} \mathrm{Sol}(Q_j, G_j)\} \ne \emptyset$, then the sequences $\{x^k\}, \{y_i^k\}, \{z_i^k\}$, $i = 1, 2, \dots, N$, converge weakly to an element $x^* \in \Omega$, and $\{v_j^k\}, \{u_j^k\}$, $j = 1, 2, \dots, M$, converge weakly to $Ax^* \in \cap_{j=1}^{M} \mathrm{Sol}(Q_j, G_j)$.

In some cases, we wish to obtain strongly convergent algorithms for our problems. To do so, we combine Algorithm 1 with the hybrid method [9,41] to propose the following parallel hybrid extragradient algorithm for MSSEP.

Algorithm 2 (Parallel hybrid extragradient algorithm for MSSEP)

Initialization. Pick $x^0 = x^g \in C = \cap_{i=1}^{N} C_i$, choose constants $0 < \rho \le \bar{\rho} < \min\{\frac{1}{2L_1}, \frac{1}{2L_2}\}$ and $0 < \alpha \le \bar{\alpha} \le 1$; for each $i = 1, 2, \dots, N$, $j = 1, 2, \dots, M$, choose parameters $\{\rho_i^k\}, \{r_j^k\} \subset [\rho, \bar{\rho}]$ and $\{\alpha_i^k\}, \{\beta_j^k\} \subset [\alpha, \bar{\alpha}]$ with $\sum_{i=1}^{N} \alpha_i^k = \sum_{j=1}^{M} \beta_j^k = 1$; pick $\mu \in (0, \frac{1}{\|A\|^2})$; set $B_0 = C$.

Iteration $k$ ($k = 0, 1, 2, \dots$). Having $x^k$, do the following steps:

Step 1. Solve $N$ strongly convex programs in parallel:
$$
\begin{cases}
y_i^k = \arg\min\{f_i(x^k, y) + \frac{1}{2\rho_i^k}\|y - x^k\|^2 : y \in C_i\}\\
z_i^k = \arg\min\{f_i(y_i^k, y) + \frac{1}{2\rho_i^k}\|y - x^k\|^2 : y \in C_i\},
\end{cases}
\quad i = 1, 2, \dots, N.
$$

Step 2. Set $\bar{z}^k = \sum_{i=1}^{N} \alpha_i^k z_i^k$ and compute $\hat{v}^k = A\bar{z}^k$.

Step 3. Solve the following $M$ strongly convex programs in parallel:
$$
\begin{cases}
v_j^k = \arg\min\{g_j(\hat{v}^k, v) + \frac{1}{2r_j^k}\|v - \hat{v}^k\|^2 : v \in Q_j\}\\
u_j^k = \arg\min\{g_j(v_j^k, v) + \frac{1}{2r_j^k}\|v - \hat{v}^k\|^2 : v \in Q_j\},
\end{cases}
\quad j = 1, 2, \dots, M.
$$

Step 4. Compute $\bar{u}^k = \sum_{j=1}^{M} \beta_j^k u_j^k$.

Step 5. Take $t^k = P_C(\bar{z}^k + \mu A^*(\bar{u}^k - \hat{v}^k))$.

Step 6. Define $B_{k+1} = \{x \in B_k : \|x - t^k\| \le \|x - \bar{z}^k\| \le \|x - x^k\|\}$, compute $x^{k+1} = P_{B_{k+1}}(x^g)$, and go to Iteration $k$ with $k$ replaced by $k + 1$.

Remark 3.2 Setting $H_k^1 = \{x \in H_1 : \|x - t^k\| \le \|x - \bar{z}^k\|\}$, we have

$$H_k^1 = \Big\{x \in H_1 : \langle \bar{z}^k - t^k, x\rangle \le \tfrac{1}{2}\big(\|\bar{z}^k\|^2 - \|t^k\|^2\big)\Big\},$$

hence $H_k^1$ is a halfspace. Similarly,

$$H_k^2 = \{x \in H_1 : \|x - \bar{z}^k\| \le \|x - x^k\|\} = \Big\{x \in H_1 : \langle x^k - \bar{z}^k, x\rangle \le \tfrac{1}{2}\big(\|x^k\|^2 - \|\bar{z}^k\|^2\big)\Big\}$$

is also a halfspace. Since $B_{k+1} = B_k \cap H_k^1 \cap H_k^2$, if $H_1$ is the Euclidean space $\mathbb{R}^n$ and $B_0$ is a polyhedron, then by induction $B_k$ is a polyhedron for all $k$.
