DOI 10.1007/s11075-017-0338-5 ORIGINAL PAPER
Parallel extragradient algorithms for multiple set split equilibrium problems in Hilbert spaces
Do Sang Kim1·Bui Van Dinh2
Received: 29 August 2016 / Accepted: 1 May 2017
© Springer Science+Business Media New York 2017
Abstract In this paper, we introduce an extension of the multiple set split variational inequality problem (Censor et al., Numer. Algor. 59, 301–323, 2012) to the multiple set split equilibrium problem (MSSEP) and propose two new parallel extragradient algorithms for solving the MSSEP when the equilibrium bifunctions are Lipschitz-type continuous and pseudo-monotone with respect to their solution sets. By combining the extragradient method with cutting techniques, we obtain algorithms for these problems without using any product space. Under certain conditions on the parameters, the iteration sequences generated by the proposed algorithms are proved to be weakly and strongly convergent to a solution of the MSSEP. An application to multiple set split variational inequality problems, a numerical example, and preliminary computational results are also provided.
Keywords Multiple set split equilibrium problem · Pseudo-monotonicity · Extragradient method · Parallel algorithm · Weak and strong convergence
Mathematics Subject Classification (2010) 90C99 · 68W10 · 65K10 · 65K15 · 47J25
Do Sang Kim dskim@pknu.ac.kr Bui Van Dinh vandinhb@gmail.com
1 Department of Applied Mathematics, Pukyong National University, Busan, Korea
2 Faculty of Information Technology, Le Quy Don Technical University, Hanoi, Vietnam
1 Introduction
In a very interesting paper, Censor et al. [8] introduced the following split variational inequality problem (SVIP):

Find x* ∈ C such that ⟨F(x*), y − x*⟩ ≥ 0, ∀y ∈ C,
and such that u* = Ax* ∈ Q solves ⟨G(u*), v − u*⟩ ≥ 0, ∀v ∈ Q,

where C and Q are nonempty, closed and convex subsets of real Hilbert spaces H1 and H2, respectively, A : H1 → H2 is a bounded linear operator, and F : H1 → H1, G : H2 → H2 are given operators. It was pointed out in [8] that many important real-world problems can be formulated as an SVIP, such as the split minimization problem (SMP), the split zeroes problem (SZP), and the split feasibility problem (SFP), which have been used for studying signal processing, medical image reconstruction, intensity-modulated radiation therapy, sensor networks, and data compression; see [3,5,6,10,12,13,15] and the references quoted therein.
To find a solution of the SVIP, Censor et al. [8] proposed two methods. The first reformulates the SVIP as a constrained variational inequality problem (CVIP) in a product space and solves the CVIP when the mappings F and G are monotone and Lipschitz continuous. The second solves the SVIP without using a product space when F and G are inverse strongly monotone mappings satisfying certain additional conditions.
Moudafi [29] (see also He [21]) introduced an extension of the SVIP to the split equilibrium problem (SEP), which can be stated as follows:

Find x* ∈ C such that f(x*, y) ≥ 0, ∀y ∈ C,
and such that u* = Ax* ∈ Q solves g(u*, v) ≥ 0, ∀v ∈ Q,

where f : H1 × H1 → R and g : H2 × H2 → R are bifunctions such that f(x, x) = 0 for all x ∈ C and g(u, u) = 0 for all u ∈ Q.
For obtaining a solution of the SEP without using a product space, He [21] suggested using the proximal method (see [31,37]) and introduced an iterative method generating a sequence {x^k} by

x^0 ∈ C; {ρ_k} ⊂ (0, +∞); μ > 0,
find y^k ∈ C such that f(y^k, y) + (1/ρ_k)⟨y − y^k, y^k − x^k⟩ ≥ 0, ∀y ∈ C,
find u^k ∈ Q such that g(u^k, v) + (1/ρ_k)⟨v − u^k, u^k − Ay^k⟩ ≥ 0, ∀v ∈ Q,
x^{k+1} = PC(y^k + μA*(u^k − Ay^k)), ∀k ≥ 0,

where A* is the adjoint operator of A.
Under suitable conditions on the parameters, the author showed that {x^k}, {y^k} converge weakly to a solution of the SEP provided that f and g are monotone bifunctions on C and Q, respectively. Since then, many solution methods for the SEP and related problems, where f is monotone or pseudo-monotone and g is monotone, have been proposed; see, for example, [16,17,19,25–27,34,38].
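He's proximal scheme above can be sketched in the simplest concrete setting: take f(x, y) = ⟨Mx, y − x⟩ and g(u, v) = ⟨Gu, v − u⟩ with C = H1 = R² and Q = H2 = R², so that PC is the identity and each regularized subproblem has the closed-form resolvent solution y = (I + ρM)⁻¹x. The matrices M, G, A and the parameters ρ, μ below are illustrative choices (with constant ρ_k = ρ), not data from the paper.

```python
import numpy as np

# Illustrative sketch of He's proximal iteration for the SEP in the special
# case f(x, y) = <Mx, y - x>, g(u, v) = <Gu, v - u>, C = H1 = R^2, Q = H2 = R^2.
# Here the subproblems reduce to linear resolvent solves, and the unique
# solution is x* = 0 since M is positive definite and g(0, v) = 0 for all v.

M = np.diag([2.0, 1.0])       # monotone (positive definite) operator on H1
G = np.diag([1.0, 3.0])       # monotone operator on H2
A = np.eye(2)                 # bounded linear operator, ||A|| = 1
rho, mu = 0.5, 0.5            # illustrative parameters, mu in (0, 1/||A||^2)

I = np.eye(2)
x = np.array([1.0, -2.0])     # x^0
for k in range(100):
    y = np.linalg.solve(I + rho * M, x)       # resolvent subproblem for f
    u = np.linalg.solve(I + rho * G, A @ y)   # resolvent subproblem for g
    x = y + mu * A.T @ (u - A @ y)            # x^{k+1} = P_C(...), P_C = identity

print(np.linalg.norm(x))      # distance to the solution x* = 0
```

With these choices each coordinate contracts by a factor below 1/2 per iteration, so the iterates approach the solution x* = 0 quickly.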
Another extension of the SVIP is the multiple set split variational inequality problem (MSSVIP), which is formulated as follows (see [8, Section 6.1]):

Find x* ∈ C := ∩_{i=1}^N C_i such that ⟨F_i(x*), y − x*⟩ ≥ 0, for all y ∈ C_i and for all i = 1, 2, ..., N, and such that
the point u* = Ax* ∈ Q := ∩_{j=1}^M Q_j solves ⟨G_j(u*), v − u*⟩ ≥ 0, for all v ∈ Q_j and for all j = 1, 2, ..., M,

where, as before, A : H1 → H2 is a bounded linear operator, F_i : H1 → H1, i = 1, 2, ..., N, and G_j : H2 → H2, j = 1, 2, ..., M, are given mappings, and C_i ⊂ H1, i = 1, 2, ..., N, and Q_j ⊂ H2, j = 1, 2, ..., M, are nonempty, closed, and convex subsets.

To solve the MSSVIP, Censor et al. [8, Section 6.1] proposed reformulating it as an SVIP via a certain product space and applying their first method. The iteration sequence generated by their method was proved to converge weakly to a solution of the MSSVIP when F_i, i = 1, 2, ..., N, and G_j, j = 1, 2, ..., M, are inverse strongly monotone mappings satisfying some additional conditions.
In this paper, motivated by the results mentioned above, we introduce an extension of the MSSVIP to the multiple set split equilibrium problem (MSSEP) and propose two parallel extragradient methods for the MSSEP. We make use of the extragradient algorithm for solving equilibrium problems in the corresponding spaces to design the weakly convergent algorithm; we then combine this algorithm with the hybrid cutting technique (see [9,41]) to obtain the strongly convergent algorithm for the MSSEP. By using extragradient methods, we not only deal with the case where the MSSEP is pseudo-monotone but also solve the MSSEP directly without using any product space.
The rest of the paper is organized as follows. The next section presents some preliminary results. Section 3 is devoted to introducing the MSSEP and parallel extragradient algorithms for solving it. The last section presents a numerical example to demonstrate the proposed algorithms.
2 Preliminaries
We assume that H1 and H2 are real Hilbert spaces with inner product and associated norm denoted by ⟨·, ·⟩ and ‖·‖, respectively, whereas H refers to any of these spaces. We denote strong convergence by '→' and weak convergence by '⇀' in H. Let C be a nonempty, closed and convex subset of H. By PC we denote the metric projection operator onto C, that is,

PC(x) ∈ C : ‖x − PC(x)‖ ≤ ‖x − y‖, ∀y ∈ C.
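For simple sets, the metric projection PC has a closed form. As a minimal illustration (the set parameters below are arbitrary choices, not from the paper), here are the standard formulas for a box, a ball, and a halfspace in R^n:

```python
import numpy as np

# Closed-form metric projections onto three simple convex sets in R^n.
# These are standard formulas; the set parameters used below are illustrative.

def proj_box(x, lo, hi):
    """P_C for the box C = {y : lo <= y <= hi} (componentwise clipping)."""
    return np.clip(x, lo, hi)

def proj_ball(x, center, r):
    """P_C for the closed ball C = B(center, r)."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= r else center + (r / n) * d

def proj_halfspace(x, a, b):
    """P_C for the halfspace C = {y : <a, y> <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - (viol / (a @ a)) * a

x = np.array([3.0, -4.0])
print(proj_box(x, -1.0, 1.0))                        # [ 1. -1.]
print(proj_ball(x, np.zeros(2), 1.0))                # [ 0.6 -0.8]
print(proj_halfspace(x, np.array([1.0, 0.0]), 2.0))  # [ 2. -4.]
```

Each output can be checked against the variational characterization ⟨x − PC(x), y − PC(x)⟩ ≤ 0 for all y ∈ C.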
The following well-known results will be used in the sequel.

Lemma 2.1 Suppose that C is a nonempty, closed and convex subset of H. Then PC has the following properties:

(a) PC(x) is well defined and single valued for every x;
(b) z = PC(x) if and only if ⟨x − z, y − z⟩ ≤ 0, ∀y ∈ C;
(c) ‖y − PC(x)‖² ≤ ‖x − y‖² − ‖x − PC(x)‖², ∀y ∈ C;
(d) ‖PC(x) − PC(y)‖² ≤ ⟨PC(x) − PC(y), x − y⟩, ∀x, y ∈ H;
(e) ‖PC(x) − PC(y)‖² ≤ ‖x − y‖² − ‖x − PC(x) − y + PC(y)‖², ∀x, y ∈ H.

Definition 2.1 [4,28,32] Let φ : H × H → R be a bifunction, and let C be a nonempty, closed and convex subset of H, ∅ ≠ S ⊂ C. The bifunction φ is said to be:
(a) monotone on C if
φ(x, y) + φ(y, x) ≤ 0, ∀x, y ∈ C;
(b) pseudo-monotone on C if
∀x, y ∈ C, φ(x, y) ≥ 0 ⟹ φ(y, x) ≤ 0;
(c) pseudo-monotone on C with respect to S if
∀x* ∈ S, ∀y ∈ C, φ(x*, y) ≥ 0 ⟹ φ(y, x*) ≤ 0;
(d) Lipschitz-type continuous on C if there exist positive constants L1 and L2 such that
φ(x, y) + φ(y, z) ≥ φ(x, z) − L1‖x − y‖² − L2‖y − z‖², ∀x, y, z ∈ C.
From Definition 2.1, we have the following.

Remark 2.1 (i) It is clear that (a) ⟹ (b) ⟹ (c), for every ∅ ≠ S ⊂ C.
(ii) If φ(x, y) = ⟨Φ(x), y − x⟩ for a mapping Φ : H → H, then the notions of monotonicity of the bifunction φ collapse to the corresponding notions of monotonicity of the mapping Φ. In addition, if the mapping Φ is L-Lipschitz on C, i.e., ‖Φ(x) − Φ(y)‖ ≤ L‖x − y‖, ∀x, y ∈ C, then φ is also Lipschitz-type continuous on C (see [28,42]), for example, with constants L1 = L/(2ε) and L2 = εL/2, for any ε > 0.
(iii) If φ1 and φ2 are Lipschitz-type continuous on C with constants L11, L12 and L21, L22, respectively, then φ1 and φ2 are also Lipschitz-type continuous on C with the same constants L1, L2; for instance, take L1 = max{L11, L21} and L2 = max{L12, L22}.
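Remark 2.1(ii) can be checked numerically. The sketch below takes Φ(x) = Bx with a random matrix B (so L = ‖B‖₂) and verifies the Lipschitz-type inequality with the choice ε = 1, i.e., L1 = L2 = L/2, over random triples; B and the sample count are illustrative.

```python
import numpy as np

# Numerical check of Remark 2.1(ii): for phi(x, y) = <Phi(x), y - x> with
# Phi(x) = B x (hence L = ||B||_2), phi is Lipschitz-type continuous with
# constants L1 = L2 = L/2 (the choice eps = 1 in the remark).

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))
L = np.linalg.norm(B, 2)          # Lipschitz constant of Phi
L1 = L2 = L / 2.0

def phi(x, y):
    return (B @ x) @ (y - x)

worst = np.inf
for _ in range(1000):
    x, y, z = rng.standard_normal((3, 3))
    lhs = phi(x, y) + phi(y, z)
    rhs = phi(x, z) - L1 * np.sum((x - y) ** 2) - L2 * np.sum((y - z) ** 2)
    worst = min(worst, lhs - rhs)

print(worst >= -1e-9)             # True: the Lipschitz-type inequality holds
```

The slack lhs − rhs equals ⟨Φ(x) − Φ(y), y − z⟩ + (L/2)(‖x − y‖² + ‖y − z‖²), which is nonnegative by the Cauchy–Schwarz and AM–GM inequalities.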
Lemma 2.2 (Opial's condition) [33] For any sequence {x^k} ⊂ H with x^k ⇀ x, the inequality

lim inf_{k→+∞} ‖x^k − x‖ < lim inf_{k→+∞} ‖x^k − y‖

holds for each y ∈ H with y ≠ x.
3 Main results
Now, given a bounded linear operator A : H1 → H2, nonempty, closed, and convex subsets C_i ⊂ H1, Q_j ⊂ H2, for all i = 1, 2, ..., N, j = 1, 2, ..., M, and bifunctions f_i : H1 × H1 → R and g_j : H2 × H2 → R such that f_i(x, x) = g_j(u, u) = 0 for every x ∈ C_i, u ∈ Q_j, and for every i = 1, 2, ..., N, j = 1, 2, ..., M, respectively, the multiple set split equilibrium problem (MSSEP) is stated as follows:

Find x* ∈ C := ∩_{i=1}^N C_i such that f_i(x*, y) ≥ 0, for all y ∈ C_i and for all i = 1, 2, ..., N, and such that
the point u* = Ax* ∈ Q := ∩_{j=1}^M Q_j solves g_j(u*, v) ≥ 0, for all v ∈ Q_j and for all j = 1, 2, ..., M.
Since the variational inequality problem can be considered as a special case of the equilibrium problem [4,32], the MSSVIP is a particular case of the MSSEP; for instance, take f_i(x, y) = ⟨F_i(x), y − x⟩, i = 1, 2, ..., N, and g_j(u, v) = ⟨G_j(u), v − u⟩, j = 1, 2, ..., M. Similarly, the split common fixed point problem [11,23,30] and the split common null point problem [7,40] can be formulated as an MSSEP. Furthermore, if Q_j = H2 and g_j(u, v) = 0, ∀u, v ∈ H2, for all j = 1, 2, ..., M, then the MSSEP becomes the problem of finding a common element of the sets of solutions of equilibrium problems; see, for example, [35,39] and the references quoted therein.
By Sol(C_i, f_i) we denote the solution set of the equilibrium problem EP(C_i, f_i), i.e.,

Sol(C_i, f_i) = {x̄ ∈ C_i : f_i(x̄, y) ≥ 0, ∀y ∈ C_i},

for all i = 1, 2, ..., N, and Sol(Q_j, g_j) stands for the solution set of the equilibrium problem EP(Q_j, g_j), for all j = 1, 2, ..., M.
Now, let K be a nonempty, closed and convex subset of H and φ : H × H → R be a bifunction such that φ(x, x) = 0, ∀x ∈ K. In order to find a solution of the MSSEP, we make use of the following blanket assumptions.

Assumptions A
(A1) φ is pseudo-monotone on K with respect to Sol(K, φ);
(A2) φ(x, ·) is convex and subdifferentiable on K, for all x ∈ K;
(A3) φ is weakly continuous on K × K in the sense that if x, y ∈ K and {x^k}, {y^k} ⊂ K converge weakly to x and y, respectively, then φ(x^k, y^k) → φ(x, y) as k → +∞;
(A4) φ is Lipschitz-type continuous on K with constants L1 > 0 and L2 > 0.
The extragradient algorithm was first proposed by Korpelevich [24] (see also [20]) for finding saddle points and solving other problems; recently, many authors have successfully applied this algorithm to equilibrium problems and related problems [18,42]. One advantage of the extragradient algorithm for solving equilibrium problems is that it applies not only to pseudo-monotone equilibrium problems but also requires solving only two strongly convex programs at each iteration. We now present a parallel extragradient algorithm for the MSSEP.
Algorithm 1 (Parallel extragradient algorithm for MSSEP)

Initialization. Pick x^0 ∈ C = ∩_{i=1}^N C_i; choose constants 0 < ρ ≤ ρ̄ < min{1/(2L1), 1/(2L2)} and 0 < α ≤ ᾱ ≤ 1; for each i = 1, 2, ..., N, j = 1, 2, ..., M, choose parameters {ρ_i^k}, {r_j^k} ⊂ [ρ, ρ̄] and {α_i^k}, {β_j^k} ⊂ [α, ᾱ] with Σ_{i=1}^N α_i^k = Σ_{j=1}^M β_j^k = 1; μ ∈ (0, 1/‖A‖²).

Iteration k (k = 0, 1, 2, ...). Having x^k, do the following steps:

Step 1. Solve N strongly convex programs in parallel:
y_i^k = argmin{f_i(x^k, y) + (1/(2ρ_i^k))‖y − x^k‖² : y ∈ C_i},
z_i^k = argmin{f_i(y_i^k, y) + (1/(2ρ_i^k))‖y − x^k‖² : y ∈ C_i}, i = 1, 2, ..., N.

Step 2. Set z̄^k = Σ_{i=1}^N α_i^k z_i^k, and compute v̂^k = Az̄^k.

Step 3. Solve the following M strongly convex programs in parallel:
v_j^k = argmin{g_j(v̂^k, v) + (1/(2r_j^k))‖v − v̂^k‖² : v ∈ Q_j},
u_j^k = argmin{g_j(v_j^k, v) + (1/(2r_j^k))‖v − v̂^k‖² : v ∈ Q_j}, j = 1, 2, ..., M.

Step 4. Compute ū^k = Σ_{j=1}^M β_j^k u_j^k.

Step 5. Take x^{k+1} = PC(z̄^k + μA*(ū^k − v̂^k)), and go to Iteration k with k replaced by k + 1.
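When f_i(x, y) = ⟨F_i(x), y − x⟩ (the variational inequality case), each strongly convex subproblem in Step 1 has a closed-form solution via projection: argmin{⟨F_i(x^k), y − x^k⟩ + (1/(2ρ))‖y − x^k‖² : y ∈ C_i} = P_{C_i}(x^k − ρF_i(x^k)). The minimal numpy check below uses an illustrative box C_i and an affine F_i of our own choosing:

```python
import numpy as np

# Closed-form solution of a Step 1 subproblem in the variational inequality
# case f_i(x, y) = <F_i(x), y - x>: the minimizer is P_{C_i}(x^k - rho F_i(x^k)).
# The box C_i, the point c, and the step rho are illustrative data.

rng = np.random.default_rng(1)
lo, hi = -1.0, 1.0                      # box C_i = [-1, 1]^2
c = np.array([2.0, 0.5])
F = lambda x: x - c                     # illustrative monotone mapping
rho = 0.4
xk = np.array([0.3, -0.8])

obj = lambda y: F(xk) @ (y - xk) + np.sum((y - xk) ** 2) / (2 * rho)
y_star = np.clip(xk - rho * F(xk), lo, hi)   # projection = subproblem solution

# sanity check: no random feasible point does better than y_star
samples = rng.uniform(lo, hi, size=(5000, 2))
print(all(obj(y_star) <= obj(y) + 1e-12 for y in samples))   # True
```

The same reduction applies to the second subproblem (z_i^k) and to Step 3; this is exactly the specialization that appears in Corollary 3.2.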
Remark 3.1 (i) At iteration k, if y_i^k = x^k, then x^k is a solution of EP(C_i, f_i). Similarly, if v_j^k = v̂^k, then v̂^k is a solution of EP(Q_j, g_j).
(ii) At each iteration k, the parameters {α_i^k} can be chosen based on the relative position of z_i^k and x^k, i = 1, ..., N. Similarly, the parameters {β_j^k} can be chosen using the distance between u_j^k and v̂^k, j = 1, 2, ..., M (see [1,22]).
(iii) We may assume without loss of generality that the bifunctions f_i, i = 1, 2, ..., N, and g_j, j = 1, 2, ..., M, satisfy assumption (A4) with the same Lipschitz-type constants L1 and L2.
Before proving the convergence of Algorithm 1, let us recall the following result from [2,42].

Lemma 3.1 ([2,42]) Suppose that f_i, i = 1, 2, ..., N, satisfy assumptions (A1), (A2), (A4) and that ∩_{i=1}^N Sol(C_i, f_i) is nonempty. Then, for each i = 1, 2, ..., N, we have:

(i) ρ_i^k[f_i(x^k, y) − f_i(x^k, y_i^k)] ≥ ⟨y_i^k − x^k, y_i^k − y⟩, ∀y ∈ C_i;
(ii) ‖z_i^k − x*‖² ≤ ‖x^k − x*‖² − (1 − 2ρ_i^k L1)‖x^k − y_i^k‖² − (1 − 2ρ_i^k L2)‖y_i^k − z_i^k‖², ∀x* ∈ ∩_{i=1}^N Sol(C_i, f_i), ∀k.
We are now in a position to prove the convergence of Algorithm 1.

Theorem 3.1 Let the bifunctions f_i, g_j satisfy Assumptions A on C_i and Q_j, respectively, for all i = 1, 2, ..., N, j = 1, 2, ..., M, and let A : H1 → H2 be a bounded linear operator with adjoint A*. If Ω := {x* ∈ ∩_{i=1}^N Sol(C_i, f_i) : Ax* ∈ ∩_{j=1}^M Sol(Q_j, g_j)} ≠ ∅, then the sequences {x^k}, {y_i^k}, {z_i^k}, i = 1, 2, ..., N, converge weakly to an element x* ∈ Ω, and {v_j^k}, {u_j^k}, j = 1, 2, ..., M, converge weakly to Ax* ∈ ∩_{j=1}^M Sol(Q_j, g_j).
Proof Let p ∈ Ω, so p ∈ ∩_{i=1}^N Sol(C_i, f_i) and Ap ∈ ∩_{j=1}^M Sol(Q_j, g_j). For each i = 1, 2, ..., N, by Lemma 3.1 we have

‖z_i^k − p‖² ≤ ‖x^k − p‖² − (1 − 2ρ_i^k L1)‖x^k − y_i^k‖² − (1 − 2ρ_i^k L2)‖y_i^k − z_i^k‖².

Combining this with Step 2, one has

‖z̄^k − p‖² = ‖Σ_{i=1}^N α_i^k z_i^k − p‖² = ‖Σ_{i=1}^N α_i^k (z_i^k − p)‖²
= Σ_{i=1}^N α_i^k ‖z_i^k − p‖² − (1/2) Σ_{i=1}^N Σ_{j=1}^N α_i^k α_j^k ‖z_i^k − z_j^k‖²
≤ ‖x^k − p‖² − Σ_{i=1}^N α_i^k (1 − 2ρ_i^k L1)‖x^k − y_i^k‖² − Σ_{i=1}^N α_i^k (1 − 2ρ_i^k L2)‖y_i^k − z_i^k‖². (3.1)

Similarly, assertion (ii) in Lemma 3.1 implies that
‖u_j^k − Ap‖² ≤ ‖Az̄^k − Ap‖² − (1 − 2r_j^k L1)‖Az̄^k − v_j^k‖² − (1 − 2r_j^k L2)‖v_j^k − u_j^k‖², ∀j = 1, 2, ..., M.

Thus,

‖ū^k − Ap‖² = ‖Σ_{j=1}^M β_j^k (u_j^k − Ap)‖²
= Σ_{j=1}^M β_j^k ‖u_j^k − Ap‖² − (1/2) Σ_{i=1}^M Σ_{j=1}^M β_i^k β_j^k ‖u_i^k − u_j^k‖²
≤ ‖Az̄^k − Ap‖² − Σ_{j=1}^M β_j^k (1 − 2r_j^k L1)‖Az̄^k − v_j^k‖² − Σ_{j=1}^M β_j^k (1 − 2r_j^k L2)‖v_j^k − u_j^k‖². (3.2)

We have
2⟨A(z̄^k − p), ū^k − Az̄^k⟩ = 2⟨A(z̄^k − p) + ū^k − Az̄^k − (ū^k − Az̄^k), ū^k − Az̄^k⟩
= 2⟨ū^k − Ap, ū^k − Az̄^k⟩ − 2‖ū^k − Az̄^k‖²
= ‖ū^k − Ap‖² + ‖ū^k − Az̄^k‖² − ‖Az̄^k − Ap‖² − 2‖ū^k − Az̄^k‖²
= ‖ū^k − Ap‖² − ‖ū^k − Az̄^k‖² − ‖Az̄^k − Ap‖². (3.3)
By the definition of x^{k+1}, we have

‖x^{k+1} − p‖² = ‖PC(z̄^k + μA*(ū^k − Az̄^k)) − PC(p)‖²
≤ ‖(z̄^k − p) + μA*(ū^k − Az̄^k)‖²
= ‖z̄^k − p‖² + ‖μA*(ū^k − Az̄^k)‖² + 2μ⟨z̄^k − p, A*(ū^k − Az̄^k)⟩
≤ ‖z̄^k − p‖² + μ²‖A*‖² ‖ū^k − Az̄^k‖² + 2μ⟨z̄^k − p, A*(ū^k − Az̄^k)⟩
= ‖z̄^k − p‖² + μ²‖A‖² ‖ū^k − Az̄^k‖² + 2μ⟨A(z̄^k − p), ū^k − Az̄^k⟩.

Using (3.3), we get

‖x^{k+1} − p‖² ≤ ‖z̄^k − p‖² + μ²‖A‖² ‖ū^k − Az̄^k‖² + μ[‖ū^k − Ap‖² − ‖ū^k − Az̄^k‖² − ‖Az̄^k − Ap‖²]
= ‖z̄^k − p‖² − μ(1 − μ‖A‖²)‖ū^k − Az̄^k‖² + μ[‖ū^k − Ap‖² − ‖Az̄^k − Ap‖²]. (3.4)
In combination with (3.1) and (3.2), inequality (3.4) becomes

‖x^{k+1} − p‖² ≤ ‖x^k − p‖² − Σ_{i=1}^N α_i^k [(1 − 2ρ_i^k L1)‖x^k − y_i^k‖² + (1 − 2ρ_i^k L2)‖y_i^k − z_i^k‖²]
− μ(1 − μ‖A‖²)‖ū^k − Az̄^k‖²
− μ Σ_{j=1}^M β_j^k [(1 − 2r_j^k L1)‖Az̄^k − v_j^k‖² + (1 − 2r_j^k L2)‖v_j^k − u_j^k‖²]. (3.5)

Since 0 < μ < 1/‖A‖², {ρ_i^k}, {r_j^k} ⊂ [ρ, ρ̄] ⊂ (0, min{1/(2L1), 1/(2L2)}), and {α_i^k}, {β_j^k} ⊂ [α, ᾱ] ⊂ (0, 1], inequality (3.5) implies that {‖x^k − p‖²} is a nonincreasing sequence.
So
klim→∞xk−p2=a(p). (3.6) From (3.5) and (3.6), we get
k→+∞lim xk−yik = lim
k→+∞yik−zki =0, ∀i=1,2, ..., N. (3.7)
k→+∞lim Az¯k−vkj = lim
k→+∞vkj−ukj =0, ∀j =1,2, ..., M. (3.8) Because lim
k→+∞xk−p =a(p),{xk}is bounded, there exists a subsequence{xkm} of{xk}such thatxkm converges weakly to somex∗as m→ +∞. Remember that xk ∈ C = ∩Ni=1Ci, ∀kandCi are closed and convex sets for alli = 1,2, ..., N.
We have thatC is also closed and convex, soCis weakly closed and thereforex∗∈ C, i.e.,x∗ ∈ Ci for alli = 1,2, ..., N. From (3.7), we also get that{yikm},{zkim} converge weakly tox∗, for alli=1,2, ..., N. Hence,{¯zkm}converges weakly tox∗, consequently{Az¯km}converges weakly toAx∗. Together with (3.8), we obtain that {vjkm},{ukjm}converge weakly toAx∗ for allj = 1,2, ..., M as m → +∞. Since {vjk} ⊂Qj,Qj is closed and convex, soAx∗∈Qj,∀j.
For each i = 1, 2, ..., N and j = 1, 2, ..., M, assertion (i) in Lemma 3.1 implies that

ρ_i^{k_m}[f_i(x^{k_m}, y) − f_i(x^{k_m}, y_i^{k_m})] ≥ ⟨y_i^{k_m} − x^{k_m}, y_i^{k_m} − y⟩, ∀y ∈ C_i, and
r_j^{k_m}[g_j(Az̄^{k_m}, v) − g_j(Az̄^{k_m}, v_j^{k_m})] ≥ ⟨v_j^{k_m} − Az̄^{k_m}, v_j^{k_m} − v⟩, ∀v ∈ Q_j.

Since

⟨y_i^{k_m} − x^{k_m}, y_i^{k_m} − y⟩ ≥ −‖y_i^{k_m} − x^{k_m}‖ ‖y_i^{k_m} − y‖ and
⟨v_j^{k_m} − Az̄^{k_m}, v_j^{k_m} − v⟩ ≥ −‖v_j^{k_m} − Az̄^{k_m}‖ ‖v_j^{k_m} − v‖,

it follows from the last inequalities that

f_i(x^{k_m}, y) − f_i(x^{k_m}, y_i^{k_m}) + (1/ρ_i^{k_m})‖y_i^{k_m} − x^{k_m}‖ ‖y_i^{k_m} − y‖ ≥ 0, and
g_j(Az̄^{k_m}, v) − g_j(Az̄^{k_m}, v_j^{k_m}) + (1/r_j^{k_m})‖v_j^{k_m} − Az̄^{k_m}‖ ‖v_j^{k_m} − v‖ ≥ 0.

Letting m → +∞, in combination with (3.7), (3.8) and the continuity of f_i, g_j, yields

f_i(x*, y) − f_i(x*, x*) ≥ 0, ∀y ∈ C_i, ∀i = 1, 2, ..., N, and
g_j(Ax*, v) − g_j(Ax*, Ax*) ≥ 0, ∀v ∈ Q_j, ∀j = 1, 2, ..., M,

which means that x* ∈ Sol(C_i, f_i) for all i = 1, 2, ..., N and Ax* ∈ Sol(Q_j, g_j) for all j = 1, 2, ..., M, i.e., x* ∈ Ω.
Finally, we prove that {x^k} converges weakly to x*. Indeed, if there exists a subsequence {x^{k_n}} of {x^k} such that x^{k_n} ⇀ x̃ with x̃ ≠ x*, then, by the same argument as above, x̃ ∈ Ω, and by (3.6) the limits lim_k ‖x^k − x*‖ and lim_k ‖x^k − x̃‖ exist. By Opial's condition,

lim inf_{m→+∞} ‖x^{k_m} − x*‖ < lim inf_{m→+∞} ‖x^{k_m} − x̃‖ = lim inf_{n→+∞} ‖x^{k_n} − x̃‖
< lim inf_{n→+∞} ‖x^{k_n} − x*‖ = lim inf_{m→+∞} ‖x^{k_m} − x*‖.

This is a contradiction. Hence, {x^k} converges weakly to x*. Together with (3.7), we also get y_i^k ⇀ x* and z_i^k ⇀ x*, for all i = 1, 2, ..., N. Therefore, z̄^k ⇀ x* and Az̄^k ⇀ Ax*. Combining this with (3.8), it is immediate that {v_j^k}, {u_j^k} converge weakly to Ax* for all j = 1, 2, ..., M.
When N = M = 1, so that C_1 = C and Q_1 = Q, we get the following corollary.

Corollary 3.1 Suppose that f, g are bifunctions satisfying Assumptions A on C and Q, respectively, and that A : H1 → H2 is a bounded linear operator with adjoint A*. Take x^0 ∈ C; {ρ_k}, {r_k} ⊂ [ρ, ρ̄] for some ρ, ρ̄ such that 0 < ρ ≤ ρ̄ < min{1/(2L1), 1/(2L2)}; 0 < μ < 1/‖A‖²; and consider the sequences {x^k}, {y^k}, {z^k}, and {v^k}, {u^k} defined by

y^k = argmin{ρ_k f(x^k, y) + (1/2)‖y − x^k‖² : y ∈ C},
z^k = argmin{ρ_k f(y^k, y) + (1/2)‖y − x^k‖² : y ∈ C},
v^k = argmin{r_k g(Az^k, v) + (1/2)‖v − Az^k‖² : v ∈ Q},
u^k = argmin{r_k g(v^k, v) + (1/2)‖v − Az^k‖² : v ∈ Q},
x^{k+1} = PC(z^k + μA*(u^k − Az^k)).

If Ω = {x* ∈ Sol(C, f) : Ax* ∈ Sol(Q, g)} ≠ ∅, then the sequences {x^k}, {y^k}, and {z^k} converge weakly to an element x* ∈ Ω, and {v^k}, {u^k} converge weakly to Ax* ∈ Sol(Q, g).
An important special case of the MSSEP is the multiple set split variational inequality problem (MSSVIP). In this case, we have the following corollary, which not only solves the MSSVIP without using any product space but also handles the pseudo-monotone case.
Corollary 3.2 Suppose that F_i, G_j are mappings satisfying Assumptions A on C_i and Q_j, respectively, for all i = 1, 2, ..., N, j = 1, 2, ..., M, and let A : H1 → H2 be a bounded linear operator with adjoint A*. Take x^0 ∈ C = ∩_{i=1}^N C_i; {ρ_i^k}, {r_j^k} ⊂ [ρ, ρ̄] for some ρ, ρ̄ such that 0 < ρ ≤ ρ̄ < min{1/(2L1), 1/(2L2)}; {α_i^k}, {β_j^k} ⊂ [α, ᾱ] ⊂ (0, 1], for all i = 1, 2, ..., N, j = 1, 2, ..., M, with Σ_{i=1}^N α_i^k = Σ_{j=1}^M β_j^k = 1; and 0 < μ < 1/‖A‖². Consider the sequences {x^k}, {y_i^k}, {z_i^k}, i = 1, 2, ..., N, and {v_j^k}, {u_j^k}, j = 1, 2, ..., M, defined by

y_i^k = P_{C_i}(x^k − ρ_i^k F_i(x^k)),
z_i^k = P_{C_i}(x^k − ρ_i^k F_i(y_i^k)), i = 1, 2, ..., N,
z̄^k = Σ_{i=1}^N α_i^k z_i^k,
v̂^k = Az̄^k,
v_j^k = P_{Q_j}(v̂^k − r_j^k G_j(v̂^k)),
u_j^k = P_{Q_j}(v̂^k − r_j^k G_j(v_j^k)), j = 1, 2, ..., M,
ū^k = Σ_{j=1}^M β_j^k u_j^k,
x^{k+1} = PC(z̄^k + μA*(ū^k − v̂^k)).

If Ω = {x* ∈ ∩_{i=1}^N Sol(C_i, F_i) : Ax* ∈ ∩_{j=1}^M Sol(Q_j, G_j)} ≠ ∅, then the sequences {x^k}, {y_i^k}, {z_i^k}, i = 1, 2, ..., N, converge weakly to an element x* ∈ Ω, and {v_j^k}, {u_j^k}, j = 1, 2, ..., M, converge weakly to Ax* ∈ ∩_{j=1}^M Sol(Q_j, G_j).
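The projection scheme of Corollary 3.2 can be sketched end to end on a toy MSSVIP. In the sketch below, all problem data are illustrative choices of our own: N = M = 2 box constraints in R², F_i(x) = x and G_j(u) = u (so the unique solution is x* = 0), and a well-conditioned matrix A. In finite dimensions, weak convergence is ordinary convergence, so the iterates should approach 0.

```python
import numpy as np

# Minimal sketch of the parallel extragradient scheme of Corollary 3.2 on a
# toy MSSVIP in R^2. All data (boxes, A, F_i, G_j, steps) are illustrative.

C = [(-2.0, 2.0), (-1.0, 3.0)]          # boxes C_i = [lo, hi]^2
Q = [(-1.0, 1.0), (-2.0, 2.0)]          # boxes Q_j
A = np.array([[1.0, 0.2], [0.0, 1.0]])  # bounded linear operator
F = [lambda x: x, lambda x: x]          # F_i(x) = x  => x* = 0
G = [lambda u: u, lambda u: u]          # G_j(u) = u

mu = 0.9 / np.linalg.norm(A, 2) ** 2    # mu in (0, 1/||A||^2)
rho = r = 0.5                           # steps in (0, min{1/(2L1), 1/(2L2)})
alpha = beta = np.array([0.5, 0.5])     # convex weights

proj = lambda x, box: np.clip(x, box[0], box[1])
C_cap = (max(b[0] for b in C), min(b[1] for b in C))   # C = C_1 ∩ C_2 (a box)

x = np.array([1.5, -0.5])               # x^0 in C
for k in range(300):
    z = np.zeros(2)
    for i, (Fi, Ci) in enumerate(zip(F, C)):           # Step 1 (parallelizable)
        y_i = proj(x - rho * Fi(x), Ci)
        z += alpha[i] * proj(x - rho * Fi(y_i), Ci)
    v_hat = A @ z                                      # Step 2
    u_bar = np.zeros(2)
    for j, (Gj, Qj) in enumerate(zip(G, Q)):           # Steps 3-4
        v_j = proj(v_hat - r * Gj(v_hat), Qj)
        u_bar += beta[j] * proj(v_hat - r * Gj(v_j), Qj)
    x = proj(z + mu * A.T @ (u_bar - v_hat), C_cap)    # Step 5

print(np.linalg.norm(x))                # ≈ 0: the iterates approach x* = 0
```

The two inner loops are written sequentially for clarity; in the algorithm they are independent across i and j and can be executed in parallel.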
In some cases, we wish to obtain strongly convergent algorithms for our problems. To do so, we combine Algorithm 1 with the hybrid method [9,41] to propose the following parallel hybrid extragradient algorithm for the MSSEP.
Algorithm 2 (Parallel hybrid extragradient algorithm for MSSEP)

Initialization. Pick x^0 = x^g ∈ C = ∩_{i=1}^N C_i; choose constants 0 < ρ ≤ ρ̄ < min{1/(2L1), 1/(2L2)} and 0 < α ≤ ᾱ ≤ 1; for each i = 1, 2, ..., N, j = 1, 2, ..., M, choose parameters {ρ_i^k}, {r_j^k} ⊂ [ρ, ρ̄] and {α_i^k}, {β_j^k} ⊂ [α, ᾱ] with Σ_{i=1}^N α_i^k = Σ_{j=1}^M β_j^k = 1; μ ∈ (0, 1/‖A‖²); B_0 = C.

Iteration k (k = 0, 1, 2, ...). Having x^k, do the following steps:

Step 1. Solve N strongly convex programs in parallel:
y_i^k = argmin{f_i(x^k, y) + (1/(2ρ_i^k))‖y − x^k‖² : y ∈ C_i},
z_i^k = argmin{f_i(y_i^k, y) + (1/(2ρ_i^k))‖y − x^k‖² : y ∈ C_i}, i = 1, 2, ..., N.

Step 2. Set z̄^k = Σ_{i=1}^N α_i^k z_i^k, and compute v̂^k = Az̄^k.

Step 3. Solve the following M strongly convex programs in parallel:
v_j^k = argmin{g_j(v̂^k, v) + (1/(2r_j^k))‖v − v̂^k‖² : v ∈ Q_j},
u_j^k = argmin{g_j(v_j^k, v) + (1/(2r_j^k))‖v − v̂^k‖² : v ∈ Q_j}, j = 1, 2, ..., M.

Step 4. Compute ū^k = Σ_{j=1}^M β_j^k u_j^k.

Step 5. Take t^k = PC(z̄^k + μA*(ū^k − v̂^k)).

Step 6. Define B_{k+1} = {x ∈ B_k : ‖x − t^k‖ ≤ ‖x − z̄^k‖ ≤ ‖x − x^k‖}, compute x^{k+1} = P_{B_{k+1}}(x^g), and go to Iteration k with k replaced by k + 1.
Remark 3.2 By setting H_k^1 = {x ∈ H1 : ‖x − t^k‖ ≤ ‖x − z̄^k‖}, we have

H_k^1 = {x ∈ H1 : ⟨z̄^k − t^k, x⟩ ≤ (1/2)(‖z̄^k‖² − ‖t^k‖²)},

hence H_k^1 is a halfspace. Similarly,

H_k^2 = {x ∈ H1 : ‖x − z̄^k‖ ≤ ‖x − x^k‖} = {x ∈ H1 : ⟨x^k − z̄^k, x⟩ ≤ (1/2)(‖x^k‖² − ‖z̄^k‖²)}

is also a halfspace. Since B_{k+1} = B_k ∩ H_k^1 ∩ H_k^2, if H1 is the Euclidean space R^n and B_0 is a polyhedron, then by induction B_k is a polyhedron for all k.
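The halfspace representation in Remark 3.2 can be verified numerically. The sketch below builds H_k^1 and H_k^2 from arbitrary illustrative vectors z̄^k, t^k, x^k and checks that the distance inequality and the halfspace inequality agree on random points; it also records the closed-form projection onto a single halfspace, which is the basic building block when x^{k+1} = P_{B_{k+1}}(x^g) is computed in R^n (e.g., via a small QP over the accumulated linear inequalities).

```python
import numpy as np

# Numerical check of Remark 3.2: {x : ||x - t|| <= ||x - zbar||} coincides
# with the halfspace {x : <zbar - t, x> <= (||zbar||^2 - ||t||^2)/2}.
# The vectors zbar, t, xk are arbitrary illustrative data, not from the paper.

rng = np.random.default_rng(2)
zbar, t, xk = rng.standard_normal((3, 4))

a1, b1 = zbar - t, 0.5 * (zbar @ zbar - t @ t)      # H_k^1 = {x : <a1, x> <= b1}
a2, b2 = xk - zbar, 0.5 * (xk @ xk - zbar @ zbar)   # H_k^2 = {x : <a2, x> <= b2}

ok = True
for _ in range(1000):
    x = rng.standard_normal(4)
    gap = a1 @ x - b1
    if abs(gap) > 1e-9:                 # skip float ties on the boundary
        ok &= ((np.linalg.norm(x - t) <= np.linalg.norm(x - zbar)) == (gap <= 0))
print(ok)   # True

# Projection onto a single halfspace {x : <a, x> <= b} is closed form:
def proj_halfspace(x, a, b):
    viol = a @ x - b
    return x if viol <= 0 else x - (viol / (a @ a)) * a
```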