
3.2 Pseudomonotone equilibrium and common fixed point problems

3.2.3 Numerical experiments

In this section, we present numerical experiments illustrating the convergence behavior and performance of our proposed iterative method, Algorithm 3.2.2, and compare it with some related methods in the literature.

In the numerical experiments, for our proposed Algorithm 3.2.2, we consider the case in which $m = 5$ and choose $\mu = 0.9$, $\gamma_n = \frac{1}{2n+1}$, $\eta_{n,0} = \frac{n}{2n+1}$, $\eta_{n,j} = \frac{n+1}{5(2n+1)}$, $j = 1, 2, \ldots, 5$.

In Appendix 9.1.1, we choose $\lambda_n = 0.3$, $\beta_n = \frac{n}{2n+1}$. In Appendix 9.1.2, we choose $\xi = 0.1$, $v = 0.4$, and in Appendix 9.1.3, we take $N = 1$. In Tables 3.1.17, 3.1.18 and 3.1.19, Iter. denotes the number of iterations while CPU denotes the CPU time in seconds.
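Note that the weights form a convex combination, since $\eta_{n,0} + \sum_{j=1}^{5}\eta_{n,j} = \frac{n}{2n+1} + \frac{n+1}{2n+1} = 1$. A minimal Python sketch of these parameter schedules (the function names are ours, for illustration only):

```python
def gamma(n):
    """gamma_n = 1/(2n + 1), the parameter sequence used in the experiments."""
    return 1.0 / (2 * n + 1)

def eta(n, j, m=5):
    """Convex-combination weights: eta_{n,0} = n/(2n+1) and
    eta_{n,j} = (n+1)/(m(2n+1)) for j = 1, ..., m; they sum to 1."""
    return n / (2 * n + 1) if j == 0 else (n + 1) / (m * (2 * n + 1))
```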

Firstly, we consider the following two examples in finite dimensional spaces.

Example 3.2.12. Let the bifunction $f$ be defined by
$$f(x, y) = [F(x)]^T(y - x),$$
where $[F(x)]^T$ denotes the transpose of $F(x)$ and $F(x) = Mx + P(x)$, with $M$ a $p \times p$ symmetric positive semidefinite matrix and $P$ defined by
$$P(x) = \arg\min\left\{\frac{\|y\|^4}{4} + \frac{1}{2}\|y - x\|^2 : y \in \mathbb{R}^p\right\}.$$

We let $C = \{x \in \mathbb{R}^p : Ax \leq b\}$, where $A \in \mathbb{R}^{q \times p}$ and $b \in \mathbb{R}^q$ with $q = 10$. The bifunction $f$ is pseudo-monotone and satisfies conditions (A1)-(A4) of Assumption A.

In addition, for $j = 1, 2, \ldots, m$, let $D_j : \mathbb{R}^p \to \mathbb{R}^p$ be defined by $D_j(x) = \frac{x}{3j} = T_j(x)$. It is easy to verify that $D_j$ is Bregman quasi-nonexpansive for each $j = 1, 2, \ldots, m$. We choose $u = x_1 = (1, 1, \ldots, 1)$, $x_0 = 2u \in C$ and different cases of $p$.
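The subproblem defining $P$ admits a simple implementation: setting the gradient $\|y\|^2 y + y - x$ to zero forces $y = sx$, where $s \in (0, 1]$ solves the scalar cubic $s^3\|x\|^2 + s = 1$. A minimal Python sketch under this observation (the function name is ours):

```python
import numpy as np

def P(x, tol=1e-12):
    """P(x) = argmin_y { ||y||^4/4 + (1/2)||y - x||^2 : y in R^p }.
    The optimality condition (||y||^2 + 1) y = x gives y = s*x, where s is
    the unique root of s^3 ||x||^2 + s = 1 (the left-hand side is strictly
    increasing in s), found here by bisection on (0, 1]."""
    nx2 = float(x @ x)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        s = 0.5 * (lo + hi)
        if s**3 * nx2 + s < 1.0:
            lo = s
        else:
            hi = s
    return 0.5 * (lo + hi) * x
```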

Example 3.2.13. Consider the Nash-Cournot oligopolistic equilibrium model [186]. Let the bifunction $f : \mathbb{R}^p \times \mathbb{R}^p \to \mathbb{R}$ be defined as

$$f(x, y) = \langle Px + Qy + r, y - x \rangle,$$

where $r$ is a vector in $\mathbb{R}^p$ and $P$, $Q$ are two matrices of order $p$ such that $Q$ is symmetric positive semidefinite and $Q - P$ is symmetric negative semidefinite. We define the set $C = \{x \in \mathbb{R}^p : Ax \leq b\}$, where $A \in \mathbb{R}^{q \times p}$ is a random matrix and $b \in \mathbb{R}^q$ with $q = 10$. It is easy to see that $f$ is pseudo-monotone and satisfies conditions (A1)-(A4) of Assumption 3.2.1 with the Lipschitz constants $c_1 = c_2 = \frac{\|P - Q\|}{2}$.
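One way to generate data with the stated properties (the thesis only says the data are selected randomly, so this particular construction is our assumption) is to build $Q$ from a random factor and add a positive semidefinite perturbation to obtain $P$, so that $Q - P$ is automatically symmetric negative semidefinite:

```python
import numpy as np

rng = np.random.default_rng(0)

def nash_cournot_data(p, q=10):
    """Random data for Example 3.2.13: Q symmetric positive semidefinite,
    Q - P symmetric negative semidefinite, and a feasible set
    C = {x : A x <= b} containing the origin (since b > 0)."""
    B = rng.standard_normal((p, p))
    Q = B @ B.T                   # symmetric positive semidefinite
    S = rng.standard_normal((p, p))
    P = Q + S @ S.T               # then Q - P = -S S^T is symmetric NSD
    r = rng.standard_normal(p)
    A = rng.standard_normal((q, p))
    b = rng.random(q)             # strictly positive, so x = 0 lies in C
    return P, Q, r, A, b
```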

For the optimization program in Algorithm 3.2.2, we have the following steps:

$$u_n = \arg\min_{z \in C}\left\{\frac{1}{2}z^T H_n z + b_n^T z\right\},$$

where $H_n = 2\rho_n Q + I$ and $b_n = \rho_n[(P - Q)w_n + r] - w_n$; and

$$v_n = \arg\min_{z \in \zeta_n}\left\{\frac{1}{2}z^T \bar{H}_n z + \bar{b}_n^T z\right\}, \qquad (3.55)$$

where $\bar{H}_n = 2\rho_n Q + I$ and $\bar{b}_n = \rho_n[(P - Q)w_n + r] - w_n$. Also, since $\varphi_n \in \partial_2 f(w_n, u_n)$, we see that $\varphi_n = 2Qu_n + (P - Q)w_n + r$. Furthermore, letting $a_n = (I - \rho_n(P - Q))w_n - (2\rho_n Q + I)u_n - \rho_n r$, we get that $\zeta_n = \{x \in \mathbb{R}^p : \langle a_n, x \rangle \leq \langle a_n, u_n \rangle\}$.

Equation (3.55) is a convex quadratic program which can be solved efficiently using the Matlab Optimization Toolbox. In addition, for $j = 1, 2, \ldots, m$, let $D_j : \mathbb{R}^p \to \mathbb{R}^p$ be defined by $D_j(x) = \frac{x}{5j} = T_j(x)$. It is easy to verify that $D_j$ is Bregman quasi-nonexpansive for each $j = 1, 2, \ldots, m$. Also, $r$, $P$ and $Q$ are selected randomly and $u = x_0 = x_1 = (1, 1, \ldots, 1) \in C$ with different cases of $p$.
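For readers not using Matlab, the two quadratic subproblems could be solved as in the following Python sketch (a sketch only, assuming `scipy`; the halfspace $\zeta_n$ enters as the single linear constraint $\langle a_n, z \rangle \leq \langle a_n, u_n \rangle$):

```python
import numpy as np
from scipy.optimize import minimize

def qp(H, c, A, b, z0):
    """min_z 0.5 z^T H z + c^T z  subject to  A z <= b."""
    cons = [{"type": "ineq", "fun": lambda z: b - A @ z, "jac": lambda z: -A}]
    res = minimize(lambda z: 0.5 * z @ H @ z + c @ z, z0,
                   jac=lambda z: H @ z + c, constraints=cons, method="SLSQP")
    return res.x

def subproblems(w, rho, P, Q, r, A, b):
    """u_n-step over C = {z : A z <= b}, then v_n-step (3.55) over the
    halfspace zeta_n = {z : <a_n, z> <= <a_n, u_n>}."""
    p = len(w)
    H = 2 * rho * Q + np.eye(p)            # H_n = H_n-bar = 2 rho_n Q + I
    c = rho * ((P - Q) @ w + r) - w        # b_n = b_n-bar
    u = qp(H, c, A, b, w)
    a = (np.eye(p) - rho * (P - Q)) @ w - H @ u - rho * r   # a_n
    v = qp(H, c, a.reshape(1, -1), np.array([a @ u]), u)
    return u, v
```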

We next consider the following example in an infinite dimensional space.

Example 3.2.14. Let $E = \ell_2(\mathbb{R})$ be the linear space whose elements are all 2-summable sequences $\{x_i\}_{i=1}^{\infty}$, that is,
$$\ell_2 = \left\{x = (x_1, x_2, \ldots, x_i, \ldots) : x_i \in \mathbb{R} \ \text{and} \ \sum_{i=1}^{\infty}|x_i|^2 < \infty\right\},$$
with inner product $\langle \cdot, \cdot \rangle : \ell_2 \times \ell_2 \to \mathbb{R}$ and norm $\|\cdot\| : \ell_2 \to \mathbb{R}$ defined by
$$\langle x, y \rangle = \sum_{i=1}^{\infty} x_i y_i \quad \text{and} \quad \|x\| = \left(\sum_{i=1}^{\infty}|x_i|^2\right)^{\frac{1}{2}},$$
where $x = \{x_i\}_{i=1}^{\infty}, y = \{y_i\}_{i=1}^{\infty} \in \ell_2$. Let $C = \{x \in E : \|x\| \leq 1\}$. Define the bifunction $f : C \times C \to \mathbb{R}$ by
$$f(x, y) = (3 - \|x\|)\langle x, y - x \rangle, \quad \forall\, x, y \in C.$$

It can easily be verified that $f$ is pseudo-monotone and satisfies conditions (A1)-(A4) of Assumption A with Lipschitz constants $c_1 = c_2 = \frac{5}{2}$. Also, for $j = 1, 2, \ldots, m$, we define $D_j : \ell_2 \to \ell_2$ by $D_j(x) = \frac{x}{2j} = T_j(x)$. It is easy to verify that $D_j$ is Bregman quasi-nonexpansive for each $j = 1, 2, \ldots, m$.
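In a computer implementation, the $\ell_2$ elements must of course be represented by finite truncations; under that (our) assumption, a minimal Python sketch of the feasibility projection and the bifunction is:

```python
import numpy as np

def proj_C(x):
    """Metric projection onto C = {x : ||x|| <= 1}: rescale if outside."""
    nx = np.linalg.norm(x)
    return x if nx <= 1.0 else x / nx

def f(x, y):
    """Bifunction of Example 3.2.14: f(x, y) = (3 - ||x||) <x, y - x>."""
    return (3.0 - np.linalg.norm(x)) * float(x @ (y - x))

# Example: the anchor point u = (1, 1/2, 1/3, ...) truncated to k terms
k = 1000
u = 1.0 / np.arange(1, k + 1)
```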

We test Example 3.2.12, Example 3.2.13 and Example 3.2.14 under the following experiments:

Experiment 3.2.15. In this experiment, we check the behavior of our method by fixing the other parameters and varying $\theta$ in Example 3.2.12. We do this to check the effect of this parameter and the sensitivity of our method to it.

We choose $q_n = \frac{n}{2n+1}u$, $g_n = \frac{n}{2n+1}u$ and $p \in \{5, 10, 15, 20\}$. Using $\|x_{n+1} - x_n\| < 10^{-4}$ as the stopping criterion, we plot the graphs of $\|x_{n+1} - x_n\|$ against the number of iterations in each case. The numerical results are reported in Fig. 3.1 and Table 3.1.17.
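The same stopping rule is used in all three experiments. A generic Python driver implementing it might look as follows (a sketch; `step` stands for one full pass of Algorithm 3.2.2 and is our placeholder):

```python
import numpy as np

def run(step, x0, x1, tol=1e-4, max_iter=10_000):
    """Iterate x_{n+1} = step(n, x_{n-1}, x_n) until ||x_{n+1} - x_n|| < tol,
    recording the errors plotted in Figs. 3.1-3.3."""
    x_prev, x = x0, x1
    errors = []
    for n in range(1, max_iter + 1):
        x_next = step(n, x_prev, x)      # one iteration of the method
        err = np.linalg.norm(x_next - x)
        errors.append(err)
        x_prev, x = x, x_next
        if err < tol:
            break
    return x, errors
```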

Experiment 3.2.16. In this experiment, we check the behavior of our method by fixing the other parameters and varying $\epsilon_n$ in Example 3.2.13. We do this to check the effect of this parameter and the sensitivity of our method to it.

We consider $\epsilon_n \in \left\{\frac{1}{(2n+1)^3}, \frac{2}{(2n+5)^3}, \frac{1}{(2n+1)^4}, \frac{2}{(2n+5)^4}, \frac{1}{(2n+1)^5}\right\}$, which satisfies Assumption 3.2.1(5)(C2). We choose $q_n = \frac{n}{2n+1}u$, $g_n = \frac{n}{2n+1}u$ and $p \in \{5, 10, 20, 30\}$. Using $\|x_{n+1} - x_n\| < 10^{-4}$ as the stopping criterion, we plot the graphs of $\|x_{n+1} - x_n\|$ against the number of iterations in each case. The numerical results are reported in Fig. 3.2 and Table 3.1.18.

Experiment 3.2.17. In this experiment, we check the behavior of our method by fixing the other parameters and varying $\rho$ in Example 3.2.14. We do this to check the effect of this parameter and the sensitivity of our method to it. We consider $\rho \in \{0.5, 1.0, 1.5, 2.0, 2.5\}$. Using $\|x_{n+1} - x_n\| < 10^{-4}$ as the stopping criterion, we plot the graphs of $\|x_{n+1} - x_n\|$ against the number of iterations in each case. We choose $u = (1, \frac{1}{2}, \frac{1}{3}, \ldots)$, $q_n = \frac{n}{2n+1}u$, $g_n = \frac{n}{2n+1}u$ and consider different cases of initial values $x_0$ and $x_1$ as follows:

Case 1: $x_0 = (\frac{1}{5}, \frac{1}{25}, \frac{1}{125}, \ldots)$, $x_1 = (1, \frac{1}{2}, \frac{1}{3}, \ldots)$.

Case 2: $x_0 = (2, 1, \frac{1}{2}, \ldots)$, $x_1 = (1, \frac{1}{2}, \frac{1}{3}, \ldots)$.

Case 3: $x_0 = (\frac{1}{4}, \frac{1}{16}, \frac{1}{64}, \ldots)$, $x_1 = (1, \frac{1}{2}, \frac{1}{3}, \ldots)$.

Case 4: $x_0 = (\frac{7}{9}, \frac{7}{18}, \frac{7}{36}, \ldots)$, $x_1 = (1, \frac{1}{2}, \frac{1}{3}, \ldots)$.

The numerical results are reported in Fig. 3.3 and Table 3.1.19.

[Figure 3.1 near here: semilog plots of the errors $\|x_{n+1} - x_n\|$ against the iteration number $n$ for the three appendix methods and Algorithm 3.2.2 with $\theta \in \{3, 6, 9, 12, 15\}$.]

Figure 3.1: Top left: $p = 5$; Top right: $p = 10$; Bottom left: $p = 15$; Bottom right: $p = 20$.

Table 3.1.17. Numerical results for Example 3.2.12 (Experiment 3.2.15).

Cases            App. 9.1.1   App. 9.1.2   App. 9.1.3   Alg. 3.2.2   Alg. 3.2.2   Alg. 3.2.2
                                                         (θ = 3)      (θ = 6)      (θ = 9)
p = 5    CPU     0.0819       0.0478       0.3300       0.0356       0.0318       0.0323
         Iter.   9            14           48           5            5            5
p = 10   CPU     0.0870       0.0462       0.4494       0.0501       0.0481       0.0603
         Iter.   10           11           56           6            6            6
p = 15   CPU     0.1107       0.0513       0.5404       0.0574       0.0541       0.0527
         Iter.   11           12           61           6            6            6
p = 20   CPU     0.1138       0.0608       0.4254       0.0723       0.0554       0.0575
         Iter.   11           12           65           7            7            7

[Figure 3.2 near here: semilog plots of the errors $\|x_{n+1} - x_n\|$ against the iteration number $n$ for the three appendix methods and Algorithm 3.2.2 with $\epsilon_n \in \{\frac{1}{(2n+1)^3}, \frac{2}{(2n+5)^3}, \frac{1}{(2n+1)^4}, \frac{2}{(2n+5)^4}, \frac{1}{(2n+1)^5}\}$.]

Figure 3.2: Top left: $p = 5$; Top right: $p = 10$; Bottom left: $p = 20$; Bottom right: $p = 30$.

Table 3.1.18. Numerical results for Example 3.2.13 (Experiment 3.2.16).

Cases            App. 9.1.1   App. 9.1.2   App. 9.1.3   Alg. 3.2.2          Alg. 3.2.2          Alg. 3.2.2
                                                         (ϵn = 1/(2n+1)^3)   (ϵn = 2/(2n+5)^3)   (ϵn = 1/(2n+1)^4)
p = 5    CPU     0.1510       0.1280       0.6744       0.0838              0.0852              0.0930
         Iter.   10           22           47           7                   7                   7
p = 10   CPU     0.0949       0.0786       0.4007       0.0781              0.0620              0.0580
         Iter.   12           23           57           8                   8                   8
p = 20   CPU     0.1115       0.0480       0.4606       0.0782              0.0848              0.0754
         Iter.   13           11           69           9                   9                   9
p = 30   CPU     0.0892       0.0803       0.5478       0.0551              0.0723              0.0804
         Iter.   13           13           77           9                   9                   9

[Figure 3.3 near here: semilog plots of the errors $\|x_{n+1} - x_n\|$ against the iteration number $n$ for the three appendix methods and Algorithm 3.2.2 with $\rho \in \{0.5, 1.0, 1.5, 2.0, 2.5\}$.]

Figure 3.3: Top left: Case 1; Top right: Case 2; Bottom left: Case 3; Bottom right: Case 4.

Table 3.1.19. Numerical results for Example 3.2.14 (Experiment 3.2.17).

Cases            App. 9.1.1   App. 9.1.2   App. 9.1.3   Alg. 3.2.2   Alg. 3.2.2   Alg. 3.2.2
                                                         (ρ = 0.5)    (ρ = 1.0)    (ρ = 1.5)
Case 1   CPU     0.0054       0.0047       0.0044       0.0152       0.0113       0.0113
         Iter.   24           24           44           9            9            9
Case 2   CPU     0.0037       0.0067       0.0041       0.0115       0.0098       0.0099
         Iter.   20           24           44           9            9            9
Case 3   CPU     0.0008       0.0018       0.0012       0.0029       0.00251      0.0031
         Iter.   24           24           44           9            9            9
Case 4   CPU     0.0026       0.0036       0.0027       0.0027       0.0031       0.0044
         Iter.   20           24           44           9            9            9

Remark 3.2.18. Using distinct starting points and values of $p$, and varying the other parameters in Experiment 3.2.15, Experiment 3.2.16 and Experiment 3.2.17, respectively, we obtain the numerical results shown in Tables 3.1.17-3.1.19 and Figures 3.1-3.3. We compared our method, Algorithm 3.2.2, with the methods in Appendix 9.1.1, Appendix 9.1.2 and Appendix 9.1.3.

In addition, we observe the following from our numerical tests and experiments:

• In the numerical experiments, we randomly selected the parameters and noted that, regardless of the choices made, the number of iterations does not change and there is no significant difference in the CPU time.

• In Experiment 3.2.15, we examine the sensitivity of $\theta$ for each case of $p$ in order to know whether the choice of $\theta$ affects the performance of our method. Clearly, from Table 3.1.17 and Fig. 3.1, the number of iterations for our method is well behaved for $\theta \in \{3.0, 6.0, 9.0, 12.0, 15.0\}$. In addition, there is no significant difference in the CPU time as we vary the value of $\theta$.

• In Experiment 3.2.16, we examine the sensitivity of $\epsilon_n$ for each case of $p$ in order to know whether the choice of $\epsilon_n$ affects the performance of our method. Clearly, from Table 3.1.18 and Fig. 3.2, the number of iterations for our method is well behaved for $\epsilon_n \in \left\{\frac{1}{(2n+1)^3}, \frac{2}{(2n+5)^3}, \frac{1}{(2n+1)^4}, \frac{2}{(2n+5)^4}, \frac{1}{(2n+1)^5}\right\}$. Moreover, there is no significant difference in the CPU time as we vary the value of $\epsilon_n$.

• In Experiment 3.2.17, we examine the sensitivity of $\rho$ for each set of starting points in order to know whether the choice of $\rho$ affects the performance of our method. Clearly, from Table 3.1.19 and Fig. 3.3, the number of iterations for our method is well behaved for $\rho \in \{0.5, 1.0, 1.5, 2.0, 2.5\}$. In addition, there is no significant difference in the CPU time as we vary the value of $\rho$.

• From Tables 3.1.17-3.1.19 and Figs. 3.1-3.3, we note that, in terms of the number of iterations, our method, Algorithm 3.2.2, performs better than the existing methods in Appendix 9.1.1, Appendix 9.1.2 and Appendix 9.1.3, while there is no significant difference in the CPU time.

3.3 Split generalized equilibrium problem with mul-