
4.2 Generalized split feasibility problems

4.2.3 Numerical experiments

In this section, we discuss the numerical behavior of our proposed methods, Algorithm 4.2.3, Algorithm 4.2.9 and Algorithm 4.2.11. Furthermore, we compare them with related strongly convergent methods in the literature, namely the method of Tian and Jiang [229] and the method of Chidume and Nnakwe [64, Algorithm (3.1)] (other related methods in the literature are mostly weakly convergent).

The codes for the numerical experiments are written in Matlab 2016(b) and run on a personal computer with an Intel(R) Core(TM) i5-2600 CPU at 2.30 GHz and 8.00 GB RAM. In the tables below, "Iter." denotes the number of iterations and "CPU" the CPU time in seconds. In our numerical computations, we randomly choose the relaxation stepsize $\psi_n$ and the starting points $x_0, x_1 \in H_1$ (see the cases below). We also choose the parameters $\gamma_1, \lambda_1 > 0$, $\mu, \eta \in (0,1)$ and $\alpha \ge 3$ randomly (the choices of these parameters are discussed in Remark 4.2.17), and take the control sequences $\theta_n = \frac{1}{n+1}$, $\beta_n = \frac{1}{2} - \theta_n$, $\alpha_n = \bar{\alpha}_n$ and $\epsilon_n = \frac{\theta_n}{n^{0.01}}$ for Algorithm 4.2.3, Algorithm 4.2.9 and Algorithm 4.2.11; $\lambda_n = \frac{1}{2L}$ for the algorithms of Tian and Jiang [229] and Chidume and Nnakwe [64, Algorithm (3.1)]; $\lambda_n = \frac{2n}{3n+7}$ for Algorithm 4.2.11; $\gamma_n = \frac{1}{2\|T\|^2}$ for the algorithms of Tian and Jiang [229] and Chidume and Nnakwe [64, Algorithm (3.1)]; and $\alpha_n = \frac{1}{n+1}$ for the algorithm of Tian and Jiang [229].
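For concreteness, the control sequences above can be generated as in the following minimal sketch (Python rather than the original MATLAB; the reconstructed formulas, e.g. $\epsilon_n = \theta_n/n^{0.01}$, are our reading of the text):

```python
# Minimal sketch of the control sequences used in the experiments.
# Assumes the reconstructions theta_n = 1/(n+1), beta_n = 1/2 - theta_n,
# eps_n = theta_n / n^0.01, lambda_n = 1/(2L) and gamma_n = 1/(2*||T||^2).

def theta(n):               # theta_n = 1/(n+1)
    return 1.0 / (n + 1)

def beta(n):                # beta_n = 1/2 - theta_n
    return 0.5 - theta(n)

def eps(n):                 # eps_n = theta_n / n^0.01
    return theta(n) / n ** 0.01

def lam_fixed(L):           # lambda_n = 1/(2L), Tian-Jiang / Chidume-Nnakwe
    return 1.0 / (2.0 * L)

def lam_4211(n):            # lambda_n = 2n/(3n+7), Algorithm 4.2.11
    return 2.0 * n / (3.0 * n + 7.0)

def gamma_fixed(norm_T):    # gamma_n = 1/(2 ||T||^2)
    return 1.0 / (2.0 * norm_T ** 2)
```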

Furthermore, we define
\[
\mathrm{TOL}_n := \frac{1}{2}\left(\|x_n - J_{\lambda}^{B}(x_n - \lambda A x_n)\|^2 + \|Tx_n - STx_n\|^2\right)
\]
for Algorithm 4.2.3;
\[
\mathrm{TOL}_n := \frac{1}{2}\left(\|x_n - P_C(x_n - \lambda A x_n)\|^2 + \|Tx_n - STx_n\|^2\right)
\]
for Algorithm 4.2.9 and the algorithms of Tian and Jiang [229] and Chidume and Nnakwe [64, Algorithm (3.1)]; and
\[
\mathrm{TOL}_n := \frac{1}{2}\left(\|x_n - J_{\lambda}^{B} x_n\|^2 + \|Tx_n - STx_n\|^2\right)
\]
for Algorithm 4.2.11. We use the stopping criterion $\mathrm{TOL}_n < \varepsilon$ for all the iterative processes, where $\varepsilon$ is a predetermined tolerance. Note that if $\mathrm{TOL}_n = 0$, then $x_n$ is a solution of the problem under consideration.
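A hedged sketch of how this stopping rule can be implemented; the callables `J_B` (the resolvent of $B$), `A`, `T`, `S` and `norm` are placeholders to be supplied by each example below, not names from the original text:

```python
# Sketch of TOL_n for Algorithm 4.2.3 and the stopping test TOL_n < eps.
# J_B, A, T, S are problem-specific callables; norm is the ambient-space norm.
def tol_alg_4_2_3(x, lam, J_B, A, T, S, norm):
    r1 = norm(x - J_B(x - lam * A(x)))    # inclusion residual for 0 in (A + B)x
    r2 = norm(T(x) - S(T(x)))             # fixed-point residual of S at Tx
    return 0.5 * (r1 ** 2 + r2 ** 2)

def stop(x, lam, J_B, A, T, S, norm, eps=1e-7):
    return tol_alg_4_2_3(x, lam, J_B, A, T, S, norm) < eps
```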

[Figure: SNR against the number of iterations (0-500) for Algorithm 3.2, the Tian & Jiang algorithm and the Chidume & Nnakwe algorithm, shown alongside the original and blurred Cameraman and MRI images.]

Figure 4.3. Comparison of numerical results for the image restoration problem (4.140). Top: Cameraman; Bottom: MRI.

Example 4.2.14. Let $H_1 = H_2 = L^2([0,1])$ be equipped with the inner product
\[
\langle x, y\rangle = \int_0^1 x(t)\,y(t)\,dt \quad \forall\, x, y \in L^2([0,1])
\]
and the norm
\[
\|x\| := \left(\int_0^1 |x(t)|^2\,dt\right)^{1/2} \quad \forall\, x \in L^2([0,1]).
\]

Now, define the operators $A, B : L^2([0,1]) \to L^2([0,1])$ by
\[
Ax(t) = \int_0^1 \left(x(t) - \frac{2ts\,e^{t+s}}{e\sqrt{e^2-1}}\,\cos x(s)\right)ds + \frac{2t\,e^{t}}{e\sqrt{e^2-1}}, \quad x \in L^2([0,1]),
\]
\[
Bx(t) = \max\{0, x(t)\}, \quad t \in [0,1].
\]
Then $A$ is Lipschitz continuous and monotone, and $B$ is maximal monotone on $L^2([0,1])$.
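A minimal sketch of how $A$ and $B$ can be discretized for the experiments (Python with the trapezoid rule on a uniform grid; the grid size is an illustrative choice, not taken from the text):

```python
import numpy as np

# Discretization of the operators of Example 4.2.14 on a uniform grid of [0,1].
m = 200
t = np.linspace(0.0, 1.0, m)
h = t[1] - t[0]
c = 1.0 / (np.e * np.sqrt(np.e ** 2 - 1.0))    # 1/(e*sqrt(e^2 - 1))

def trapz(f):                                   # trapezoid rule on the grid
    return h * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])

def A(x):
    # Ax(t) = x(t) - int_0^1 (2ts e^{t+s} c) cos x(s) ds + 2t e^t c; the kernel
    # is separable in t and s, so only one single integral is needed.
    integral = trapz(t * np.exp(t) * np.cos(x))     # int_0^1 s e^s cos x(s) ds
    return x - 2.0 * c * t * np.exp(t) * integral + 2.0 * c * t * np.exp(t)

def B(x):
    return np.maximum(0.0, x)                   # Bx(t) = max{0, x(t)}
```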

Let $T : L^2([0,1]) \to L^2([0,1])$ be defined by
\[
Tx(s) = \int_0^1 \kappa(s,t)\,x(t)\,dt \quad \forall\, x \in L^2([0,1]),
\]
where $\kappa$ is a continuous real-valued function defined on $[0,1]\times[0,1]$. Then $T$ is a bounded linear operator with adjoint
\[
T^*x(s) = \int_0^1 \kappa(t,s)\,x(t)\,dt \quad \forall\, x \in L^2([0,1]).
\]
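A sketch of $T$ and $T^*$ on the same grid; the concrete kernel $\kappa(s,t) = s + t^2$ is only an illustrative choice, since the text leaves $\kappa$ generic:

```python
import numpy as np

# The integral operator T and its adjoint T* on a uniform grid of [0,1],
# with illustrative kernel kappa(s, t) = s + t^2 (any continuous kernel works).
m = 200
t = np.linspace(0.0, 1.0, m)
h = t[1] - t[0]
K = t[:, None] + t[None, :] ** 2               # K[i, j] = kappa(t_i, t_j)

def T(x):                                       # (Tx)(s) = int kappa(s, u) x(u) du
    return K @ x * h

def T_star(x):                                  # (T*x)(s) = int kappa(u, s) x(u) du
    return K.T @ x * h

# Adjoint identity <Tx, y> = <x, T*y> (exact in this discretization):
x, y = np.sin(np.pi * t), np.exp(-t)
print(np.dot(T(x), y) * h, np.dot(x, T_star(y)) * h)
```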

Let $S : L^2([0,1]) \to L^2([0,1])$ be defined by
\[
Sx(t) = \int_0^1 t\,x(s)\,ds, \quad t \in [0,1].
\]
Then $S$ is nonexpansive. Indeed, for each $t \in [0,1]$ we have
\[
|Sx(t) - Sy(t)|^2 = \left|\int_0^1 t\,(x(s) - y(s))\,ds\right|^2 \le \left(\int_0^1 t\,|x(s) - y(s)|\,ds\right)^2 \le \int_0^1 |x(s) - y(s)|^2\,ds = \|x - y\|^2,
\]
so that $\|Sx - Sy\| \le \|x - y\|$.
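The estimate above is easy to confirm numerically; a small sketch (the test functions are illustrative choices):

```python
import numpy as np

# Numerical check that S, (Sx)(t) = t * int_0^1 x(s) ds, is nonexpansive.
m = 500
t = np.linspace(0.0, 1.0, m)
h = t[1] - t[0]

def S(x):
    return t * (h * (0.5 * x[0] + x[1:-1].sum() + 0.5 * x[-1]))

def l2norm(x):
    return np.sqrt(h * (0.5 * x[0] ** 2 + (x[1:-1] ** 2).sum() + 0.5 * x[-1] ** 2))

x, y = np.exp(t), np.sin(3.0 * t)
print(l2norm(S(x) - S(y)) <= l2norm(x - y))    # expected: True
```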

Now, let $C = \{x \in L^2([0,1]) : \langle y, x\rangle \le b\}$, where $y(t) = e^t + 1$ and $b = 10$. Then $C$ is a nonempty, closed and convex subset of $L^2([0,1])$, and the metric projection $P_C$ is given by
\[
P_C(x) = \begin{cases} x + \dfrac{b - \langle y, x\rangle}{\|y\|^2}\,y, & \text{if } \langle y, x\rangle > b,\\[4pt] x, & \text{if } \langle y, x\rangle \le b. \end{cases}
\]
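A hedged sketch of this projection in the discretized setting (the grid and the test point are illustrative):

```python
import numpy as np

# Projection onto the half-space C = {x : <y, x> <= b}, y(t) = e^t + 1, b = 10.
m = 500
t = np.linspace(0.0, 1.0, m)
h = t[1] - t[0]
y = np.exp(t) + 1.0
b = 10.0

def inner(u, v):                                # L2 inner product (Riemann sum)
    return h * np.dot(u, v)

def P_C(x):
    s = inner(y, x)
    if s <= b:
        return x                                # x already lies in C
    return x + (b - s) / inner(y, y) * y        # shift along y onto the boundary

x = 20.0 * np.ones(m)                           # <y, x> = 20e > 10, so x is outside C
print(inner(y, P_C(x)))                         # approximately b = 10
```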

For the algorithm of Tian and Jiang [229], we define $h : L^2([0,1]) \to L^2([0,1])$ by
\[
hx(t) = \int_0^1 \frac{t}{2}\,x(s)\,ds, \quad t \in [0,1].
\]
Then $h$ is a contraction.

We now consider the following cases for the relaxation stepsize $\psi_n$ and the starting points $x_0, x_1$:

Case 1: Take $x_1(t) = t^3 - 3$, $x_0(t) = t$ and $\psi_n = \frac{n}{2n+5}$.
Case 2: Take $x_1(t) = \sin(t)$, $x_0(t) = t + 1$ and $\psi_n = \frac{n}{n+10}$.
Case 3: Take $x_1(t) = \cot(t)$, $x_0(t) = \sin(t)$ and $\psi_n = \frac{n}{n+10}$.
Case 4: Take $x_1(t) = e^t$, $x_0(t) = t^2 + t + 1$ and $\psi_n = \frac{2n}{18n+1}$.

The numerical results are displayed in Table 4.2.14 and Figure 4.4.

Table 4.2.14. Numerical results for Example 4.2.14 with $\varepsilon = 10^{-7}$.

Case           Algorithm 4.2.9   Algorithm 4.2.11   Tian & Jiang   Chidume & Nnakwe
1     CPU      15.8066           7.2564             21.7214        35.9767
      Iter.    64                51                 114            158
2     CPU      17.8827           7.6196             61.6392        137.3976
      Iter.    39                32                 96             140
3     CPU      20.4864           9.6206             55.5797        144.6658
      Iter.    42                35                 102            145
4     CPU      22.8433           10.4064            46.0686        56.3887
      Iter.    82                63                 109            153

Example 4.2.15. Let $H_1 = H_2 = \ell_2$, where $\ell_2 = \ell_2(\mathbb{R}) := \{x = (x_1, x_2, \ldots, x_i, \ldots),\ x_i \in \mathbb{R} : \sum_{i=1}^{\infty} |x_i|^2 < \infty\}$, with inner product $\langle \cdot, \cdot\rangle : \ell_2 \times \ell_2 \to \mathbb{R}$ defined by $\langle x, y\rangle := \sum_{i=1}^{\infty} x_i y_i$ and norm $\|\cdot\| : \ell_2 \to \mathbb{R}$ by $\|x\| := \sqrt{\sum_{i=1}^{\infty} |x_i|^2}$, where $x = \{x_i\}_{i=1}^{\infty}$ and $y = \{y_i\}_{i=1}^{\infty}$. Define the mapping $A : \ell_2 \to \ell_2$ by
\[
Ax = \left(\frac{x_1 + |x_1|}{2}, \frac{x_2 + |x_2|}{2}, \ldots, \frac{x_i + |x_i|}{2}, \ldots\right) \quad \forall\, x \in \ell_2.
\]
Then $A$ is Lipschitz continuous and monotone (see [14]).

Let $B : \ell_2 \to \ell_2$ be defined by $B(x) = (2x_1, 2x_2, \ldots, 2x_i, \ldots)$ for all $x \in \ell_2$. Then $B$ is maximal monotone.
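Since $B = 2I$, the resolvent appearing in the tolerance $\mathrm{TOL}_n$ above is available in closed form; a short verification:
\[
J_\lambda^B(x) = (I + \lambda B)^{-1}x = (I + 2\lambda I)^{-1}x = \frac{x}{1 + 2\lambda}, \qquad \lambda > 0,
\]
so each resolvent evaluation reduces to a single scalar scaling.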

Let $T : \ell_2 \to \ell_2$ be defined by $Tx = \left(0, x_1, \frac{x_2}{2}, \frac{x_3}{3}, \ldots\right)$ for all $x \in \ell_2$. Then $T$ is a bounded linear operator on $\ell_2$ with adjoint $T^*y = \left(y_2, \frac{y_3}{2}, \frac{y_4}{3}, \ldots\right)$ for all $y \in \ell_2$.

Now, let $C = \{x \in \ell_2 : \|x - a\| \le r\}$, where $a = \left(1, \frac{1}{2}, \frac{1}{3}, \cdots\right)$ and $r = 3$. Then $C$ is a nonempty, closed and convex subset of $\ell_2$, and
\[
P_C(x) = \begin{cases} x, & \text{if } \|x - a\| \le r,\\[2pt] a + \dfrac{r\,(x - a)}{\|x - a\|}, & \text{otherwise.} \end{cases}
\]
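A hedged sketch of this ball projection on truncated sequences (the truncation length and the test point are illustrative):

```python
import numpy as np

# Projection onto C = {x in l2 : ||x - a|| <= r}, a = (1, 1/2, 1/3, ...), r = 3.
N = 1000
a = 1.0 / np.arange(1, N + 1)
r = 3.0

def P_C(x):
    d = np.linalg.norm(x - a)
    if d <= r:
        return x                               # x is already inside the ball
    return a + r * (x - a) / d                 # radial projection onto the sphere

x = a + 5.0 / np.sqrt(N) * np.ones(N)          # a point at distance 5 from a
print(np.linalg.norm(P_C(x) - a))              # approximately r = 3
```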

[Figure: behavior of $\mathrm{TOL}_n$ against the number of iterations for Algorithm 3.2, Algorithm 4.4, Algorithm 4.6, the Tian & Jiang algorithm and the Chidume & Nnakwe algorithm.]

Figure 4.4. The behavior of $\mathrm{TOL}_n$ with $\varepsilon = 10^{-7}$ for Example 4.2.14: Top Left: Case 1; Top Right: Case 2; Bottom Left: Case 3; Bottom Right: Case 4.

Furthermore, we define the mappings $S, h : \ell_2 \to \ell_2$ by $Sx = (0, x_1, x_2, \ldots)$ and $hx = \left(0, \frac{x_1}{2}, \frac{x_2}{2}, \cdots\right)$ for all $x \in \ell_2$, and consider the following cases for the relaxation stepsize $\psi_n$ and the starting points $x_0, x_1$:

Case 1: Take $x_1 = \left(1, \frac{1}{2}, \frac{1}{3}, \cdots\right)$, $x_0 = \left(\frac{1}{2}, \frac{1}{5}, \frac{1}{10}, \cdots\right)$ and $\psi_n = \frac{n}{2n+5}$.
Case 2: Take $x_1 = \left(\frac{1}{2}, \frac{1}{5}, \frac{1}{10}, \cdots\right)$, $x_0 = \left(1, \frac{1}{2}, \frac{1}{3}, \cdots\right)$ and $\psi_n = \frac{n}{n+10}$.
Case 3: Take $x_1 = \left(1, \frac{1}{4}, \frac{1}{9}, \cdots\right)$, $x_0 = \left(\frac{1}{2}, \frac{1}{4}, \frac{1}{8}, \cdots\right)$ and $\psi_n = \frac{n}{n+10}$.
Case 4: Take $x_1 = \left(\frac{1}{2}, \frac{1}{4}, \frac{1}{8}, \cdots\right)$, $x_0 = \left(1, \frac{1}{4}, \frac{1}{9}, \cdots\right)$ and $\psi_n = \frac{2n}{18n+1}$.

The numerical results are displayed in Table 4.2.15 and Figure 4.5.

Table 4.2.15. Numerical results for Example 4.2.15 with $\varepsilon = 10^{-8}$.

Case           Algorithm 4.2.9   Algorithm 4.2.11   Tian & Jiang   Chidume & Nnakwe
1     CPU      0.0566            0.0325             1.0297         1.0889
      Iter.    76                56                 449            509
2     CPU      0.0449            0.0208             1.0290         1.0679
      Iter.    67                51                 449            506
3     CPU      0.0394            0.0223             1.0268         1.0719
      Iter.    74                55                 449            510
4     CPU      0.0389            0.0199             1.0290         1.0676
      Iter.    83                60                 449            510

Example 4.2.16. In many practical problems, it is important to be able to find their minimum-norm solutions. Such problems can be formulated mathematically as (see, for instance, [206, Example 3.4]):
\[
\text{Find } x^* \in H \text{ such that } \|x^*\| = \min\{\|x\| : x \in H\}, \tag{4.141}
\]
where $H$ is a real Hilbert space. Note that (4.141) can be reformulated as the following particular variational inequality problem:
\[
\text{Find } x^* \in H \text{ such that } \langle x^*, x^* - x\rangle \le 0 \quad \forall\, x \in H. \tag{4.142}
\]
Now, let $H_1 = H_2 = L^2([a,b])$, $C = \{x \in L^2([a,b]) : \langle a, x\rangle = b\}$ and $Q = \{x \in L^2([a,b]) : \langle a, x\rangle \ge b\}$, for some $a \in L^2([a,b]) \setminus \{0\}$ and $b \in \mathbb{R}$.

[Figure: behavior of $\mathrm{TOL}_n$ against the number of iterations for Algorithm 3.2, Algorithm 4.4, Algorithm 4.6, the Tian & Jiang algorithm and the Chidume & Nnakwe algorithm.]

Figure 4.5. The behavior of $\mathrm{TOL}_n$ with $\varepsilon = 10^{-8}$ for Example 4.2.15: Top Left: Case 1; Top Right: Case 2; Bottom Left: Case 3; Bottom Right: Case 4.

Then $x^*$ minimizes $\|\cdot\| + \delta_C$ if and only if $0 \in \partial(\|\cdot\| + \delta_C)(x^*)$, and $Tx^*$ minimizes $\|\cdot\| + \delta_Q$ if and only if $0 \in \partial(\|\cdot\| + \delta_Q)(Tx^*)$, where $\delta_C$ (defined by $\delta_C(x) = 0$ if $x \in C$ and $+\infty$ otherwise) and $\delta_Q$ denote the indicator functions of $C$ and $Q$, respectively. Now, if we set in (4.84) $A = 0$, $B = \partial(\|\cdot\| + \delta_C)$ and $F(S) = \left(\partial(\|\cdot\| + \delta_Q)\right)^{-1}(0)$, then problem (4.84) becomes the following problem:
\[
\text{Find } x^* \in C \text{ such that } x^* = \operatorname*{arg\,min}\{\|x\| : x \in C\}, \tag{4.143}
\]
and such that
\[
Tx^* \in Q \text{ solves } Tx^* = \operatorname*{arg\,min}\{\|y\| : y \in Q\}. \tag{4.144}
\]
It is known that the solution to problem (4.143)-(4.144) is a minimum-norm solution (see [206, Example 3.4]).
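In particular, (4.143) says that $x^*$ is the projection of the origin onto $C$; a one-line justification via the variational characterization of the metric projection (which restates (4.142) on $C$):
\[
x^* = \operatorname*{arg\,min}_{x \in C}\|x\| \iff x^* = P_C(0) \iff \langle x^*, x^* - x\rangle \le 0 \quad \forall\, x \in C.
\]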

Now, for the numerical computation, we define the bounded linear operator $T$ as in Example 4.2.14. We also choose the starting points and relaxation stepsize as in Cases 1-4 of Example 4.2.14. The numerical results are reported in Table 4.2.16 and Figure 4.6.

Table 4.2.16. Numerical results for Example 4.2.16 with $\varepsilon = 10^{-8}$.

Case           Algorithm 4.2.9   Algorithm 4.2.11   Tian & Jiang   Chidume & Nnakwe
1     CPU      19.8207           15.4169            29.0280        50.9212
      Iter.    121               101                136            182
2     CPU      26.5295           25.3138            87.9523        187.6199
      Iter.    92                83                 119            165
3     CPU      41.3551           34.5019            82.0133        204.0348
      Iter.    95                93                 125            170
4     CPU      25.2010           22.7278            77.0343        96.5969
      Iter.    110               99                 132            178

Remark 4.2.17. We now summarize this section by highlighting some of the observations from the numerical results.

• During the numerical computations, we observed that, irrespective of the choices of the parameters $\gamma_1, \lambda_1 > 0$, $\mu, \eta \in (0,1)$ and $\alpha \ge 3$, the number of iterations does not change, and there is no significant difference in the CPU time. We therefore choose these parameters randomly.

• Throughout the experiments, we see clearly that, both in CPU time and in number of iterations, Algorithm 4.2.11 outperforms Algorithm 4.2.3 and Algorithm 4.2.9. This is expected, since Algorithm 4.2.11 does not require any evaluation of $A$, and its stepsize $\lambda_n$ does not involve any form of "inner loop" during implementation, both of which are required in Algorithm 4.2.3 and Algorithm 4.2.9 (a generic sketch contrasting the two stepsize strategies is given after this remark).

[Figure: behavior of $\mathrm{TOL}_n$ against the number of iterations for Algorithm 4.4, Algorithm 4.6, the Tian & Jiang algorithm and the Chidume & Nnakwe algorithm.]

Figure 4.6. The behavior of $\mathrm{TOL}_n$ with $\varepsilon = 10^{-8}$ for Example 4.2.16: Top Left: Case 1; Top Right: Case 2; Bottom Left: Case 3; Bottom Right: Case 4.

• It can also be seen from the tables and figures that Algorithm 4.2.3 and Algorithm 4.2.9 perform better than the algorithm of Tian and Jiang [229] and Algorithm (3.1) of Chidume and Nnakwe [64]. This is also expected, due to the presence of both relaxation and inertial steps in our methods. We can also see from the cases that, irrespective of the choice of the relaxation stepsize, our proposed methods converge more than twice as fast as the other methods.
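To make the "inner loop" comparison in Remark 4.2.17 concrete, the following generic sketch (not the exact update rules of Algorithms 4.2.3-4.2.11) contrasts an Armijo-type backtracking search, which re-evaluates $A$ inside a loop at every iteration, with a self-adaptive stepsize updated by a single formula from quantities the algorithm has already computed:

```python
import numpy as np

# Generic illustration only; the concrete rules of the algorithms above differ.

def armijo_step(A, P_C, x, gamma=1.0, rho=0.5, mu=0.9, max_trials=50):
    """Armijo-type rule: find lam = gamma * rho^k such that
    lam * ||A(y) - A(x)|| <= mu * ||y - x||, with y = P_C(x - lam * A(x)).
    Each trial evaluates A at a new point -- this is the inner loop."""
    Ax = A(x)
    lam = gamma
    for _ in range(max_trials):
        y = P_C(x - lam * Ax)
        if lam * np.linalg.norm(A(y) - Ax) <= mu * np.linalg.norm(y - x):
            break
        lam *= rho                              # shrink the stepsize and retry
    return lam, y

def adaptive_step(lam_prev, mu, x, y, Ax, Ay):
    """Self-adaptive rule: one update per iteration, reusing the already
    computed values Ax = A(x) and Ay = A(y); no extra evaluations of A."""
    d = np.linalg.norm(Ax - Ay)
    return min(lam_prev, mu * np.linalg.norm(x - y) / d) if d > 0 else lam_prev
```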

Chapter 5

Inertial Type Algorithms for Solving Variational Inequality Problems and Fixed Point Problems