
A modified adaptive particle swarm optimization algorithm

Sun Rui
College of Science, Xi'an University of Science and Technology, Xi'an, China
e-mail: [email protected]

Abstract—Particle swarm optimization (PSO) is a heuristic stochastic evolutionary algorithm. However, standard PSO suffers from an imbalance between exploitation and exploration and from slow convergence. An improved technique with adaptive computation of the inertia weight is introduced into the standard PSO. After every iteration, a competition with a random swarm is carried out to help the swarm jump out of local optima. Four benchmark functions are selected to validate the constructed algorithm. The numerical experiments show that the proposed algorithm is effective: its convergence speed and accuracy are better than those of the comparison algorithms.

Keywords—particle swarm optimization; mutation operator; global optimization; adaptive

I. INTRODUCTION

Global optimization algorithms form an important branch of mathematics. Evolutionary algorithms are typical representatives of optimization algorithms. Due to their strong global search capability, evolutionary algorithms have been extensively applied in optimization. Users generally demand that a new optimization algorithm fulfill four requirements:

(1) Ability to handle non-differentiable, nonlinear and multimodal cost functions.

(2) Parallelizability to cope with computation intensive cost functions.

(3) Ease of use, i.e. few control variables to steer the minimization. These variables should also be robust and easy to choose.

(4) Good convergence properties, i.e. consistent convergence to the global minimum in consecutive independent trials.

Particle swarm optimization (PSO) was designed to fulfill all of the above requirements [1]. PSO has received broad attention and research in recent years, and has been applied in many fields, such as function optimization, model optimization, structural optimization, and engineering optimization. Like other evolutionary algorithms, standard PSO has some shortcomings, such as poor computational efficiency and a tendency toward premature convergence [2-5]. A new improvement is proposed in this paper: an adaptive mutation operator replaces the original mutation operator, and a competition operator is added to avoid stagnation in local optima.

II. ORIGINAL PSO

PSO is a swarm intelligence search algorithm proposed by Kennedy and Eberhart [1]. The idea of PSO is to simulate birds' social activities: it attempts to mimic the natural process of group communication, in which individual knowledge is shared when swarms flock, migrate, or hunt. In PSO, this behavior of birds is imitated by particles with certain positions and velocities in a search space; the population is called a swarm, and each member of the swarm is called a particle. The population is initialized randomly, and the best position of every particle is remembered. Members of the swarm communicate good positions to each other and dynamically adjust their own positions and velocities based on these good positions. The velocity adjustment is based on the historical behavior of the particles themselves as well as of their neighbors. In this way, the particles tend to fly towards better and better search areas over the course of the search.

The search procedure based on this concept can be described by the following equations:

$$V_i(t+1) = \omega\, V_i(t) + c_1\, rand()\,(Pbest_i(t) - X_i(t)) + c_2\, rand()\,(Gbest_i(t) - X_i(t)), \qquad (1)$$

$$X_i(t+1) = X_i(t) + V_i(t+1), \qquad (2)$$

where

$$X_i(t) = (x_{i,1}(t), x_{i,2}(t), \ldots, x_{i,K}(t)),$$

$$Pbest_i(t) = (pbest_{i,1}(t), pbest_{i,2}(t), \ldots, pbest_{i,K}(t)),$$

$$Gbest_i(t) = (gbest_{i,1}(t), gbest_{i,2}(t), \ldots, gbest_{i,K}(t)),$$

$$\omega(t+1) = \omega_{max} - (\omega_{max} - \omega_{min}) \times \frac{t}{t_{max}}.$$

In these equations, t is the iteration number, rand() is a random number between 0 and 1, Pbest_i is the best position found so far by the ith particle, and Gbest_i is the best position found by all particles. c_1 and c_2 are the weighting factors, ω_max and ω_min are the upper and lower bounds of the inertia weight, respectively, and K is the dimension of the function.
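For concreteness, one iteration of equations (1) and (2), together with the linearly decreasing inertia weight, can be sketched in Python. This is a minimal illustration under our own naming (`pso_step`, NumPy arrays of shape (NP, K)), not the author's implementation; the lower weight bound `w_min = 0.4` is an assumed value, since the paper only lists the initial weight.

```python
import numpy as np

def pso_step(X, V, pbest, gbest, t, t_max,
             c1=2.0, c2=2.0, w_max=0.8, w_min=0.4, v_max=100.0):
    """One PSO iteration: equations (1) and (2) for a swarm of shape (NP, K)."""
    NP, K = X.shape
    # Linearly decreasing inertia weight omega(t).
    w = w_max - (w_max - w_min) * t / t_max
    r1, r2 = np.random.rand(NP, K), np.random.rand(NP, K)
    # Equation (1): velocity update toward personal and global bests.
    V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
    V = np.clip(V, -v_max, v_max)  # keep velocities bounded
    # Equation (2): position update.
    return X + V, V
```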

III. ADAPTIVE PARTICLE SWARM OPTIMIZATION

A. Adaptive Particle Swarm Optimization (APSO)

In PSO, the weighting factors and inertia weight are constant and are usually set by experience. If these parameters are too large, the search efficiency of PSO is low; if they are too small, the diversity of the population is driven to low levels. The fundamental reason is that the population diversity decreases rapidly as the iteration number increases. Concerning this issue, we propose two improvements:

(1) An adaptive inertia weight is constructed as follows:


$$\omega'(t+1) = \begin{cases} \omega(t+1)\left(1 + \dfrac{f(x_i(t))}{3 f_{max}}\right), & \text{if } f(x_i(t)) > f(Pbest_i(t)), \\[1ex] \omega(t+1)\left(1 + \dfrac{f(x_i(t))}{3 f_{min}}\right), & \text{if } f(x_i(t)) < f(Pbest_i(t)), \end{cases} \qquad (3)$$

where $f(Pbest_i(t))$ is the function value at the best position of the ith agent, $f(x_i(t))$ is the function value of the ith agent, $f_{min}$ is the minimum function value, and $f_{max}$ is the maximum function value.

We explain this in detail in the following. The new inertia weight adapts to the current function value. At the beginning of a run, the inertia weight is larger, which increases the rate of convergence. As the algorithm runs, the best particle is overtaken continuously until the optimum is found. During the run, the mutation rate decreases so as to reduce the disturbance to the particles.
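As a rough sketch of the idea behind equation (3), the weight can be tied to where a particle's current fitness falls in the swarm's range, so that worse particles explore with a larger weight and better ones exploit with a smaller one. The normalization below is our own illustrative stand-in rather than a literal transcription of equation (3); all names and the `w_min`/`w_max` values are assumptions.

```python
import numpy as np

def adaptive_weight(f_x, f_min, f_max, w_min=0.4, w_max=0.8):
    """Illustrative fitness-adaptive inertia weight in the spirit of Eq. (3):
    particles with worse fitness get a larger weight (exploration),
    particles with better fitness get a smaller one (exploitation)."""
    spread = max(f_max - f_min, 1e-12)   # guard against a zero fitness range
    ratio = (f_x - f_min) / spread       # 0 = best in swarm, 1 = worst
    return float(np.clip(w_min + (w_max - w_min) * ratio, w_min, w_max))
```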

(2) A new competition operator is introduced, which improves the rate of convergence. The competition operator lets the evolutionary population compete with a randomly generated population. This mechanism helps the objective function value converge to the global optimum and decreases the probability of becoming trapped in a local optimum.
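A minimal sketch of this competition operator follows, assuming search bounds `lower` and `upper` and a fitness function `f` that takes one particle; the index-wise pairing of old and random particles matches Step 4 of the flow below.

```python
import numpy as np

def compete(X, f, lower, upper):
    """Competition: each particle is challenged by a fresh random particle
    and is replaced whenever the challenger has a better (lower) fitness."""
    NP, K = X.shape
    challengers = np.random.uniform(lower, upper, size=(NP, K))
    f_old = np.apply_along_axis(f, 1, X)
    f_new = np.apply_along_axis(f, 1, challengers)
    win = f_new < f_old
    X[win] = challengers[win]
    return X
```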

B. Algorithm flow

Step 1. Initialization: a population $[x_1(0), x_2(0), \ldots, x_{NP}(0)]$ is generated randomly. Set the weighting factors $c_1$ and $c_2$, the maximum iteration number $t_{max}$, and $t = 0$.

Step 2. The inertia weight is computed by equation (3).

Step 3. Equations (1) and (2) are run on the tth generation population $[x_1(t), x_2(t), \ldots, x_{NP}(t)]$.

Step 4. Competition: a random population $[x'_1(t), x'_2(t), \ldots, x'_{NP}(t)]$ is generated. If $f(x'_i(t)) < f(x_i(t))$, particle $x_i(t)$ is replaced by $x'_i(t)$; if $f(x'_i(t)) \ge f(x_i(t))$, particle $x_i(t)$ is retained for the next generation.

Step 5. t:=t+1.

Step 6. Repeat Steps 2 through 5 until the global optimum is found or $t > t_{max}$.
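Putting Steps 1 through 6 together, the whole APSO flow can be sketched as below. The parameter values follow Table II; the per-particle weight rule, the lower bound `w_min = 0.4`, and all names are our own assumptions, so this is a sketch of the flow rather than the author's code.

```python
import numpy as np

def apso(f, K=30, NP=60, t_max=1000, lo=-100.0, hi=100.0,
         c1=2.0, c2=2.0, w_min=0.4, w_max=0.8, goal=1e-5):
    """Sketch of the APSO flow (Steps 1-6); f maps a K-vector to a scalar."""
    X = np.random.uniform(lo, hi, (NP, K))            # Step 1: random swarm
    V = np.zeros((NP, K))
    f_p = np.apply_along_axis(f, 1, X)
    pbest = X.copy()
    gbest = pbest[np.argmin(f_p)].copy()
    for t in range(t_max):
        f_x = np.apply_along_axis(f, 1, X)
        # Step 2: fitness-adaptive inertia weight (illustrative rule).
        spread = max(f_x.max() - f_x.min(), 1e-12)
        w = w_min + (w_max - w_min) * (f_x - f_x.min()) / spread
        # Step 3: equations (1) and (2).
        r1, r2 = np.random.rand(NP, K), np.random.rand(NP, K)
        V = w[:, None] * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
        X = np.clip(X + V, lo, hi)
        # Step 4: competition with a fresh random swarm.
        R = np.random.uniform(lo, hi, (NP, K))
        f_x = np.apply_along_axis(f, 1, X)
        f_r = np.apply_along_axis(f, 1, R)
        win = f_r < f_x
        X[win], f_x[win] = R[win], f_r[win]
        # Steps 5-6: bookkeeping, then stop if the goal accuracy is met.
        imp = f_x < f_p
        pbest[imp], f_p[imp] = X[imp], f_x[imp]
        gbest = pbest[np.argmin(f_p)].copy()
        if f_p.min() <= goal:
            break
    return gbest, float(f_p.min())
```

For example, `gbest, best = apso(lambda x: float(np.sum(x**2)))` runs the sketch on the Sphere function with the Table II settings.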

IV. NUMERICAL EXPERIMENT

To validate the algorithm, comparison experiments on benchmark functions are carried out using two typical algorithms and the proposed algorithm, APSO.

A. Benchmark Function

We choose the Rosenbrock, Sphere, Rastrigin, Griewank, and Corana functions as benchmarks.

(1) Rosenbrock function

It is a continuous, non-convex unimodal function. Its global optimum is $f(1, \ldots, 1) = 0$. However, it is hard for ordinary optimization algorithms to reach the global optimum because of its narrow, curved valley.

$$f(x) = \sum_{i=1}^{n-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right].$$

(2) Sphere function

It is a continuous unimodal function, intended to test the rate of convergence. Its global optimum is $f(0, \ldots, 0) = 0$.

$$f(x) = \sum_{i=1}^{n} x_i^2.$$

(3) Rastrigin function

It is a continuous, differentiable multimodal function with many local optima. There are about $10^n$ local minima in the region $\{x_i \mid x_i \in (-5.12, 5.12),\ i = 1, 2, \ldots, n\}$. Its global optimum is $f(0, \ldots, 0) = 0$.

$$f(x) = \sum_{i=1}^{n} \left[ x_i^2 - 10\cos(2\pi x_i) + 10 \right].$$

(4) Griewank function

It is a multimodal function with many local minima around the global optimum. Its global optimum is $f(0, \ldots, 0) = 0$.

$$f(x) = \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1.$$

(5) Corana function

It is a complex multimodal function. Its global optimum is $f(0, \ldots, 0) = 0$.

$$f(x) = \sum_{j=1}^{4} \begin{cases} 0.15\, d_j \left( z_j - 0.05\, \mathrm{sgn}(z_j) \right)^2, & \text{if } |x_j - z_j| < 0.05, \\ d_j x_j^2, & \text{otherwise}, \end{cases}$$

with

$$z_j = 0.2 \left\lfloor \left| \frac{x_j}{0.2} \right| + 0.49999 \right\rfloor \mathrm{sgn}(x_j)$$

and $d_j = \{1, 1000, 10, 100\}$.
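For reference, the five benchmarks transcribe directly into Python from the formulas above (a straightforward NumPy rendering; note that the Corana function is defined over four coordinates):

```python
import numpy as np

def rosenbrock(x):
    return float(np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (x[:-1] - 1.0)**2))

def sphere(x):
    return float(np.sum(x**2))

def rastrigin(x):
    return float(np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def griewank(x):
    i = np.arange(1, x.size + 1)
    return float(np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0)

def corana(x):
    d = np.array([1.0, 1000.0, 10.0, 100.0])   # d_j from the definition above
    z = 0.2 * np.floor(np.abs(x / 0.2) + 0.49999) * np.sign(x)
    near = np.abs(x - z) < 0.05
    return float(np.sum(np.where(near,
                                 0.15 * d * (z - 0.05 * np.sign(z))**2,
                                 d * x**2)))
```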

The information of the benchmark functions is shown in Table I.

TABLE I. LIST OF BENCHMARK FUNCTIONS

Benchmark    Dimension   Region             Goal
Rosenbrock   30          [-2.048, 2.048]    50
Sphere       30          [-100, 100]        10^-5
Rastrigin    30          [-5.12, 5.12]      100
Griewank     30          [-600, 600]        10^-5
Corana       30          [-1000, 1000]      10^-6

The PSO algorithm and the SAPSO algorithm are selected for comparison with the proposed APSO algorithm. The parameters of the three algorithms are given in Table II.

TABLE II. THE PARAMETERS OF THE THREE ALGORITHMS

Parameter                        PSO     SAPSO   APSO
Population size (NP)             60      60      60
Maximum iterations (t_max)       1000    1000    1000
Initial inertia weight (ω)       0.80    0.80    0.80
Maximum velocity (v)             100     100     100
Cognitive learning factor (c1)   2       2       2
Social learning factor (c2)      2       2       2


B. Results

For each benchmark function, every algorithm is run independently 50 times. The best solution, mean value, and optimization rate are listed in Table III.

As can be seen in Table III, the constructed APSO algorithm obtains better optimal values than the comparison algorithms on the Rosenbrock, Sphere, Rastrigin, and Griewank benchmarks, and the obtained optimal solutions are very close to the global optimal values. In terms of optimization rate, the constructed APSO algorithm achieves a higher rate than the comparison algorithms on all benchmark functions. For the Sphere function, the optimization rate of the constructed APSO algorithm is 100%, meaning every run reached the target accuracy. Thus the constructed APSO offers more accurate solutions than the comparison algorithms, PSO and SAPSO [5], on the four given benchmark functions.

TABLE III. THE EXPERIMENT RESULTS

Function (Opt.)   Alg.    Optimal value   Mean value   Optimization rate
Rosenbrock (0)    PSO     1.46e-03        5.05e-03     5%
                  SAPSO   4.52e-04        2.56e-03     78%
                  APSO    4.68e-05        2.91e-03     81%
Sphere (0)        PSO     3.40e-14        4.25e-13     86%
                  SAPSO   5.05e-44        3.47e-41     93%
                  APSO    7.29e-48        1.02e-43     100%
Rastrigin (0)     PSO     3.58e-08        1.06e-06     16%
                  SAPSO   8.35e-30        4.68e-28     57%
                  APSO    6.44e-42        3.65e-30     64%
Griewank (0)      PSO     7.52e-04        2.54e-03     88%
                  SAPSO   3.60e-08        6.04e-07     86%
                  APSO    8.87e-10        8.00e-08     99%
Corana (0)        PSO     -               -            -
                  SAPSO   -               -            -
                  APSO    -               -            -

V. CONCLUSION

In this article, a new improved PSO algorithm, adaptive particle swarm optimization, is proposed. The proposed algorithm can balance exploration and exploitation. A new mutation operator is introduced into PSO in this paper. Because the mutation operator is the most important mechanism for guiding the search toward better basins and preventing premature convergence, it helps to reach the global optimum effectively, maintain the diversity of the population, and improve the local search ability of the algorithm. The random population is introduced to decrease the probability of becoming trapped in a local minimum and to enhance the global search ability. Four benchmark functions (Rosenbrock, Sphere, Rastrigin, Griewank) are chosen to test the optimization performance of the new method, APSO, on complex optimization problems. The numerical results show that the proposed APSO can effectively overcome the stagnation phenomenon and enhance the ability of global search.

ACKNOWLEDGMENT

This work was supported by the Scientific Research Program funded by the Shaanxi Provincial Education Department (No. 2013JK0583) and by the Natural Science Basic Research Plan in Shaanxi Province of China (No. 2014JQ1034).

REFERENCES

[1] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," Proceedings of the 1995 IEEE International Conference on Neural Networks, Perth, Australia, 1995, pp. 1942-1948.

[2] R. Mendes, J. Kennedy, and J. Neves, "The fully informed particle swarm: simpler, maybe better," IEEE Transactions on Evolutionary Computation, vol. 8, pp. 204-210, 2004.

[3] W. N. Chen, J. Zhang, Y. Lin, et al., "Particle swarm optimization with an aging leader and challengers," IEEE Transactions on Evolutionary Computation, vol. 17, pp. 241-258, 2013.

[4] X. C. Zhao, X. S. Gao, and Z. C. Hu, "Evolutionary programming based on non-uniform mutation," Applied Mathematics and Computation, vol. 192, pp. 1-11, 2007.

[5] Z. H. Zhan, J. Zhang, Y. Li, et al., "Orthogonal learning particle swarm optimization," IEEE Transactions on Evolutionary Computation, vol. 15, pp. 832-847, 2011.

