A New Optimization Based on Parallelizing Hybrid PSOGSA Algorithm

Jie Yu^1, Hong-Jiang Wang^2, Jeng-Shyang Pan^3, Kuo-Chi Chang^2,4,5, Truong-Giang Ngo^6(B), and Trong-The Nguyen^2,7

1 College of Mechanical and Automotive Engineering, Fujian University of Technology, Fuzhou 350118, China
2 Fujian Provincial Key Laboratory of Big Data Mining and Applications, Fujian University of Technology, Fuzhou, China
[email protected], [email protected]
3 College of Computer Science and Engineering, Shandong University of Science and Technology, Shandong, China
4 College of Mechanical and Electrical Engineering, National Taipei University of Technology, Taipei, Taiwan
5 Department of Business Administration, North Borneo University College, Sabah, Malaysia
6 Thuyloi University, 175 Tay Son, Dong Da, Hanoi, Vietnam
[email protected]
7 Haiphong University of Management and Technology, Haiphong, Vietnam
Abstract. This study proposes a new metaheuristic algorithm for global optimization based on parallel hybridization of particle swarm optimization (PSO) and the gravitational search algorithm (GSA). The swarm is divided into subgroups, and communication between the subgroups is enhanced by adding mutation strategies. Twenty-three benchmark functions are used to test the performance and verify the feasibility of the proposed algorithm. Compared with PSO, GSA, PSOGSA, and parallel PSO (PPSO), the findings reveal that the proposed PPSOGSA achieves higher precision than the competitor algorithms.
Keywords: Parallel PSOGSA algorithm · Mutation strategy · Particle swarm optimization · Gravitational search algorithm
1 Introduction
Nowadays, metaheuristic algorithms are used in many industries, such as power, transportation, and aviation [1]. Nature-inspired metaheuristics fall into three categories: those inspired by natural physical phenomena, those inspired by biological evolution, and those inspired by the living habits of populations.
Each category has many representative algorithms; the gravitational search algorithm (GSA) [2], the simulated annealing algorithm (SA) [3], and black hole (BH)
[4] are representative algorithms inspired by natural physical phenomena. Differential evolution (DE) [5] and genetic algorithms (GA) [6] are metaheuristics inspired by biological evolution, and many more are inspired by the living habits of populations; PSO [7], the grey wolf optimizer (GWO) [8], the firefly algorithm (FA) [9], and the whale optimization algorithm (WOA) [10] are powerful representatives. Most metaheuristic algorithms suffer from a slow convergence rate and are prone to becoming trapped in locally optimal solutions. Many researchers have proposed hybrid algorithms to mitigate these weaknesses, mainly by integrating the advantages of different algorithms to enhance their exploitation ability, whether local or global; the resulting performance is better than that of the single optimization algorithms before mixing [11-13]. For example, in 2010, Seyedali Mirjalili and others proposed a hybrid of PSO and GSA (PSOGSA) [14], which combines the advantages of both and outperforms the original PSO and GSA algorithms. Other scholars have proposed clustering (multi-population) schemes to improve performance; the parallel particle swarm optimization (PPSO) algorithm [15-17], a leapfrog algorithm based on clustering, is a good representative. Since both hybridization and clustering help improve an algorithm's performance, this study puts forward a new optimization algorithm based on a parallel hybrid PSOGSA, augmented with a mutation strategy on the optimal solution, to enhance performance. Twenty-three benchmark functions are selected for the performance test of the improved algorithm, and it is compared with four related algorithms.
2 Three Standard Algorithms
The mathematical model of PSO is as follows:

$$ v_i^d(t+1) = \omega v_i^d(t) + c_1 \cdot \mathrm{rand} \cdot \big(pbest_i^d - x_i^d(t)\big) + c_2 \cdot \mathrm{rand} \cdot \big(gbest - x_i^d(t)\big) \tag{1} $$

$$ x_i^d(t+1) = x_i^d(t) + v_i^d(t+1) \tag{2} $$

In these formulas, $v_i^d(t)$ denotes the velocity of the $i$th particle in dimension $d$ ($d \in \{1, 2, \ldots, D\}$, where $D$ is the dimension of the search space), $x_i^d(t)$ is the current position of the $i$th particle, $\omega$ is the inertia weight, $c_1$ and $c_2$ are two constants, rand is a random number in the range [0, 1], $pbest_i^d$ is the best position found so far by the $i$th particle, and $gbest$ is the optimal solution obtained so far.
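To make Eqs. (1) and (2) concrete, the following is a minimal NumPy sketch of one PSO iteration. It is not the authors' code; the function name, array shapes, and default coefficient values are illustrative assumptions.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, omega=0.7, c1=1.5, c2=1.5):
    """One PSO iteration per Eqs. (1) and (2).

    x, v, pbest : (N, D) arrays of positions, velocities, and
                  per-particle best positions.
    gbest       : (D,) array, the best position found so far.
    """
    n, d = x.shape
    r1 = np.random.rand(n, d)  # rand in [0, 1], drawn per particle and dimension
    r2 = np.random.rand(n, d)
    v = omega * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Eq. (1)
    x = x + v                                                      # Eq. (2)
    return x, v
```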
The mathematical model of the GSA algorithm can be expressed by Formulas (3)-(5):

$$ vel_i^d(t+1) = \mathrm{rand} \cdot vel_i^d(t) + a_i^d(t) \tag{3} $$

$$ S_i^d(t+1) = S_i^d(t) + vel_i^d(t+1) \tag{4} $$

$$ a_i^d(t) = F_i^d(t) / M_i(t) \tag{5} $$
where $S_i^d(t)$ represents the position of the $i$th particle, $vel_i^d(t)$ is the velocity of the $i$th particle, $a_i^d(t)$ is its acceleration, $M_i(t)$ is its inertia mass, and $F_i^d(t)$ is the resultant force acting on it. The inertia mass is calculated as shown in Eqs. (6) and (7).
$$ m_i(t) = \frac{fit_i(t) - worst(t)}{best(t) - worst(t)} \tag{6} $$

$$ M_i(t) = m_i(t) \Big/ \sum_{j=1}^{N} m_j(t) \tag{7} $$
where $fit_i(t)$ represents the fitness value of the $i$th particle, $best(t)$ is the global best value obtained so far, $worst(t)$ is the worst fitness value obtained so far, and $N$ is the total number of particles. With the inertia mass, the interaction force between particles can be expressed as:
$$ F_{ij}^d(t) = \frac{G(t) \cdot M_i(t) \cdot M_j(t)}{R_{ij}(t) + \varepsilon} \cdot \big(X_j^d(t) - X_i^d(t)\big) \tag{8} $$
where $F_{ij}^d(t)$ represents the gravitational force between particles $i$ and $j$ in dimension $d$, $R_{ij}(t)$ represents the Euclidean distance between particles $i$ and $j$, $\varepsilon$ is a small constant, and $G(t)$ is the gravitational constant, whose expression is shown in Eq. (9):
$$ G(t) = G_0 \cdot \exp(-\alpha \cdot t / \mathrm{maxt}) \tag{9} $$
where $\alpha$ and $G_0$ are constants and maxt is the set maximum number of iterations. With the support of the above formulas, the resultant force is expressed as:
$$ F_i^d(t) = \sum_{j=1, j \neq i}^{N} \mathrm{rand}_j \cdot F_{ij}^d(t) \tag{10} $$
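The mass, force, and acceleration computations of Eqs. (5)-(10) can be sketched in a few lines of NumPy. This is a hedged illustration for a minimization problem, not the authors' code; the O(N^2) double loop is kept for clarity rather than speed.

```python
import numpy as np

def gsa_acceleration(X, fit, t, max_t, G0=100.0, alpha=23.0, eps=1e-12):
    """Compute GSA accelerations per Eqs. (5)-(10) (minimization assumed).

    X : (N, D) particle positions; fit : (N,) fitness values.
    """
    best, worst = fit.min(), fit.max()
    denom = best - worst
    m = (fit - worst) / denom if denom != 0 else np.ones_like(fit)  # Eq. (6)
    M = m / (m.sum() + eps)                                         # Eq. (7)
    G = G0 * np.exp(-alpha * t / max_t)                             # Eq. (9)

    N, D = X.shape
    F = np.zeros((N, D))
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            R = np.linalg.norm(X[i] - X[j])  # Euclidean distance R_ij
            # Eq. (8), weighted by rand_j and summed as in Eq. (10)
            F[i] += np.random.rand() * G * M[i] * M[j] / (R + eps) * (X[j] - X[i])
    return F / (M[:, None] + eps)            # Eq. (5): a = F / M
```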
The PSOGSA algorithm can be expressed by Formulas (11) and (12):
$$ V_i^d(t+1) = \omega V_i^d(t) + c_1 \cdot \mathrm{rand}_1 \cdot a_i^d(t) + c_2 \cdot \mathrm{rand}_2 \cdot \big(gbest^d - X_i^d(t)\big) \tag{11} $$

$$ X_i^d(t+1) = X_i^d(t) + V_i^d(t+1) \tag{12} $$
where $c_1$ and $c_2$ are constants, $\mathrm{rand}_1$ and $\mathrm{rand}_2$ are random numbers in [0, 1], $\omega$ is the inertia weight coefficient, $V_i^d(t)$ is the velocity of the $i$th particle, and $a_i^d(t)$ is its acceleration, calculated in the same way as in GSA. $X_i^d(t)$ is the position of particle $i$ in dimension $d$ at iteration $t$, and $gbest^d$ represents the current optimal solution.
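A minimal sketch of the hybrid update in Eqs. (11) and (12), reusing the GSA acceleration from the previous snippet. The names and defaults are assumptions (Section 4 reports $c_1 = 0.5$, $c_2 = 1.5$, and a random $\omega$ for PSOGSA).

```python
import numpy as np

def psogsa_step(X, V, a, gbest, omega, c1=0.5, c2=1.5):
    """One PSOGSA iteration per Eqs. (11) and (12).

    X, V, a : (N, D) positions, velocities, and GSA accelerations.
    gbest   : (D,) current global best position.
    """
    n, d = X.shape
    rand1 = np.random.rand(n, d)
    rand2 = np.random.rand(n, d)
    V = omega * V + c1 * rand1 * a + c2 * rand2 * (gbest - X)  # Eq. (11)
    X = X + V                                                  # Eq. (12)
    return X, V
```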
3 Parallel PSOGSA
This paper proposes a parallel PSOGSA (PPSOGSA). The idea is to divide the N particles into G subgroups, let the G subgroups run PSOGSA independently to find the optimal value, and let the subgroups communicate with each other at specific iteration counts, so that the advantages of cooperation between subsets are exploited and the subgroups continuously update toward high-quality solutions. In this study, two communication strategies are used for subgroup communication, both triggered at specific iteration counts. Strategy (1): to help the algorithm quickly jump out of local-optimum traps, a mutation idea is proposed that perturbs individuals relative to the current global optimal solution. In each subgroup, the same number of individuals is randomly selected for mutation. Whenever the trigger condition R is met, the selected individuals are updated according to Formula (13).
$$ X_{ij} = X_k^{gbest} \times \gamma \tag{13} $$
where $X_{ij}$ is the current solution of the selected individual ($i$ is a randomly selected individual, $i \in \{1, 2, \ldots, \mathrm{sizepop}/G\}$, in the subgroup; sizepop is the maximum number of particles; $j \in \{1, 2, \ldots, \mathrm{dim}\}$, where dim is the dimension of the search space), $X_k^{gbest}$ is the $k$th dimension value of the currently obtained global optimal solution ($k \in \{1, 2, \ldots, \mathrm{dim}\}$), and $\gamma$ is a number in the range [0.1, 1].
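As an illustration of strategy (1), here is a hedged sketch of the mutation in Eq. (13). The text does not fully specify how many individuals are mutated or whether $\gamma$ is drawn once per individual or per dimension, so the choices below (the n_mut parameter, one $\gamma$ per component) are assumptions.

```python
import numpy as np

def mutate_subgroup(sub_X, gbest, n_mut=3):
    """Strategy (1): reset randomly chosen individuals to scaled copies
    of the global best, per Eq. (13). Modifies sub_X in place.

    sub_X : (sizepop // G, dim) positions of one subgroup.
    gbest : (dim,) currently obtained global optimal solution.
    n_mut : number of individuals to mutate (an assumed value).
    """
    pop, dim = sub_X.shape
    picked = np.random.choice(pop, size=min(n_mut, pop), replace=False)
    for i in picked:
        gamma = np.random.uniform(0.1, 1.0, size=dim)  # gamma in [0.1, 1]
        sub_X[i] = gbest * gamma                       # Eq. (13), per dimension
```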
Strategy (2): choose two subgroups $G_a$ and $G_b$ arbitrarily from the G subgroups; whenever the trigger condition R1 is met, $G_a$ and $G_b$ communicate their optimal value and optimal solution to the remaining subgroups $G_k$ ($k \notin \{a, b\}$). The communication method is shown in Fig. 1.
Fig. 1. Schematic diagram of communication strategy 2
In Fig. 1, $k$ is the number of the current iteration, $R$ is the trigger condition, max is the maximum number of cycles, $G_i$ ($i \in \{1, 2, \ldots, n\}$) represents the subgroups, $n$ is the number of subgroups, and $G_1$ and $G_3$ are the two selected subgroups. The global optimal solution of each subgroup is perturbed within a small range of variation, which further expands the search scope of the algorithm and helps it avoid falling into local optima. The method is shown in Formulas (14) and (15):
$$ W_d(t) = \left( \sum_{i=1}^{\mathrm{Popsize}} V_i^d(t) \right) \Big/ \mathrm{Popsize} \tag{14} $$

$$ gbest_d^{*}(t) = gbest_d^{*}(t) + W_d(t) \cdot N(0, 1) \tag{15} $$

where $V_i^d(t)$ and Popsize are as defined before, $N(0, 1)$ is the standard normal distribution, and $gbest_d^{*}(t)$ is the optimal solution currently obtained by the subgroup.
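Eqs. (14) and (15) amount to perturbing a subgroup's best solution by Gaussian noise scaled by the mean particle velocity in each dimension. A minimal sketch, with assumed names:

```python
import numpy as np

def perturb_gbest(V, gbest_sub):
    """Perturb a subgroup's best solution per Eqs. (14) and (15).

    V         : (Popsize, D) velocities of the subgroup's particles.
    gbest_sub : (D,) the subgroup's current best solution.
    """
    W = V.mean(axis=0)                   # Eq. (14): mean velocity per dimension
    noise = np.random.randn(V.shape[1])  # N(0, 1) samples
    return gbest_sub + W * noise         # Eq. (15)
```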
Taking G = 4 groups as an example, Fig. 2 shows the process of the PPSOGSA algorithm using the two communication strategies, where $k = R$ is the trigger condition of the first strategy, $k = R_1$ is the trigger condition of the second strategy, and $k = \mathrm{max}$ is the ending condition of the algorithm cycle. That is to say, every R iterations the algorithm uses strategy (1) for communication, and every R1 iterations it uses strategy (2).
Fig. 2. The communication method of PPSOGSA, taking a division into four groups as an example
The PPSOGSA algorithm pseudo-code is shown in Table 1.
4 Experimental Data
The performance of the PPSOGSA algorithm is tested on 23 benchmark functions. In this experiment, the objective is the minimum value of each function over its corresponding range. The parameters of the optimization algorithms are set as follows. For PSO: $C_1 = C_2 = 1.5$, $\omega$ is linearly reduced from 0.9 to 0.2, and the maximum and minimum speeds are $v_{max} = 5$ and $v_{min} = -5$. For GSA: $G_0 = 100$, $\alpha = 23$. For PSOGSA: $c_1 = 0.5$, $c_2 = 1.5$, $G_0 = 1$, $\alpha = 23$, and $\omega$ is a random number in [0, 1]. For PPSO: $C_1 = C_2 = 1.5$, $\omega$ decreases linearly from 0.9 to 0.2, $v_{max} = 5$, $v_{min} = -5$. For PPSOGSA: $G_0 = 1$, $\alpha = 23$, $\omega$ is a random number in [0, 1], the population is divided into four groups, $R = 10$, and $R_1 = 50$. The maximum number of iterations of all algorithms is 1000, and the number of search agents is set to popsize = 60.
Table 1. Pseudo-code of PPSOGSA

Initialization:
Initialize N particles and divide them into G groups randomly and evenly; set the largest generation max_iteration and the communication-strategy trigger conditions R and R1; initialize the gravitational constant, inertia mass, and acceleration.

Iteration:
1:  While T < max_iteration do
2:    Update the gravitational constant through Formula (9)
3:    For groups = 1 to G do
4:      For i = 1 to N/G do
5:        Calculate the fitness function value of each particle
6:        Update the global optimal solutions and optimal values of the subgroup and the whole population
7:      end For
8:      Update the inertia mass, gravity, acceleration, particle speed, and position
9:      According to the updated particle velocity, use Formulas (14) and (15) to update the perturbation of the global optimum
10:   end For
11:   If T is an integral multiple of R, use strategy 1 for communication
12:   If T is an integral multiple of R1, use strategy 2 for communication
13:   T = T + 1
14: end While
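Putting the pieces together, below is a compact sketch of the main loop of Table 1, reusing the helper functions sketched earlier (gsa_acceleration, psogsa_step, mutate_subgroup, perturb_gbest). It is an interpretation under stated assumptions, not the authors' implementation; in particular, strategy (2) is realized here as copying the best member of one chosen subgroup over the worst member of the others, which is one plausible reading of Fig. 1.

```python
import numpy as np

def ppsogsa(objective, dim, lo, hi, N=60, G=4, max_iter=1000, R=10, R1=50):
    """Skeleton of PPSOGSA following the pseudo-code in Table 1."""
    X = np.random.uniform(lo, hi, (N, dim))
    V = np.zeros((N, dim))
    groups = np.array_split(np.random.permutation(N), G)  # random, even split
    gbest, gbest_fit = X[0].copy(), np.inf

    for t in range(1, max_iter + 1):
        fit = np.apply_along_axis(objective, 1, X)   # fitness of each particle
        i_best = int(fit.argmin())
        if fit[i_best] < gbest_fit:                  # track the global best
            gbest, gbest_fit = X[i_best].copy(), fit[i_best]

        for g in groups:                             # each subgroup runs PSOGSA
            a = gsa_acceleration(X[g], fit[g], t, max_iter)
            omega = np.random.rand()                 # omega in [0, 1], as in Sect. 4
            attractor = perturb_gbest(V[g], gbest)   # Eqs. (14)-(15)
            X[g], V[g] = psogsa_step(X[g], V[g], a, attractor, omega)

        if t % R == 0:                               # strategy (1): mutation
            for g in groups:
                mutate_subgroup(X[g], gbest)
        if t % R1 == 0:                              # strategy (2): share bests
            ga = np.random.randint(G)
            donor = groups[ga][int(fit[groups[ga]].argmin())]
            for k in range(G):
                if k != ga:
                    worst = groups[k][int(fit[groups[k]].argmax())]
                    X[worst] = X[donor].copy()

    return gbest, gbest_fit
```

For example, ppsogsa(lambda x: np.sum(x**2), dim=30, lo=-100, hi=100) would minimize the sphere function F1 under these assumptions.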
In this experiment, each algorithm runs 20 times independently, and the average of the 20 runs is taken as the experimental result, shown in Table 2.
The bold numbers in Table 2 are the best values obtained, divided into the best average value and the best optimal value. According to the statistical analysis of Table 2, for the best average value, the numbers of functions on which PSO, GSA, PSOGSA, PPSO, and PPSOGSA perform best are 5, 7, 6, 7, and 17, respectively. For the best optimal value found during the test, the corresponding numbers are 10, 11, 12, 10, and 18. From these statistics we confirm that the overall performance of PPSOGSA is better than that of PPSO, GSA, PSOGSA, and PSO. On the multi-dimensional objective functions, its function-value accuracy is higher and closer to the true optimum of each function. Figure 3 shows the convergence curves of some selected benchmark functions. Comparing the convergence curves, the PPSOGSA algorithm proposed in this paper converges faster, and its performance is better than that of the four optimization algorithms compared in the figure.
Table 2. Comparison of the PPSOGSA with the GSA, PSO, PSOGSA, and PPSO algorithms based on 23 test functions

F   | PSO Ave   | PSO Best  | GSA Ave   | GSA Best  | PSOGSA Ave | PSOGSA Best | PPSO Ave   | PPSO Best  | PPSOGSA Ave | PPSOGSA Best
F1  | 6.52E−07  | 1.24E−09  | 1.33E−17  | 7.50E−18  | 2.48E−19   | 1.35E−19    | 3.48E−65   | 5.84E−69   | 4.09E−143   | 8.90E−144
F2  | 6.74E−03  | 2.18E−04  | 1.82E−08  | 1.41E−08  | 3.94E−09   | 1.68E−09    | 2.78E−65   | 4.48E−70   | 7.01E−73    | 2.72E−73
F3  | 8.21E−01  | 4.61E−02  | 2.09E+02  | 8.40E+01  | 1.93E+02   | 2.27E+01    | 4.15E−56   | 1.21E−60   | 4.11E−139   | 1.62E−140
F4  | 1.60E−01  | 7.42E−02  | 2.37E−09  | 1.74E−09  | 4.16E+01   | 1.16E+01    | 7.84E−32   | 1.33E−33   | 7.18E−71    | 3.01E−71
F5  | 4.17E+01  | 9.05E−01  | 2.61E+01  | 2.58E+01  | 3.17E+01   | 2.15E+01    | 2.79E+01   | 2.72E+01   | 2.43E+01    | 2.38E+01
F6  | 4.82E+00  | 1.40E+00  | 2.27E−17  | 1.02E−20  | 2.56E−19   | 1.76E−19    | 3.32E−02   | 9.24E−03   | 4.52E−12    | 4.18E−15
F7  | 9.56E−01  | 3.08E−02  | 1.59E−02  | 8.30E−03  | 4.26E−02   | 1.63E−02    | 9.46E−04   | 5.61E−05   | 2.46E−04    | 7.01E−06
F8  | −5464.62  | −6848.82  | −2790.40  | −4123.50  | −7611.80   | −9904.59    | −7150.20   | −8463.31   | −8171.93    | −9303.21
F9  | 7.18E+01  | 1.36E+01  | 1.60E+01  | 9.95E+00  | 1.18E+02   | 5.97E+01    | 3.98E+01   | 2.00E+00   | 8.49E+01    | 4.19E+01
F10 | 5.23E+00  | 3.09E+00  | 1.56E−10  | 1.18E−10  | 7.44E−10   | 3.20E−10    | 3.63E−11   | 3.11E−12   | 1.07E−15    | 8.88E−16
F11 | 9.04E−01  | 5.77E−01  | 7.20E+00  | 2.61E+00  | 2.41E−02   | 4.53E−11    | 2.21E−02   | 4.11E−09   | 1.09E−02    | 1.11E−16
F12 | 4.36E+00  | 2.21E+00  | 5.28E+01  | 2.21E−04  | 2.91E+00   | 2.09E−01    | 1.87E−02   | 3.27E−04   | 6.73E−12    | 1.18E−13
F13 | 3.62E+01  | 5.82E+00  | 2.14E+00  | 4.10E−03  | 3.33E−19   | 5.15E−20    | 1.35E−01   | 3.42E−03   | 5.93E−10    | 1.54E−14
F14 | 0.998     | 0.998     | 4.537     | 0.998     | 1.987      | 0.998       | 1.790      | 0.998      | 1.047       | 0.998
F15 | 3.558E−03 | 3.075E−04 | 1.549E−03 | 5.330E−04 | 2.480E−03  | 3.075E−04   | 8.288E−04  | 3.075E−04  | 3.107E−04   | 3.075E−04
F16 | −1.0316   | −1.0316   | −1.0316   | −1.0316   | −1.0316    | −1.0316     | −1.0316    | −1.0316    | −1.0316     | −1.0316
F17 | 0.3979    | 0.3979    | 0.3979    | 0.3979    | 0.3979     | 0.3979      | 0.3979     | 0.3979     | 0.3979      | 0.3979
F18 | 3         | 3         | 3         | 3         | 3          | 3           | 3          | 3          | 3           | 3
F19 | −3.8628   | −3.8628   | −3.8628   | −3.8628   | −3.8628    | −3.8628     | −3.8628    | −3.8628    | −3.8628     | −3.8628
F20 | −3.1899   | −3.3220   | −3.3220   | −3.3220   | −3.2865    | −3.3220     | −3.2238    | −3.3220    | −3.3220     | −3.3220
F21 | −8.2647   | −10.1532  | −6.8402   | −10.1532  | −7.0124    | −10.1532    | −10.1532   | −10.1532   | −10.1532    | −10.1532
F22 | −8.3919   | −10.4029  | −10.4029  | −10.4029  | −7.2871    | −10.4029    | −10.4029   | −10.4029   | −9.8755     | −10.4029
F23 | −9.5902   | −10.5364  | −10.2117  | −10.5364  | −6.2309    | −10.5364    | −10.5364   | −10.5364   | −9.9980     | −10.5364

Bold values in the table indicate the best result among the compared algorithms.
Fig. 3. The convergence curves of some selected functions (F1, F5, F8, F10, F11, and F21)
5 Conclusion
In this study, we introduced a parallel PSOGSA algorithm based on hybridizing the PSOGSA algorithm with the idea of clustering. The concepts of mutation and interaction between subgroups are used to drive the algorithm toward the optimal value. Twenty-three benchmark functions are used to evaluate the performance of the PPSOGSA algorithm. The obtained results, compared with the PSOGSA, GSA, PSO, and PPSO algorithms, show that the proposed PPSOGSA provides better overall performance than the other four optimization algorithms.
Acknowledgements. This work was supported in part by the Fujian Provincial Buses and Special Vehicles R&D Collaborative Innovation Center project (Grant Number: 2016BJC012).
References
1. Nguyen, T.T., Pan, J.S., Dao, T.K.: An improved flower pollination algorithm for optimizing layouts of nodes in wireless sensor network. IEEE Access 7, 75985–75998 (2019)
2. Rashedi, E., Nezamabadi-Pour, H., Saryazdi, S.: GSA: a gravitational search algorithm. Inf. Sci. 179(13), 2232–2248 (2009)
3. Van Laarhoven, P.J., Aarts, E.H.: Simulated annealing. In: Simulated Annealing: Theory and Applications, pp. 7–15. Springer, Berlin (1987)
4. Hatamlou, A.: Black hole: a new heuristic optimization approach for data clustering. Inf. Sci. 222, 175–184 (2013)
5. Price, K.V.: Differential evolution. In: Handbook of Optimization, pp. 187–214. Springer, Berlin (2013)
6. Kennedy, J., Eberhart, R.: Particle swarm optimization. In: Proceedings of ICNN'95 - International Conference on Neural Networks, vol. 4, pp. 1942–1948. IEEE (1995)
7. Shi, Y.: Particle swarm optimization: developments, applications and resources. In: Proceedings of the 2001 Congress on Evolutionary Computation (IEEE Cat. No. 01TH8546), vol. 1, pp. 81–86. IEEE (2001)
8. Mirjalili, S., Mirjalili, S.M., Lewis, A.: Grey wolf optimizer. Adv. Eng. Softw. 69 (2014)
9. Yang, X.-S.: Firefly algorithm. In: Nature-Inspired Metaheuristic Algorithms, vol. 20, pp. 79–90 (2008)
10. Mirjalili, S., Lewis, A.: The whale optimization algorithm. Adv. Eng. Softw. 95 (2016)
11. Esmin, A., Lambert-Torres, G., Alvarenga, G.B.: Hybrid evolutionary algorithm based on PSO and GA mutation. In: Sixth International Conference on Hybrid Intelligent Systems (HIS'06), pp. 57–57. IEEE (2006)
12. Nguyen, T.-T., Qiao, Y., Pan, J.-S., Chu, S.-C., Chang, K.-C., Xue, X., Dao, T.-K.: A hybridized parallel bats algorithm for combinatorial problem of traveling salesman. J. Intell. Fuzzy Syst. Preprint, 1–10 (2020). https://doi.org/10.3233/jifs-179668
13. Nguyen, T.-T., Pan, J.-S., Chu, S.-C., Roddick, J.F., Dao, T.-K.: Optimization localization in wireless sensor network based on multi-objective firefly algorithm. J. Netw. Intell. 1, 130–138 (2016)
14. Mirjalili, S., Hashim, S.Z.M.: A new hybrid PSOGSA algorithm for function optimization. In: 2010 International Conference on Computer and Information Application, pp. 374–377. IEEE (2010)
15. Chang, J.-F., Roddick, J.F., Pan, J.-S., Chu, S.-C.: A parallel particle swarm optimization algorithm with communication strategies (2005)
16. Chang, K.C., Chu, K.C., Wang, H.C., Lin, Y.C., Pan, J.S.: Energy saving technology of 5G base station based on internet of things collaborative control. IEEE Access 8, 32935–32946 (2020)
17. Chang, K.-C., Chu, K.-C., Wang, H.-C., Lin, Y.-C., Pan, J.-S.: Agent-based middleware framework using distributed CPS for improving resource utilization in smart city. Futur. Gener. Comput. Syst. 108, 445–453 (2020)