2.5 Peer metaheuristics

The use of metaheuristics to solve real-world problems is widely accepted within the research community. These metaheuristics provide high-quality solutions to important problems in business, engineering, economics and science in a reasonable amount of time. Although finding exact solutions in these applications remains a real challenge despite recent advances in computer technology, metaheuristics seem to be the methods of choice in many decision making processes. These decision making processes are increasingly complex and compute intensive because more decision variables are used to model complex systems and more input data or parameters are utilized to capture the complexity of problem instances. For such cases, many metaheuristics have been proposed in the literature, and a few of them are widely accepted as state-of-the-art methods, called peer metaheuristics. In this section, we give an insight into such peer metaheuristics, e.g., the genetic algorithm (GA), differential evolution (DE), particle swarm optimization (PSO), etc.
2.5.1 Genetic algorithm (GA)
Introduced in 1975 by Holland [51], the genetic algorithm (GA) has emerged over the years as a practical, robust optimization and search method. GA is a population-based search algorithm that simulates natural evolution. The search space of GA is characterized by a collection of population individuals, and the algorithm places great emphasis on the combined interactions of selection, recombination, mutation and crossover operations acting on such individuals. The objective of natural evolution in the genetic algorithm is to find the individual from the search space with the best genetic base (i.e., the chromosomes with the best chance of survival). An overview of the working principle of the genetic algorithm is given in Algorithm 1. GA starts by generating an initial population. Then, the quality of the individuals in the population is determined and a few individuals are chosen as the parent population. The quality of an individual is measured with an evaluation function. A child population is generated from the parent population using recombination or mutation and crossover operations. Further, a few individuals are removed from the population according to the selection criterion in order to reduce the total population to its initial size. The process continues for a number of iterations, which are referred to as generations. The individuals having better quality survive through the generations, representing the natural evolution process. There are several forms of GAs that use different mutation and crossover operators [52, 53] to increase the probability that the algorithm reaches a near-optimal solution in a reasonable number of iterations. Mutation is needed to explore new individual instances and helps the algorithm avoid local optima.
Algorithm 1: Genetic algorithm (GA)
Generate initial population at random
while (not stop) do
    Select parents from population
    Produce child population from selected parents
    Mutate individuals
    Merge the child population with the main population
    Reduce the population by selection
end while
Output the best individual found
The crossover operation helps in increasing the average quality of the population. The performance of these operators influences the quality of the fitness values obtained in each generation.
As some sort of heuristic measure, this fitness function defines a measure of profit or quality of the solution for the underlying problem. GA has mostly been employed as a stochastic procedure to obtain globally optimal solutions for different combinatorial optimization problems such as scheduling problems [54], the traveling salesman problem [55, 56] and in machine learning [57]. Further, GA has been applied in the literature to solve single-objective, multi-objective and many-objective problems [7, 58].
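To make the steps of Algorithm 1 concrete, the following is a minimal Python sketch of a generational GA. The real-valued chromosome encoding, the sphere objective and the specific operators (binary tournament selection, arithmetic crossover, uniform reset mutation) are illustrative assumptions for this sketch, not the exact operator choices referenced above.

# Minimal, illustrative GA following the structure of Algorithm 1.
# The encoding, objective and operators below are assumptions for the sketch.
import random

def sphere(x):                       # assumed objective: lower is better
    return sum(v * v for v in x)

def genetic_algorithm(dim=10, pop_size=50, generations=200,
                      pc=0.9, pm=0.1, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    # Generate initial population at random
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):                     # while (not stop) do
        # Select parents via binary tournament on fitness
        parents = [min(random.sample(pop, 2), key=sphere) for _ in range(pop_size)]
        # Produce child population via arithmetic crossover
        children = []
        for p1, p2 in zip(parents[::2], parents[1::2]):
            if random.random() < pc:
                a = random.random()
                c1 = [a * x + (1 - a) * y for x, y in zip(p1, p2)]
                c2 = [(1 - a) * x + a * y for x, y in zip(p1, p2)]
            else:
                c1, c2 = p1[:], p2[:]
            children += [c1, c2]
        # Mutate individuals (uniform reset mutation)
        for child in children:
            for j in range(dim):
                if random.random() < pm:
                    child[j] = random.uniform(lo, hi)
        # Merge child and main populations, then reduce by selection
        pop = sorted(pop + children, key=sphere)[:pop_size]
    return min(pop, key=sphere)                      # output the best individual found

best = genetic_algorithm()
print(best, sphere(best))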
2.5.2 Differential evolution (DE)
Differential evolution (DE) [59] is one of the most popular evolutionary algorithms available in the literature. The algorithm works by generating new candidate solutions in each generation, created by combining the parent individual with several other individuals of the same population. If a newly generated candidate has better fitness, it replaces the parent individual in the next generation. The new candidate solutions are generated using the DE/current-to-best/1 mutation scheme followed by the rand/1/exp crossover scheme in each generation. The mutation scale factor (F) and the crossover probability (Cr) are generally set to 0.8 and 0.9, respectively, in implementations of DE. The setting of these two parameters, i.e., the scale factor (F) and the crossover rate (Cr), is neither intuitive nor purely experimental, yet it is crucial for the overall performance of the DE algorithm. Several research methods have been presented in the literature to study the settings of these two parameters, and a series of analyses has been performed to recommend stable bounds for setting F and Cr [60, 61].
Algorithm 2: Differential evolution (DE)
Read values of the control parameters of DE: F, Cr, and population size N
Generate initial population at random
while (not stop) do
    Generate donor population corresponding to target population via the DE/current-to-best/1 mutation scheme of DE
    Generate a trial population for the target population using the rand/1/exp crossover scheme
    Evaluate population
    Select the better individuals between the target and trial populations
end while
Output the best individual found
However, it has been observed over the years that an efficient parameter setting in DE is highly dependent on the type of problem, which is consistent with the No Free Lunch theorem [62] with reference to the DE variants. Apart from these parameter variations in DE schemes, different mutation strategies have been employed to offer an alternative perspective to the DE search and to adjust the exploitative pressure within the search space while maintaining diversity [61]. There are also several variants of DE with different mutation strategies and self-adaptation of parameters, such as SADE [63] and JADE [64]. A more sophisticated and efficient variant of the DE scheme is LSHADE [65, 66]. LSHADE and its variants, along with SADE, are chosen for an extensive comparison in the following chapters of this thesis. These DE schemes are chosen because of their efficiency in solving various problems and because they have been the subject of high-quality research for a long time.
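The sketch below illustrates the DE loop of Algorithm 2 with the DE/current-to-best/1 mutation and an exponential (exp) crossover, using the typical settings F = 0.8 and Cr = 0.9 mentioned earlier. The sphere objective and the bound-handling rule are assumptions made only for this example, not the configurations evaluated later in the thesis.

# Compact DE sketch following Algorithm 2; objective and bound handling are assumed.
import random

def sphere(x):
    return sum(v * v for v in x)

def differential_evolution(dim=10, N=50, F=0.8, Cr=0.9,
                           max_gen=300, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(N)]
    for _ in range(max_gen):                             # while (not stop) do
        best = min(pop, key=sphere)
        new_pop = []
        for i, target in enumerate(pop):
            # Donor via DE/current-to-best/1:
            #   v = x_i + F*(x_best - x_i) + F*(x_r1 - x_r2)
            r1, r2 = random.sample([k for k in range(N) if k != i], 2)
            donor = [target[j] + F * (best[j] - target[j])
                     + F * (pop[r1][j] - pop[r2][j]) for j in range(dim)]
            # Trial via exponential crossover: copy a contiguous run of donor genes
            trial = target[:]
            j = random.randrange(dim)
            L = 0
            while True:
                trial[j] = min(max(donor[j], lo), hi)    # clamp to bounds (assumed rule)
                j = (j + 1) % dim
                L += 1
                if L >= dim or random.random() >= Cr:
                    break
            # Selection: keep the better of target and trial
            new_pop.append(trial if sphere(trial) <= sphere(target) else target)
        pop = new_pop
    return min(pop, key=sphere)                          # best individual found

best = differential_evolution()
print(best, sphere(best))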
2.5.3 Particle swarm optimization (PSO)
Particle swarm optimization (PSO) employs a search procedure in which a swarm of particles moves through a search space with random velocities. Fitness values are evaluated for each movement of a particle. Depending upon the fitness values, the personal best position (x_pbest) of each particle and the global best position (x_gbest) among the swarm of particles are updated in each iteration. According to the updated personal and global best positions, the position (x) and velocity (v) of each particle are updated using (2.1) and (2.2), respectively [67].
x_{i+1} = x_i + v_{i+1}                                                      (2.1)

v_{i+1} = w_i v_i + c_1 r_1 (x_{pbest} − x_i) + c_2 r_2 (x_{gbest} − x_i)    (2.2)

w_{i+1} = (w_{max} − w_{min}) [(T − i) / T] + w_{min}                        (2.3)

where x_i and v_i denote the position and velocity of a particle at the ith step, respectively; c_1 and c_2 represent acceleration coefficients; r_1 and r_2 denote random numbers within the range [0, 1]; w represents the inertia weight used for velocity calculation; w_{min} and w_{max} denote the minimum and maximum values of the inertia weight w; and T denotes the maximum number of iterations.
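A compact sketch of how updates (2.1)-(2.3) are applied in a basic PSO loop is given below; the swarm size, acceleration coefficients, bounds and sphere objective are illustrative assumptions for this example.

# Minimal PSO sketch applying updates (2.1)-(2.3); parameters and objective are assumed.
import random

def sphere(x):
    return sum(v * v for v in x)

def pso(dim=10, swarm=30, T=200, c1=2.0, c2=2.0,
        w_min=0.4, w_max=0.9, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    v = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in x]                        # personal best positions
    gbest = min(pbest, key=sphere)                   # global best position
    for i in range(T):
        w = (w_max - w_min) * (T - i) / T + w_min    # linearly decreasing inertia, eq. (2.3)
        for k in range(swarm):
            for j in range(dim):
                r1, r2 = random.random(), random.random()
                # velocity update, eq. (2.2)
                v[k][j] = (w * v[k][j]
                           + c1 * r1 * (pbest[k][j] - x[k][j])
                           + c2 * r2 * (gbest[j] - x[k][j]))
                # position update, eq. (2.1), clamped to the search bounds
                x[k][j] = min(max(x[k][j] + v[k][j], lo), hi)
            if sphere(x[k]) < sphere(pbest[k]):      # update personal best
                pbest[k] = x[k][:]
        gbest = min(pbest + [gbest], key=sphere)     # update global best
    return gbest

best = pso()
print(best, sphere(best))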
PSO makes use of the social association between individual particles to choose a random path in the multi-dimensional search space for reaching a globally optimal solution. It relies on an efficient choice of the inertia weight (w) to explore the feasible space of operation with the flexibility to exploit alternate search regions for diversity preservation [68]. Due to the randomness involved in the variation of parameters in PSO, particles often become trapped in local optima. Further, the lack of dynamic adjustment in the velocity of particles results in the movement of the entire swarm toward local optima, causing premature convergence. With respect to the problem of premature convergence, several modifications have been proposed, and a number of variants of the PSO algorithm are reported in the literature [69] based on various aspects for single-objective optimization, such as quantum-behaved PSO [70], chaotic PSO [71], fuzzy PSO [72], craziness-based PSO (CRPSO) [73, 74], hybrid PSO [75], PSO with Lévy flight [76, 77], PSO with aging leader and challengers [78], etc. Moreover, in order to solve multiobjective test problems, MOPSO [79] is presented, in which the cost function makes use of Pareto dominance while moving the swarm particles in each iteration.