

2.4 Optimisation techniques

2.4.5 Evolutionary algorithms

Evolutionary algorithms (EAs) are generally considered to be meta-heuristic problem solvers.

That is, they are high-level general strategies which guide other heuristics (lower-level search procedures) to find feasible solutions from a complex search landscape (Coello Coello et al., 2007). EAs operate according to Darwin’s evolutionary theory of survival of the fittest.

Evolutionary operators (i.e. mutation, recombination/crossover, and selection) operate on a given set of individuals, termed the population, in an attempt to generate solutions in an ascending scale of fitness. Parents are recombined to form children, who in turn create the next generation, which is selected based on fitness levels. Figure 2.5 illustrates the basic components of EAs.
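These basic components can be illustrated with a minimal generational EA sketch; the bitstring encoding, OneMax fitness function and parameter values below are illustrative assumptions rather than details taken from the cited sources:

```python
import random

random.seed(0)

POP_SIZE, GENES, P_MUT = 20, 16, 1.0 / 16

def fitness(ind):
    # Illustrative objective: count of 1-bits (the OneMax problem).
    return sum(ind)

def crossover(p1, p2):
    # Single-point recombination of two parent bitstrings.
    cut = random.randrange(1, GENES)
    return p1[:cut] + p2[cut:]

def mutate(ind):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < P_MUT else g for g in ind]

def tournament(pop, k=2):
    # Select the fitter of k randomly drawn individuals.
    return max(random.sample(pop, k), key=fitness)

pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP_SIZE)]
for gen in range(50):
    # Selection -> recombination -> mutation, repeated each generation.
    pop = [mutate(crossover(tournament(pop), tournament(pop)))
           for _ in range(POP_SIZE)]

best = max(pop, key=fitness)
print(fitness(best))
```

Under tournament selection the population climbs the fitness scale over successive generations, exactly as the survival-of-the-fittest metaphor suggests.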

Fitter individuals are selected from the population to become members of the next generation.

The selection operator assigns a higher probability of contributing one or more children to the next generation to strings of higher fitness. Common selection techniques include roulette-wheel, tournament and ranking selection. Other selection techniques are (µ+λ) and (µ,λ) selection, where µ is the number of parent solutions and λ is the number of children. The former selects the best µ individuals from the union of parent and children subsets, whereas the latter selects only from the child population.
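The distinction between (µ+λ) and (µ,λ) selection can be sketched as follows; the one-dimensional objective, Gaussian mutation and parameter choices are illustrative assumptions:

```python
import random

random.seed(1)

def fitness(x):
    # Illustrative one-dimensional objective (maximise -x^2; optimum at 0).
    return -x * x

def reproduce(parents, lam):
    # Each child is a mutated copy of a random parent (Gaussian perturbation).
    return [random.choice(parents) + random.gauss(0, 0.5) for _ in range(lam)]

def plus_selection(parents, children, mu):
    # (mu + lambda): keep the best mu from parents AND children combined.
    return sorted(parents + children, key=fitness, reverse=True)[:mu]

def comma_selection(children, mu):
    # (mu , lambda): keep the best mu from the children only (lambda >= mu).
    return sorted(children, key=fitness, reverse=True)[:mu]

mu, lam = 3, 12

pop = [random.uniform(-5, 5) for _ in range(mu)]
for _ in range(30):
    pop = plus_selection(pop, reproduce(pop, lam), mu)

pop_c = [random.uniform(-5, 5) for _ in range(mu)]
for _ in range(30):
    pop_c = comma_selection(reproduce(pop_c, lam), mu)

print(round(pop[0], 2), round(pop_c[0], 2))
```

Note that (µ+λ) never loses its best-so-far solution, whereas (µ,λ) can discard good parents, which trades monotone progress for the ability to escape stale regions.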

Figure 2-5: The Basics of the Evolutionary Algorithm. Source: Coello Coello et al. (2007)

EAs are effective for solving complex search, classification and optimisation problems. Two main characteristics of EAs make them suitable for solving MOPs, namely i) the ability to assign a rank to each solution based on its Pareto dominance (i.e. Pareto ranking), and ii) a mechanism to maintain population diversity (i.e. a density estimator). The suitability of EAs for solving MOPs is also evidenced by their ability to simultaneously consider a set of possible solutions (i.e. the population), thereby achieving the goal of finding several members of the POS in a single run of the algorithm. This characteristic makes EAs far superior to conventional mathematical programming techniques, which must perform a series of separate runs to achieve similar results.
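The Pareto dominance relation that underpins this ranking can be stated in a few lines; the bi-objective minimisation vectors used below are illustrative:

```python
def dominates(a, b):
    # True if solution a Pareto-dominates b (minimisation): a is no worse
    # in every objective and strictly better in at least one.
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

# Illustrative bi-objective vectors.
print(dominates((1, 2), (2, 3)))   # (1, 2) dominates (2, 3): True
print(dominates((1, 3), (2, 2)))   # incomparable, neither dominates: False
```

Solutions that no other member of the population dominates form the current non-dominated set, the EA's approximation of the POS.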

In addition, EAs are less prone to be affected by the shape or continuity of the Pareto front; that is, they can handle concave or discontinuous Pareto fronts. By contrast, conventional methods do not easily accommodate these two issues (Coello Coello et al., 2007). EAs that are used to solve MOPs are called multi-objective optimisation evolutionary algorithms (MOEAs).

Fonseca and Fleming (1993) suggested that a multi-objective evolutionary optimisation process could be generalised by considering it as a structured interaction between the decision maker (DM), or artificial selector, and an EA. In this interactive process, the DM assigns a certain utility to the current set of candidate solutions, and the EA produces a new set of solutions guided by that utility. This conceptualisation is illustrated in Figure 2.6.

Figure 2-6: The DM/EA Interactive Relationship. Source: Fonseca and Fleming (1993)

Coello Coello et al. (2007) mention four generic goals of the MOEA, namely i) preserving non-dominated points; ii) progressing towards points on the true Pareto front, PFtrue; iii) maintaining population diversity; and iv) providing the DM with a limited number of known Pareto front (PFknown) points to choose from.

Performance attributes of real-life engineering problems usually exhibit non-commensurable and often competing objectives. The choice of a suitable compromise solution depends not only on determining the non-dominated solution vectors, but also on the subjective preferences of the decision maker (DM). MOP solution methods can generally be classified into four main categories, depending on when the DM articulates its preferences on the various objectives – that is, never, before, during or after the search (Andersson, n.d.).


a. Never – when the DM does not state a preference, a min-max formulation is developed, and the output comprises only one solution and not a set; this solution is accepted as the final optimal one.

b. Priori – when the DM aggregates the various objectives into one according to a formula which represents its preferences. This can be achieved through the weighted-sum approach, non-linear weighting, fuzzy logic, utility theory, acceptability functions, goal programming or lexicographic ordering. Priori techniques involve determining the level of relative priority for each objective before the search, for example through aggregate weighting.

One of the biggest criticisms of the priori technique is that its arbitrary limitation of the search space, through the ranking or weighting of objectives, might not lead to PFtrue.

c. Progressive – methods that make the DM's preferences available at the same time that the algorithm searches through the solution space; hence they are referred to as interactive methods. The principle is that "more interaction means better results". In this approach, the DM's preferences are incorporated during the search. The main challenge with the progressive technique is that, when little is known about the problem, the DM's process of defining preferences may be difficult and inefficient. The quality of a solution in this case depends on how well the DM can articulate its preferences and on the DM's subjective choices.

d. Posteriori – techniques that first determine the POS and then present the resulting solutions to the DM. The posteriori approach therefore involves a situation in which the decision-making process occurs after the optimisation process.
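The min-max formulation mentioned under category (a) is commonly written as follows, where each objective's deviation from its individual optimum $f_i^{0}$ is measured relatively and the largest such deviation is minimised; this notation is a standard sketch rather than a formulation taken from the cited sources:

```latex
\min_{x \in X} \; \max_{i = 1,\dots,k} \; \frac{f_i(x) - f_i^{0}}{f_i^{0}}
```

Because the worst-case relative deviation is a scalar, this formulation returns a single compromise solution without any statement of preference from the DM.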

Fitness allocation involves the assignment of an importance level to each candidate solution based on certain criteria. Examples of fitness assignment approaches are listed below.

a. Weighted-sum approach. This entails assigning weights to each normalised vector objective function and converting the MOP to a single-objective problem with an aggregated scalar objective function. The approach is called priori since the DM articulates its preferences before optimisation. There is only one resultant optimal solution in this case. An automation of the weighted-sum approach was provided by Hajela and Lin (1992), with the authors proposing a weight-based genetic algorithm for multi-objective optimisation (WBGA-MO). The algorithm allocates a different weight vector wi to each solution xi in the population during the calculation of an aggregated solution. Because wi is embedded in the xi chromosome, multiple solutions can be worked out simultaneously in one run. In this approach, weight vectors can also promote diversity of the population.

The advantage of the weighted-sum approach is its simplicity and computational efficiency. The disadvantage is that it has difficulty finding solutions that are uniformly distributed over a non-convex trade-off surface.
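A minimal weighted-sum scalarisation might look as follows; the two competing objectives, the equal weight vector and the grid search are illustrative assumptions, not the WBGA-MO algorithm itself:

```python
def weighted_sum(objectives, weights, x):
    # Aggregate the objectives into one scalar to minimise.
    return sum(w * f(x) for w, f in zip(weights, objectives))

# Two competing illustrative objectives on [0, 1].
f1 = lambda x: x * x            # prefers x = 0
f2 = lambda x: (x - 1) ** 2     # prefers x = 1

# A single weight vector yields a single compromise solution.
candidates = [i / 100 for i in range(101)]
best = min(candidates,
           key=lambda x: weighted_sum([f1, f2], [0.5, 0.5], x))
print(best)  # 0.5 balances the two objectives equally
```

Each distinct weight vector picks out one point on the trade-off surface, which is why a single run of the plain weighted-sum approach returns only one solution, and why points on non-convex regions of the front cannot be reached for any choice of weights.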

b. Altering objective functions. A single-objective approach may be implemented in a similar way to the vector-evaluated genetic algorithm (VEGA), which entails breaking the main population into sub-populations. In this case, the model repeatedly establishes an objective function randomly during the selection process. This approach is simple and computationally efficient. However, the shortcoming of objective switching is that the population tends to converge to suboptimal solutions.
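The objective-switching idea behind VEGA can be sketched by selecting each sub-population against a randomly chosen objective; this is a simplified illustration under assumed objectives and parameters, not Schaffer's original implementation:

```python
import random

random.seed(2)

# Two illustrative objectives over a scalar decision variable (minimisation).
objectives = [lambda x: x * x, lambda x: (x - 2) ** 2]

def vega_step(pop, n_sub=2):
    # Break the main population into sub-populations, each selected
    # against one randomly established objective function.
    random.shuffle(pop)
    size = len(pop) // n_sub
    survivors = []
    for i in range(n_sub):
        sub = pop[i * size:(i + 1) * size]
        f = random.choice(objectives)          # objective switching
        survivors += sorted(sub, key=f)[:size // 2]
    # Refill to the original population size by mutating survivors.
    while len(survivors) < len(pop):
        survivors.append(random.choice(survivors) + random.gauss(0, 0.3))
    return survivors

pop = [random.uniform(-5, 7) for _ in range(40)]
for _ in range(40):
    pop = vega_step(pop)
print(round(min(pop), 1), round(max(pop), 1))
```

Because each selection event favours only one objective at a time, the population drifts towards the separate optima of the individual objectives rather than spreading evenly along the trade-off between them, which is the convergence-to-suboptimal-solutions shortcoming noted above.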

c. Pareto ranking. Pareto ranking is based on the principle of assigning equal probability of reproduction to all non-dominated individuals in a population. The technique assigns a rank to each set of non-dominated candidate solutions. Fitness assignment is then implemented in accordance with allocated individual ranks. Pareto ranking was introduced by Goldberg (1989). Dominance-based ranking is a method of allocating a rank to the individual based on its level of dominance within a population. Relaxed dominance entails the recognition of an “inferior” solution (i.e. dominated) in a particular objective space, with the intention of compensating for such recognition by improving other objectives.
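Goldberg-style Pareto ranking can be sketched by peeling off successive non-dominated fronts: rank 1 goes to the non-dominated set, rank 2 to the set that becomes non-dominated once rank 1 is removed, and so on; the bi-objective test points are illustrative:

```python
def dominates(a, b):
    # Minimisation: a is no worse everywhere and strictly better somewhere.
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_ranks(points):
    # All members of a front share the same rank and hence the same
    # probability of reproduction.
    ranks, remaining, rank = {}, set(range(len(points))), 1
    while remaining:
        front = {i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)}
        for i in front:
            ranks[i] = rank
        remaining -= front
        rank += 1
    return [ranks[i] for i in range(len(points))]

pts = [(1, 4), (2, 2), (4, 1), (3, 3), (4, 4)]
print(pareto_ranks(pts))  # [1, 1, 1, 2, 3]
```

Fitness is then assigned from these ranks, so mutually non-dominated solutions reproduce with equal probability regardless of how they trade one objective against another.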