
Developed: USA in the 1970s




In the SGA, mutation works by generating a random number (drawn from a uniform distribution over the range [0,1]) at each bit position and comparing it with a fixed low value, commonly called the mutation rate (pm). If the random number is below this threshold, the value of the gene at the corresponding position is flipped.
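As an illustration of this procedure, here is a minimal Python sketch of bit-flip mutation; it is not part of the original slides, and the function name and the example mutation rate of 0.1 are assumptions made only for illustration.

```python
import random

def bitflip_mutation(chromosome, pm=0.01):
    """Flip each bit independently with probability pm (the mutation rate)."""
    return [1 - gene if random.random() < pm else gene
            for gene in chromosome]

# Example: mutate a 10-bit individual with a 10% mutation rate
parent = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0]
child = bitflip_mutation(parent, pm=0.1)
```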

x² example: crossover

x² example: mutation

Although manually constructed, this example shows typical progress: the average fitness grows from 293 to 588.5, and the best fitness in the population from 576 to 729 after crossover and mutation.

The simple GA

Alternative Crossover Operators

Uniform crossover

Crossover OR mutation?

Crossover is explorative: it makes a big jump to an area somewhere "in between" the two (parent) areas.

Crossover OR mutation? (cont’d)

Mutation is exploitative: it creates random small deviations and thereby stays near (in the area of) the parent.

Representation of Individuals

Binary Representations

Integer representations

Real-Valued or Floating-Point Representation

Permutation Representations

Permutation representation: TSP example

Permutation Representation

Mutation

Mutation for Binary Representations

Mutation Operators for Integer Representations

According to the probability distribution from which the new gene values are drawn, two types of mutation can be distinguished: uniform and non-uniform. Perhaps the most common form of non-uniform mutation used with floating-point representations is analogous to the creep mutation used for integer representations.
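To make the distinction concrete, here is a small, illustrative Python sketch of the two variants for a floating-point vector; drawing the non-uniform perturbation from a Gaussian and the parameter names (sigma, pm) are assumptions of this sketch, not something prescribed by the text.

```python
import random

def uniform_mutation(x, lower, upper, pm=0.1):
    """Replace each gene with probability pm by a fresh uniform draw from [lower, upper]."""
    return [random.uniform(lower, upper) if random.random() < pm else xi
            for xi in x]

def nonuniform_mutation(x, lower, upper, sigma=0.1, pm=0.1):
    """Add a small zero-mean Gaussian perturbation (creep-like), clamped to the domain."""
    return [min(upper, max(lower, xi + random.gauss(0.0, sigma)))
            if random.random() < pm else xi
            for xi in x]
```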

Mutation operators for permutations

The three most common forms of mutation used for order-based problems were first described in [390]. While the first operators below (in particular insertion mutation) work by making small changes to the order in which allele values occur, for adjacency-based problems these can cause large numbers of links to be broken, which is why inversion is more commonly used there.

Swap mutation for permutations

Insert Mutation for permutations

Scramble mutation for permutations

Inversion mutation for permutations
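The four permutation mutation operators named above can be sketched in a few lines of Python; this is an illustrative implementation, and details such as how the positions are drawn are assumptions of the sketch.

```python
import random

def swap_mutation(p):
    """Exchange the values at two randomly chosen positions."""
    c = p[:]
    i, j = random.sample(range(len(c)), 2)
    c[i], c[j] = c[j], c[i]
    return c

def insert_mutation(p):
    """Remove one element and reinsert it at another randomly chosen position."""
    c = p[:]
    i, j = random.sample(range(len(c)), 2)
    gene = c.pop(i)
    c.insert(j, gene)
    return c

def scramble_mutation(p):
    """Randomly shuffle the values inside a randomly chosen subsegment."""
    c = p[:]
    i, j = sorted(random.sample(range(len(c)), 2))
    segment = c[i:j + 1]
    random.shuffle(segment)
    c[i:j + 1] = segment
    return c

def inversion_mutation(p):
    """Reverse the order of the values inside a randomly chosen subsegment."""
    c = p[:]
    i, j = sorted(random.sample(range(len(c)), 2))
    c[i:j + 1] = reversed(c[i:j + 1])
    return c
```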

Recombination

Usually two parents are chosen, and then a random variable is drawn from [0,1) and compared with pc. If the value is lower, two offspring are created by recombination of the two parents; otherwise they are created asexually, i.e., by copying the parents. The net effect is that the resulting set of offspring generally consists of some copies of the parents and some new individuals representing previously unseen solutions.

Thus, unlike the mutation probability pm, which acts independently on each part of the chromosome, the crossover probability pc determines the chance that a chosen pair of parents undergoes this operator at all.

Recombination Operators for Binary Representations

One-point crossover can easily be generalized to N-point crossover, where the representation is broken up into more than two segments of adjacent genes, and the offspring is then created by taking alternate segments from the two parents. The previous two operators worked by dividing the parents into a number of sections of adjacent genes and recombining them to produce offspring. In contrast, uniform crossover [389] works by treating each gene independently and making a random choice as to which parent it should be inherited from.

At each position, if the value of a uniform random draw is below a parameter p (usually 0.5), the gene is inherited from the first parent; otherwise from the second.
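A hedged Python sketch of one-point and uniform crossover for bit strings, including the pc test described earlier, might look as follows; the function names and the default pc value are illustrative assumptions.

```python
import random

def one_point_crossover(p1, p2):
    """Cut both parents at the same random point and swap the tails."""
    point = random.randint(1, len(p1) - 1)
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def uniform_crossover(p1, p2, p=0.5):
    """Decide independently for each gene which parent the child inherits from."""
    c1, c2 = [], []
    for g1, g2 in zip(p1, p2):
        if random.random() < p:
            c1.append(g1); c2.append(g2)
        else:
            c1.append(g2); c2.append(g1)
    return c1, c2

def recombine(p1, p2, pc=0.7, operator=one_point_crossover):
    """Apply crossover with probability pc, otherwise copy the parents unchanged."""
    if random.random() < pc:
        return operator(p1, p2)
    return p1[:], p2[:]
```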

Positional bias vs. Distributional bias

Recombination Operators for Integer Representations

The first option is to use an operator analogous to those used for bit strings, but now with crossover points only between the floats (treating each allele as atomic). Recombination operators of this type for floating-point representations are known as discrete recombination and have the property that, if we create an offspring z from parents x and y, the allele value for gene i is given by zi = xi or zi = yi with equal probability. The second option is to use an operator that, in each gene position, creates a new allele value in the offspring that lies between those of the parents (arithmetic recombination).

Arithmetic Crossover

Simple Recombination

Single arithmetic crossover

Whole arithmetic crossover
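As a small worked illustration of whole arithmetic crossover, the sketch below computes each offspring gene as alpha*x_i + (1 - alpha)*y_i; the default alpha = 0.5 is a common choice, not one fixed by the text, and the code is illustrative only.

```python
def whole_arithmetic_crossover(x, y, alpha=0.5):
    """Child 1 = alpha*x + (1-alpha)*y, child 2 = alpha*y + (1-alpha)*x, gene by gene."""
    child1 = [alpha * xi + (1 - alpha) * yi for xi, yi in zip(x, y)]
    child2 = [alpha * yi + (1 - alpha) * xi for xi, yi in zip(x, y)]
    return child1, child2

# Example: with alpha = 0.5 both children are the per-gene average of the parents
c1, c2 = whole_arithmetic_crossover([0.1, 0.2, 0.3], [0.5, 0.6, 0.7])
```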

A number of specialized recombination operators have been designed for permutations, which aim to transmit as much as possible of the information contained in the parents, especially that which is shared. We shall concentrate here on describing the best known and most commonly used operators for each subclass of permutation problems.

Crossover operators for permutations

Many specialized operators have been devised that focus on combining sequence or contiguous information from the two parents.

Partially Mapped Crossover (PMX)

Randomly select two crossover points and copy the segment between them from the first parent (P1) into the first child. Starting from the first crossover point, look for elements in that segment of the second parent (P2) that have not been copied. For each of these elements, say i, look in the child to see which element, say j, has been copied into its place from P1. Place i in the position occupied by j in P2, since we know we will not be putting j there (as it is already in the child). If the place occupied by j in P2 has already been filled in the child by an element k, put i in the position occupied by k in P2 instead.

After the elements of the crossover segment are completed, the rest of the offspring can be populated from P2, and the second child is created analogously with the parent roles reversed.
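The steps above can be turned into a short Python sketch; this is an illustrative implementation of PMX (index conventions and helper names are assumptions of the sketch), producing one child per call. Calling it with the parents swapped yields the second child.

```python
import random

def pmx(p1, p2):
    """Partially Mapped Crossover: returns one child; swap the parents for the second."""
    size = len(p1)
    a, b = sorted(random.sample(range(size), 2))
    child = [None] * size
    child[a:b + 1] = p1[a:b + 1]                 # step 1: copy segment from P1

    pos_in_p2 = {value: idx for idx, value in enumerate(p2)}

    for idx in range(a, b + 1):                  # step 2: place P2's leftover segment values
        i = p2[idx]
        if i in child[a:b + 1]:
            continue                             # already copied from P1
        j = p1[idx]                              # element that took i's place in the child
        pos = pos_in_p2[j]
        while child[pos] is not None:            # follow the mapping chain if occupied
            pos = pos_in_p2[p1[pos]]
        child[pos] = i

    for idx in range(size):                      # step 3: fill the rest from P2
        if child[idx] is None:
            child[idx] = p2[idx]
    return child
```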

PMX example

Edge Recombination

Edge Recombination example

Order crossover

Order crossover example

Cycle crossover

First identify the cycles of allele values between the two parents; then put the alleles of the first cycle into the first child in the positions they occupy in the first parent, take the next cycle from the second parent, and continue alternating between the parents.

Cycle crossover example

Multiparent recombination

These operators can be categorized based on the basic mechanism used to combine the information from the parent individuals.

Population Models

Two different GA models are distinguished in the literature: the generational model and the steady-state model. In the steady-state model the whole population is not changed at once; only a part of it is replaced. The proportion of the population that is replaced is called the generational gap and is equal to λ/μ.

Since its introduction in Whitley's GENITOR algorithm [423], the steady-state model has been extensively studied and applied [101].
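As an illustration of the steady-state model, a single update step might look like the sketch below; the replace-worst policy shown here (as used in GENITOR) is one common choice rather than the only one, and the helper names are assumptions.

```python
def steady_state_step(population, fitness, make_offspring):
    """One steady-state iteration: a generational gap of 1/len(population).

    A single offspring is produced and inserted by replacing the current
    worst member (a common, but not the only, replacement policy).
    """
    child = make_offspring(population)
    worst = min(range(len(population)), key=lambda i: fitness(population[i]))
    population[worst] = child
    return population
```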

Parent Selection

Fitness Proportional Selection

Selection pressure: when fitness values are all very close together there is almost no selection pressure, since the parts of the roulette wheel assigned to the individuals are more or less the same size; selection is then almost uniformly random, and having a slightly better fitness is not very "useful" to an individual. Therefore, later in a run, when some convergence has occurred and the worst individuals have disappeared, performance only increases very slowly. Moreover, the mechanism behaves differently on transposed versions of the same fitness function (it is highly sensitive to function transposition).

Function transposition for FPS

Windowing: to avoid the other two problems with FPS, a procedure known as windowing is often used, in which the fitness of the worst member of the population (over the last few generations) is subtracted from all fitness values. Sigma scaling: another well-known approach is Goldberg's sigma scaling [172], which incorporates information about the mean (f̄) and the standard deviation (σf) of the fitness values in the population: f'(x) = max(f(x) − (f̄ − c·σf), 0), where c is a constant, usually set to 2.
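A minimal Python sketch of sigma scaling as given by the formula above; the constant c = 2 is the usual default, and the function name is an assumption of the sketch.

```python
import statistics

def sigma_scaled_fitness(raw_fitnesses, c=2.0):
    """Sigma scaling: f'(x) = max(f(x) - (mean - c * std), 0)."""
    mean = statistics.mean(raw_fitnesses)
    std = statistics.pstdev(raw_fitnesses)   # population standard deviation
    return [max(f - (mean - c * std), 0.0) for f in raw_fitnesses]

# Example: closely bunched fitnesses are spread out relative to their mean
scaled = sigma_scaled_fitness([10.0, 10.5, 11.0, 12.0])
```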

Rank – Based Selection

It maintains a constant selection pressure by sorting the population on the basis of fitness and then assigning selection probabilities to individuals according to their rank, rather than according to their actual fitness values. The mapping from rank number to selection probability is arbitrary and can be done in many ways, for example linearly or exponentially decreasing, with the caveat, of course, that the probabilities must sum to one over the population. The usual formula for calculating the selection probability under linear ranking schemes is parameterized by a value s (1.0 < s ≤ 2.0).

If this (the fittest) individual has rank μ − 1 and the weakest has rank 0, then the selection probability for an individual of rank i is: P_lin-rank(i) = (2 − s)/μ + 2i(s − 1)/(μ(μ − 1)).

Linear Ranking

This arises from the assumption that, on average, an individual of median fitness should have one chance of being selected to reproduce.
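For concreteness, the linear-ranking probabilities can be computed with a few lines of Python; the rank convention (0 = worst, μ − 1 = best) follows the formula above, while the default value of s is an arbitrary illustrative choice.

```python
def linear_rank_probabilities(mu, s=1.5):
    """Selection probabilities for ranks 0 (worst) .. mu-1 (best), with 1 < s <= 2.

    Requires mu >= 2.
    """
    return [(2 - s) / mu + 2 * i * (s - 1) / (mu * (mu - 1)) for i in range(mu)]

# Example: 3 individuals, s = 2 -> probabilities [0.0, 1/3, 2/3], summing to 1
probs = linear_rank_probabilities(3, s=2.0)
```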

Exponential Ranking

Implementing Selection Probabilities

Conceptually, this is like spinning a one-armed roulette wheel, where the sizes of the holes reflect the selection probabilities. Despite its inherent simplicity, it has been recognized that the roulette wheel algorithm does not, in fact, give a particularly good sample of the required distribution.

Implementing Selection Probabilities roulette wheel algorithm

Implementing Selection Probabilities stochastic universal sampling (SUS)
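The two sampling algorithms can be sketched as follows in Python; these are illustrative implementations (not the slides' own code), both taking a list of selection probabilities that sums to one.

```python
import random

def roulette_wheel(probabilities):
    """Select one index by spinning a roulette wheel with the given probabilities."""
    r = random.random()
    cumulative = 0.0
    for i, p in enumerate(probabilities):
        cumulative += p
        if r <= cumulative:
            return i
    return len(probabilities) - 1   # guard against floating-point round-off

def stochastic_universal_sampling(probabilities, n):
    """Select n indices using n equally spaced pointers and a single random spin."""
    selected = []
    pointer = random.random() / n
    cumulative = 0.0
    i = 0
    for _ in range(n):
        while cumulative + probabilities[i] < pointer:
            cumulative += probabilities[i]
            i += 1
        selected.append(i)
        pointer += 1.0 / n
    return selected
```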

Tournament Selection

Implementing Selection Probabilities Tournament Selection

The application of tournament selection for parent selection works according to the procedure shown in Fig. With deterministic tournaments whose candidates are picked without replacement, the k − 1 least fit members of the population can never be selected, while if the tournament candidates are picked with replacement, it is always possible for even the least fit member of the population to be selected as the result of a lucky draw. These properties of tournament selection were characterized in [20, 54], and it was shown [173] that for binary tournaments (k = 2) with parameter p, the expected time for a single individual of high fitness to take over the population is the same as for linear ranking with s = 2p.

Despite this drawback, tournament selection is perhaps the most commonly used selection operator in modern applications of GAs, due to its extreme simplicity and the fact that the selection pressure is easily controlled by varying the tournament size k.
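A minimal Python sketch of deterministic tournament selection with tournament size k follows; whether candidates are drawn with or without replacement is exposed as a flag, and the probabilistic variant with parameter p mentioned above is not shown. Names and defaults are illustrative assumptions.

```python
import random

def tournament_selection(population, fitness, k=2, with_replacement=True):
    """Pick k candidates uniformly at random and return the fittest of them."""
    if with_replacement:
        candidates = [random.choice(population) for _ in range(k)]
    else:
        candidates = random.sample(population, k)
    return max(candidates, key=fitness)
```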

Implementing Selection Probabilities Tournament Selection-summary

Tournament Selection-summary

Survivor Selection

Age-Based Replacement

Since the number of offspring produced equals the number of parents (λ = μ), each individual exists for only one cycle, and the parents are simply discarded and replaced by the entire set of offspring. This replacement strategy can also be implemented in GAs with overlapping populations (λ < μ), and at the other extreme a single offspring is created and inserted into the population in each cycle. In this case, the queuing strategy is first-in-first-out (FIFO).

An alternative method of age-based replacement for a steady-state GA is to select a parent at random for replacement, which has the same effect on average.

Fitness-Based Replacement

For this reason it is usually used in conjunction with large populations and/or a "no duplicates" policy. Elitism, by contrast, is commonly used in conjunction with age-based and stochastic fitness-based replacement schemes in an attempt to prevent the loss of the current fittest member of the population. In essence, a trace of the current fittest member is kept, and it is always retained in the population.

So if it is selected in a group to be replaced, and none of the offspring being inserted into the population has equal or better fitness, it is retained and one of the offspring is discarded.
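A hedged sketch of this elitist survivor-selection step in Python; replacing the weakest offspring when the old best would otherwise be lost is one reasonable policy, and the function names are assumptions of the sketch.

```python
def apply_elitism(old_population, new_population, fitness):
    """Ensure the fittest member of the old population survives into the new one.

    If no offspring has equal or better fitness, the weakest offspring is replaced.
    """
    best_old = max(old_population, key=fitness)
    best_new = max(new_population, key=fitness)
    if fitness(best_new) < fitness(best_old):
        worst_idx = min(range(len(new_population)),
                        key=lambda i: fitness(new_population[i]))
        new_population[worst_idx] = best_old
    return new_population
```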

Survivor Selection-summary

Two Special Cases-summary

Here there is a growth function that constructs the phenotype and uses the genotype as an input parameter. The following example solves the job shop scheduling problem using a heuristic schedule builder and a sequence-based representation that specifies the order in which to attempt to schedule the tasks.

Precedence constrained job shop scheduling GA

Regarding the optimality requirement, the schedule builder uses a locally optimal heuristic: it always assigns the earliest possible start time to a given operation. To this end, we define the fitness of an individual, i.e. of the genotype, as the duration of the corresponding phenotype (schedule). Different variation operators may differ in their performance, i.e. in the quality of the final solution delivered by the GA using them.

The only thing we need to keep in mind is that we are minimizing the duration of the schedules.

JSSP example: operator comparison


Table 3.7. Fitness proportionate (FP) versus linear ranking (LR) selection
