
Genetic Algorithm and Direct Search Toolbox™ 2


The software may be used or copied only under the terms of the license agreement. No part of this manual may be photocopied or reproduced in any form without the prior written consent of The MathWorks, Inc.

Product Overview

Writing M-Files for Functions You Want to Optimize

Computing Objective Functions

Maximizing vs. Minimizing

What Is Direct Search?

Performing a Pattern Search

Calling patternsearch at the Command Line

Using the Optimization Tool for Pattern Search

The tool displays the results of the optimization in the Run solver and view results pane. “Finding the Minimum of the Function” on page 2-8 provides an example of using the tool.

Objective Function

Finding the Minimum of the Function

The optimization ended because the mesh size became smaller than the acceptable tolerance for the mesh size, defined by the Mesh tolerance parameter in the Stopping criteria pane.

Plotting the Objective Function Values and Mesh Sizes

The top plot shows the objective function value of the best point at each iteration. The mesh size increases after each successful iteration and decreases after each unsuccessful one, as explained in “How Pattern Search Works” on page 2-15.

Pattern Search Terminology

Patterns

Meshes

Polling

The algorithm compares the mesh point with the smallest objective function value to the current point. If that mesh point improves on the current point, the poll is called successful and that point becomes the current point at the next iteration.

Expanding and Contracting

If the algorithm does not find a mesh point that improves the objective function, the poll is called unsuccessful and the current point stays the same at the next iteration.

How Pattern Search Works

Context

Successful Polls

The algorithm computes the objective function at the mesh points in the order shown above, polling them until it finds one whose value is smaller than 4.5146, the objective function value at x1.

An Unsuccessful Poll

At this iteration, none of the mesh points has a smaller objective function value than the value at x3, so the poll is unsuccessful. In this case, the algorithm does not change the current point at the next iteration.

Displaying the Results at Each Iteration

In the next iteration, the algorithm multiplies the current mesh size by 0.5, the default value of the Contraction factor in the Mesh options pane, so that the mesh size at the next iteration is 4. As a result, the objective function value computed at iteration 2, shown under f(x), is less than the value at iteration 1.

More Iterations

With this setting, the pattern search displays information about each iteration at the command line. By default, the pattern search doubles the mesh size after each successful poll and halves it after each unsuccessful poll.
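The poll-and-update loop described above can be sketched in a few lines. The following Python sketch is illustrative only, not the toolbox implementation: the function and variable names are invented, and it assumes the default GPS Positive basis 2N pattern with the default expansion factor of 2 and contraction factor of 0.5.

```python
# Hypothetical minimal sketch of a GPS pattern search on f(x) = x1^2 + x2^2,
# illustrating polling plus the default expand (x2) / contract (x0.5) rules.

def pattern_search(f, x0, mesh=1.0, tol=1e-6, max_iter=200):
    x, fx = list(x0), f(x0)
    n = len(x0)
    # GPS Positive basis 2N pattern: the unit vectors and their negatives
    pattern = [[(1 if j == i else 0) for j in range(n)] for i in range(n)]
    pattern += [[-d for d in v] for v in pattern]
    for _ in range(max_iter):
        if mesh < tol:                   # stop: mesh smaller than mesh tolerance
            break
        success = False
        for v in pattern:
            trial = [xi + mesh * vi for xi, vi in zip(x, v)]
            ft = f(trial)
            if ft < fx:                  # successful poll: move to the new point
                x, fx, success = trial, ft, True
                break
        mesh *= 2.0 if success else 0.5  # expand or contract the mesh
    return x, fx

sphere = lambda p: sum(t * t for t in p)
xbest, fbest = pattern_search(sphere, [3.0, 2.0])
```

With the sphere function the sketch walks to the origin and then contracts the mesh until it falls below the tolerance, mirroring the stopping condition described above.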

Stopping Conditions for the Pattern Search

The algorithm also stops if the total number of objective function evaluations performed reaches the value of Max function evaluations, or if the time in seconds that the algorithm has run reaches the value of Time limit.

Description of the Nonlinear Constraint Solver

The pattern search algorithm minimizes a sequence of subproblems, each of which is an approximation of the original problem. For details, see “A Globally Convergent Augmented Lagrangian Algorithm for Optimization with General Constraints and Simple Bounds”, SIAM Journal on Numerical Analysis, Volume 28, Issue 2.

What Is the Genetic Algorithm?

Performing a Genetic Algorithm Optimization

Calling the Function ga at the Command Line

Using the Optimization Tool

“Writing M-Files for Functions You Want to Optimize” on page 1-3 explains how to write this M-file. For the function my_fun described in “Writing M-Files for Functions You Want to Optimize” on page 1-3, you would enter my_fun.

Rastrigin’s Function

The farther a local minimum is from the origin, the larger the value of the function is at that point. Rastrigin's function is often used to test the genetic algorithm because its many local minima make it difficult for standard, gradient-based methods.
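For reference, Rastrigin's function is simple to write down. The Python version below is illustrative (the toolbox supplies it as an M-file); its global minimum is f(0, 0) = 0.

```python
import math

# Illustrative Python version of Rastrigin's function for n variables:
# f(x) = sum(x_i^2 - 10*cos(2*pi*x_i) + 10)
def rastrigin(x):
    return sum(xi**2 - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)

print(rastrigin([0.0, 0.0]))   # the global minimum, 0.0
print(rastrigin([1.0, 1.0]))   # a local minimum away from the origin
```

The second call illustrates the point above: the local minimum near (1, 1) has a value of about 2, larger than the global minimum at the origin.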

Finding the Minimum of Rastrigin’s Function

When the algorithm is finished, the Run solver and view results pane appears as shown in the figure below. Note that the value shown is very close to the actual minimum value of Rastrigin's function, which is 0.

Finding the Minimum from the Command Line

“Genetic Algorithm Examples” on page 6-22 describes some ways to get a result that is closer to the true minimum. Note: Because the genetic algorithm uses random number generators, the algorithm returns slightly different results each time you run it.

Displaying Plots

The plot also displays the best and mean fitness values in the current generation numerically at the top. Typically, the best fitness value improves rapidly in the early generations, when the individuals are farther from the optimum.

Some Genetic Algorithm Terminology

Fitness Functions

Individuals

Populations and Generations

Diversity

Fitness Values and Best Fitness Values

Parents and Children

How the Genetic Algorithm Works

Outline of the Algorithm

Initial Population

Creating the Next Generation

“Mutation and Crossover” on page 6-35 explains how to specify the number of children of each type that the algorithm generates and the functions it uses to perform crossover and mutation. The figure below shows the children of the initial population, that is, the population at the second generation, and indicates whether they are elite, crossover, or mutation children.
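How a new generation might be assembled from elite, crossover, and mutation children can be sketched as below. This is a hedged illustration, not the toolbox's algorithm: the parent-selection and crossover details are simplified, and only the child counts follow the toolbox defaults (Elite count 2, Crossover fraction 0.8).

```python
import random

# Hypothetical sketch: build the next generation from elite, crossover,
# and mutation children. Counts: 2 elites, then round(18 * 0.8) = 14
# crossover children and the remaining 4 mutation children for a
# population of 20.
def next_generation(pop, fitness, elite_count=2, crossover_fraction=0.8):
    ranked = sorted(pop, key=fitness)            # best (lowest) fitness first
    children = list(ranked[:elite_count])        # elite children survive as-is
    n_rest = len(pop) - elite_count
    n_cross = round(n_rest * crossover_fraction)
    for _ in range(n_cross):                     # crossover of two parents
        p1, p2 = random.sample(ranked[: len(pop) // 2], 2)
        children.append([a if random.random() < 0.5 else b
                         for a, b in zip(p1, p2)])
    for _ in range(n_rest - n_cross):            # Gaussian mutation of a parent
        p = random.choice(ranked[: len(pop) // 2])
        children.append([a + random.gauss(0, 0.1) for a in p])
    return children

pop = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(20)]
newpop = next_generation(pop, fitness=lambda x: sum(t * t for t in x))
```

The new population has the same size as the old one, with the two fittest individuals carried over unchanged.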

Plots of Later Generations

Stopping Conditions for the Algorithm

Time limit: the algorithm stops after running for an amount of time in seconds equal to Time limit. Stall generations: the algorithm stops when the weighted average change in the fitness function value over Stall generations is less than Function tolerance.

What Are Simulated Annealing and Threshold Acceptance?

Performing a Simulated Annealing or Threshold Acceptance Optimization

Calling simulannealbnd and threshacceptbnd at the Command Line

This section provides a brief introduction to using the simulated annealing and threshold acceptance algorithms with the Optimization Tool. See “Minimizing Using the Optimization Tool” on page 4-9 for an example of using the tool with the function simulannealbnd.

Description

Because the simulated annealing algorithm performs an extensive random search, the chance of getting stuck in a local minimum is reduced. Note: Because simulated annealing and threshold acceptance use random number generators, you may get different results each time you run these algorithms.

Minimizing at the Command Line

Minimizing Using the Optimization Tool

Some Simulated Annealing and Threshold Acceptance Terminology

Temperature

Annealing Schedule

Reannealing

How Simulated Annealing and Threshold Acceptance Work

MaxIter: the algorithm stops if the number of iterations exceeds this maximum number of iterations. MaxFunEvals: the algorithm stops if the number of function evaluations exceeds the maximum number of function evaluations.
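The core of the simulated annealing loop described in this chapter can be sketched as follows. This is an illustrative Python sketch under stated assumptions, not the toolbox code: the exponential cooling schedule, step distribution, and all names here are invented; only the Metropolis-style acceptance rule (always accept improvements, accept worse points with probability exp(-delta/T)) and the MaxIter stopping test follow the description above.

```python
import math, random

# Hedged sketch of simulated annealing on one variable: Metropolis
# acceptance with a simple (assumed) exponential cooling schedule.
def anneal(f, x0, t0=100.0, max_iter=2000, seed=0):
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for k in range(max_iter):              # stop when iterations reach MaxIter
        t = t0 * 0.95 ** k                 # assumed cooling schedule
        trial = x + rng.gauss(0, 1)        # random step from the current point
        ft = f(trial)
        delta = ft - fx
        # accept downhill always; uphill with probability exp(-delta/T)
        if delta <= 0 or rng.random() < math.exp(-delta / max(t, 1e-12)):
            x, fx = trial, ft
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest

xbest, fbest = anneal(lambda x: (x - 3) ** 2, 0.0)
```

Early on, the high temperature lets the search accept uphill moves and escape local minima; as the temperature falls, the search becomes effectively greedy.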

Performing a Pattern Search Using the Optimization Tool GUI

File Statement Description

For a complete description of the fields of the optimvalues structure, see “Plot Function Structure” on page 9-4. Among other fields, it contains the current value of the objective function and, in optimvalues.iteration, the current iteration number.

Performing a Pattern Search from the Command Line

Calling patternsearch with the Default Options

For example, if there are no bound constraints or a nonlinear constraint function, use the syntax. To get more information about the pattern search's performance, you can call patternsearch with the syntax.

Setting Options for patternsearch at the Command Line

To change the mesh expansion factor to 3 instead of its default value of 2, enter. If you subsequently decide to change another field of the options structure, for example setting PlotFcns to @psplotmeshsize, which plots the mesh size at each iteration, you must call psoptimset with the syntax.

Using Options and Problems from the Optimization Tool

Use the function psoptimset to create an options structure with a field value that is different from the default. You can also export an entire problem from the Optimization Tool and run it from the command line.

Pattern Search Examples: Setting Options

Poll Method

For example, if the objective function has three independent variables, the GPS Positive basis 2N consists of the following six vectors. Similarly, the GPS Positive basis Np1 consists of the following four vectors.
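Both patterns are easy to generate for any number of variables. The Python sketch below is illustrative (the function names are invented); for n = 3 it produces the six and four vectors referred to above.

```python
# Illustrative construction of the two default GPS patterns for a
# problem with n independent variables.

def gps_2n_basis(n):
    # Positive basis 2N: the n unit vectors and their negatives (2n vectors)
    basis = [[1 if j == i else 0 for j in range(n)] for i in range(n)]
    return basis + [[-v for v in row] for row in basis]

def gps_np1_basis(n):
    # Positive basis Np1: the n unit vectors plus the all-minus-ones vector
    basis = [[1 if j == i else 0 for j in range(n)] for i in range(n)]
    return basis + [[-1] * n]

print(gps_2n_basis(3))   # six vectors for three variables
print(gps_np1_basis(3))  # four vectors for three variables
```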

Complete Poll

Note that the pattern search performs only two evaluations of the objective function at the first iteration, increasing the total function count from 1 to 3. Because the last mesh point has the lowest objective function value, the pattern search selects it as the current point at the next iteration.

Using a Search Method

The following example illustrates the use of a search method on the problem described in “Example: A Linearly Constrained Problem” on page 5-2. This sets the search method to be a pattern search using the pattern GPS Positive basis Np1.

Mesh Expansion and Contraction

Note that at iteration 37, which is successful, the mesh size is doubled for the next iteration. If you change the y-axis scaling to logarithmic, the mesh size plot appears as shown in the following figure.

Mesh Accelerator

For the problem described on page 5-2, the number of iterations required to reach the mesh tolerance is 246, compared with 270 when Accelerator is set to Off. You can see the effect of the mesh accelerator by setting Level of display to Iterative in the Display to command window pane.

Using Cache

For comparison, the Command Window displays the following lines for the same iteration numbers with Accelerator set to On. Note: When Cache is set to On, the pattern search may fail to identify a point in the current mesh that improves the objective function, because the point is within the specified tolerance of a point in the cache.

Setting Tolerances for the Solver

When a linear constraint is active, the pattern search polls points in directions parallel to the linear constraint boundary as well as the mesh points. In this case, the pattern search polls points in directions parallel to the boundary of the constraint 10x1 – 10x2 ≤ 10, resulting in a successful poll.

Constrained Minimization Using patternsearch

The pattern search solver assumes that the objective function takes an input x, where x has as many elements as the number of variables in the problem. The pattern search solver assumes that the constraint function takes one input x, where x has as many elements as the number of variables in the problem.

Vectorizing the Objective and Constraint Functions

If the initial point x0 is a column vector of size m, the objective function takes each column of the matrix as a point in the pattern and returns a vector of function values. If the initial point x0 is a row vector of size m, the objective function takes each row of the matrix as a point in the pattern and returns a vector of function values.
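The idea of a vectorized objective, evaluating a whole set of points in one call rather than looping over them, can be illustrated in Python with NumPy (the names here are invented; the toolbox itself uses M-file functions):

```python
import numpy as np

# Sketch of a vectorized objective for f(x) = x1^2 + x2^2: given one
# point per row, it returns one objective value per row, so the solver
# can evaluate an entire poll or population in a single call.
def vectorized_objective(X):
    X = np.atleast_2d(X)          # k-by-m: k points, m variables each
    return np.sum(X**2, axis=1)   # length-k vector of objective values

points = np.array([[0.0, 0.0],
                   [1.0, 2.0],
                   [3.0, 4.0]])
print(vectorized_objective(points))   # one value per point: [0. 5. 25.]
```

The single array operation replaces k separate function calls, which is where the speedup from vectorization comes from.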

Parallel Computing with Pattern Search

Parallel Pattern Search

Using Parallel Computing with patternsearch

In particular, if network_file_path is the network path to your objective or constraint functions, enter. Once your parallel computing environment is established, the applicable solvers automatically apply parallel computing whenever they are called with options.

Parallel Search Function

Implementation Issues in Parallel Pattern Search

Parallel Computing Considerations

Genetic Algorithm Optimizations Using the Optimization Tool GUI

Introduction

The plot below shows the coordinates of the point with the best value in the current generation. The change variable is the best score in the previous generation minus the best score in the current generation.

Reproducing Your Results

Normally, you should leave Use random states from previous runs unchecked to get the benefit of randomness in the genetic algorithm. The following example shows how to export a problem so that when you import it and click Start, the genetic algorithm resumes from the final population saved with the exported problem.

Using the Genetic Algorithm from the Command Line

Running ga with the Default Options

To get more information about the performance of the genetic algorithm, you can call ga with the syntax.

Setting Options for ga at the Command Line

If you export the default options in the Optimization Tool, the resulting options structure has the same settings as the default structure returned by the command. Note: If you do not need to reproduce your results, leave the states of rand and randn unset to take advantage of the randomness of the genetic algorithm.

Resuming ga from the Final Population of a Previous Run

You can reproduce your run in the Optimization Tool by checking the Use random states from previous run box in the Run solver and view results section. If you want to run a problem that was stored with its final population, but would rather not use that population as the initial population, simply delete or otherwise change the initial population in the Options > Population pane.

Running ga from an M-File

You can get a smoother plot of val as a function of the crossover fraction by running ga 20 times and averaging the values of val for each crossover fraction.

Genetic Algorithm Examples

Improving Your Results

Population Diversity

The following example shows how the initial range affects the performance of the genetic algorithm. The genetic algorithm returns the best fitness function value of approximately 2 and displays the graphs in the following image.

Fitness Scaling

The following figure compares the scaled values for a population of size 20 with the number of parents equal to 32, using rank and top scaling. Because top scaling restricts parents to the fittest individuals, it creates less diverse populations than rank scaling.
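Rank scaling can be sketched as follows. This is a hedged illustration: it assumes the scaled value of the individual with rank n is proportional to 1/sqrt(n) and that the values are normalized to sum to the number of parents needed; the function name and the normalization step are assumptions, not the toolbox code.

```python
import math

# Sketch of rank scaling: scaled value proportional to 1 / sqrt(rank),
# where the fittest (lowest-fitness) individual has rank 1, normalized
# so the scaled values sum to the number of parents (assumption).
def rank_scaling(fitness_values, n_parents):
    order = sorted(range(len(fitness_values)), key=lambda i: fitness_values[i])
    raw = [0.0] * len(fitness_values)
    for rank, i in enumerate(order, start=1):
        raw[i] = 1.0 / math.sqrt(rank)
    total = sum(raw)
    return [r * n_parents / total for r in raw]

scaled = rank_scaling([5.0, 1.0, 3.0, 4.0], n_parents=8)
```

Because the scaled value depends only on rank, not on the raw fitness gap, rank scaling keeps weaker individuals competitive and so preserves more diversity than top scaling.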

Selection

In the second step, the selection function selects additional parents using the fractional parts of the scaled values, as in stochastic uniform selection. Note that if the fractional parts of the scaled values are all equal to 0, as can happen when using Top scaling, the selection is entirely deterministic.

Reproduction Options

The function lays out a line in sections whose lengths are proportional to the fractional parts of the individuals' scaled values, and moves along the line in steps of equal size to select the parents.
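The lay-out-a-line scheme can be sketched directly. The Python sketch below is illustrative (names invented): segment lengths are proportional to the given scaled values, a single random offset places the first pointer, and the pointer then advances in equal-size steps, selecting the segment it lands in each time.

```python
import random

# Hedged sketch of stochastic uniform selection over a line of segments.
def stochastic_uniform(scaled, n_parents, seed=0):
    total = sum(scaled)
    step = total / n_parents
    pointer = random.Random(seed).uniform(0, step)  # one random offset
    parents, cum, i = [], scaled[0], 0
    for _ in range(n_parents):
        while pointer > cum:        # advance to the segment under the pointer
            i += 1
            cum += scaled[i]
        parents.append(i)
        pointer += step
    return parents

print(stochastic_uniform([4.0, 2.0, 1.0, 1.0], 4))
```

A useful property of the equal-step walk: an individual holding exactly half the total scaled value is always selected exactly twice out of four picks, whatever the random offset.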

Mutation and Crossover

Crossover fraction, in the Reproduction options, specifies the fraction of the population, other than elite children, that are crossover children. There are 18 individuals other than elite children, so the algorithm rounds the product of the crossover fraction and 18 to get 14 crossover children.
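The arithmetic behind these counts, assuming a population of 20 with the default Elite count of 2 and Crossover fraction of 0.8, works out as follows:

```python
# Child counts for a population of 20 with Elite count 2 and
# Crossover fraction 0.8 (the defaults assumed in the example above).
pop_size, elite_count, crossover_fraction = 20, 2, 0.8

others = pop_size - elite_count                          # 18 non-elite children
crossover_children = round(others * crossover_fraction)  # 18 * 0.8 = 14.4 -> 14
mutation_children = others - crossover_children          # the remaining 4

print(others, crossover_children, mutation_children)     # 18 14 4
```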

Setting the Amount of Mutation

For example, if Shrink has its default value of 1, the amount of mutation decreases to 0 at the final step. As the amount of mutation decreases, so does the average distance between individuals, which is approximately 0 at the final generation.
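The decay described above is consistent with a linear schedule in the generation number. The sketch below is an assumption based on that description, not the toolbox's documented formula: the mutation scale at generation k is taken as scale * (1 - Shrink * k / generations), so with Shrink = 1 it reaches exactly 0 at the last generation.

```python
# Assumed linear shrink schedule for the Gaussian mutation scale:
# with shrink = 1 the scale decays to 0 at the final generation.
def mutation_scale(scale, shrink, generation, max_generations):
    return scale * (1 - shrink * generation / max_generations)

print(mutation_scale(1.0, 1.0, 0, 100))    # full mutation at the start
print(mutation_scale(1.0, 1.0, 50, 100))   # half the mutation at midpoint
print(mutation_scale(1.0, 1.0, 100, 100))  # no mutation at the last step
```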

Setting the Crossover Fraction

The algorithm generates the best individual it can using these genes at generation number 8, where the best fitness plot levels off. Because the algorithm cannot improve the best fitness value after generation 8, it stalls after 50 more generations, since Stall generations is set to 50.

Comparing Results for Varying Crossover Fractions

In this case, the random changes applied by the algorithm never improve the fitness value of the best individual at the first generation. One way to make the genetic algorithm examine a wider range of points—that is, to increase the diversity of the populations—is to increase the initial range.

Using a Hybrid Function

Setting the Maximum Number of Generations

Note that the algorithm stalls at around generation number 170, that is, there is no immediate improvement in the fitness function after generation 170. If you restore Stall generations to its default value of 50, the algorithm would terminate at approximately generation number 230.

Vectorizing the Fitness Function

The following comparison, run at the command line, shows the improvement in speed with Vectorize set to On. If there are nonlinear constraints, the objective function and the nonlinear constraints must all be vectorized for the algorithm to compute in a vectorized manner.

Constrained Minimization Using ga

Optimization terminated: current tolerance on f(x) 1e-007 is less than options.TolFun and constraint violation is less than options.TolCon.
