
Complexity Analysis of Iterated Local Search Algorithm in Experimental Domain for Optimizing Latin Hypercube Designs

This confirms that the thesis submitted by Parimal Mridha, entitled "Complexity Analysis of an Iterated Local Search Algorithm in an Experimental Domain for the Optimization of Latin Hypercube Designs", has been approved by a committee of examiners in partial fulfillment of the requirements for the Master of Philosophy in the Department of Mathematics, Khulna University of Engineering and Technology, Khulna, Bangladesh, August 2013. I would like to express my deep gratitude to the authority of Khulna University of Engineering and Technology for accepting me into the M.Phil. program. Especially because of the need for real-time solutions, the time complexity of the ILS approach is analyzed.

After the analysis, a model of the time complexity of the algorithms was developed for two optimality criteria, namely Opt(D1, J1) and Opt(Ø).

Table  Caption of the Table  Page

Background

The one-dimensional design property ensures that there is little redundancy of design points when some of the factors have a relatively negligible effect (sparsity principle). Unfortunately, randomly generated LHDs almost always show poor space-filling properties and/or highly correlated factors. On the other hand, maximin distance designs, proposed by [Johnson et al. (1990)], have very good space-filling properties but often lack good projection properties under the Euclidean or rectangular distance.

To overcome this shortcoming, Morris and Mitchell (1997) suggested searching for maximin LHDs when looking for “optimal” designs.

Literature Review

Much improved values (maximin LHD values) are obtained with the ILS approach proposed by Grosso et al. (2009). It is worth mentioning here that in a 2010-2011 project funded by CASR (Committee for Advanced Studies and Research), KUET, the algorithm of Grosso et al. (2009) was partially implemented in a PC Windows environment. The authors also analyzed, in [Jamali et al. (2012)], the multicollinearity of the maximin LHDs obtained by the ILS approach.

Grosso et al. (2009) used the ILS approach for k ≤ 10 and obtained outstanding results, which are kept up to date on the popular web portal www.spacefillingdesigns.nl.

Structure of the Thesis

Introduction

Some Definitions

Constant Time Complexity

Algorithms whose running time is independent of the size of the problem's input are said to have constant time complexity. Note that an algorithm can take different amounts of time on different inputs of the same size.
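As an illustration (not from the thesis), the two Python functions below contrast constant and input-dependent running time: the first performs the same single operation for any list, while the second does work proportional to the list's length.

```python
def first_element(items):
    """O(1): one indexing operation, regardless of len(items)."""
    return items[0]

def total(items):
    """O(n): the loop body runs once per element of items."""
    s = 0
    for x in items:
        s += x
    return s
```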

More usefully, we define the size of the input in a problem-dependent way.

Big-O Notation

For example, if we are dealing with algorithms for multiplying square matrices, we can express the input size as the dimension of the matrix (i.e., the number of rows or columns), or as the number of entries in the matrix. One conclusion from this discussion is that, to properly interpret the function describing the time complexity of an algorithm, we need to be clear about exactly how we measure the size of the input [Nicolas (2007)].

ii) Step: A step of the algorithm can be defined precisely if we fix a specific machine on which the algorithm is to be executed. For example, if we are using a machine with a Pentium processor, we can define a step as one Pentium instruction.

This is not the only reasonable choice: different instructions take different amounts of time, so a more refined definition might be that a step is one processor clock cycle. In general, however, we want to analyze the time complexity of an algorithm without being restricted to any particular machine, so we will consider a step to be anything that we can reasonably expect a computer to do in a fixed amount of time.

In mathematics, it is often important to get a handle on the error term of an approximation. For the formal definition, assume f(x) and g(x) are two functions defined on a subset of the real numbers. The list above is useful because of the following fact: if f(n) is a sum of functions, one of which grows faster than the others, then the fastest-growing one determines the order of f(n).

A caveat here: the number of summands must be constant and may not depend on n. This notation can also be used with multiple variables and with other expressions on the right side of the equals sign.
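Since the displayed formula was lost in extraction, the standard formal definition can be restated as follows, consistent with the surrounding discussion of f(x) and g(x):

```latex
f(x) = O\bigl(g(x)\bigr)
  \iff \exists\, C > 0,\ \exists\, x_0 \in \mathbb{R} :\quad
  |f(x)| \le C\,|g(x)| \ \text{ for all } x \ge x_0 .
```

The dominant-term fact above then gives, for example,

```latex
f(n) = 3n^{2} + 5n\log n + 7 = O(n^{2}),
```

since n^2 grows faster than each of the other summands.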

Figure 2.1: Graphical representation of some functions in terms of time complexity.

Introduction

Iterated Local Search

Iterated Local Search is a metaheuristic designed to embed another, problem-specific local search as if it were a black box. This allows Iterated Local Search to retain a more general structure than other metaheuristics currently in practice. Two main points make an algorithm an iterated local search: (i) a single chain of solutions is followed (this rules out population-based algorithms); (ii) the search for better solutions takes place in a reduced space defined by the output of a black-box heuristic.

In practice, local search has been the most commonly used embedded heuristic, but in fact any optimizer can be used, deterministic or not. The purpose of this review is to give a detailed description of iterated local search and to show where it stands in terms of performance. Local search applied to the initial solution thus gives the starting point s* of the walk in the set S* of local optima.

This iterated local search procedure should lead to good biased sampling as long as the perturbations are neither too small nor too large. In practice, much of the potential complexity of ILS is hidden in the history dependence. In general, the local search should not be able to undo the perturbation; otherwise the search simply falls back to the local optimum it has just visited.

If this is the case, the local search algorithm takes less time to reach the next local optimum. An obvious disadvantage of the Opt(Ø) criterion, if we are interested in maximin values (maximum D1 value), is that LHDs with a smaller (better) Ø may have a worse (smaller) D1; i.e., improving Ø does not guarantee improving D1.

Fig: (c) D1(XM)=8, J1(XM)=2

ILS Heuristic for Maximin LHD

Local Search Procedure (Ls)

To define a local search procedure (Ls), we need to define a notion of neighborhood of a solution. It is observed that the difference between the effects of CP-based and RP-based local search is not significant [Jamali (2009)]. We define the rowwise-pairwise critical local moves (denoted LM_RP,D1) as follows.

The algorithm sequentially selects two points (rows) such that at least one of them is a critical point, and then exchanges two corresponding elements (factors) of the selected pair. Note that if Opt(D1, J1) is the optimality criterion, it makes perfect sense to avoid considering pairs x_i and x_j such that I(X) ∩ {x_i, x_j} = ∅, since a swap involving two non-critical points cannot improve the D1 value of the current LHD. The RP local move for the Opt(Ø) optimality criterion is denoted LM_RP,Ø and is defined as in Eq. (4.8); the only difference is that we drop the requirement that at least one point be critical.

We now illustrate RP-based local moves by considering a randomly generated initial design A with (N, k) = (7, 2) (see Figure 4.2(a)). For the BI (best-improvement) acceptance rule, the best improving neighbor is searched for throughout the whole neighborhood of the current solution. Basically, a perturbation is similar to a local move, except that it must somehow be less local; more precisely, it is a move within a neighborhood larger than the one used in the local search.
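A minimal sketch of such a rowwise-pairwise swap is given below. It assumes, as the integer values quoted in the figure captions suggest, that D1 is the minimal squared Euclidean interpoint distance; the function names are illustrative, not the thesis's implementation. Swapping one column's entries between two rows keeps every column a permutation of the levels, so the result is still an LHD.

```python
from itertools import combinations

def d1(X):
    """Minimal squared Euclidean interpoint distance of design X (rows = points).

    Assumes D1 is measured on squared distances, consistent with the
    integer D1 values quoted in the figures.
    """
    return min(sum((a - b) ** 2 for a, b in zip(p, q))
               for p, q in combinations(X, 2))

def rp_move(X, i, j, col):
    """Rowwise-pairwise move: swap the entries of column `col` between
    rows i and j. Each column stays a permutation, so the LHD property
    is preserved."""
    Y = [list(row) for row in X]
    Y[i][col], Y[j][col] = Y[j][col], Y[i][col]
    return Y
```

For example, on the diagonal design X = [[0,0],[1,1],[2,2],[3,3]], the move rp_move(X, 0, 3, 1) yields [[0,3],[1,1],[2,2],[3,0]], whose second column is still a permutation of 0..3.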

Otherwise the perturbation would be a special case of the local moves used in the local search procedure. Note that SCOE only slightly modifies the current LHD X*, but this exactly follows the spirit of ILS, where the aim is to keep large parts of the current solution unchanged rather than completely breaking its structure.

Fig: (a) D1(Xa)=2, J1(Xa)=3  (b) D1(Xb)=2, J1(Xb)=1  (c) D1(Xc)=8, J1(Xc)=4

Introduction

We notice that in most cases the number of critical points is 1 or 2, and only occasionally is it greater than 6. Thus, we can claim with confidence that the size of the problem has practically no influence on the number of critical points in each configuration visited. Note that, to find the impact on the local search phase, the while-loop count WL is averaged over the corresponding number of perturbations performed.

To arrive at an empirical evaluation of the number of operations required by an ILS run, we still need to evaluate the number of perturbations (and thus local searches) performed during an ILS run. From the experiments (see Figure 4.11) we notice that N has a significant impact on the number of perturbations invoked for all k considered. The number of invoked perturbations appears to grow roughly logarithmically with N (see the dotted curve in Figure 4.11).
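The roughly logarithmic growth noted above can be checked with an ordinary least-squares fit of p ≈ a·ln(N) + b to the measured perturbation counts. The sketch below uses only the standard library; `fit_log` and the sample values are illustrative assumptions, not data from the thesis.

```python
import math

def fit_log(Ns, ps):
    """Least-squares fit of p = a*ln(N) + b to observed counts ps.

    Standard simple-regression formulas applied to x = ln(N);
    returns the slope a and intercept b.
    """
    xs = [math.log(n) for n in Ns]
    m = len(xs)
    mx = sum(xs) / m
    my = sum(ps) / m
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ps))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b
```

Plotting the fitted curve a·ln(N) + b against the measured counts then shows how well the logarithmic model matches.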

We then need to derive a formula for the number of times the while-loop is executed during the local search. To derive the total time complexity of ILS(q), we still need a formula for the number of perturbations (i.e., the number of local searches) performed during each ILS run. From the experiments (see Figure 5.9), we notice that N has a significant influence on the number of perturbations.

From the experiments, we also note that, despite a peak at k = 6, there is no significant effect of k on the number of perturbations invoked during a run (see the bar chart in Figure 5.10). The number of runs for each LHD we considered is given in Table 6.1. It is further observed in Table 6.2(a) that increasing the number of trials does not significantly increase the maximin LHD values.

It is observed that when the number of design points (i.e. N) is small, the maximum correlation coefficient is high.
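The correlation behaviour described here can be measured directly: the sketch below computes the largest absolute Pearson correlation over all pairs of factor columns of a design. The function name and the toy designs are illustrative assumptions, not the thesis's code.

```python
import math
from itertools import combinations

def max_abs_correlation(X):
    """Largest |Pearson correlation| over all column pairs of design X
    (rows = points, columns = factors)."""
    cols = list(zip(*X))

    def corr(u, v):
        n = len(u)
        mu, mv = sum(u) / n, sum(v) / n
        cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
        su = math.sqrt(sum((a - mu) ** 2 for a in u))
        sv = math.sqrt(sum((b - mv) ** 2 for b in v))
        return cov / (su * sv)

    return max(abs(corr(u, v)) for u, v in combinations(cols, 2))
```

For instance, the N = 3 diagonal design [[0,0],[1,1],[2,2]] has maximum correlation 1 (fully collinear factors), while the N = 4 design [[0,0],[1,2],[2,1],[3,3]] already lowers it to 0.8, illustrating how small N keeps the maximum correlation coefficient high.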

Figure 4.1: The percentage of pairs involving and not involving critical points

Figure  Caption of the Figure  Page
Figure 2.2: Schematic view of the field of complexity
Table 3.1: Some well-known approaches as well as optimality criteria for optimal experimental designs
