
7.3. Scheduling problem

The problem of minimizing the performance criterion in expectation can be formulated as a combinatorial optimization problem:

$$\min_{S \in \Omega} \; E_{I \in \mathcal{P}}\big[z_I(S)\big]$$

where Ω is the set of feasible solutions and z the criterion to minimize in expectation.

The first publication about flow-shop scheduling with random processing times appeared in 1965. Makino [MAK 65] developed a sequencing rule to find the schedule that minimizes the expected makespan in a flow-shop with two machines, two jobs, unlimited buffers and exponentially distributed processing times. In 1967, Talwar [TAL 67] extended Makino's rule to the cases of three and four jobs and speculated upon its use for more than four jobs. In 1973, Cunningham and Dutta [CUN 73] proved Talwar's conjecture, referred to in the following as "Talwar's rule".

THEOREM 7.2.– Scheduling jobs in decreasing order of μ1,j − μ2,j minimizes E(Cmax) in a two-machine flow-shop with exponentially distributed processing times.

(μi,j is the rate of the processing time of job Tj, j = 1, . . . , n, on machine Mi, i = 1, 2.)
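As a minimal illustration of Talwar's rule, the following Python sketch (with hypothetical rates) orders the jobs by decreasing μ1,j − μ2,j:

```python
def talwar_sequence(rates):
    """Talwar's rule (theorem 7.2): order jobs by decreasing mu_{1,j} - mu_{2,j}.

    rates: list of (mu1, mu2) pairs, the exponential rates of job Tj on
    machines M1 and M2. Returns the job indices in the recommended order.
    """
    return sorted(range(len(rates)),
                  key=lambda j: rates[j][0] - rates[j][1],
                  reverse=True)

# Hypothetical rates for four jobs T0..T3.
jobs = [(0.5, 0.8), (1.2, 0.4), (0.9, 0.9), (0.3, 1.0)]
print(talwar_sequence(jobs))  # [1, 2, 0, 3]
```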

State-of-the-art surveys on the stochastic flow-shop scheduling problem have been published by Forst [FOR 84], Righter in Chapter 11 of [SHA 94], Pinedo [PIN 95] and Gourgand et al. [GOU 00]. As noted in this last reference and by many authors, the stochastic flow-shop problem was studied from 1965 to 1980, then abandoned by researchers, and has been studied again since the late 1990s. The results presented in the literature correspond to particular cases with restrictive assumptions. Moreover, most works deal with the two-machine flow-shop problem.

Metaheuristics are combinatorial optimization methods which make it possible to deal with highly combinatorial optimization problems with complex constraints, such as technological constraints, routing constraints, etc. Metaheuristics are general principles and can be used for a large class of problems [HAO 99]. One of their main advantages lies in the possibility of controlling the processing time: the quality of the solution is progressively improved and the user can stop the execution at any time. The simulated annealing algorithm has been proved to converge in probability towards an optimal solution [KIR 83], [HAJ 88]. These methods are based on the concept of a neighboring system. The current solution S is modified by applying a


transformation to obtain a new solution S′, called a neighbor of S. The definition of this transformation generates a function which associates with each solution the set of its neighboring solutions. This function is called a neighboring system.

The implementation of a metaheuristic (such as simulated annealing) requires the value of the criterion to be computed at each iteration. To do this, we propose to combine a performance evaluation model and a metaheuristic. The main principle of this combination is described in [CAU 95].

7.3.1. Comparison of two schedules

When using an iterative improvement method, we must compare, at each iteration, two schedules S and S′: S is the current solution and S′ is the candidate solution. If the Markovian model can be used, E(z(S)) and E(z(S′)) are exactly known and can be compared without any problem. The schedule S′ is better than the schedule S if E(z(S′)) ≤ E(z(S)).

Otherwise, we do not know the exact values of E(z(S)) and E(z(S′)), but only estimations z̄(S) and z̄(S′) obtained using the simulation model. In this case, the evaluations zIr(S) and zIr(S′) are computed on the same samples.

When the performance evaluation model is a discrete event simulation model, the proposed algorithms compare schedules according to estimations of the expected performance criterion. As these estimations are means, we propose to extend the comparison process by using the confidence interval of the mean, in the following way.

As E(z(S)) is unknown, we compute an estimation vS² of the variance of z(S):

$$v_S^2 = \frac{\sum_{r=1}^{NbRep}\big(z_{I_r}(S) - \bar{z}(S)\big)^2}{NbRep - 1}$$

vS² is an estimator of the variance, whatever the distribution function. The confidence interval of z̄(S) with 95% probability is:

$$\bar{z}(S) \pm \frac{2\,\sigma_S}{\sqrt{NbRep}} \quad \text{with } \sigma_S \approx v_S$$
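As a minimal Python sketch (with a hypothetical list of replication results), the estimation z̄(S), the variance estimator vS² and the resulting 95% confidence interval can be computed as follows:

```python
import math
import random

def confidence_interval(criterion_values):
    """Estimate the expected criterion of a schedule from NbRep simulation
    replications, with the 95% confidence interval described above.

    criterion_values: list of z_{I_r}(S), one value per replication r.
    Returns (mean, half_width) so that the interval is mean +/- half_width.
    """
    nb_rep = len(criterion_values)
    mean = sum(criterion_values) / nb_rep
    # v_S^2: estimator of the variance, whatever the distribution function
    variance = sum((z - mean) ** 2 for z in criterion_values) / (nb_rep - 1)
    half_width = 2.0 * math.sqrt(variance) / math.sqrt(nb_rep)
    return mean, half_width

# Hypothetical example: 30 replications of a simulated criterion value.
random.seed(0)
values = [100 + random.gauss(0, 5) for _ in range(30)]
m, hw = confidence_interval(values)
print(f"z_bar = {m:.2f}, 95% CI = [{m - hw:.2f}, {m + hw:.2f}]")
```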

In the traditional or first version, the schedule S′, chosen in the neighborhood of S, is better than S if the mean z̄(S′) is lower than the mean z̄(S):

$$S' \text{ is better than } S \quad \text{if} \quad \bar{z}(S') \leq \bar{z}(S)$$


We propose to modify the acceptance criterion of a neighbor by using the confidence interval. Three other versions are proposed.

In the second version, the schedule S′, chosen in the neighborhood of S, is better than S if the mean z̄(S′) is lower than the lower bound of the confidence interval of z̄(S):

$$S' \text{ is better than } S \quad \text{if} \quad \bar{z}(S') < \bar{z}(S) - \frac{2\,v_S}{\sqrt{NbRep}}$$

In the third version, the schedule S′, chosen in the neighborhood of S, is better than S if the upper bound of the confidence interval of z̄(S′) is lower than the lower bound of the confidence interval of z̄(S):

$$S' \text{ is better than } S \quad \text{if} \quad \bar{z}(S') + \frac{2\,v_{S'}}{\sqrt{NbRep}} < \bar{z}(S) - \frac{2\,v_S}{\sqrt{NbRep}}$$

In the fourth version, the schedule S′, chosen in the neighborhood of S, is better than S if the mean z̄(S′) is lower than the mean z̄(S) and if the upper bound of the confidence interval of z̄(S′) is lower than the upper bound of the confidence interval of z̄(S):

$$S' \text{ is better than } S \quad \text{if} \quad \bar{z}(S') + \frac{2\,v_{S'}}{\sqrt{NbRep}} < \bar{z}(S) + \frac{2\,v_S}{\sqrt{NbRep}} \ \text{ and } \ \bar{z}(S') < \bar{z}(S)$$

The four versions are illustrated in Figure 7.9.
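The four acceptance tests can be transcribed directly; the following is a minimal Python sketch, where the names mean_*, v_* and nb_rep are ours and stand for z̄, vS and NbRep:

```python
import math

def is_better(version, mean_new, v_new, mean_cur, v_cur, nb_rep):
    """Acceptance test of the candidate S' against the current schedule S.

    mean_* are the estimated means z_bar, v_* the standard deviations v_S,
    nb_rep the number of replications. Version numbers follow the text.
    """
    half_new = 2.0 * v_new / math.sqrt(nb_rep)   # half-width of the CI of S'
    half_cur = 2.0 * v_cur / math.sqrt(nb_rep)   # half-width of the CI of S
    if version == 1:   # means only
        return mean_new <= mean_cur
    if version == 2:   # mean of S' below the lower bound of the CI of S
        return mean_new < mean_cur - half_cur
    if version == 3:   # upper bound of CI of S' below lower bound of CI of S
        return mean_new + half_new < mean_cur - half_cur
    if version == 4:   # upper bounds compared, plus comparison of the means
        return mean_new + half_new < mean_cur + half_cur and mean_new < mean_cur
    raise ValueError("version must be 1, 2, 3 or 4")
```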

7.3.2. Stochastic descent for the minimization in expectation

The basic algorithm is the stochastic descent (Figure 7.10). The neighbor solution is accepted if it is better (according to section 7.3.1) than the current solution. This algorithm generally allows us to find a local minimum.

The initial solution can be randomly generated or obtained, for example, by a heuristic usually used for the deterministic model. The algorithm stops after a given time or a given number of iterations.
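A minimal Python sketch of the stochastic descent follows; the functions neighbor and better are assumptions standing for the neighboring system V and for one of the comparison rules of section 7.3.1, and the stopping rule is a fixed number of iterations:

```python
def stochastic_descent(initial, neighbor, better, max_iterations=10_000):
    """Stochastic descent (minimal sketch, see Figure 7.10).

    initial  : starting schedule S
    neighbor : draws S' uniformly in the neighborhood V(S)
    better   : comparison of section 7.3.1 (any of the four versions)
    """
    current = initial
    for _ in range(max_iterations):        # "while necessary"
        candidate = neighbor(current)      # choose S' in V(S)
        if better(candidate, current):     # accept only improving moves
            current = candidate
    return current                         # generally a local minimum
```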

7.3.3. Inhomogeneous simulated annealing for the minimization in expectation

The simulated annealing method (Figure 7.11) was proposed in 1983 by Kirkpatrick [KIR 83] for solving combinatorial optimization problems. Simulated annealing is an extended version of the stochastic descent.


Figure 7.9. Comparison of two solutions S and S′

Let S be an initial solution
while necessary do
    Choose S′ randomly and uniformly in the neighborhood V of S.
    if S′ is better than S then
        S := S′
    end if
end while
S is the solution of the method

Figure 7.10. Principle algorithm of the stochastic descent

It looks for solutions with low cost while accepting, in a controlled way, solutions which may degrade the objective function. At each iteration, a neighbor S′ ∈ V(S) of the current solution S is randomly chosen. If S′ is better than S, then the solution S′ is systematically accepted. Otherwise, S′ is accepted with a probability p(Δz, Tk) which depends on the difference Δz = E(z(S′)) − E(z(S)) (small deteriorations are more easily accepted) and on the temperature Tk (a high temperature corresponds to a high probability of accepting deteriorations). The temperature is controlled by a decreasing function which defines a cooling scheme.


Let S be an initial solution, T0 an initial temperature, Sbest := S
while necessary do
    Choose S′ randomly and uniformly in the neighborhood V of S.
    if S′ is better than S then
        S := S′
    else if exp(−Δz/Tk) > random[0, 1] then
        S := S′
    end if
    if S is strictly better than Sbest then
        Sbest := S
    end if
    Compute Tk+1
    k := k + 1
end while
Sbest is the solution of the method

Figure 7.11. Principle algorithm of inhomogeneous simulated annealing

Hajek proved in [HAJ 88] that inhomogeneous simulated annealing converges in probability towards the set of optimal solutions under certain hypotheses on the temperature and on the properties of the neighboring system.

When the Markovian model can be used, E(z(S)) and E(z(S′)) are computed by using theorem 7.1:

$$\Delta z = E\big(z(S')\big) - E\big(z(S)\big)$$

otherwise, E(z(S)) and E(z(S′)) are estimated by using the algorithm described in Figure 7.8:

$$\Delta z = \bar{z}(S') - \bar{z}(S)$$
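A minimal Python sketch of inhomogeneous simulated annealing follows; expected_cost stands for E(z(·)) or its estimation z̄(·), and the geometric cooling factor is an assumption of ours, not necessarily the cooling scheme used by the authors:

```python
import math
import random

def simulated_annealing(initial, neighbor, expected_cost, t0,
                        cooling=0.95, max_iterations=10_000):
    """Inhomogeneous simulated annealing (minimal sketch, see Figure 7.11).

    expected_cost(S) stands for E(z(S)) (Markovian model) or its estimation
    z_bar(S) (simulation model); t0 is the initial temperature and `cooling`
    a hypothetical geometric factor defining T_{k+1} = cooling * T_k.
    """
    current, best = initial, initial
    temperature = t0
    for _ in range(max_iterations):
        candidate = neighbor(current)
        delta = expected_cost(candidate) - expected_cost(current)   # Δz
        if delta <= 0:                                    # S' better than S
            current = candidate
        elif math.exp(-delta / temperature) > random.random():
            current = candidate                           # controlled degradation
        if expected_cost(current) < expected_cost(best):  # strictly better
            best = current
        temperature *= cooling                            # compute T_{k+1}
    return best
```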

7.3.4. Kangaroo algorithm for the minimization in expectation

The kangaroo algorithm (Figure 7.12) was proposed in [FLE 93] with the aim of avoiding the parameter tuning of simulated annealing and the convergence towards a local optimum of the stochastic descent. The algorithm allows us, after a stochastic descent without improvement for a given number of


Let A > 0 be a maximum number of iterations without improvement
Let S be an initial solution
k := 0, Sbest := S
while necessary do
    if k < A then [stochastic descent]
        Choose S′ randomly and uniformly in the neighborhood V of S.
        if S′ is better than S then
            k := 0
            if S′ is better than Sbest then
                Sbest := S′
            end if
            S := S′
        else
            k := k + 1
        end if
    else [no improvement for A iterations]
        Choose S′ randomly and uniformly in the neighborhood W of S.
        S := S′
        k := 0
    end if
end while
Sbest is the solution of the method

Figure 7.12. Kangaroo principle algorithm

iterations, to accept any solution whatever the value of the objective function, and to restart a stochastic descent. Compared to the iterated stochastic descent, this algorithm does not lose the information about the local optimum found.

After a stochastic descent with a neighboring system V, if there has been no improvement for A iterations, any neighbor in the neighborhood W is accepted. The neighboring system W is not necessarily the same as V, but it must respect the accessibility property.

The choice of a neighbor in the neighboring system W is called a "jump".

The neighboring system W can be chosen in many ways. The easiest choice is W = V, a jump then corresponding to the acceptance of a "bad" solution. If W(S) = Ω, ∀S ∈ Ω, the algorithm is the iterated stochastic descent. Another possibility consists of applying the neighboring system V several times, which amounts to accepting, without condition, several successive transitions. The neighboring system W can also be defined independently.
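A minimal Python sketch of the kangaroo algorithm follows; neighbor_v, neighbor_w and better are assumptions standing for the neighboring systems V and W and for the comparison of section 7.3.1:

```python
def kangaroo(initial, neighbor_v, neighbor_w, better, a_max,
             max_iterations=10_000):
    """Kangaroo algorithm (minimal sketch, see Figure 7.12).

    neighbor_v : draws S' in V(S) for the stochastic descent phase
    neighbor_w : draws S' in W(S) for the jumps (W must respect accessibility)
    better     : comparison of section 7.3.1
    a_max      : A, maximum number of iterations without improvement
    """
    current, best = initial, initial
    k = 0
    for _ in range(max_iterations):
        if k < a_max:                        # stochastic descent phase
            candidate = neighbor_v(current)
            if better(candidate, current):
                k = 0
                if better(candidate, best):
                    best = candidate
                current = candidate
            else:
                k += 1
        else:                                # no improvement for A iterations: jump
            current = neighbor_w(current)
            k = 0
    return best
```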

Contrary to the iterated local search, we obtain the following theoretical result:


THEOREM 7.3.– [FLE 93] The kangaroo algorithm converges in probability towards the set of global minima if and only if the neighboring system W respects the accessibility property.

The proof of convergence lies in the fact that the kangaroo algorithm builds a Markov chain in which any state can lead to an absorbing state, and the absorbing states constitute the set of global optima.

7.3.5. Neighboring systems

Classical neighboring systems for the flow-shop problem are permutations and insertions:

– Pj,j+1: permutation of two contiguous jobs Tj and Tj+1 randomly chosen,
– Pj,j′: permutation of any two jobs Tj and Tj′ randomly chosen,
– Ij,j′: insertion of a job Tj randomly chosen at a new position j′ randomly chosen.

They satisfy the accessibility and reversibility properties. For the kangaroo algorithm, we use, for the neighboring system W, the neighboring system V applied five times. In the following, we use the neighboring system Ij,j′, which provides better results than the others.
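The three classical neighboring systems, and a system W obtained by applying V five times, can be sketched in Python as follows (a schedule is represented as a list of job indices; the helper names are ours):

```python
import random

def permutation_contiguous(schedule):
    """P_{j,j+1}: swap two contiguous jobs chosen at random."""
    s = list(schedule)
    j = random.randrange(len(s) - 1)
    s[j], s[j + 1] = s[j + 1], s[j]
    return s

def permutation_any(schedule):
    """P_{j,j'}: swap any two jobs chosen at random."""
    s = list(schedule)
    j, jp = random.sample(range(len(s)), 2)
    s[j], s[jp] = s[jp], s[j]
    return s

def insertion(schedule):
    """I_{j,j'}: remove a randomly chosen job and reinsert it at a random position."""
    s = list(schedule)
    j = random.randrange(len(s))
    job = s.pop(j)
    jp = random.randrange(len(s) + 1)
    s.insert(jp, job)
    return s

def neighbor_w(schedule, times=5):
    """Neighboring system W for the kangaroo jumps: V (here insertion) applied five times."""
    for _ in range(times):
        schedule = insertion(schedule)
    return schedule
```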
