
Sequencing for Stochastic Scheduling

6.4 Minimizing the Maximum Cost

In this section, we examine the stochastic counterpart of the Tmax problem, or its more general form, minimizing the expected maximum cost. (Recall the problem described in Section 3.1.) At the outset, keep in mind that the expected maximum cost is not necessarily identical to the maximum expected cost.

However, minimizing the latter objective appears to be easier. For minimizing the maximum expected cost, Z = max{E[g1(C1)], E[g2(C2)], …, E[gn(Cn)]}, the solution is given by a direct generalization of Theorem 3.1.

Theorem 6.4 When the objective is to minimize the maximum expected cost, job i may be assigned the last position in sequence if E[gi(P)] ≤ E[gk(P)] for all jobs k ≠ i, where P denotes the time to complete all jobs.

Proof. By assumption, processing times do not depend on the job sequence (condition C2), so the distribution of P does not depend on the sequence.

Because gi is nondecreasing, E[gi(t)] is also nondecreasing in t. Therefore, we can replace gi(P) by E[gi(P)] for all i and apply the reasoning in the proof of Theorem 3.1.

Now consider the following special case:

gj(Cj) = 1 if Cj > dj
gj(Cj) = 0 if Cj ≤ dj

Here, we have

Pr{Cj > dj} = Pr{job j is tardy} = E[gj(Cj)]

This set of relationships proves the following corollary of Theorem 6.4.

Corollary 6.1 The EDD sequence minimizes the maximum tardiness probability.

As another way of looking at this result, suppose we define the service level for job j as Pr{Cj ≤ dj}, the probability that the job is on time. Then Corollary 6.1 also states that the EDD sequence maximizes the minimum service level.
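As a small illustration of Corollary 6.1, service levels can be estimated from sampled processing times. The data and names below (SAMPLES, DUE, service_levels) are hypothetical, invented purely for this sketch:

```python
DUE = [3, 6]                      # due dates, jobs already in EDD order
SAMPLES = [[2, 3],                # realization 1: p1 = 2, p2 = 3
           [3, 4]]                # realization 2: p1 = 3, p2 = 4

def service_levels(sequence, samples, due):
    """Estimate Pr{Cj <= dj} for each job from equally likely sample rows."""
    on_time = [0] * len(due)
    for row in samples:
        t = 0.0
        for j in sequence:         # jobs are 0-indexed here
            t += row[j]
            if t <= due[j]:
                on_time[j] += 1
    return [c / len(samples) for c in on_time]

edd = service_levels([0, 1], SAMPLES, DUE)   # EDD sequence
alt = service_levels([1, 0], SAMPLES, DUE)   # the other sequence
print(min(edd), min(alt))   # EDD's worst service level beats the alternative's
```

In this toy instance the EDD sequence attains a minimum service level of 0.5, while the reverse sequence drives job 1's service level to zero.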

To exploit Theorem 6.4, we still need a procedure to implement the result of the theorem as it applies to sequencing, and we can use the sample-based approach. To solve an instance with a given sample, we initially take P as the sum of the n elements in each row. If we calculate gi(P) for each job in each row, then the average of these results estimates E[gi(P)]. At the first scheduling stage, we can select the job with the minimal average and schedule it last. At the next scheduling stage, P is reduced for each row by the processing time of the


job that has just been scheduled. To illustrate this procedure in a numerical example, we introduce the following form of the cost function gj(t):

gj(t) = δ(t − dj)(aj + bj(t − dj))

where aj, bj ≥ 0 and aj + bj > 0; δ(x) = 1 if x > 0, and δ(x) = 0 otherwise. Equivalently, gj(Tj) = δ(Tj)(aj + bjTj). By selecting the parameters aj and bj appropriately, we can produce a variety of models. For example, if aj = 0 and bj > 0 for all j, then the cost is equal to a job's weighted tardiness. If bj = 0 and aj > 0 for all j, then the cost is equal to a job's weight if it is tardy. The special case bj = 0 and aj = 1 for all j corresponds to the U-problem.
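This cost function translates directly into code. The sketch below (the function name g is ours) transcribes the definition and checks the special cases just described:

```python
def g(t, d, a, b):
    """Cost delta(t - d) * (a + b*(t - d)) of finishing at time t
    with due date d and penalty parameters a, b."""
    tardiness = t - d
    return a + b * tardiness if tardiness > 0 else 0.0

print(g(10, 8, 0.0, 2.0))   # a = 0: weighted tardiness, 2.0 * (10 - 8) -> 4.0
print(g(10, 8, 5.0, 0.0))   # b = 0: fixed weight a if tardy -> 5.0
print(g(10, 8, 1.0, 0.0))   # a = 1, b = 0: the U-problem (unit penalty) -> 1.0
print(g(7, 8, 1.0, 0.0))    # on time: zero cost -> 0.0
```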

Example 6.3 Consider a problem containing n = 5 jobs with stochastic processing times. The due date and expected processing time for each job are shown in the following table.

Job j    1   2   3   4   5
E[pj]    3   4   5   6   7
dj       8   5  15  20  12

Furthermore, the processing time distributions are the same as in Example 6.1, with four equally likely states of nature.

State   Job j   1     2     3     4     5
GG      pj      2.6   3.5   3.8   3.2   6.4
GB      pj      2.8   3.9   4.4   5.5   6.6
BG      pj      3.2   4.1   5.6   6.5   7.4
BB      pj      3.4   4.5   6.2   8.8   7.6

In addition, the parameters of the cost function gj(Tj) = δ(Tj)(aj + bjTj) are given in the following table.

Job j   1     2     3     4     5
aj      2.0   3.0   4.0   5.0   1.0
bj      0.8   0.4   0.1   0.2   0.3

The analysis for the stochastic data of Example 6.3 is summarized below. Each stage shows the cost for every relevant job and state combination. Once a job is placed in the sequence, it is no longer under consideration for subsequent stages.


The last of these stages is trivial, because only one job remains. The procedure is the same at each stage – only the set of jobs under consideration changes. As shown in Table 6.8, the optimal sequence is 2-1-3-5-4. At each stage (that is, in each table), the choice is based on the minimum expected cost in the bottom row, and this value is shown in bold. The maximum of these values (4.8) gives the optimal value of max{E[gi(P)]}. By way of comparison, the optimal sequence in the deterministic counterpart is different (2-1-3-4-5), and the maximum cost is 4.9. (For that sequence, the value of the maximum expected cost is also 4.9.) But we still don't know the optimal value of the expected maximum cost.
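The stagewise procedure can be sketched in a few lines of code, using the data of Example 6.3 (variable names are ours). At each stage it estimates E[gj(P)] for every remaining job by averaging over the four sample rows, schedules the minimizer last, and shrinks P row by row:

```python
# Sample rows: one equally likely processing-time realization per state.
P = [[2.6, 3.5, 3.8, 3.2, 6.4],   # state GG
     [2.8, 3.9, 4.4, 5.5, 6.6],   # state GB
     [3.2, 4.1, 5.6, 6.5, 7.4],   # state BG
     [3.4, 4.5, 6.2, 8.8, 7.6]]   # state BB
d = [8, 5, 15, 20, 12]
a = [2.0, 3.0, 4.0, 5.0, 1.0]
b = [0.8, 0.4, 0.1, 0.2, 0.3]

def g(t, j):
    """Cost of job j completing at time t: delta(t - dj)(aj + bj(t - dj))."""
    tard = t - d[j]
    return a[j] + b[j] * tard if tard > 0 else 0.0

remaining = list(range(5))
totals = [sum(row) for row in P]   # P = time to complete all jobs, per row
sequence = []                      # built from the last position backward
stage_minima = []
while remaining:
    # estimated expected cost of putting each remaining job last
    avg = {j: sum(g(totals[r], j) for r in range(4)) / 4 for j in remaining}
    best = min(remaining, key=lambda j: avg[j])
    stage_minima.append(avg[best])
    sequence.append(best)
    remaining.remove(best)
    totals = [totals[r] - P[r][best] for r in range(4)]  # shrink P per row

sequence.reverse()                 # first-to-last order
print([j + 1 for j in sequence])   # -> [2, 1, 3, 5, 4]
print(max(stage_minima))           # ~4.775, reported as 4.8 in Table 6.8
```

The run reproduces the sequence 2-1-3-5-4 and the bold stage minima of Table 6.8 (up to the one-decimal rounding used in the tables).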

We can identify the optimal sequence for both expected maximum cost and maximum expected cost in one special case. This case occurs when all the cost functions are ordered such that for any two jobs, i and k, and for all t ≥ 0, either gi(t) ≥ gk(t) or gk(t) ≥ gi(t). In other words, no two cost functions intersect each other. When the functions are ordered, their order dictates the optimal sequence. We have already encountered a special case of this result in the optimality of EDD for Tmax and Lmax. More generally, we have the following dominance property.

Theorem 6.5 Consider two jobs, i and k. If gi(t) ≥ gk(t) for any t ≥ 0 and the objective is to minimize the expected maximum cost, then there exists an optimal sequence in which job i precedes job k.

Proof. Assume an optimal solution exists in which job k precedes job i, with other jobs possibly sequenced between them. For any set of n nonnegative processing time realizations, we obtain gi(Ci) ≥ gk(Ci) ≥ gk(Ck). The first inequality holds by the hypothesis of the theorem; the second inequality holds because job i completes after job k by assumption. If we insert job k after job i, letting Ck′ and Ci′ denote the completion times after this resequencing, then Ck′ = Ci and Ci′ < Ci. It follows that gk(Ck′) ≤ gi(Ci) and gi(Ci′) ≤ gi(Ci), so the objective function cannot increase. Because that is true for any possible set of realizations, the result does not depend on the processing time distributions.

Corollary 6.2 Consider two jobs, i and k. If gi(t) ≥ gk(t) for any t ≥ 0 and the objective is to minimize the maximum expected cost, then there exists an optimal sequence in which job i precedes job k.

Corollary 6.2 holds because the proof of the theorem also implies that the maximum expected value cannot be larger in another sequence. But, unless all cost functions are ordered, it would be a mistake to assume that the same sequence minimizes both objective functions. An example helps to underscore this point.


Table 6.8

Stage 1 (select job 4 as [5])

Job j      1     2     3     4     5
GG        11.2   8.8   4.5   0.0   3.3
GB        14.2  10.3   4.8   5.6   4.4
BG        17.0  11.7   5.2   6.4   5.4
BB        20.0  13.2   5.6   7.1   6.6
Expected  15.6  11.0   5.0   4.8   4.9

Stage 2 (select job 5 as [4])

Job j      1     2     3     4     5
GG         8.6   7.5   4.1   N/A   2.3
GB         9.8   8.1   4.3   N/A   2.7
BG        11.8   9.1   4.5   N/A   3.5
BB        13.0   9.7   4.7   N/A   3.9
Expected  10.8   8.6   4.4   N/A   3.1

Stage 3 (select job 3 as [3])

Job j      1     2     3     4     5
GG         3.5   5.0   0.0   N/A   N/A
GB         4.5   5.4   0.0   N/A   N/A
BG         5.9   6.2   0.0   N/A   N/A
BB         6.9   6.6   0.0   N/A   N/A
Expected   5.2   5.8   0.0   N/A   N/A

Stage 4 (select job 1 as [2])

Job j      1     2     3     4     5
GG         0.0   3.4   N/A   N/A   N/A
GB         0.0   3.7   N/A   N/A   N/A
BG         0.0   3.9   N/A   N/A   N/A
BB         0.0   4.2   N/A   N/A   N/A
Expected   0.0   3.8   N/A   N/A   N/A

Stage 5 (select job 2 as [1])

Job j      1     2     3     4     5
GG         N/A   0.0   N/A   N/A   N/A
GB         N/A   0.0   N/A   N/A   N/A
BG         N/A   0.0   N/A   N/A   N/A
BB         N/A   0.0   N/A   N/A   N/A
Expected   N/A   0.0   N/A   N/A   N/A


Example 6.4 Consider the scheduling of two jobs, 1 and 2, with random processing times and with the following generic cost function parameters.

Job j   1     2

dj 0 2

aj 0 0

bj 0.7 2

The processing time distributions of the two jobs are independent and identically distributed as follows.

State   Job j   1   2   Probability

A pj 1 1 0.5

B pj 2 2 0.5

There are two possible sequences, 1-2 and 2-1. For each possible sequence, there are four equally likely configurations of the processing times for jobs 1 and 2: AA, AB, BA, and BB. If we make the required calculations, we find the following:

Sequence 1-2 has a maximum expected cost of 2.

Sequence 1-2 has an expected maximum cost of 2.175.

Sequence 2-1 has a maximum expected cost of 2.1.

Sequence 2-1 has an expected maximum cost of 2.1.

Thus, for minimizing the maximum expected cost, the optimal sequence is 1-2, and the optimal value is 2. However, for minimizing the expected maximum cost, the optimal sequence is 2-1, and the optimal value is 2.1. Example 6.4 demonstrates the following proposition.
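The figures quoted in Example 6.4 can be checked by enumerating the four configurations directly. This sketch (function names are ours) computes both objectives for each sequence:

```python
from itertools import product

d = [0, 2]
b = [0.7, 2.0]          # a1 = a2 = 0, so cost is bj * tardiness

def costs(sequence, p):
    """Per-job costs for one realization p = (p1, p2)."""
    t, c = 0.0, [0.0, 0.0]
    for j in sequence:
        t += p[j]
        c[j] = b[j] * max(t - d[j], 0.0)
    return c

def objectives(sequence):
    """Return (maximum expected cost, expected maximum cost)."""
    # four equally likely configurations AA, AB, BA, BB
    realizations = [costs(sequence, p) for p in product([1, 2], [1, 2])]
    max_expected = max(sum(c[j] for c in realizations) / 4 for j in (0, 1))
    expected_max = sum(max(c) for c in realizations) / 4
    return max_expected, expected_max

print(objectives([0, 1]))   # sequence 1-2: ~(2.0, 2.175)
print(objectives([1, 0]))   # sequence 2-1: ~(2.1, 2.1)
```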

Proposition 6.1 The sequences that minimize the maximum expected cost and the expected maximum cost are not necessarily identical.

Although the optimal sequences need not be identical, a useful relationship exists between them. We state it here but defer the proof until the next section.

Theorem 6.6 Suppose S1 and S2 are two sequences (not necessarily distinct) that minimize the maximum expected cost and the expected maximum cost, respectively. Let ZL and ZU denote the maximum expected cost and the expected maximum cost of S1. Then Z2, the objective function value of S2, satisfies ZL ≤ Z2 ≤ ZU.

For instance, in Example 6.4, ZL = 2.0 ≤ Z2 = 2.1 ≤ ZU = 2.175. The problem of minimizing the expected maximum cost does not satisfy the optimality


principle, and therefore we cannot solve it by dynamic programming. This is the case because the objective is neither additive nor can it be transformed into an additive objective. Theorem 6.6, however, allows us to use a branch-and-bound approach, with ZL and ZU providing lower and upper bounds for partial sequences.
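Both bounds can be estimated for any candidate sequence from the same sample rows, which is what a branch-and-bound implementation would do at each node. A minimal sketch, on hypothetical data (the function name bounds and all values below are ours):

```python
def bounds(sequence, samples, g):
    """Return (Z_L, Z_U) for one sequence: the maximum expected cost and
    the expected maximum cost, estimated from equally likely sample rows."""
    per_job = {j: [] for j in sequence}
    row_maxima = []
    for row in samples:
        t, worst = 0.0, 0.0
        for j in sequence:
            t += row[j]
            c = g(t, j)
            per_job[j].append(c)
            worst = max(worst, c)
        row_maxima.append(worst)
    z_l = max(sum(cs) / len(samples) for cs in per_job.values())
    z_u = sum(row_maxima) / len(samples)
    return z_l, z_u

# hypothetical two-job data: due dates, linear tardiness weights, sample rows
due, w = [4, 9], [1.0, 0.5]
g = lambda t, j: w[j] * max(t - due[j], 0.0)
samples = [[5, 2], [3, 7], [6, 5]]

z_l, z_u = bounds([0, 1], samples, g)
assert z_l <= z_u   # a maximum of means never exceeds a mean of maxima
print(z_l, z_u)
```

The inequality z_l ≤ z_u holds for every sequence and every sample, which is what makes the pair usable as lower and upper bounds.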