
7.2 Meeting Service Level Targets

7.2.1 Sample-based Analysis

In this section, we consider setting due dates to meet a given set of service-level targets. Recall from Chapter 2 (Section 2.5.1) that when we can set due dates, we generally want them to be as tight as possible; that is, we wish to minimize

D = \sum_{j=1}^{n} d_j    (7.1)

while maintaining stochastic feasibility. As an example, we consider a model with a discrete probability distribution for each of the processing times. We use the sample-based approach introduced in Chapter 6.

Example 7.1 Consider a problem containing n = 5 jobs, each with its own service-level target. The stochastic nature of processing times is represented by 10 distinct states of nature, and for each state the processing time of each job is known. The given information is shown in the following table. The problem is to find due dates for the jobs that are as tight as possible while meeting each job's service-level target.

Job j            1      2      3      4      5
E(p_j)         3.00   4.00   4.02   4.04   5.00
Service level   90%    70%    60%    80%    60%
State 1        2.60   2.55   3.50   1.05   3.90
State 2        3.12   4.75   4.20   3.95   5.00
State 3        2.76   3.03   3.70   3.15   4.30
State 4        3.18   5.05   4.35   4.55   5.40
State 5        3.28   5.00   4.30   6.35   5.90
State 6        2.68   2.61   3.60   1.15   4.15
State 7        2.86   2.86   3.80   3.35   4.65
State 8        3.26   4.90   4.25   5.95   5.75
State 9        2.94   4.15   4.10   3.75   4.80
State 10       3.32   5.10   4.40   7.15   6.15

Suppose we fix the job sequence by taking the jobs in nondecreasing order of their expected processing times, or 1-2-3-4-5. (This is the SEPT sequence, as defined in Chapter 6.) Knowledge of the job sequence allows us to calculate the completion time for each job in sequence for each of the 10 states. For


job j, and for any particular state, the sum of the first j processing times in the corresponding row of the sample yields C_j, as shown in Figure 7.1. In general, let C_j(k) denote the value of the kth element in this list when it is sorted smallest to largest. For the purpose of exposition, we ignore the possibility of ties in the sorted list. Now suppose that we set d_j = C_j(k), for some integer k. Then job j is early in (k − 1) rows; it is exactly on time in one row (namely, row k); and it is tardy in the remaining (r − k) rows, where r denotes the number of rows in the sample. In general, even with ties, the service-level constraint is satisfied by setting

d_j = C_j(\lceil b_j r \rceil)    (7.2)

where \lceil x \rceil denotes the smallest integer greater than or equal to x. In Eq. (7.2) this integer is the rank of the completion time among the sorted values for job j.

Furthermore, any earlier due date violates stochastic feasibility, and any later due date is not as tight as possible.

The calculations are summarized in Figure 7.1, which shows the completion times for each state. For example, to calculate the due date for job 4, its required service level of 80% leads us to the eighth-ranked completion time (in ascending order) out of the 10 in the column of completion times corresponding to job 4.

The eighth smallest value is 18.36. The due dates corresponding to the other service-level targets are shown in the table, leading to D = 62.89.
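To make the rank calculation concrete, here is a short Python sketch (the function and variable names are our own illustration, not from the text) that reproduces Example 7.1 for the fixed sequence 1-2-3-4-5, using Eq. (7.2) to select the completion time of rank ⌈b_j r⌉ for each job:

```python
from math import fsum

# Processing-time sample for Example 7.1: each row is one of the r = 10
# equally likely states of nature; columns are jobs 1-5.
SAMPLE = [
    [2.60, 2.55, 3.50, 1.05, 3.90],
    [3.12, 4.75, 4.20, 3.95, 5.00],
    [2.76, 3.03, 3.70, 3.15, 4.30],
    [3.18, 5.05, 4.35, 4.55, 5.40],
    [3.28, 5.00, 4.30, 6.35, 5.90],
    [2.68, 2.61, 3.60, 1.15, 4.15],
    [2.86, 2.86, 3.80, 3.35, 4.65],
    [3.26, 4.90, 4.25, 5.95, 5.75],
    [2.94, 4.15, 4.10, 3.75, 4.80],
    [3.32, 5.10, 4.40, 7.15, 6.15],
]
SERVICE = [90, 70, 60, 80, 60]  # service-level targets b_j, in percent

def tightest_due_dates(sequence, sample, service):
    """Tightest stochastically feasible due dates for a fixed job sequence.

    For each job j, collect its completion time in every sampled state,
    sort the r values, and take the one of rank ceil(b_j * r) -- Eq. (7.2).
    Jobs are numbered 1..n, as in the text.
    """
    r = len(sample)
    due = {}
    for pos, job in enumerate(sequence):
        # Completion time of `job` in each state: sum of the processing
        # times of the first pos+1 jobs of the sequence in that row.
        completions = sorted(
            sum(row[k - 1] for k in sequence[: pos + 1]) for row in sample
        )
        rank = -(-service[job - 1] * r // 100)  # integer ceil(b_j * r)
        due[job] = completions[rank - 1]
    return due

due = tightest_due_dates([1, 2, 3, 4, 5], SAMPLE, SERVICE)  # SEPT sequence
D = fsum(due.values())
```

Running this sketch recovers d_4 = 18.36 and D = 62.89, matching the values computed above.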

Figure 7.1 Detailed calculations for Example 7.1.

As this example illustrates, we can determine the tightest due dates that meet service-level targets provided we already know the job sequence. The sample-based approach allows us to handle cases in which the processing times are not independent. (They are highly correlated in Example 7.1.) Furthermore, when the sample is produced by simulation, it can be drawn from any distribution, thus making the sample-based approach widely applicable. However, the more challenging problem is to find the optimal sequence, a problem that we discuss later. For the time being, we can easily imagine some logical heuristic rules for sequencing the jobs.

As noted, the sequence we evaluated corresponds to SEPT. This rule minimizes expected total flowtime (Theorem 6.1). We might observe, however, that SEPT uses information about means but ignores information about variability and service-level targets, so it omits some potentially relevant information. With a bit more calculation effort, we can sequence the jobs using a so-called greedy algorithm (see Chapter 2, Section 4.2) by augmenting a partial sequence with the job that produces the smallest increment to the objective function. For minimizing D subject to stochastic feasibility, this is equivalent to selecting the unscheduled job with the earliest due date (EDD) if it were to come next. We refer here to the optimal due date for the job, recognizing that this value depends on which jobs have previously been scheduled. For this reason, the EDD rule is a dynamic rule. In Example 7.1, EDD yields the sequence 1-3-5-2-4, which achieves D = 65.01.
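The greedy (dynamic EDD) heuristic is easy to sketch in the same sample-based setting. The Python code below is our own illustration (names are ours): at each stage it appends the unscheduled job whose tightest feasible due date, if scheduled next, would be smallest.

```python
# Data from Example 7.1: rows are the 10 equally likely states, columns are jobs 1-5.
SAMPLE = [
    [2.60, 2.55, 3.50, 1.05, 3.90],
    [3.12, 4.75, 4.20, 3.95, 5.00],
    [2.76, 3.03, 3.70, 3.15, 4.30],
    [3.18, 5.05, 4.35, 4.55, 5.40],
    [3.28, 5.00, 4.30, 6.35, 5.90],
    [2.68, 2.61, 3.60, 1.15, 4.15],
    [2.86, 2.86, 3.80, 3.35, 4.65],
    [3.26, 4.90, 4.25, 5.95, 5.75],
    [2.94, 4.15, 4.10, 3.75, 4.80],
    [3.32, 5.10, 4.40, 7.15, 6.15],
]
SERVICE = [90, 70, 60, 80, 60]  # targets b_j, in percent

def greedy_edd(sample, service):
    """Greedy (dynamic EDD) heuristic: repeatedly append the unscheduled
    job whose tightest feasible due date, if it came next, is smallest."""
    r, n = len(sample), len(sample[0])
    base = [0.0] * r  # completion time of the current partial sequence, per state
    sequence, D = [], 0.0
    remaining = set(range(1, n + 1))
    while remaining:
        best_job, best_d = None, None
        for j in remaining:
            completions = sorted(base[s] + sample[s][j - 1] for s in range(r))
            rank = -(-service[j - 1] * r // 100)  # ceil(b_j * r)
            d = completions[rank - 1]
            if best_d is None or d < best_d:
                best_job, best_d = j, d
        sequence.append(best_job)
        remaining.remove(best_job)
        base = [base[s] + sample[s][best_job - 1] for s in range(r)]
        D += best_d
    return sequence, D

sequence, D = greedy_edd(SAMPLE, SERVICE)
```

On the data of Example 7.1, this sketch produces the sequence 1-3-5-2-4 with D = 65.01, as stated in the text.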

A sequence is called adjacent pairwise interchange (API) stable if it is optimal in its API neighborhood. API stability is a necessary condition for optimality, but the SEPT sequence may not be API stable. In our example, the SEPT sequence, 1-2-3-4-5, is not API stable, but we can achieve an improvement by finding an API-stable sequence starting with SEPT. The best possible improvement is obtained by interchanging jobs 2 and 3, thus obtaining 1-3-2-4-5, with an objective function value of 62.39. With this sequence as the new seed, we achieve a further improvement by interchanging jobs 2 and 4, to obtain 1-3-4-2-5, with an objective function value of 62.21. Finally, interchanging jobs 3 and 4 yields the API-stable sequence 1-4-3-2-5, with an objective function value of 61.91. As it happens, this sequence is optimal.
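This API search can be sketched as a best-improvement local search over adjacent swaps. The code below is an illustrative Python implementation (names are ours): it evaluates D for a candidate sequence via Eq. (7.2) and repeatedly applies the adjacent interchange that lowers D the most, until the sequence is API stable.

```python
# Data from Example 7.1: rows are the 10 equally likely states, columns are jobs 1-5.
SAMPLE = [
    [2.60, 2.55, 3.50, 1.05, 3.90],
    [3.12, 4.75, 4.20, 3.95, 5.00],
    [2.76, 3.03, 3.70, 3.15, 4.30],
    [3.18, 5.05, 4.35, 4.55, 5.40],
    [3.28, 5.00, 4.30, 6.35, 5.90],
    [2.68, 2.61, 3.60, 1.15, 4.15],
    [2.86, 2.86, 3.80, 3.35, 4.65],
    [3.26, 4.90, 4.25, 5.95, 5.75],
    [2.94, 4.15, 4.10, 3.75, 4.80],
    [3.32, 5.10, 4.40, 7.15, 6.15],
]
SERVICE = [90, 70, 60, 80, 60]  # targets b_j, in percent

def total_D(seq, sample, service):
    """Sum of the tightest feasible due dates, Eq. (7.2), for a full sequence."""
    r = len(sample)
    D, base = 0.0, [0.0] * r
    for job in seq:
        base = [base[s] + sample[s][job - 1] for s in range(r)]
        rank = -(-service[job - 1] * r // 100)  # ceil(b_j * r)
        D += sorted(base)[rank - 1]
    return D

def api_search(seq, sample, service):
    """Best-improvement API search: repeatedly apply the adjacent
    interchange that lowers D the most, until the sequence is API stable."""
    seq, best = list(seq), total_D(seq, sample, service)
    while True:
        swaps = [seq[:i] + [seq[i + 1], seq[i]] + seq[i + 2:]
                 for i in range(len(seq) - 1)]
        cand = min(swaps, key=lambda s: total_D(s, sample, service))
        cand_D = total_D(cand, sample, service)
        if cand_D < best - 1e-9:
            seq, best = cand, cand_D
        else:
            return seq, best

seq, D = api_search([1, 2, 3, 4, 5], SAMPLE, SERVICE)  # seed with SEPT
```

Seeded with SEPT, the search traces exactly the improvement path described above and terminates at the API-stable sequence 1-4-3-2-5 with D = 61.91.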

As the example demonstrates, performing an API search starting with SEPT can outperform the greedy heuristic. But the simpler greedy heuristic yields good results in one important special case, as stated in the following property.

Property 7.1 When all service-level targets are equal (b_j = b), the greedy heuristic yields an API-stable sequence.

Proof. At any stage, let job i be the one selected next by the greedy heuristic (that is, d_i is the minimal possible due date in the first unscheduled position), and suppose job k follows directly. If we interchange B-i-k to B-k-i (where B is the set of all previously scheduled jobs), d_k is at least as large as the previous d_i, but the new d_i is equal to the former d_k (because the completion time distribution of the second job is the same and so is its service-level target). Hence, the sum of the two is minimized by keeping job i ahead of job k, for any k. This is true for all positions, so the greedy sequence is API stable.


7.2.2 The Normal Model

For the time being, we assume that the processing times are independent random variables and that, in particular, the processing time for job j follows a normal distribution with mean \mu_j and standard deviation \sigma_j. (See Appendix A.1.3 for background on the normal distribution.) We use the normal because it is tractable, familiar, and plausible for many scheduling applications.

The assumption of normal processing times leads to a convenient result: In any sequence, the completion time of job j also follows a normal distribution because the sum of normal random variables is normally distributed. Using notation, let B_j denote the "before set," or the set of jobs preceding job j in the schedule. Then C_j follows a normal distribution with mean

E[C_j] = \sum_{i \in B_j} \mu_i + \mu_j

and variance

V[C_j] = \sum_{i \in B_j} \sigma_i^2 + \sigma_j^2

To simplify the notation, we write \mu_{B_j} for \sum_{i \in B_j} \mu_i and \sigma_{B_j}^2 for \sum_{i \in B_j} \sigma_i^2. Once we know the properties of the random variable C_j (which depends on the job sequence), we can determine the optimal choice of d_j.

To represent the service-level requirement in the normal case, let z_j represent the standard normal variate at which the cumulative distribution function (cdf) equals b_j. In standard notation, \Phi(z_j) = b_j. Then the appropriate choice for the due date of job j is

d_j = \mu_{B_j} + \mu_j + z_j (\sigma_{B_j}^2 + \sigma_j^2)^{1/2}    (7.3)

In this expression, the optimal due date d_j depends on the previous jobs in sequence via the set B_j, and the objective function (7.1) can be expressed as

D = \sum_{j=1}^{n} [\mu_{B_j} + \mu_j + z_j (\sigma_{B_j}^2 + \sigma_j^2)^{1/2}]    (7.4)

We can interpret this expression as the sum of two components: expected total flowtime and total safety time. This interpretation applies to any distribution, but Eq. (7.4) is specific to independent normal processing times.

Example 7.2 Consider a problem containing n = 5 jobs with stochastic processing times. The processing times are independent, each drawn from a normal distribution with the mean and standard deviation shown in the table, and each job has been assigned a service level, also shown in the table.

Job j       1     2     3     4     5
E(p_j)     20    21    22    23    24
σ_j       4.0   2.0   3.5   4.5   4.0
b_j       90%   80%   75%   80%   70%


Example 7.2 contains five jobs with given service-level targets and illustrates the necessary calculations. Suppose we fix the job sequence as 1-2-3-4-5. Then the optimal due dates can be determined individually for each job. The relevant calculations are shown in Figure 7.2, as they might be calculated on a spreadsheet, and we elaborate on the details for job 4.

Job 4 has a mean completion time equal to the sum of the first four mean processing times, or 86. To find the variance of its completion time, we sum the variances of the first four jobs, obtaining 52.5. The corresponding standard deviation is the square root of this figure, or about 7.25. Job 4 has a service-level target of 80%, corresponding to a z-value of 0.842 in the standard normal distribution. Thus, using the formula in Eq. (7.3), we can meet the service level by setting d_4 = 86 + 0.842(7.25) = 92.1. Similar calculations apply for the other jobs. As Figure 7.2 shows, the sum of the five optimally calculated due dates is D = 343.2.
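Under the normal model, the spreadsheet calculations reduce to a few lines of code. The Python sketch below (our own illustration; it uses the standard library's NormalDist for the inverse cdf \Phi^{-1}) applies Eq. (7.3) along the sequence 1-2-3-4-5 of Example 7.2:

```python
from math import fsum, sqrt
from statistics import NormalDist  # inverse normal cdf, Python 3.8+

# Data for Example 7.2.
MU = [20.0, 21.0, 22.0, 23.0, 24.0]
SIGMA = [4.0, 2.0, 3.5, 4.5, 4.0]
BETA = [0.90, 0.80, 0.75, 0.80, 0.70]

def normal_due_dates(sequence, mu, sigma, beta):
    """Optimal due dates under the normal model, Eq. (7.3):
    d_j = mu_Bj + mu_j + z_j * sqrt(sigma2_Bj + sigma_j^2)."""
    z = NormalDist()  # standard normal
    mean_c = var_c = 0.0
    due = {}
    for job in sequence:
        mean_c += mu[job - 1]         # E[C_j]: running sum of means
        var_c += sigma[job - 1] ** 2  # V[C_j]: running sum of variances
        due[job] = mean_c + z.inv_cdf(beta[job - 1]) * sqrt(var_c)
    return due

due = normal_due_dates([1, 2, 3, 4, 5], MU, SIGMA, BETA)  # SEPT sequence
D = fsum(due.values())
```

This reproduces d_4 = 92.1 and D = 343.2, matching the worked calculation for job 4.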

Thus, we can make the calculations for the normal case using spreadsheet technology, provided we already know the job sequence. Once again, we can explore heuristic rules for finding a good job sequence.

By definition, if our current solution is not API stable, an API neighborhood search will improve the schedule. As shown in Figure 7.2, the SEPT rule, which corresponds to the sequence 1-2-3-4-5, achieves D = 343.2. The EDD rule (that is, the greedy heuristic) yields the sequence 2-3-5-1-4, which achieves D = 351.2. Accordingly, we select 1-2-3-4-5 as the seed for our API search. By interchanging jobs 1 and 2, we reduce the objective function value to 342.7. This sequence is API stable and turns out to be optimal as well.

Figure 7.2 Detailed calculations for Example 7.2.
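The best-improvement API search carries over directly to the normal model. The sketch below (our own illustration; names are ours) evaluates the objective (7.4) in closed form and reproduces the improvement from 343.2 to 342.7:

```python
from math import sqrt
from statistics import NormalDist  # inverse normal cdf, Python 3.8+

# Data for Example 7.2.
MU = [20.0, 21.0, 22.0, 23.0, 24.0]
SIGMA = [4.0, 2.0, 3.5, 4.5, 4.0]
BETA = [0.90, 0.80, 0.75, 0.80, 0.70]
Z = NormalDist()  # standard normal, for Phi^{-1}

def total_D(seq):
    """Objective (7.4) for a given sequence under the normal model."""
    mean_c = var_c = D = 0.0
    for job in seq:
        mean_c += MU[job - 1]
        var_c += SIGMA[job - 1] ** 2
        D += mean_c + Z.inv_cdf(BETA[job - 1]) * sqrt(var_c)
    return D

def api_search(seq):
    """Best-improvement adjacent pairwise interchange (API) search."""
    seq, best = list(seq), total_D(seq)
    while True:
        swaps = [seq[:i] + [seq[i + 1], seq[i]] + seq[i + 2:]
                 for i in range(len(seq) - 1)]
        cand = min(swaps, key=total_D)
        if total_D(cand) < best - 1e-9:
            seq, best = cand, total_D(cand)
        else:
            return seq, best

seq, D = api_search([1, 2, 3, 4, 5])  # seed with SEPT
```

Starting from SEPT, the search swaps jobs 1 and 2 and then stops, returning the API-stable sequence 2-1-3-4-5 with D = 342.7.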

In our example, SEPT was not optimal but much better than the greedy heuristic. For large n, a special property applies. A heuristic is asymptotically optimal if, as n grows large, the relative difference between the heuristic solution and the optimum becomes negligible. More formally, let f(S) denote the objective function value with the optimal sequence, S, and let f(S_H) be the value associated with a heuristic. We say that the heuristic is asymptotically optimal if [f(S_H) − f(S)]/f(S) approaches 0 as n approaches ∞. That turns out to be the case for the SEPT heuristic, no matter which distribution applies, as long as processing times are independent. To understand why SEPT is asymptotically optimal, recall that D consists of the expected total flowtime and the sum of all safety times. Under the independence assumption, and if no single job can dominate too many other jobs combined, then as n grows large, expected total flowtime grows at a rate of O(n^2), whereas total safety time grows at a rate of O(n^{3/2}). Therefore, total safety time becomes negligible compared with expected total flowtime, which is minimized by SEPT.

We can conclude that, when combining SEPT with an API search, it is enough to perform the search on the first several jobs. Due to asymptotic optimality, we don't need to worry about the other jobs: As n grows large, SEPT is already an excellent sequence for the last jobs even without API. Extensive numerical experience shows that following SEPT by an API neighborhood search on the first few jobs yields the optimal solution more often than not. As an added touch, we recommend breaking ties by smallest variance and further ties by highest service-level target. Doing so is likely to impose the correct sequence between jobs with the same mean.