
3.3 A Dynamic Programming Approach

A regular measure of performance, $Z$, is a function of the job completion times, and when the function is additive, we can write

$$Z = \sum_{j=1}^{n} g_j(C_j)$$

For example, if $Z$ is total tardiness, then $g_j(C_j) = \max\{0, C_j - d_j\}$.

As another example, if $Z$ is the weighted number of late jobs, then $g_j(C_j) = w_j\,\delta(\max\{0, C_j - d_j\})$, where $\delta(x) = 1$ if $x > 0$ and $\delta(x) = 0$ otherwise.

When $Z$ has an additive form, as in these examples, we can find an optimal sequence with a dynamic programming approach. Dynamic programming is a general optimization technique for making sequential decisions. Here, for example, we have to decide which job comes first, which comes second, and so on. Dynamic programming applies to problems that can be partitioned into subproblems, each involving a subset of the decisions, in such a way that the following optimality principle holds: Suppose we have already made the first $k$ decisions (optimally or not); then the remaining $(n - k)$ decisions can be optimized by considering only the subproblem that involves them. For example, suppose we wish to find the shortest driving route from San Francisco to New York. If we are contemplating a route that goes through Chicago, then regardless of how we get there, we will have to follow the shortest path from Chicago to New York if the route we are contemplating is to achieve the optimal distance. The optimality principle is satisfied in sequencing (in other words, a sequencing problem can be partitioned appropriately) whenever the objective function is additive.

To apply dynamic programming to our sequencing problem, let $J$ denote some subset of the jobs, and let $p(J)$ denote the total time required to process the jobs in set $J$. For convenience, we use $(J - j)$ to denote the set $J$ with the element $j$ removed. Suppose that a sequence has been constructed in which the jobs in set $J$ precede all other jobs. Let

$G(J)$ = the minimum cost for the subproblem consisting of the jobs in set $J$

Next, suppose that job $j$ is assigned the last position in this subset, so that it completes at time $p(J)$, as shown in Figure 3.3.

Given that job $j$ comes last, the value of $G(J)$ is the sum of two terms: the cost incurred by job $j$ and the minimum cost incurred by the remaining jobs. This latter term, which we can write as $G(J - j)$, is the optimal value obtained by solving the subproblem involving only the jobs in set $(J - j)$. If we compare all possible jobs $j$ that could come last in set $J$ and select the best one, we shall find the minimum cost for the set $J$. In symbols,

$$G(J) = \min_{j \in J} \left\{ g_j(p(J)) + G(J - j) \right\} \qquad (3.1)$$

where

$$G(\phi) = 0 \qquad (3.2)$$

and $\phi$ denotes the empty subset.

Finally, let $X$ denote the set of all jobs. Because the cost function $G$ is defined on subsets of jobs, the minimum total cost can be written $G(X)$, where

$$G(X) = \min_{j \in X} \left\{ g_j(p(X)) + G(X - j) \right\} \qquad (3.3)$$

Figure 3.3 The form of a sequence in dynamic programming: the jobs in set $J$ occupy the first $p(J)$ time units, with job $j$ in the last position of the set, followed by the other jobs.


At each stage, the function $G(J)$ measures the total cost contributed by the jobs in set $J$, when set $J$ occurs at the beginning of the schedule and is sequenced optimally. The recursion relation (3.1) indicates that in order to calculate the value of $G$ for any particular subset of size $k$, we first have to know the value of $G$ for subsets of size $(k - 1)$. Therefore, the procedure begins with the value of $G$ for a subset of size zero, from Eq. (3.2). Then, using Eq. (3.1), we can calculate the value of $G$ for all subsets of size 1, then the value of $G$ for all subsets of size 2, and so on. In this manner, the procedure considers ever larger sets $J$, ultimately using Eq. (3.3) to determine which job should be scheduled last.

The optimal value of $Z$ is $G(X)$. If we keep track of where the minima in Eq. (3.1) occur at each stage, then, after finding $G(X)$, we can reconstruct the optimal sequence.
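To make the recursion concrete, here is a minimal Python sketch that tabulates $G$ over subsets in order of increasing size, exactly as described above. It is an illustration under our own conventions rather than code from the text: jobs are indexed 0 to $n - 1$, subsets are represented as frozensets, and the `cost` argument stands in for an arbitrary additive $g_j$ (defaulting to tardiness).

```python
from itertools import combinations

def schedule(p, d, cost=None):
    """Minimize an additive objective sum of g_j(C_j) on a single machine
    via the subset recursion of Eqs. (3.1)-(3.3).

    p, d -- processing times and due dates, indexed 0..n-1
    cost -- cost(j, t) returning g_j(t); defaults to tardiness max(0, t - d_j)
    """
    n = len(p)
    if cost is None:
        cost = lambda j, t: max(0, t - d[j])   # total tardiness
    G = {frozenset(): 0}                       # Eq. (3.2): G(empty set) = 0
    last = {}                                  # where the minimum occurs, per subset
    for k in range(1, n + 1):                  # subsets of size k, as in the text
        for J in map(frozenset, combinations(range(n), k)):
            pJ = sum(p[j] for j in J)          # p(J): completion time of the last job in J
            # Eq. (3.1): try each job j in J as the last job of the set
            best = min(J, key=lambda j: cost(j, pJ) + G[J - {j}])
            G[J] = cost(best, pJ) + G[J - {best}]
            last[J] = best
    # Reconstruct the optimal sequence by walking back from X, the full set.
    J, seq = frozenset(range(n)), []
    while J:
        seq.append(last[J])
        J = J - {last[J]}
    return G[frozenset(range(n))], seq[::-1]
```

Applied to the data of Example 3.2 below (with jobs renumbered 0-3), this sketch returns a minimum total tardiness of 25 and the sequence 2-1-4-3, in agreement with Table 3.1.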

Example 3.2 Consider the following four-job problem, with the criterion of minimizing total tardiness.

Job j    1    2    3    4
p_j      5    6    9    8
d_j      9    7   11   13

The essential dynamic programming calculations are displayed in Table 3.1.

To illustrate these calculations, consider the set $J$ = {1, 2, 4} that is encountered at Stage 3. For this set, $p(J)$ = 19, the total processing time for the jobs in this set. If job 1 comes last in the set, then its tardiness is $g_1(19)$ = 10, and for the remaining jobs, $G(\{2, 4\})$ = 1 from Stage 2. Thus, the total contribution from this set, when job 1 comes last, is 11. An adjacent column indicates that if job 2 comes last, then $g_2(19)$ = 12 and $G(\{1, 4\})$ = 0, totaling 12; and if job 4 comes last, $g_4(19)$ = 6 and $G(\{1, 2\})$ = 2, totaling 8. The minimum of these three totals is 8, which is designated as $G(J)$ in the table; this is achieved when job 4 comes last, as indicated by the column in which $G(J)$ is shown.
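In the notation of Eq. (3.1), this calculation reads

$$G(\{1, 2, 4\}) = \min\left\{\, g_1(19) + G(\{2, 4\}),\; g_2(19) + G(\{1, 4\}),\; g_4(19) + G(\{1, 2\}) \,\right\} = \min\{11, 12, 8\} = 8.$$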

To reconstruct the optimal sequence in the example, note that at Stage 4 the lowest tardiness is achieved when job 3 comes last. Since this leaves jobs 1, 2, and 4 to be sequenced, we examine the set {1, 2, 4} that was evaluated at Stage 3. Here, as we have seen in detail, the calculations show that job 4 should come last in this set; thus job 4 should occupy the next-to-last position in the optimal sequence. Continuing in this fashion, we construct the optimal sequence 2-1-4-3, for which the total tardiness is $G(X)$ = 25.

The number of subsets considered by the dynamic programming procedure is $2^n$, since that is the total number of subsets of $n$ elements. Finding $G(J)$ for each subset $J$ involves a minimization over all possible jobs that could come last, so the computational effort required for dynamic programming grows in proportion to $n2^n$.


Table 3.1

Stage 1

J            {1}   {2}   {3}   {4}
p(J)           5     6     9     8
j ∈ J          1     2     3     4
g_j[p(J)]      0     0     0     0
G(J − j)       0     0     0     0
G(J)           0     0     0     0

Stage 2

J            {1, 2}   {1, 3}   {1, 4}   {2, 3}   {2, 4}   {3, 4}
p(J)             11       14       13       15       14       17
j ∈ J           1  2     1  3     1  4     2  3     2  4     3  4
g_j[p(J)]       2  4     5  3     4  0     8  4     7  1     6  4
G(J − j)        0  0     0  0     0  0     0  0     0  0     0  0
G(J)              2        3        0        4        1        4

Stage 3

J            {1, 2, 3}    {1, 2, 4}    {1, 3, 4}    {2, 3, 4}
p(J)                 20           19           22           23
j ∈ J          1   2   3    1   2   4    1   3   4    2   3   4
g_j[p(J)]     11  13   9   10  12   6   13  11   9   16  12  10
G(J − j)       4   3   2    1   0   2    4   0   3    4   1   4
G(J)                 11            8           11           13

Stage 4

J            {1, 2, 3, 4}
p(J)                   28
j ∈ J          1   2   3   4
g_j[p(J)]     19  21  17  15
G(J − j)      13  11   8  11
G(J)                   25

Optimal sequence: 2-1-4-3    ΣT_j = 25


In this respect, dynamic programming is typical of many general-purpose procedures for combinatorial optimization, in that the effort required to solve the problem grows at an exponential rate with increasing problem size. This trait makes dynamic programming an inefficient procedure for finding optimal sequences in some of the simple problems we have examined. For example, when $F_w$ is the criterion, we could employ dynamic programming with

$$g_j(t) = w_j t$$

Also, when $U$ is the criterion, we could employ dynamic programming with

$$g_j(t) = \begin{cases} 1 & \text{if } t > d_j \\ 0 & \text{if } t \le d_j \end{cases}$$

But in both instances it is computationally more efficient to use the specialized results developed in Chapter 2. In particular, the $F_w$-problem and the $U$-problem can be solved by algorithms that require no more computational effort than is required to sort $n$ numbers. (The most efficient procedure for sorting has a computational requirement that grows at a rate that is asymptotically proportional to $n \log n$.) On the other hand, for problems in which efficient optimizing procedures have not been developed, such as minimizing total weighted tardiness or weighted number of tardy jobs, dynamic programming may be a reasonable approach.
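Because the recursion itself never changes, only $g_j$ does, these alternative criteria drop straight into the earlier `schedule` sketch by swapping the `cost` argument. The weights `w` below are hypothetical, introduced purely for illustration:

```python
# Processing times and due dates from Example 3.2; weights are hypothetical.
p = [5, 6, 9, 8]
d = [9, 7, 11, 13]
w = [1, 2, 1, 3]

# Weighted flowtime criterion: g_j(t) = w_j * t
Fw, seq_fw = schedule(p, d, cost=lambda j, t: w[j] * t)

# Number of tardy jobs criterion: g_j(t) = 1 if t > d_j, else 0
U, seq_u = schedule(p, d, cost=lambda j, t: int(t > d[j]))
```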

Although the computational demands of dynamic programming grow at an exponential rate with increasing problem size, the approach is still more efficient than complete enumeration of all feasible sequences, for the computational effort of complete enumeration grows with the factorial of the problem size. Because dynamic programming considers certain sequences only indirectly, without actually evaluating them explicitly, the technique is sometimes called an implicit enumeration technique. Although it is more efficient than complete enumeration, the fact that its computational requirement exhibits exponential growth places a premium on the ability to curtail the dynamic programming calculations whenever possible. Such a strategy is described in the next section.
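To put the two growth rates side by side (our numerical illustration, not the text's): for $n = 20$ jobs,

$$n\,2^n = 20 \times 2^{20} \approx 2.1 \times 10^{7}, \qquad n! = 20! \approx 2.4 \times 10^{18},$$

so the dynamic programming effort, though exponential, is smaller than that of complete enumeration by roughly eleven orders of magnitude.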

In the exposition above, we organized the dynamic programming calculations by treating the subsets in the order of their size: computing $G(J)$ for all subsets of size $k$, then all subsets of size $(k + 1)$, and so on until reaching the subset of size $n$.

Although this might be the most natural way to organize the calculations, other schemes are also possible. In fact, the most convenient way to implement dynamic programming on a computer uses an alternative scheme. The only requirement is that at the time we treat set $J$, we should already have treated all the subsets of $J$.

For computer implementation, we assign each subset a label. We can think of this label as the sum of the labels of all jobs in the subset, where each job has its own label. To ensure that the label of a subset will tell us unambiguously which jobs are contained in the subset, we use binary notation. Specifically, the label for job $k$ is $2^{k-1}$. For example, a 4-job problem contains 16 subsets, including the empty subset, as listed in Table 3.2.

Note that the binary representation allows us to translate sets into labels and labels into sets. For the set {1, 2, 4}, for example, the label is just the sum of the individual job labels, $2^0 + 2^1 + 2^3 = 11$. The label 11, when converted to binary notation (1011), reveals that jobs 1, 2, and 4 are members of the subset.

In a computer program, we store the value of $G(J)$ at a location with an address equal to the label of $J$. In the basic recursion (3.1), we want quick access to the value of $G(J - j)$. Knowing the label of $J$, we can obtain the label of $(J - j)$ simply by subtracting the label of job $j$, or $2^{j-1}$. This quick-access lookup for the value of $G(J - j)$ lies at the heart of the calculations. It is imbedded in a minimization loop that determines the choice of $j$ that yields $G(J)$.

An outer loop provides a scheme for generating all the subsets. Let $b(i)$ take on the value 1 or 0 to reflect that job $i$ is in or out of the subset. Start with $b(i) = 0$ for all $i$. To generate the next set, the loop proceeds as follows:

Find the smallest integer $j$ for which $b(j) = 0$. (If all $b(i) = 1$, then stop: all subsets have been generated.)

Set $b(j) = 1$.

For all $i < j$, set $b(i) = 0$.

Table 3.2

Subset          Label   Binary
ϕ                   0     0000
{1}                 1     0001
{2}                 2     0010
{1, 2}              3     0011
{3}                 4     0100
{1, 3}              5     0101
{2, 3}              6     0110
{1, 2, 3}           7     0111
{4}                 8     1000
{1, 4}              9     1001
{2, 4}             10     1010
{1, 2, 4}          11     1011
{3, 4}             12     1100
{1, 3, 4}          13     1101
{2, 3, 4}          14     1110
{1, 2, 3, 4}       15     1111


In effect, the $b$-vector contains the binary representation of the label of set $J$, and we could add the labels of the jobs in $J$ to compute the label for $J$. However, it is simpler to maintain the label of the set being treated by simply adding $2^{i-1}$ whenever $b(i)$ is switched from 0 to 1 and subtracting $2^{i-1}$ whenever $b(i)$ is switched from 1 to 0.
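The following Python sketch combines these pieces: labels index an array holding $G$, and the label of $(J - j)$ is obtained by subtracting $2^{j-1}$. Note that the stepping rule above (find the lowest $b(j) = 0$, set it, clear all bits below) is exactly binary incrementing of the label, so counting the label from 0 up to $2^n - 1$ visits the subsets in the order of Table 3.2 and treats every subset of $J$ before $J$ itself. As before, this is our own minimal illustration, with invented names, rather than code from the text.

```python
def schedule_by_label(p, d, cost=None):
    """Label-based dynamic programming for an additive objective.

    Subset J is encoded as an integer label whose bit k-1 is set exactly
    when job k belongs to J (Table 3.2).  Incrementing the label reproduces
    the b-vector stepping rule in the text.
    """
    n = len(p)
    if cost is None:
        cost = lambda j, t: max(0, t - d[j])   # default: total tardiness
    G = [0] * (1 << n)                         # G[0] = G(empty set) = 0, Eq. (3.2)
    last = [0] * (1 << n)                      # minimizing job for each label
    for J in range(1, 1 << n):                 # labels 1 .. 2^n - 1, in order
        pJ = sum(p[j] for j in range(n) if J >> j & 1)    # p(J)
        best = None
        for j in range(n):                     # minimization loop of Eq. (3.1)
            if J >> j & 1:                     # job j+1 is a member of J
                v = cost(j, pJ) + G[J - (1 << j)]   # label of (J - j) by subtraction
                if best is None or v < best:
                    best, last[J] = v, j
        G[J] = best
    # Walk back from the full set, label 2^n - 1, to recover the sequence.
    J, seq = (1 << n) - 1, []
    while J:
        seq.append(last[J])
        J -= 1 << last[J]
    return G[-1], seq[::-1]
```

On the data of Example 3.2 this returns 25 and the 0-indexed sequence [1, 0, 3, 2], that is, 2-1-4-3.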

In summary, the computer implementation of dynamic programming requires two efficient devices, a scheme for labeling subsets and an algorithm for generating subsets. The labeling scheme provides efficient access to the value for a previously treated subset, while the generating algorithm ensures that all subsets are treated in a suitable order.