3.7 Summary
Challenging combinatorial optimization problems are encountered even in the simplest of scheduling problems. The previous chapter and Theorem 3.1 dealt with the relatively few situations in which we can easily characterize or construct the optimal solution. However, for most tardiness-based criteria, we must call on general-purpose techniques. Nevertheless, the methodologies described in this chapter contain many optional features that can determine their effectiveness in a given implementation. Some of these options are reviewed below.
The dynamic programming approach (Section 3.3) is a highly flexible implicit enumeration strategy that can be applied directly to many single-machine sequencing problems. Although no important design options arise in applying the technique to a given class of problems, an intriguing question is how to develop an efficient computer code for the algorithm. Because the computational demands of dynamic programming grow exponentially with problem size, it is particularly crucial to use an efficient code, even for moderate-sized problems. We discussed a strategy based on a labeling scheme and a set generation algorithm, but other strategies exist. We left open the question of how to identify alternative optima when they occur.
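To make the subset recursion concrete, the following sketch applies it to the T-problem. It is only an illustration of the general approach, not the labeling scheme and set generation algorithm of Section 3.3; the function name, the use of bitmasks to encode job subsets, and the choice of example data (the instance of Exercise 3.7) are our own.

```python
from functools import lru_cache

def min_total_tardiness(p, d):
    """Subset dynamic program for the single-machine T-problem.

    The state is the set S of jobs sequenced first, encoded as a bitmask.
    Those jobs occupy the interval [0, p(S)], so whichever member of S is
    placed last in that block completes exactly at p(S).
    """
    n = len(p)

    # Precompute p(S), the total processing time of every subset S.
    total = [0] * (1 << n)
    for S in range(1, 1 << n):
        j = (S & -S).bit_length() - 1          # lowest-indexed job in S
        total[S] = total[S ^ (1 << j)] + p[j]

    @lru_cache(maxsize=None)
    def G(S):
        """Minimum total tardiness achievable by the jobs in subset S."""
        if S == 0:
            return 0
        best = float("inf")
        for j in range(n):
            if S & (1 << j):                   # try job j last within S
                best = min(best, G(S ^ (1 << j)) + max(0, total[S] - d[j]))
        return best

    return G((1 << n) - 1)

# Small example: the five-job instance used in Section 3.5 and Exercise 3.7.
print(min_total_tardiness([4, 3, 7, 2, 2], [5, 6, 8, 8, 17]))
```

Because the number of subsets is 2^n, both the running time and the memory of this sketch grow exponentially with n, which is precisely the computational burden referred to above.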
Dominance properties (Section 3.4) provide conditions under which certain potential solutions can be ignored. By exploiting dominance properties, the extensive calculations required by dynamic programming can be curtailed substantially. Based on this strategy, solution algorithms for the T_w-problem have been successful on problems of up to 30 jobs (Schrage and Baker 1978). Considering the improvements in CPU performance since these results were obtained, a speedup matched by memory and storage capacity improvements, we might expect dynamic programming to handle up to roughly 40 jobs on a modern personal computer.
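In the dynamic programming context, dominance properties often take the form of precedence relations: job i may be assumed to precede job j in some optimal sequence. The fragment below shows one way such relations prune the subset recursion; only subsets closed under the known predecessors need to be evaluated. The preds structure is an assumed input for this sketch, and the fragment illustrates the pruning idea rather than the Schrage and Baker (1978) procedure itself.

```python
def closed_under_precedence(S, preds):
    """Return True if bitmask S contains every known predecessor of each
    of its members.  preds[j] is a bitmask of the jobs that dominance
    results require to appear before job j (assumed input for this sketch).
    Subsets that fail this test can be skipped by the dynamic program."""
    for j, required in enumerate(preds):
        if (S >> j) & 1 and (S & required) != required:
            return False
    return True
```

The effectiveness of this kind of pruning depends entirely on how many precedence relations the dominance results supply.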
The branch-and-bound approach (Section 3.5) illustrates how implementing an optimization technique can require a good deal of judgment. This judgment must be exercised in the choice of a lower bound calculation, the potential use of an initial trial solution, the incorporation of complicated dominance checks, and the specification of a branching mechanism. In spite of the existence of these options, and the fact that they cannot be evaluated independently, branch-and-bound approaches have met with success in the solution of a wide variety of problems. For example, the T-problem has been attacked with branch-and-bound techniques that have been successful on problems as large as 500 jobs (Szwarc et al. 2001).
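As a concrete, if bare-bones, illustration of these design choices, the sketch below implements a depth-first branch and bound for the T-problem, branching on which job occupies the last open position. The lower bound used here, the tardiness already incurred plus max(0, p_j − d_j) for every unsequenced job j, is a deliberately weak placeholder rather than Eq. (3.4) or (3.5); in practice a stronger bound, an initial trial solution, and dominance checks would fathom far more of the tree.

```python
def branch_and_bound_T(p, d):
    """Depth-first branch and bound for minimizing total tardiness.

    Each node fixes the jobs at the *end* of the sequence.  The jobs still
    unsequenced occupy [0, t], where t is the sum of their processing times,
    so whichever of them is chosen to go last completes at time t.
    """
    n = len(p)
    best = [float("inf")]                      # incumbent value

    def lower_bound(remaining, incurred):
        # Every unsequenced job j completes no earlier than p[j].
        return incurred + sum(max(0, p[j] - d[j]) for j in remaining)

    def explore(remaining, t, incurred):
        if not remaining:
            best[0] = min(best[0], incurred)   # complete sequence reached
            return
        if lower_bound(remaining, incurred) >= best[0]:
            return                             # fathom this node
        for j in list(remaining):
            tardiness_j = max(0, t - d[j])     # job j placed last, completes at t
            remaining.remove(j)
            explore(remaining, t - p[j], incurred + tardiness_j)
            remaining.add(j)

    explore(set(range(n)), sum(p), 0)
    return best[0]

# Example: the four-job instance of Exercise 3.6.
print(branch_and_bound_T([5, 6, 9, 8], [9, 7, 11, 13]))
```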
No comparable results are available for the T_w-problem, however. As it turns out, NP-hard problems belong to two broad classes: NP-hard in the strong sense (or the strict sense) and NP-hard in the ordinary sense. (Usually, the qualifier is used only for the former.) For the latter category, optimal solutions can be obtained by algorithms that are pseudopolynomial. As the term suggests, pseudopolynomial algorithms perform as efficiently as polynomial ones in practice but fail to meet the strict formal definition of a polynomial algorithm. For example, a pseudopolynomial algorithm may be polynomial in the total processing time but not in the number of processing times, which is typically the relevant measure of problem size. If that total processing time is small enough, the pseudopolynomial algorithm will perform efficiently. The existence of a pseudopolynomial solution usually implies that we can solve practical instances of the problem without prohibitive computational demands. Conversely, problems for which we can efficiently solve large instances – say, hundreds of jobs – are typically pseudopolynomial. This is the case for the T-problem, which
has been shown to be pseudopolynomial by Lawler (1977). The T_w-problem, in contrast, is known to be NP-hard in the strong sense.
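To see what the distinction means in practice, imagine a hypothetical algorithm whose running time is proportional to n times the total processing time Σp_j. For 50 jobs whose processing times sum to a few thousand, such an algorithm finishes almost instantly. Yet Σp_j roughly doubles each time one more bit is used to write down the processing times, so the running time is not bounded by any polynomial in the length of the encoded input; the algorithm is pseudopolynomial rather than polynomial.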
One complication with both dynamic programming and branch and bound is that no generic solvers exist. Instead, solutions are typically obtained from specially tailored code. In many situations, however, it is also possible to use an IP approach. The advantage of IP is that generic solvers are available and can even be implemented in spreadsheets.
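As an illustration of the generic-solver route, the sketch below expresses one standard positional (assignment-based) IP formulation of the T-problem in Python using the open-source PuLP modeling library. The formulation, variable names, and solver call are our own choices for this sketch and are not necessarily those examined by Baker and Keller (2010).

```python
import pulp

def tardiness_ip(p, d):
    """Positional IP for the T-problem: x[j][k] = 1 if job j occupies
    position k in the sequence.  Requires the PuLP package."""
    n = len(p)
    jobs, positions = range(n), range(n)

    model = pulp.LpProblem("total_tardiness", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", (jobs, positions), cat="Binary")
    T = pulp.LpVariable.dicts("T", positions, lowBound=0)    # tardiness at each position

    model += pulp.lpSum(T[k] for k in positions)             # objective: total tardiness

    for j in jobs:                                           # every job takes one position
        model += pulp.lpSum(x[j][k] for k in positions) == 1
    for k in positions:                                      # every position takes one job
        model += pulp.lpSum(x[j][k] for j in jobs) == 1
    for k in positions:
        completion_k = pulp.lpSum(p[j] * x[j][q] for q in range(k + 1) for j in jobs)
        due_date_k = pulp.lpSum(d[j] * x[j][k] for j in jobs)
        model += T[k] >= completion_k - due_date_k           # T_k >= C_(k) - d_(k)

    model.solve(pulp.PULP_CBC_CMD(msg=False))
    sequence = sorted(jobs, key=lambda j: next(k for k in positions if x[j][k].value() > 0.5))
    return sequence, pulp.value(model.objective)

# Example: the four-job instance of Exercise 3.6.
print(tardiness_ip([5, 6, 9, 8], [9, 7, 11, 13]))
```

With n^2 binary variables, this formulation is compact enough for a generic solver, or even a spreadsheet optimizer, to handle moderate instance sizes.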
Now, armed with some general optimization capabilities, we can investigate more complex problems in sequencing and scheduling. Ideally, we try to analyze the special structure of the problem and deduce the form of an optimal solution.
However, sequencing and scheduling problems are notoriously difficult, and although we can make some progress with this type of analysis, we will often find its power is limited. When our analysis does not completely solve the problem, we can rely on such general techniques as dynamic programming, branch and bound, or IP.
Exercises
3.1 Consider the problem of minimizing the maximum weighted tardiness.
Describe the optimal sequence in the following special cases.
a) All jobs have the same due date.
b) Weights and due dates are agreeable. In other words, w_i > w_j implies d_i ≤ d_j.
3.2 The following six jobs await sequencing on one machine.
Job j                 1    2    3    4    5    6
Processing time p_j  12    2    6   14    8   13
Due date d_j         41    4   44   16   35   30
Cost factor c_j       3    5    2    4    3    5
When job j completes at time t, the cost function takes the following form:

f_j(t) = c_j [max{0, t − d_j}]^2
Find the optimal sequence for minimizing the maximum value of f_j(t).
3.3 Use dynamic programming to minimize U in the following example.
Job j    1    2    3    4    5
p_j      1    6    4    7    3
d_j      2    7    8   13   15
3.4 Formulate the problem of minimizing T_max as a dynamic programming problem by writing the appropriate recursion relations.
3.5 Describe how to identify multiple optima (assuming they exist) when using dynamic programming to solve the T-problem.
3.6 Solve the following T-problem by branch and bound.
Job j    1    2    3    4
p_j      5    6    9    8
d_j      9    7   11   13
a) Use Eq. (3.5) to compute bounds.
b) Use Eq. (3.4) to compute bounds.
3.7 Consider the example T-problem from Section 3.5.
Job j    1    2    3    4    5
p_j      4    3    7    2    2
d_j      5    6    8    8   17
Show which branches of the tree can be fathomed by using condition (a) of Theorem 3.3. Discuss the pros and cons of including this condition in the analysis.
3.8 Prove Theorem 3.3.
Bibliography
Baker, K.R. and Keller, B. (2010). Solving the single-machine tardiness problem using integer programming. Computers & Industrial Engineering 59: 730–735.
Elmaghraby, S.E. (1968). The one-machine sequencing problem with delay costs. Journal of Industrial Engineering 19: 105–108.
Emmons, H. (1969). One-machine sequencing to minimize certain functions of job tardiness. Operations Research 17: 701–715.
Kanet, J.J. (2007). New precedence theorems for one-machine weighted tardiness. Mathematics of Operations Research 32: 579–588.
Lawler, E.L. (1973). Optimal sequencing of a single machine subject to precedence constraints. Management Science 19: 544–546.
Lawler, E.L. (1977). A "pseudopolynomial" algorithm for sequencing jobs to minimize total tardiness. Annals of Discrete Mathematics 1: 331–342.
Mitten, L.G. (1970). Branch and bound methods: general formulation and properties. Operations Research 18: 24–34.
Rau, J.G. (1971). Minimizing a function of permutations of n integers. Operations Research 19: 237–239.
Rinnooy Kan, A.H.G., Lenstra, J.K., and Lageweg, B.J. (1975). Minimizing total costs in one machine scheduling. Operations Research 23: 908–927.
Schrage, L.E. and Baker, K.R. (1978). Dynamic programming solution of sequencing problems with precedence constraints. Operations Research 26: 444–449.
Shwimer, J. (1972). On the n-job, one-machine, sequence-independent scheduling problem with tardiness penalties: a branch and bound solution. Management Science 18: 301–313.
Szwarc, W., Grosso, A., and Della Croce, F. (2001). Algorithmic paradoxes of the single machine total tardiness problem. Journal of Scheduling 4: 93–104.