Parallel Machine Models (Deterministic)
5.3 The Total Completion Time without Preemptions
Consider m machines in parallel and n jobs. Recall that p1 ≥ · · · ≥ pn. The objective to be minimized is the total unweighted completion time ∑Cj. From Theorem 3.1.1 it follows that for a single machine the Shortest Processing Time first (SPT) rule minimizes the total completion time. This single machine result can also be shown in a different way fairly easily.
Let p(j) denote the processing time of the job in the jth position in the sequence. The total completion time can then be expressed as

∑Cj = n p(1) + (n−1) p(2) + · · · + 2 p(n−1) + p(n).
This implies that there are n coefficients n, n−1, . . . , 1 to be assigned to n different processing times. The processing times have to be assigned in such a way that the sum of the products is minimized. From the elementary Hardy, Littlewood and Polya inequality, as well as common sense, it follows that the highest coefficient, n, is assigned the smallest processing time, pn; the second highest coefficient, n−1, is assigned the second smallest processing time, pn−1; and so on. This implies that SPT is optimal.
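The coefficient argument above can be checked directly with a short script (a sketch; the function name is ours): position j in the sequence receives coefficient n − j + 1, and enumerating all sequences confirms that SPT attains the minimum.

```python
from itertools import permutations

def total_completion_time(seq):
    """sum C_j on a single machine for jobs processed in the order `seq`;
    the job in position j (1-indexed) receives coefficient n - j + 1, i.e.
    sum C_j = n*p(1) + (n-1)*p(2) + ... + p(n)."""
    n = len(seq)
    return sum((n - j) * pj for j, pj in enumerate(seq))

p = [3, 1, 2]
spt = tuple(sorted(p))                        # the SPT sequence (1, 2, 3)
best = min(permutations(p), key=total_completion_time)
# SPT attains the minimum over all n! sequences:
# 3*1 + 2*2 + 1*3 = 10.
```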
This type of argument can be extended to the parallel machine setting as well.
Theorem 5.3.1. The SPT rule is optimal for P m || ∑Cj.
Proof. In the case of parallel machines there are nm coefficients to which processing times can be assigned. These coefficients are m n's, m (n−1)'s, . . . , m ones. The processing times have to be assigned to a subset of these coefficients in order to minimize the sum of the products. Assume that n/m is an integer. If it is not an integer, add a number of dummy jobs with zero processing times so that n/m is an integer (adding jobs with zero processing times does not change the problem; these jobs would be processed instantaneously at time zero and would not contribute to the objective function). It is easy to see, in a manner similar to the argument above, that the set of m longest processing times has to be assigned to the m ones, the set of second m longest processing times has to be assigned to the m twos, and so on. This results in the m longest jobs each being processed on a different machine, and so on. That this class of schedules includes SPT can be shown as follows. According to the SPT schedule the smallest job goes on machine 1 at time zero, the second smallest one on machine 2, and so on; the (m+1)th smallest job follows the smallest job on machine 1, the (m+2)th smallest job follows the second smallest on machine 2, and so on. It is easy to verify that the SPT schedule corresponds to an optimal assignment of jobs to coefficients.
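The SPT rule on parallel machines amounts to list scheduling: repeatedly dispatch the shortest remaining job to the machine that becomes free first. A minimal sketch (names are ours):

```python
import heapq

def spt_total_completion_time(p, m):
    """sum C_j when the jobs with processing times p are scheduled on m
    identical machines in parallel by SPT: repeatedly assign the shortest
    remaining job to the machine that becomes free first."""
    free = [0] * m                 # time at which each machine next becomes free
    heapq.heapify(free)
    total = 0
    for pj in sorted(p):           # shortest processing time first
        start = heapq.heappop(free)
        total += start + pj        # C_j = start time + processing time
        heapq.heappush(free, start + pj)
    return total
```

A quick coefficient check with n/m integer: for m = 2 and p = (2, 3, 5, 7), the two longest jobs get coefficient 1 and the two shortest coefficient 2, so ∑Cj = 2·2 + 2·3 + 1·5 + 1·7 = 22, which the simulation reproduces.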
From the proof of the theorem it is clear that the SPT schedule is not the only optimal schedule. Many other schedules also minimize the total completion time. The class of schedules that minimize the total completion time turns out to be fairly easy to characterize (see Exercise 5.21).
As pointed out in the previous chapter, the more general WSPT rule minimizes the total weighted completion time in the case of a single machine. Unfortunately, this result cannot be generalized to parallel machines, as shown in the following example.
Example 5.3.2 (Application of the WSPT Rule) Consider two machines and three jobs.

jobs  1  2  3
pj    1  1  3
wj    1  1  3

Scheduling jobs 1 and 2 at time zero and job 3 at time 1 results in a total weighted completion time of 14, while scheduling job 3 at time zero and jobs 1 and 2 on the other machine results in a total weighted completion time of 12. Clearly, with this set of data any schedule may be considered to be WSPT. However, making the weights of jobs 1 and 2 equal to 1 + ε shows that WSPT does not necessarily yield an optimal schedule. ||
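The two totals in the example can be checked numerically (a sketch; the helper name `twct` is ours):

```python
def twct(assignment, p, w):
    """Total weighted completion time sum w_j C_j, where `assignment`
    maps each machine to the sequence of jobs it processes."""
    total = 0
    for seq in assignment.values():
        t = 0
        for j in seq:
            t += p[j]              # job j completes at time t on this machine
            total += w[j] * t
    return total

p = {1: 1, 2: 1, 3: 3}
w = {1: 1, 2: 1, 3: 3}
# Jobs 1 and 2 at time zero, job 3 at time 1:
first = twct({"machine 1": [1, 3], "machine 2": [2]}, p, w)   # 14
# Job 3 at time zero on one machine, jobs 1 and 2 on the other:
second = twct({"machine 1": [3], "machine 2": [1, 2]}, p, w)  # 12
```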
It has been shown in the literature that the WSPT heuristic is nevertheless a very good heuristic for the total weighted completion time on parallel machines.
A worst case analysis of this heuristic yields the bound

∑wjCj(WSPT) / ∑wjCj(OPT) < (1/2)(1 + √2).
What happens now in the case of precedence constraints? The problem P m | prec | ∑Cj is known to be strongly NP-hard in the case of arbitrary precedence constraints. However, the special case with all processing times equal to 1 and precedence constraints that take the form of an outtree can be solved in polynomial time. In this special case the Critical Path (CP) rule again minimizes the total completion time.
Theorem 5.3.3. The CP rule is optimal for P m | pj = 1, outtree | ∑Cj.

Proof. Up to some integer point in time, say t1, the number of schedulable jobs is less than or equal to the number of machines. Under the optimal schedule, at each point in time before t1, all schedulable jobs have to be put on the machines. Such actions are in accordance with the CP rule. Time t1 is the first point in time at which the number of schedulable jobs is strictly larger than m. There are at least m + 1 jobs available for processing, and each one of these jobs is at the head of a subtree that includes a string of a given length.

The proof that applying CP from t1 is optimal is by contradiction. Suppose that after time t1 another rule is optimal. This rule must, at least once, prescribe an action that is not according to CP. Consider the last point in time, say t2, at which this rule prescribes an action not according to CP. So at t2 there are m jobs, which are not heading the m longest strings, assigned to the m machines; from t2 + 1 the CP rule is applied. Call the schedule from t2 onwards CP′. It suffices to show that applying CP from t2 onwards results in a schedule that is at least as good.

Consider under CP′ the longest string headed by a job that is not assigned at t2, say string 1, and the shortest string headed by a job that is assigned at t2, say string 2. The job at the head of string 1 has to start its processing under CP′ at time t2 + 1. Let C1 and C2 denote the completion times of the last jobs of strings 1 and 2, respectively, under CP′. Under CP′, C1 ≥ C2. It is clear that under CP′ all m machines have to be busy at least up to C2 − 1. If C1 ≥ C2 + 1 and there are machines idle before C1 − 1, the application of CP at t2 results in less idle time and a smaller total completion time: under CP the last job of string 1 is completed one time unit earlier, yielding one more completed job at or before C1 − 1. In the other cases the total completion time under CP is equal to the total completion time under CP′. This implies that CP is optimal from t2 on. As there is then no last time for a deviation from CP, the CP rule is optimal.
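Under the assumptions of the theorem (unit processing times, outtree precedence), the CP rule is easy to simulate; the sketch below represents the outtree by a `parent` map (our choice of encoding) and, at each unit time step, starts the available jobs heading the longest strings.

```python
from collections import defaultdict

def cp_total_completion_time(parent, m):
    """sum C_j for P m | p_j = 1, outtree | sum C_j under the CP rule.
    `parent[j]` is the unique predecessor of job j (None for a root)."""
    children = defaultdict(list)
    for j, pred in parent.items():
        if pred is not None:
            children[pred].append(j)

    def string_length(j):
        # Number of jobs on the longest chain starting at j.
        return 1 + max((string_length(c) for c in children[j]), default=0)

    done, completion, t = set(), {}, 0
    while len(done) < len(parent):
        available = [j for j in parent if j not in done
                     and (parent[j] is None or parent[j] in done)]
        # CP rule: start the (at most m) available jobs heading the longest strings.
        batch = sorted(available, key=string_length, reverse=True)[:m]
        for j in batch:
            completion[j] = t + 1          # unit processing time
        done.update(batch)
        t += 1
    return sum(completion.values())
```

For example, for the outtree with root 1, children 2 and 3 of job 1, and children 4 and 5 of job 2, two machines give completion times (1, 2, 2, 3, 3) and ∑Cj = 11.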
In contrast to the makespan objective, the CP rule is, somewhat surprisingly, not necessarily optimal for intrees. Counterexamples can be found easily (see Exercise 5.24).
Consider the problem P m | pj = 1, Mj | ∑Cj. Again, if the Mj sets are nested, the Least Flexible Job first (LFJ) rule can be shown to be optimal.
Theorem 5.3.4. The LFJ rule is optimal for P m | pj = 1, Mj | ∑Cj when the Mj sets are nested.

Proof. The proof is similar to the proof of Theorem 5.1.8.
The previous model is a special case of Rm || ∑Cj. As stated in Chapter 2, the machines in the Rm environment are entirely unrelated. That is, machine 1 may be able to process job 1 in a short time and may need a long time for job 2, while machine 2 may be able to process job 2 in a short time and may need a long time for job 1. That the Qm environment is a special case is clear. Identical machines in parallel with job j restricted to machine set Mj is also a special case: the processing time of job j on a machine that is not part of Mj has to be considered very long, making it impossible to process the job on such a machine.
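The "very long processing time" construction in the last sentence can be sketched as follows (the function name and the use of a finite big constant in place of an infinite time are our assumptions):

```python
def pm_Mj_to_rm(p, M, machines, big=10**6):
    """Encode P m | M_j | sum C_j as an Rm instance: p_ij = p_j when
    machine i belongs to M_j, and a prohibitively large value otherwise,
    so no reasonable schedule ever places job j outside M_j.
    `p[j]` is job j's processing time, `M[j]` its set of eligible machines."""
    return {i: {j: (p[j] if i in M[j] else big) for j in p} for i in machines}

# Job 1 may only run on machine 1; job 2 may run on either machine.
r = pm_Mj_to_rm({1: 2, 2: 5}, {1: {1}, 2: {1, 2}}, [1, 2])
```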
The Rm || ∑Cj problem can be formulated as an integer program with a special structure that makes it possible to solve the problem in polynomial time. Recall that if job j is processed on machine i and there are k − 1 jobs following job j on machine i, then job j contributes k pij to the value of the objective function. Let xikj denote 0–1 integer variables, where xikj = 1 if job j is scheduled as the kth to last job on machine i, and 0 otherwise. The integer program is then formulated as follows:
minimize   ∑_{i=1}^{m} ∑_{j=1}^{n} ∑_{k=1}^{n} k pij xikj

subject to

∑_{i=1}^{m} ∑_{k=1}^{n} xikj = 1,   j = 1, . . . , n,

∑_{j=1}^{n} xikj ≤ 1,   i = 1, . . . , m, k = 1, . . . , n,

xikj ∈ {0, 1},   i = 1, . . . , m, k = 1, . . . , n, j = 1, . . . , n.

The constraints make sure that each job is scheduled exactly once and that each position on each machine is taken by at most one job. Note that the processing times appear only in the objective function.
This is a so-called weighted bipartite matching problem with the n jobs on one side and the nm positions on the other (each machine can process at most n jobs). If job j is matched with (assigned to) position ik, there is a cost k pij. The objective is to determine the matching in this so-called bipartite graph with minimum total cost. It is known from the theory of network flows that the integrality constraints on the xikj may be replaced by nonnegativity constraints without changing the feasible set. This weighted bipartite matching problem then reduces to a regular linear program for which there exist polynomial time algorithms (see Appendix A).
Note that the optimal schedule does not have to be a non-delay schedule.
Example 5.3.5 (Minimizing Total Completion Time with Unrelated Machines)
Consider 2 machines and 3 jobs. The processing times of the three jobs on the two machines are presented in the table below.
jobs  1  2  3
p1j   4  5  3
p2j   8  9  3
The bipartite graph associated with this problem is depicted in Figure 5.8.
According to the optimal schedule machine 1 processes job 1 first and job 2 second. Machine 2 processes job 3. This solution corresponds to
[Figure 5.8: Bipartite graph for Rm || ∑Cj with three jobs. The jobs j = 1, 2, 3 on one side are connected to the positions (i, k) = (1, 1), (1, 2), (1, 3), (2, 1), (2, 2), (2, 3) on the other; the arc from job j to position (i, k) has cost k pij.]
x121 = x112 = x213 = 1 and all other xikj equal to zero. This optimal schedule is not non-delay (machine 2 is freed at time 3 and the waiting job is not put on the machine). ||
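The matching formulation and the optimal value of this example can be checked by brute force (a sketch with our own names; plain enumeration stands in here for the polynomial-time LP or assignment algorithm, so it is only usable on tiny instances):

```python
from itertools import permutations

def rm_total_completion_time(p):
    """Rm || sum C_j as a matching: assign each job j a distinct position
    (i, k) at cost k * p[i][j], where k counts positions from the end of
    machine i's sequence. `p[i][j]` is the processing time of job j on
    machine i (0-indexed machines and jobs)."""
    m, n = len(p), len(p[0])
    positions = [(i, k) for i in range(m) for k in range(1, n + 1)]

    def cost(assignment):
        return sum(k * p[i][j] for j, (i, k) in enumerate(assignment))

    return min(cost(a) for a in permutations(positions, n))

# Data of Example 5.3.5: two machines, three jobs.
p = [[4, 5, 3],
     [8, 9, 3]]
value = rm_total_completion_time(p)
# The schedule found in the example (machine 1: job 1 then job 2,
# machine 2: job 3) costs 2*4 + 1*5 + 1*3 = 16.
```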