The Buffer Allocation Problem
5.4 Solution Approaches to the BAP in Longer Lines
Table 5.3. Searching in classes [1,0], [1,1] and [1,2]

Iteration #   Equivalence buffer class   Buffer allocation   Throughput XK=5   Average WIP
19                                       (1,0,0,4)           0.5438
20                                       (1,0,1,3)           0.5801
21            [1,0]                      (1,0,2,2)           0.5935
22                                       (1,0,3,1)           0.5963            4.8100
23                                       (1,0,4,0)           0.5860
24                                       (1,1,0,3)           0.5857
25                                       (1,1,1,2)           0.6202            5.1638
26            [1,1]                      (1,1,2,1)           0.6275            5.4941
27                                       (1,1,3,0)           0.6096            6.1794
28                                       (1,2,0,2)           0.6012            5.5889
29            [1,2]                      (1,2,1,1)           0.6275            5.8978
30                                       (1,2,2,0)           0.6114            6.5231
Gradient methods
The well-known gradient method from numerical analysis (see Ho et al., 1979, and Gershwin and Schor, 2000) may be adopted to solve problem BAP-A; the steps of an appropriate algorithm are given below:
Step 1: Specify an admissible set of initial buffer allocations and use the evaluative technique to determine the throughput XK(B2, B3, ..., BK) of the line.
Step 2: Calculate the gradient g = (g2, g3, ..., gK) given by

gi = [X(B2, ..., Bi + δBi, ..., BK) − X(B2, ..., Bi, ..., BK)] / δBi,

where δBi is a step size of integer value.
Step 3: Obtain the search direction v by projecting the gradient g = (g2, g3, ..., gK) onto the hyperplane ∑i=2,...,K δBi = 0, giving vi = gi − ḡ, where ḡ = (1/(K−1)) ∑i=2,...,K gi.
Step 4: Find Λ such that X(B + Λv) is maximized.
Step 5: Define B′ = B + Λv.
Step 6: If B′ is close to B, stop (B′ is the optimal buffer allocation); otherwise, let B = B′, go to Step 2 and continue.
A difficulty of the above procedure arises if Bi=0 and vi<0 for some i. In that case, the new direction is calculated by deleting the ith component of g and setting vi=0.
The other components of v are determined as before and the process is continued until a feasible v is determined or all components of g are deleted.
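As a concrete illustration, the gradient and projection steps above can be sketched in a few lines of Python. This is only a sketch under the assumption that a throughput evaluator X is supplied by the caller (in practice an evaluative method such as decomposition plays that role); the function names and the zero-based indexing are ours, not the book's.

```python
def gradient(X, B, delta=1):
    """Finite-difference gradient: g_i = [X(B + delta*e_i) - X(B)] / delta."""
    base = X(B)
    g = []
    for i in range(len(B)):
        Bp = list(B)
        Bp[i] += delta
        g.append((X(Bp) - base) / delta)
    return g

def project(g, fixed=()):
    """Project g onto the hyperplane sum(delta B_i) = 0: v_i = g_i - mean(g).
    Components listed in `fixed` (those with B_i = 0 and v_i < 0) are deleted
    from the gradient and forced to 0, as described in the text."""
    free = [i for i in range(len(g)) if i not in fixed]
    if not free:
        return [0.0] * len(g)
    mean = sum(g[i] for i in free) / len(free)
    return [g[i] - mean if i in free else 0.0 for i in range(len(g))]
```

With a toy concave evaluator such as `lambda B: -(B[0]-2)**2 - (B[1]-3)**2`, `gradient` returns the finite-difference slopes and `project` yields a direction whose components sum to zero, so the total buffer count is preserved along the search direction.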
In Gershwin and Schor (2000), an algorithm for the solution of the BAP-B problem, designated primal by the authors, using the solution of the BAP-A problem is given. Gershwin and Schor noted that the maximum throughput as a function of the total buffer space, N, for a three-station two-buffer system is monotonically increasing in N and may be approximated by two linear functions with different slopes.
This observation is crucial to the development of an algorithm for the solution of the BAP-B problem, the steps of which are given below.
Step 1: Let N0 = (0, 0, ..., 0). Calculate Xmax(0, 0, ..., 0) using an appropriate evaluative method (Markovian for short lines and decomposition for longer lines).
Step 2: Specify a new estimate of the total buffer slots to be allocated, N1, and solve problem BAP-A (designated as the dual problem by Gershwin and Schor) using the gradient algorithm given above. Set j = 2.
Step 3: Calculate Nj from

Nj = aNj−1 + bNj−2,
where a and b are determined from the assumed linear approximation between maximum throughput and total buffer size. Note that because of the two linear approximations, the slope of the line will be different from one range of total buffer size to another.
Step 4: Use the gradient algorithm to determine Xmax(Nj).
Step 5: If Xmax is sufficiently close to the desired throughput level, stop; otherwise return to Step 3.
The solution specifies N, the total number of buffer slots, and the distribution of N among the intermediate K−1 buffers of the production line.
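The update Nj = aNj−1 + bNj−2 amounts to a secant-style linear interpolation of the throughput-versus-buffer-space curve toward the desired throughput level. A minimal Python sketch follows; `xmax` stands in for a full BAP-A solve at each candidate N, the secant guard and all names are our assumptions, not the authors' code.

```python
def next_total_buffer(n1, x1, n2, x2, x_target):
    """Secant-style estimate of the total buffer size reaching throughput
    x_target, from two evaluated points (n1, x1) and (n2, x2); this realizes
    the linear approximation behind N_j = a*N_{j-1} + b*N_{j-2}.
    Assumes x1 != x2 and n1 != n2."""
    slope = (x1 - x2) / (n1 - n2)
    n_next = n1 + (x_target - x1) / slope
    return max(1, round(n_next))

def solve_bap_b(xmax, x_target, n1, n2, tol=1e-3, max_iter=50):
    """xmax(N) should return the BAP-A optimal throughput (the 'dual'
    solution) for a total of N buffer slots; here it is any caller-supplied
    evaluator.  Returns the smallest N found whose optimal throughput is
    within tol of the target."""
    pts = [(n1, xmax(n1)), (n2, xmax(n2))]
    for _ in range(max_iter):
        (na, xa), (nb, xb) = pts[-2], pts[-1]
        if abs(xb - x_target) <= tol:
            return nb
        n_next = next_total_buffer(nb, xb, na, xa, x_target)
        if n_next == nb:               # integer grid reached, no progress
            return nb
        pts.append((n_next, xmax(n_next)))
    return pts[-1][0]
```

For a toy saturating curve such as xmax(N) = 0.9 − 0.4/(N+1), the iteration homes in on the total buffer size whose optimal throughput meets the target, evaluating only a handful of candidate sizes.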
Dynamic programming
In general, dynamic programming (DP) is a multi-stage decision process in which the objective is to allocate a limited resource sequentially over the stages so as to optimize the objective function. It is based on Bellman’s principle of optimality: at any stage in the analysis, the optimal solution involves being on an optimal path from that stage onwards. In general terms, with respect to problem BAP-B the stations are considered to be the stages and the total number of buffer slots to be allocated is considered to be the states. An appropriate recursive relationship must be developed having in mind the overall objective of minimizing the total number of buffer slots to be allocated among the intermediate buffers of the line, subject to a minimum throughput. The structure of the BAP-B problem may be utilized to effect computational efficiencies with a dynamic programming approach.
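Because line throughput is not separable across buffers, a direct DP recursion needs a problem-specific stage return. The sketch below therefore shows only the DP mechanics of stages and states described above, allocating slots over stages with a caller-supplied (hypothetical) per-stage value table standing in for the true objective; it is not the recursion the text leaves open.

```python
def allocate_dp(values, n_total):
    """Allocate up to n_total slots over the stages described by `values`
    (values[i][b] is the hypothetical stage-i return from giving it b slots),
    maximizing the summed return.  Stages play the role of stations and the
    slot count consumed so far plays the role of the DP state."""
    best = {0: (0.0, [])}        # state n -> (best value, allocation so far)
    for stage in values:
        new_best = {}
        for n, (val, alloc) in best.items():
            for b, r in enumerate(stage):
                if n + b > n_total:
                    break        # no budget left for more slots at this stage
                cand = (val + r, alloc + [b])
                if n + b not in new_best or cand[0] > new_best[n + b][0]:
                    new_best[n + b] = cand
        best = new_best
    return max(best.values(), key=lambda t: t[0])
```

For the toy table [[0, 3, 4], [0, 2, 5]] with 3 slots available, the recursion returns value 8 with allocation [1, 2], i.e., one slot to the first buffer and two to the second.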
Simulated annealing
Simulated annealing, SA, is an adaptation of the simulation of physical thermodynamic annealing principles to combinatorial optimization problems. It follows a logical improvement paradigm having regard to the exponential complexity of the solution space. The algorithm is based on randomization and as a result there is no certainty in relation to achieving the precise optimal solution. Spinellis and Papadopoulos (2000a) used a simulated annealing approach for the solution of the buffer allocation problem BAP-A in reliable (large) production lines. The authors assumed exponential processing times at each station with mean service rates μi, i = 1,...,K. The exact numerical algorithm of Heavey, Papadopoulos and Browne (1993) and the decomposition algorithm A3 of Dallery and Frein (1993) were used in conjunction with the simulated annealing algorithm developed by the authors.
In the improvement process of the simulated annealing approach, the probabilistic “uphill” energy movement avoids the entrapment of the solution in a local minimum. The authors gave the correspondence between annealing in the physical world and simulated annealing used in the optimal buffer allocation, as shown in Table 5.4.
The associated simulated annealing algorithm is given in Figure 5.5.
Table 5.4. Correspondence between annealing in the physical world and simulated annealing used for production line optimization
Physical World                            Production Line Optimization
Atom placement                            Line configuration
Random atom movements                     Buffer space, server, work-load movement
Energy, E                                 Throughput, X
Energy differential, ΔE                   Configuration throughput differential, ΔX
Energy state probability distribution     Changes according to the Metropolis criterion,
                                          exp(−ΔE/T) > rand(0...1), implementing the
                                          Boltzmann probability distribution
Temperature                               Variable for establishing configuration
                                          acceptance termination
The SA procedure was run by Spinellis and Papadopoulos (2000a) with the following characteristics based on the number of stations K:
Maximum trials at a given temperature: 100 × K
Maximum successes at a given temperature: 10 × K
Initial temperature: 0.5
Cooling schedule: exponential, Tj+1 = 0.9Tj
Initial line configuration: equal division of buffers and servers among stations, with any remaining resources placed on the station(s) in the middle
Reported time: elapsed wall-clock time in seconds.
As the reader will note, the algorithm given in Figure 5.5 is a simulated annealing algorithm not only for distributing N total buffer slots but also, simultaneously, S (S ≥ K) servers and work-load normalized to K.
The algorithm is available at the website associated with this text under the abbreviated name SA and may be accessed by selecting the corresponding generative/optimization algorithm.
The authors demonstrated the accuracy of the simulated annealing approach compared to complete enumeration for shorter lines with up to 9 stations; in the case of longer lines (15 stations), a comparison was made between the solutions obtained using reduced enumeration with decomposition and simulated annealing with decomposition. For comparing the results, the authors used the formalism S(G,E) to describe a closed-loop system using the generative method G and the evaluative method E. A major contribution of this work was that solutions (buffer allocations) were achieved using simulated annealing for very large lines (with up to 400 stations). Clearly, only decomposition may be used as the evaluative model in these cases.
With respect to these large lines it should be noted that the time requirements of the simulated annealing method become competitive with other methods as the number of stations and the available buffer size increase, as indicated in Figures 5.6 and 5.7. Figure 5.8 shows that the number of enumerations required for simulated annealing solutions increases linearly with the number of stations.
1. [Set initial line configuration.] Set Bi ← ⌊N/(K−1)⌋ for each buffer and set B⌊K/2⌋ ← B⌊K/2⌋ + N − ∑i=2,...,K ⌊N/(K−1)⌋; set si ← ⌊S/K⌋ for each station and set s⌊K/2⌋ ← s⌊K/2⌋ + S − ∑i=1,...,K ⌊S/K⌋; set wi ← 1/K.
2. [Set initial temperature T.] Set T ← 0.5.
3. [Initialize step and success counts.] Set I ← 0, set U ← 0.
4. [Create a new line with a random redistribution of buffer space, servers, or work-load.]
   (i) [Create a copy of the configuration vectors.] Set n′ ← n, set s′ ← s, set w′ ← w. (The moves below operate on the copies.)
   (ii) [Determine which vector to modify.] Set d ← rand[0...2].
   (iii) If d = 0 [redistribute buffer space]: move r slots from a source buffer to a destination buffer: set rs ← rand[2...K), set rd ← rand[2...K), set r ← rand[1...B′rs + 1), set B′rs ← B′rs − r, set B′rd ← B′rd + r.
   (iv) If d = 1 [redistribute server allocation]: move r servers from a source station to a destination station: set rs ← rand[1...K), set rd ← rand[1...K), set r ← rand[1...s′rs + 1), set s′rs ← s′rs − r, set s′rd ← s′rd + r.
   (v) If d = 2 [redistribute work-load]: move r work-load from a source station to a destination station: set rs ← rand[1...K), set rd ← rand[1...K), set r ← rand(0...w′rs], set w′rs ← w′rs − r, set w′rd ← w′rd + r.
5. [Calculate energy differential.] Set ΔE ← X(n, s, w) − X(n′, s′, w′).
6. [Decide acceptance of the new configuration.] Accept all new configurations that are more efficient and, following the Boltzmann probability distribution, some that are less efficient: if ΔE < 0 or exp(−ΔE/T) > rand(0...1), set n ← n′, set s ← s′, set w ← w′, set U ← U + 1.
7. [Repeat for current temperature.] Set I ← I + 1. If I < maximum number of steps, go to step 4.
8. [Lower the annealing temperature.] Set T ← cT (0 < c < 1).
9. [Check if progress has been made.] If U > 0, go to step 3; otherwise the algorithm terminates.

(Notation: we employ open intervals (a...b), closed intervals [a...b], and the floor function ⌊x⌋ to create sets of random numbers in a given range. A function rand() generates random numbers in the range specified by an interval: rand(a...b) produces a random integer x with a < x ≤ b, and rand[a...b] produces a random integer x with a ≤ x ≤ b. The half-closed intervals [a...b) and (a...b] work in a similar way.)

Fig. 5.5. Simulated annealing algorithm for distributing N buffer space, S servers, and work-load (normalized to K) in a K-station line
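A minimal executable rendering of the figure's procedure, restricted to buffer space only (the published algorithm also redistributes servers and work-load), may look as follows. The throughput evaluator X, the temperature floor, and all names are our assumptions, not the authors' code.

```python
import math
import random

def simulated_annealing(X, K, N, t0=0.5, c=0.9, max_steps=None, seed=0):
    """SA sketch for distributing N buffer slots over the K-1 buffers of a
    K-station line.  X(B) is any caller-supplied throughput evaluator."""
    rng = random.Random(seed)
    nb = K - 1
    max_steps = max_steps or 100 * K
    # Step 1: equal split, remainder to the middle buffer.
    B = [N // nb] * nb
    B[nb // 2] += N - sum(B)
    T = t0                                   # Step 2
    best = (X(B), list(B))
    while T > 1e-3:                          # temperature floor (our addition)
        successes = 0                        # Step 3
        for _ in range(max_steps):           # Steps 4-7
            Bp = list(B)
            src = rng.choice([i for i in range(nb) if Bp[i] > 0])
            dst = rng.choice([i for i in range(nb) if i != src])
            r = rng.randint(1, Bp[src])
            Bp[src] -= r
            Bp[dst] += r
            dE = X(B) - X(Bp)                # Step 5: energy = -throughput
            if dE < 0 or math.exp(-dE / T) > rng.random():   # Step 6
                B = Bp
                successes += 1
                if X(B) > best[0]:
                    best = (X(B), list(B))
        if successes == 0:                   # Step 9
            break
        T *= c                               # Step 8
    return best
```

The acceptance test is exactly the Metropolis criterion of Table 5.4; with a toy evaluator such as `lambda B: min(B)` the procedure keeps the total slot count fixed while drifting toward balanced allocations.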
[Figure: evaluated configurations vs. buffer space (1–20 buffers), 9 stations, for S(CE, Deco-1), S(RE, Deco-1) and S(SA, Deco-1)]
Fig. 5.6. Performance of simulated annealing S(SA, Deco) compared with complete S(CE, Deco) and reduced S(RE, Deco) enumerations for 9 stations (Note the log10 scale on the ordinate axis)
[Figure: evaluated configurations vs. buffer space (1–30 buffers), 15 stations, for S(CE, Deco-1), S(RE, Deco-1) and S(SA, Deco-1)]
Fig. 5.7. Performance of simulated annealing S(SA, Deco) compared with complete S(CE, Deco) and reduced S(RE, Deco) enumerations for 15 stations (Note the log10 scale on the ordinate axis)
Genetic algorithms
Another optimization method for solving the buffer allocation problem is based on genetic algorithms. These are global stochastic optimization techniques that avoid a number of the shortcomings exhibited by local search techniques on difficult search spaces and are based on the mechanics of natural selection and genetics. Spinellis and Papadopoulos (2000b) used this approach for reliable lines with exponentially
[Figure: evaluated configurations vs. station number, SA for 20–400 stations; #buffers = 3 × stations]
Fig. 5.8. Number of enumerations required for simulated annealing vs. the number of stations (Note the log10 scale on the ordinate axis)
distributed service times with mean service rates μi, i = 1,...,K, for the solution of the BAP-A problem.
Genetic algorithms rely on modeling the buffer allocation problem as a population of organisms. Each organism represents a possible valid solution to the problem.
Organisms are composed of alleles representing parts of a given solution. In each iteration, which corresponds to a generation, a new population is created by retaining solutions from, and generating new solutions out of, the previous population using three basic genetic operators, viz., the reproduction, crossover and mutation operators.
The genetic algorithm for solving the BAP-A problem is available at the website associated with this text with abbreviated name GA. This algorithm may be described in the following steps:
1. [Initialize a population of size M.] Set P0...M,0...N ← rand[0...K−1).
2. [Evaluate population members, creating throughput vector X.] For i ← 0...M: set Xi ← XK(Pi).
3. [Create roulette selection probability vector R.] Set Ri ← ∑j=0,...,i (Xj / ∑k=0,...,M Xk).
4. [Create new population P′ using crossovers from the previous population.] For i ← 0...M: if rand[0...1) < crossover rate, set c ← rand[0...N), set P′i,0...c ← PRr,0...c, set P′i,c+1...N ← PRr,c+1...N; otherwise set P′i ← PRr. Each r is selected using the roulette selection probability vector so that Rr ≤ rand[0...1) < Rr+1.
5. [Introduce mutations.] For i ← 0...M: for j ← 0...N: if rand[0...1) < mutation rate, set P′i,j ← rand[0...K−1).
6. [Keep fittest organism for elitist selection strategy.] Select k so that Xk ≥ X0...M; set P′rand[0...M) ← Pk.
7. [Make new population the current population.] Set P ← P′.
8. [Loop based on the population’s variance.] If ∑i=0,...,M |Xk − Xi| > maximum variance, go to step 2; otherwise the algorithm terminates with the optimal line setup in Pk.
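Under the allele encoding of step 1 (each of the N alleles names the buffer receiving that slot), the algorithm can be sketched in Python as follows. The evaluator X, the parameter defaults, the fixed generation count used in place of the variance test, and all names are illustrative assumptions, not the authors' implementation; roulette selection assumes strictly positive fitness values.

```python
import random

def genetic_bap(X, K, N, pop=30, gens=150, cx=0.6, mut=0.01, seed=0):
    """GA sketch for BAP-A.  Each organism is a length-N allele string; allele
    j names the buffer (0..K-2) receiving slot j.  X(B) is a caller-supplied
    positive throughput evaluator."""
    rng = random.Random(seed)
    def decode(org):                 # allele string -> buffer allocation
        B = [0] * (K - 1)
        for a in org:
            B[a] += 1
        return B
    P = [[rng.randrange(K - 1) for _ in range(N)] for _ in range(pop)]
    for _ in range(gens):
        fit = [X(decode(o)) for o in P]
        total = sum(fit)
        cum, s = [], 0.0             # cumulative roulette probabilities
        for f in fit:
            s += f / total
            cum.append(s)
        def pick():
            u = rng.random()
            for o, r in zip(P, cum):
                if u < r:
                    return o
            return P[-1]             # guard against floating-point round-off
        Q = []
        for _ in range(pop):
            child = list(pick())
            if rng.random() < cx:    # one-point crossover
                c, mate = rng.randrange(N), pick()
                child[c:] = mate[c:]
            for j in range(N):       # mutation
                if rng.random() < mut:
                    child[j] = rng.randrange(K - 1)
            Q.append(child)
        # elitist strategy: the fittest organism survives intact
        elite = P[max(range(pop), key=lambda i: fit[i])]
        Q[rng.randrange(pop)] = list(elite)
        P = Q
    return max((X(decode(o)), decode(o)) for o in P)
```

Because every organism always carries exactly N alleles, decoded allocations automatically satisfy the total-buffer constraint, which is one attraction of this encoding.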
The implementation of genetic algorithms may be tuned using different parameters.
Spinellis and Papadopoulos used the parameters that Grefenstette (1986) derived:
• a population size of 50,
• a crossover rate of 0.6,
• a mutation rate of 0.0001,
• a generation gap of 1 (the entire population is replaced during each generation),
• no scaling window, and
• an elitist selection strategy (the organism with the best performance survives intact into the next generation).
The crossover points, the mutation rates and the selection of organisms are produced using the subtractive method algorithm described by Knuth (1981). The decomposition method was used as the evaluative method for calculating throughput. From this work it appears that the simulated annealing approach is slower than the genetic algorithm in terms of computer time; but as simulated annealing results in a higher throughput using the same evaluative method, it could be argued that simulated annealing is the more accurate method. Details are shown in Figures 5.9 and 5.10, respectively.
Tabu search algorithm
According to Glover and Laguna (1998): “The word tabu or taboo comes from Tongan, a language of Polynesia, where it was used by the aborigines of Tonga island to indicate things that cannot be touched because they are sacred. According to
[Figure: evaluated configurations vs. station number, SA and GA for 20–400 stations; #buffers = 3 × stations, for S(SA, Deco-1) and S(GA, Deco-1)]
Fig. 5.9. Performance of simulated annealing S(SA, Deco) compared with genetic algorithms S(GA, Deco) for large production lines
[Figure: line throughput vs. number of stations, 20–400 stations; #buffers = 3 × stations, for S(SA, Deco-1) and S(GA, Deco-1)]
Fig. 5.10. Accuracy of simulated annealing S(SA, Deco) compared with genetic algorithms S(GA, Deco) for large production lines
Webster’s dictionary, the word now means a prohibition imposed by social custom as a protective measure or of something banned as constituting a risk. These current more pragmatic senses of the word accord well with the theme of tabu search. The risk to be avoided in this case is that of following a counter-productive course, including one which may lead to entrapment without hope of escape. On the other hand, as in a broader social context where protective prohibitions are capable of being superseded when the occasion demands, the tabus of tabu search are to be overruled when evidence of a preferred alternative becomes compelling.”
Tabu search was first introduced by Glover (1986) and a few of its basic ideas were also given by Hansen (1986).
Let x be a certain initial solution from a set of solutions, Λ, and G(x) be a neighborhood of this solution. The solution is feasible if it satisfies a certain set of constraints. Let also x∗ denote the best solution found so far, i be an iteration counter and Λ∗ be a subset of solutions in the neighborhood G(x). f(x) denotes a function which is sought to be optimized, e.g., to be minimized. Tabu search is an iterative method more sophisticated than the ordinary descent method in at least two dimensions, as follows:
(i) It makes systematic use of memory in order to avoid re-visiting the same solutions considered previously.
(ii) It uses an elaborate exploration process in order to escape from a local minimum by using intensification and diversification. Intensification’s role is to ensure that the next solution in the search process is close enough to the current solution when both of them have common features. This can be achieved by adding an extra term in the objective function and penalizing solutions which are far from the current solution. On the other hand, the role of diversification is exactly the opposite one, viz., to guarantee that the next solution is far from the current
one when it is discovered that this solution does not have the desired features.
Mathematically, diversification is carried out by inserting another term in the objective function penalizing solutions that are close to the current solution. By performing intensification and diversification, the initial objective function, f, is modified to a function fm, which may be written as:
fm(x) =f(x) +intensification + diversification.
When a solution visited at a stage is not acceptable, it has to be removed from the neighborhood G(x), as it is considered a tabu solution which should be avoided in future iterations of the search process. In that way a tabu list, T, is created, which is a collection of tabu conditions tℓ(x,h), ℓ = 1,...,t; these are constituents that are given a tabu status to indicate that they are currently not allowed to be involved in the next solution. The tℓ are functions of the solution x and of h, where h is a move applied to the solution x to direct it to a new solution, say, y = x ⊕ h (the symbol ⊕ denotes the application of move h to x to obtain y). The move h is said to be a tabu move if all the tabu conditions are satisfied. (Usually, reversible moves are used. A move h is called reversible when there exists a move h−1 such that (x ⊕ h) ⊕ h−1 = x.) However, a tabu move h may sometimes appear attractive because it gives a solution better than the best solution found so far in the search process. In that case, one would like to accept h in spite of its tabu status. This may be done if the move has an aspiration level αℓ(x,h) reaching a given threshold set Aℓ(x,h). If at least one of the conditions

αℓ(x,h) ∈ Aℓ(x,h), ℓ = 1,...,a,

is satisfied by the tabu move h applied to solution x, then move h will be accepted.
Tabu search uses a tabu list with variable size to prevent cycling of the solutions.
Following the lines of Hertz, Taillard and de Werra (1995), the steps of the Tabu search, TS, are summarized below:
Step 1: Choose an initial solution x in Λ. Set x∗ = x and i = 0.
Step 2: Set i = i + 1 and generate a subset Λ∗ of solutions in G(x,i) such that either one of the tabu conditions tℓ(x,h) ∈ T, ℓ = 1,...,t, is violated or at least one of the aspiration conditions αℓ(x,h) ∈ Aℓ(x,h), ℓ = 1,...,a, holds.
Step 3: Choose a best y = x ⊕ h in Λ∗ with respect to f(x) or to the modified function fm(x) and set x = y.
Step 4: If f(x) < f(x∗), then set x∗ = x.
Step 5: Update tabu and aspiration conditions.
Step 6: If a stopping condition is met, then stop. Else go to Step 2.
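The steps above can be sketched in Python for the buffer allocation setting. Here the neighborhood moves one slot between buffers, the tabu list stores reverse moves for a fixed number of iterations, and the aspiration criterion overrides the tabu status when a move beats the best solution found so far; unlike the minimization convention used in the text, the sketch maximizes a caller-supplied throughput X, and all names are ours.

```python
from collections import deque

def tabu_search(X, B0, tabu_len=7, max_iter=200):
    """Tabu-search sketch for BAP-A over integer buffer allocations.  A move
    h = (src, dst) shifts one slot; its reverse (dst, src) stays tabu for
    tabu_len iterations unless the aspiration criterion fires."""
    x = list(B0)
    best_x, best_f = list(x), X(x)
    tabu = deque(maxlen=tabu_len)          # fixed-length tabu list
    for _ in range(max_iter):
        candidates = []
        for s in range(len(x)):
            if x[s] == 0:
                continue                   # cannot take a slot from here
            for d in range(len(x)):
                if d == s:
                    continue
                y = list(x)
                y[s] -= 1
                y[d] += 1
                fy = X(y)
                # tabu move, unless it beats the best found so far
                if (s, d) in tabu and fy <= best_f:
                    continue
                candidates.append((fy, y, (s, d)))
        if not candidates:
            break                          # every move is tabu
        fy, y, (s, d) = max(candidates, key=lambda t: t[0])
        x = y
        tabu.append((d, s))                # forbid the reverse move
        if fy > best_f:
            best_f, best_x = fy, list(y)
    return best_f, best_x
```

Accepting the best admissible neighbor even when it worsens the objective, while the tabu list blocks the immediate return move, is what lets the search climb out of local optima that trap an ordinary descent method.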
Many applications of tabu search are given in the book by Glover, Taillard, Laguna and de Werra (1992).
Summary
The following summary may be of value to the reader who wishes to use the soft- ware available at the website associated with this book in solving buffer allocation problems.
Buffer Allocation Problem (BAP)
1. BA
• For short unreliable production lines with Erlang-k (k≥1) service and repair times and exponential times to failure and intermediate buffers.
2. DECO-1 and SA/GA
• For large reliable exponential production lines with single machine stations and finite intermediate buffers.
3. DECO-2 and SA/GA
• For large reliable exponential production lines with multiple parallel identical machine stations and finite intermediate buffers.
• For large reliable exponential production lines with multiple parallel identi- cal machine stations and finite intermediate buffers.