

In AMPL, the sensitivity analysis report is readily available. File amplTOYCO.txt provides the code necessary to determine the sensitivity analysis output. It requires the following additional statements (the report is sent to file a.out):

option solver cplex;
option cplex_options 'sensitivity';
solve;

# --- sensitivity analysis
display oper.down, oper.current, oper.up, oper.dual > a.out;
display x.down, x.current, x.up, x.rc > a.out;

The CPLEX option statements are needed to obtain the standard sensitivity analysis report. In the TOYCO model, the indexed variables and constraints use the root names x and oper, respectively. Using these names, the suggestive suffixes .down, .current, and .up in the display statements automatically generate the formatted sensitivity analysis report in Figure 3.13. The suffixes .dual and .rc provide the dual price and the reduced cost, respectively.

: oper.down  oper.current  oper.up  oper.dual :=
1     230        430         440        1
2     440        460         860        2
3     400        420        1e+20       0

:  x.down   x.current   x.up   x.rc :=
1  -1e+20       3          7     -4
2       0       2         10      0
3  2.33333      5        1e+20    0

Figure 3.13 AMPL sensitivity analysis report for the TOYCO model
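As a reading guide (an interpretation of the numbers above, consistent with the TOYCO model but not itself part of the solver output): the dual price of operation 1 is 1, and it applies as long as the capacity of operation 1 stays within 230 to 440 minutes (its current capacity being 430 minutes). Raising that capacity to its limit of 440 minutes therefore improves the optimal revenue by 1 × (440 − 430) = $10. Similarly, the reduced cost x.rc = −4 for x1 indicates that the unit revenue of product 1 (currently 3) must increase by more than 4, that is, past x.up = 7, before producing x1 becomes worthwhile.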

3.7 Computational Issues in Linear Programming


This chapter has presented the details of the simplex algorithm. Subsequent chapters present other algorithms: the dual simplex (Chapter 4), the revised simplex (Chapter 7), and the interior point (Chapter 22 on the website). Why the variety? The reason is that each algorithm has specific features that can be beneficial in the development of robust computer codes.

An LP code is deemed robust if it satisfies two fundamental requirements: speed and accuracy.

This section explains the transition from basic textbook presentations to current state-of-the-art robust LP codes. It addresses the issues that affect speed and accuracy and presents remedies for alleviating the problems. It also presents a comprehensive framework regarding the roles of the different LP algorithms (simplex, dual simplex, revised simplex, and interior point) in the development of numerically stable computer codes. The presentation is purposely kept math-free to concentrate on the key concepts underlying successful LP codes.

1. Simplex entering variable (pivot) rule. A new simplex iteration determines the entering and leaving variables by using the optimality and feasibility criteria. Once the two variables are determined, pivot-row operations are used to generate the next simplex tableau.

Actually, the optimality criterion presented in Section 3.3.2 is but one of several used in the development of LP codes. The following table summarizes the three prominent criteria:

Entering variable rule       Description

Classical (Section 3.3.2)    The entering variable is the one having the most
                             favorable reduced cost among all nonbasic
                             variables.

Most improvement             The entering variable is the one yielding the
                             largest total improvement in the objective value
                             among all nonbasic variables.

Steepest edge14              The entering variable is the one that yields the
                             most favorable normalized reduced cost among all
                             nonbasic variables. The algorithm moves along the
                             steepest edge leading from the current to a
                             neighboring extreme point.

14See D. Goldfarb and J. Reid, “A Practicable Steepest Edge Simplex Algorithm,” Mathematical Programming, Vol. 12, No. 1, pp. 361–371, 1977.

For the classical rule, the objective row of the simplex tableau readily provides the reduced costs of all the nonbasic variables with no additional computations. On the other hand, the most improvement rule requires considerable additional computing that first determines the value at which a nonbasic variable enters the solution and then the resulting total improvement in the objective value. The idea of the steepest edge rule, though in the “spirit” of the most improvement rule (in the sense that it indirectly takes into account the value of the entering variable), requires much less computational overhead.
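To make the distinction concrete, here is a minimal Python sketch (illustrative only, not the book's code). It assumes a minimization LP with constraint matrix A, basis inverse B_inv, and the nonbasic reduced costs already at hand; all names are hypothetical.

import numpy as np

def classical_rule(reduced_costs):
    # Dantzig rule: pick the most negative reduced cost (minimization form).
    j = int(np.argmin(reduced_costs))
    return j if reduced_costs[j] < 0 else None   # None signals optimality

def steepest_edge_rule(reduced_costs, A, B_inv):
    # Normalize each favorable reduced cost by the length of its edge
    # direction B_inv @ a_j, then pick the steepest one.
    best_val, best_j = 0.0, None
    for j, d in enumerate(reduced_costs):
        if d >= 0:
            continue                             # not an improving column
        edge = B_inv @ A[:, j]                   # simplex search direction
        norm = np.sqrt(1.0 + edge @ edge)        # reference edge length
        if d / norm < best_val:
            best_val, best_j = d / norm, j
    return best_j

Note that the classical rule is a single pass over the reduced costs, whereas the steepest edge version as written needs a matrix-vector product per candidate column; practical codes avoid this by updating the edge norms recursively between iterations rather than recomputing them.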

The trade-off among the three rules is that the classical rule is the least costly computationally but, in all likelihood, requires the highest number of iterations to reach the optimum. On the other hand, the most improvement rule is the most costly computationally but, most likely, entails the smallest number of simplex iterations. The steepest edge rule seems to represent a happy medium in terms of the amount of additional computations and the number of simplex iterations. Interestingly, test results show that the payoff from the additional computations in the most improvement rule seems no better than for the steepest edge rule. For this reason, the most improvement rule is rarely implemented in LP codes.

Although the steepest edge rule is the most common default for the selection of the entering variable, successful LP codes tend to use hybrid pricing. Initially, the simplex iterations use (a variation of) the classical rule. As the number of iterations increases, a switch is made to (a variation of) the steepest edge rule. Extensive computational experience indicates that this strategy pays off in terms of the total computer time needed to solve an LP.

2. Primal vs. dual simplex algorithm. This chapter has mainly concentrated on the details of what is sometimes referred to in the literature as the primal simplex method.

In the primal algorithm, the starting basic solution is feasible but nonoptimal. Successive iterations remain feasible as they move toward the optimum. A subsequent algorithm, called the dual simplex, was developed for LPs that start infeasible but (better than) optimal and move toward feasibility, all the while maintaining optimality. The final iteration occurs when feasibility is restored. The details of the dual algorithm are given in Chapter 4 (Section 4.4.1).

Initially, the dual algorithm was used primarily in LP post-optimal analysis (Section 4.5) and integer linear programming (Chapter 9), but not as a standalone algorithm for solving LPs. The main reason is that its rule for selecting the leaving variable was weak. This all changed, however, when the idea of the primal steepest edge rule was adapted to determine the leaving variable in the dual simplex algorithm.15

Today, the dual simplex with the steepest-edge adaptation has proven, in the majority of tests, to be twice as fast as the primal simplex, and it is currently the dominant all-purpose simplex algorithm in the major commercial codes.
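Chapter 4 develops the method fully; as a preview, the following hypothetical Python sketch shows the dual ratio test that picks the entering variable once the leaving (most infeasible) row is known. It assumes a minimization LP whose reduced costs are all nonnegative (dual feasibility); the names are illustrative.

import numpy as np

def dual_ratio_test(pivot_row, reduced_costs):
    # Entering-column choice once the leaving row is fixed: among columns
    # with a negative pivot-row entry, minimize reduced_cost / |entry| so
    # that all reduced costs remain nonnegative (optimality is preserved).
    best_ratio, best_j = np.inf, None
    for j, a in enumerate(pivot_row):
        if a < 0:
            ratio = reduced_costs[j] / -a
            if ratio < best_ratio:
                best_ratio, best_j = ratio, j
    return best_j                                # None means the LP is infeasible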

3. Revised simplex vs. tableau simplex. The simplex computations presented early in this chapter (and also in Chapter 4 for the dual simplex) generate the next simplex tableau from the immediately preceding one. The following reasons explain why the tableau simplex is not used in any commercial LP codes:

(a) Most practical LP models are highly sparse (i.e., contain a high percentage of zero coefficients in the starting iteration). Available numerical methods can reduce the amount of local computations by economizing on (even eliminating) arithmetic operations involving zero coefficients, which in turn can substantially speed up computations. This is a serious missed opportunity in tableau computations because successive tableaus quickly become populated with nonzero elements (a sketch of sparse storage follows this list).

(b) The machine roundoff error and digit loss, inherent in all computers, can propagate quickly as the number of iterations increases, possibly leading to serious loss of accuracy, particularly in large LPs.

(c) Simplex row operations carry out more computations than needed to generate the next tableau (recall that all that is needed in an iteration is the entering and leaving variables). These extra computations represent wasted computer time.
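To illustrate point (a), the following short Python sketch (an illustration, not part of any LP code) uses SciPy's compressed sparse storage; the matrix-vector product then performs arithmetic only on the stored nonzeros.

import numpy as np
from scipy.sparse import csr_matrix

# A matrix that is about 0.04% nonzero, typical of large LP constraint data.
A_dense = np.zeros((2000, 2000))
A_dense[::50, ::50] = 1.0
A = csr_matrix(A_dense)        # compressed storage: only nonzeros are kept

x = np.ones(2000)
y = A @ x                      # the product skips the absent zeros entirely
print(A.nnz, "nonzeros out of", A_dense.size, "entries")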

The revised simplex algorithm presented in Section 7.2 improves on these drawbacks. Though the method uses the same pivoting rules as the tableau method, the main difference is that it carries out the computations using matrix algebra. More details on this point are given in Section 7.2.3 following the presentation of the revised simplex algorithm.
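The flavor of the matrix-algebra approach can be suggested with a small hypothetical Python sketch (the precise development is in Section 7.2; names and conventions here are assumptions, stated for a minimization LP). The reduced costs come directly from the basis inverse and the original data, so no tableau is ever stored or updated.

import numpy as np

def price_out(B_inv, c_B, A, c, nonbasic):
    # Reduced cost of column j: c_j - y @ a_j, with simplex multipliers
    # y = c_B @ B_inv computed once per iteration and reused for all columns.
    y = c_B @ B_inv
    return {j: float(c[j] - y @ A[:, j]) for j in nonbasic}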

15See J. Forrest and D. Goldfarb, “Steepest-Edge Simplex Algorithm for Linear Programming,” Mathematical Programming, Vol. 57, No. 3, pp. 341–374, 1992.


4. Barrier (interior point) algorithm vs. simplex algorithm. The interior point algorithm (see Section 22.3 on the website) is totally different from the simplex algorithm in that it cuts across the feasible space and gradually moves (in the limit) to the optimum. Computationally, the algorithm is polynomial in problem size. The simplex algorithm, on the other hand, is exponential in problem size (hypothetical examples have been constructed where the simplex algorithm visits every corner point of the solution space before reaching the optimum).
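As a rough illustration of one common variation, the following hypothetical Python sketch evaluates the logarithmic barrier merit function for an LP in the assumed inequality form min c@x subject to A@x <= b; the parameter mu is driven toward zero as the iterations proceed.

import numpy as np

def barrier_value(c, A, b, x, mu):
    # The -mu * sum(log(slacks)) term grows without bound near the
    # boundary, keeping iterates strictly inside the feasible region.
    s = b - A @ x                  # slacks; must stay strictly positive
    if np.any(s <= 0):
        return np.inf              # x is not an interior point
    return float(c @ x - mu * np.sum(np.log(s)))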

The interior point algorithm was initially introduced in 1984 and, surprisingly, was patented by AT&T and sold on a specialized computer (apparently for an exorbitant fee) without releasing its computational details. Eventually, the scientific community “got busy” and discovered that the interior point method had roots in earlier nonlinear programming algorithms of the 1960s (see, e.g., the SUMT algorithm in Section 21.2.5). The result is the so-called barrier method with several algorithmic variations.

For extremely large problems, the barrier method has proven to be considerably faster than the fastest dual simplex algorithm. The disadvantage is that the barrier algorithm does not produce corner-point solutions, a restriction that limits its application in post-optimal analysis (Chapter 4) and also in integer programming (Chapter 9). Although methods to convert a barrier optimum interior point to a corner-point solution have been developed, the associated computational overhead is enormous, limiting its use in such applications as integer programming, where the frequent need for locating corner-point solutions is fundamental to the algorithm. Nevertheless, all commercial codes include the barrier algorithm as a tool for solving large LPs.

5. Degeneracy. As explained in Section 3.5.1, degenerate basic solutions can result in cycling, which can cause the simplex iterations to stall indefinitely at a degenerate corner point without ever reaching termination. In early versions of the simplex algorithm, provisions for degeneracy and cycling were not incorporated in most codes because of the assumption that their occurrence in practice was rare. As instances of more difficult and larger problems (particularly in the area of integer programming) were tested, computer roundoff error gave rise to degeneracy/cycling-like behavior that caused the computations to “stall” at the same objective value. The problem was circumvented by interjecting conditional random perturbation and shifting in the values of the basic variables.16

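A minimal Python sketch of the perturbation idea follows (illustrative only; production codes apply and later undo such shifts conditionally, in the spirit of the Harris reference in footnote 16 below).

import numpy as np

def perturb_rhs(b, eps=1e-7, seed=0):
    # A tiny random shift of the right-hand side makes ties in the
    # ratio test (the source of degenerate pivots) highly unlikely.
    rng = np.random.default_rng(seed)
    return np.asarray(b, dtype=float) + eps * rng.random(len(b))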

6. Input model conditioning (pre-solving). All commercial LP modeling languages and solvers attempt to condition the input data prior to actually solving the model.17 The goal is to “simplify” the model in two key ways:

(a) Reducing the model size (rows and columns) by identifying and removing redundant constraints and by possibly fixing and substituting out variables.

(b) Scaling coefficients that are widely different in magnitude to mitigate the adverse effect of digit loss when manipulating real numbers of widely different magnitudes.
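One simple conditioning scheme for point (b) is max-norm equilibration, sketched below in hypothetical Python (commercial pre-solvers use more elaborate variants, such as geometric-mean scaling).

import numpy as np

def equilibrate(A):
    # Divide each row, then each column, by its largest absolute entry,
    # pulling every coefficient into [-1, 1] to limit digit loss.
    A = np.asarray(A, dtype=float).copy()
    r = np.abs(A).max(axis=1)
    r[r == 0] = 1.0                # leave all-zero rows unchanged
    A /= r[:, None]
    c = np.abs(A).max(axis=0)
    c[c == 0] = 1.0
    A /= c[None, :]
    return A, r, c                 # keep r, c to unscale the solution later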

16See P. Harris, “Pivot Selection Methods of the Devex LP Code,” Mathematical Programming, Vol. 5, pp. 1–28, 1974.

17See A. Brearley, G. Mitra, and H. Williams, “Analysis of Mathematical Programming Problems Prior to Applying the Simplex Algorithm,” Mathematical Programming, Vol. 8, pp. 54–83, 1975.

Figure 3.14 summarizes the stages of solving an LP problem. The input model can be fed via a pre-solver to a solver, such as CPLEX or XPRESS. Alternatively, a convenient modeling language, such as AMPL, GAMS, LINDO, MOSEL, or MPL, can be used to model the LP algebraically and then internally pre-solve and translate its input data to fit the format of the solver. The solver then produces the output results in terms of the variables and constraints of the original LP model.

7. Advances in computers. It is not surprising that in the last quarter of a century, computer speed has increased more than a thousandfold. Today, a desktop computer has more power and speed than the supercomputers of yesteryear. These hardware advances (together with the algorithmic advances cited earlier) have made it possible to solve huge LPs in a matter of seconds as opposed to days (yes, days!) in the past.


