
An efficient algorithm for linear programming

V CH VENKAIAH*

SERC and Department of Applied Mathematics, Indian Institute of Science, Bangalore 560012, India

* Present address: Central Research Laboratory, Bharat Electronics Ltd., 25, M G Road, Bangalore 560001, India

MS received 22 September 1989; revised 17 May 1990

Abstract. A simple but efficient algorithm is presented for linear programming. The algorithm computes the projection matrix exactly once throughout the computation, unlike Karmarkar's algorithm, in which the projection matrix is computed at each and every iteration. The algorithm is well suited to implementation on a parallel architecture.

Complexity of the algorithm is being studied.

Keywords. Direction vector; Karmarkar's algorithm; Moore-Penrose inverse; orthogonal projection; projection matrix; projective transformation.

1. Introduction

In 1979 Khachiyan gave an algorithm, called the ellipsoid algorithm, for linear programming [1]. It has polynomial-time complexity, but it has been found not to be superior to the simplex algorithm in practice. A new polynomial-time algorithm based on a projective transformation technique was published by Karmarkar in 1984 [2]. This algorithm is claimed to be superior to the simplex algorithm even in practice.

In this paper an algorithm is presented [3]. We feel that this algorithm is more efficient than the existing algorithms because

1. it computes the projection matrix $P = I - A^+A$, which requires $O(mn^2)$ operations, exactly once throughout the computation and needs only $O(n^2)$ operations per iteration;

2. it makes use of the information available from the most recently computed point.

Note that Karmarkar's algorithm computes the projection matrix at each iteration and hence requires $O(n^{2.5})$ operations per iteration.

2. Main idea of the algorithm

Consider the following problem:

minimize $C^tX$
subject to $X \geq 0$.   (P1)

This problem is trivial because the solution is obtained by setting those components of $X$ to zero that correspond to positive components of $C$ and the other components of $X$ to 'infinity'. We depart from the usual practice and solve the above problem by an iterative method with an initial feasible solution $X^0 > 0$. The method is described in the following algorithm.

Algorithm A1

Step 1. Compute an initial feasible solution $X^0 > 0$.

Step 2. Set $Y = C$ and $K = 0$.

Step 3. Compute
$$\lambda = \min\left\{ \frac{X_i^k}{Y_i} : Y_i > 0 \right\}.$$

Step 4. Compute
$$X^{k+1} = X^k - \epsilon\lambda Y, \quad \text{where } 0 < \epsilon < 1.$$

Step 5. If the optimum is achieved then stop.

Step 6. Set $K = K + 1$, $D_k = \mathrm{diag}(X_1^k, X_2^k, X_3^k, \ldots, X_n^k)$.

Step 7. Compute $Y = D_kC$; go to step 3.

$D_kC$ can be thought of as an angular projection of $C$ onto the coordinate axes.
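As an illustration only (the function name and fixed iteration cap are our own; the paper leaves the stopping test of Step 5 abstract), here is a minimal NumPy sketch of Algorithm A1:

```python
import numpy as np

def algorithm_a1(c, x0, eps=0.9, max_iter=100):
    """Sketch of Algorithm A1 for: minimize c'x subject to x >= 0.

    eps is the step factor (0 < eps < 1); max_iter stands in for the
    paper's unspecified 'optimum achieved' test in Step 5.
    """
    x = x0.astype(float).copy()
    y = c.astype(float).copy()          # Step 2: Y = C
    for _ in range(max_iter):
        pos = y > 0
        if not pos.any():               # no positive Y_i: cannot step further
            break
        lam = np.min(x[pos] / y[pos])   # Step 3: lambda = min{X_i/Y_i : Y_i > 0}
        x = x - eps * lam * y           # Step 4: X^{k+1} = X^k - eps*lambda*Y
        y = x * c                       # Steps 6-7: Y = D_k C, D_k = diag(X^k)
    return x
```

On, say, $C = (1\ {-2})^t$ and $X^0 = (1\ 1)^t$, this drives the first component towards zero and the second towards 'infinity', in agreement with the closed form $X_i^k = (1 - \epsilon C_i/C_l)^k$ derived in §3.3.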

Now consider the original problem:

minimize $C^tX$
subject to $AX = b$,
$X \geq 0$.   (P2)

Let $A^+ = A^t(AA^t)^-$, where $^-$ denotes the generalized inverse; $A^+$ is called the Moore-Penrose inverse of $A$. Also, let $P = I - A^+A$. It can be proved that the columns of $P$ span the null space of $A$ and are normals to the hyperplanes $X_i = 0$ in the null space of $A$. $P$ is the projection operator that projects every vector in $R^n$ orthogonally onto the null space of $A$.
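As an illustration (not from the paper), the properties of $P$ just stated can be checked numerically. This sketch assumes NumPy and a constraint matrix $A$ of full row rank, so that the generalized inverse $(AA^t)^-$ is the ordinary inverse:

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0]])         # example constraint matrix (full row rank assumed)
A_plus = A.T @ np.linalg.inv(A @ A.T)   # Moore-Penrose inverse A+ = A'(AA')^{-1}
P = np.eye(A.shape[1]) - A_plus @ A     # projection onto the null space of A

assert np.allclose(A @ P, 0)            # columns of P lie in the null space of A
assert np.allclose(P @ P, P)            # P is idempotent: P^2 = P
assert np.allclose(P, P.T)              # and symmetric: P = P'
```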

To use Algorithm A1 to solve P2, we need an initial feasible solution $X^0$ such that $AX^0 = b$, and the direction vector $Y$ must lie not merely in $R^n$ but in the null space of $A$. This is achieved by applying $P$ to the direction vector. With this explanation we now give an algorithm to solve P2.

3. The algorithm

3.1 Algorithm A2

Step 1. Compute an initial feasible solution
$$X^0 = (X_1^0\ X_2^0\ X_3^0\ \ldots\ X_n^0)^t$$
such that $X_i^0 > 0$ for $i = 1, 2, \ldots, n$.


Step 2. Compute the projection operator
$$P = I - A^+A.$$

Step 3. Compute
$$C_p = PC, \qquad Y = \frac{C_p}{\|C_p\|}.$$

Step 4. Set $K = 0$.

Step 5. Compute
$$\lambda = \min\left\{ \frac{X_i^k}{Y_i} : Y_i > 0 \right\};$$
the problem has an unbounded solution if all $Y_i \leq 0$ and at least one $Y_i < 0$. If all $Y_i = 0$ then $X^k$ is a solution.

Step 6. Compute
$$X^{k+1} = X^k - \epsilon\lambda Y, \quad \text{where } 0 < \epsilon < 1.$$

Step 7. If
$$\|X^{k+1} - X^k\| < 2^{-L},$$
where $L$ is defined as in [2], then stop.

Step 8. Set $K = K + 1$, $D_k = \mathrm{diag}(d_{ii})$, where $d_{ii} = X_i^k/\sqrt{P_{ii}}$ is the distance between the point $X^k$ and the hyperplane $X_i = 0$; $P_{ii}$ is positive because $P^2 = P$.

Step 9. Compute
$$Y = \frac{PD_kC_p}{\|PD_kC_p\|};$$
go to step 5.

The distance $\delta$ between the point $X^k$ and the hyperplane $X_i = 0$ is calculated as follows. Consider
$$X_i^k - \delta\frac{P_{ii}}{\|P_i\|} = 0,$$
since $P_i/\|P_i\|$ is the unit normal to the hyperplane $X_i = 0$ and $\|P_i\| = \sqrt{P_{ii}}$. This implies
$$\delta = \frac{X_i^k}{\sqrt{P_{ii}}}.$$


Step 3 computes the orthogonal projection of $C$ onto the null space of $A$ and the initial direction vector; step 5 computes the step length; step 6 computes a new feasible solution at which the objective function value is less than the previous value; and step 9 computes the new direction vector, projected onto the null space of $A$. It can easily be seen that the computation of the algorithm is dominated by the computation of the new direction vector, which requires only $O(n^2)$ operations, whereas Karmarkar's algorithm requires $O(n^{2.5})$ operations per iteration.
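A minimal NumPy sketch of Algorithm A2 may make this cost structure concrete. It is written under stated assumptions: $A$ has full row rank, $X^0$ is feasible, and all $P_{ii} > 0$; the function name, iteration cap, and the small tolerance in the $Y_i > 0$ test are our own choices, not the paper's.

```python
import numpy as np

def algorithm_a2(A, b, c, x0, eps=0.9, L=30, max_iter=1000):
    """Sketch of Algorithm A2 for: minimize c'x s.t. Ax = b, x >= 0.

    Assumes A has full row rank, x0 is feasible (Ax0 = b, x0 > 0),
    and all diagonal entries P_ii are positive. L is the precision
    parameter: stop when the step is smaller than 2^-L.
    """
    n = A.shape[1]
    A_plus = A.T @ np.linalg.inv(A @ A.T)   # Step 2: A+ and P, computed once
    P = np.eye(n) - A_plus @ A              #   at O(mn^2) cost
    p_diag = np.sqrt(np.diag(P))            # ||P_i|| = sqrt(P_ii), assumed > 0
    cp = P @ c                              # Step 3: Cp = PC
    y = cp / np.linalg.norm(cp)
    x = x0.astype(float).copy()
    for _ in range(max_iter):               # Step 4: K = 0, 1, 2, ...
        pos = y > 1e-12
        if not pos.any():                   # all Y_i <= 0: unbounded, or x solves P2
            break
        lam = np.min(x[pos] / y[pos])       # Step 5: step length
        x_new = x - eps * lam * y           # Step 6: new feasible point
        if np.linalg.norm(x_new - x) < 2.0 ** (-L):
            return x_new                    # Step 7: stopping test
        x = x_new
        d = x / p_diag                      # Step 8: d_ii = X_i / sqrt(P_ii)
        v = P @ (d * cp)                    # Step 9: only O(n^2) work per iteration
        y = v / np.linalg.norm(v)
    return x
```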

3.2 Correctness of algorithm A2

Theorem 1. Let P2 have a bounded solution. Then Algorithm A2 converges to a solution of P2.

Proof.

(i) Nonnegativity conditions. Since $X^0 > 0$,
$$\lambda = \min\left\{ \frac{X_i^k}{Y_i} : Y_i > 0 \right\},$$
and $\epsilon < 1$, it follows that $X^{k+1} > 0$ for each $k$, and hence
$$\lim_{k\to\infty} X_i^k \geq 0$$
for each $i$.

(ii) Equality constraints. Since $AX^0 = b$ and, for $k = 0$,
$$AY = \frac{AC_p}{\|C_p\|} = \frac{APC}{\|C_p\|} = 0 \quad \text{since } C_p = PC,$$
it follows that $AX^1 = b$. For $k \geq 1$,
$$AY = \frac{APD_kC_p}{\|PD_kC_p\|} = 0,$$
and hence $AX^k = b$.

(iii) Optimality of the objective function. For $k = 0$, we have
$$C^tX^0 - C^tX^1 = \epsilon\lambda C^t\frac{C_p}{\|C_p\|} = \frac{1}{\|C_p\|}\epsilon\lambda C^tC_p = \frac{1}{\|C_p\|}\epsilon\lambda C^tPC_p \quad (\text{since } PC_p = C_p)$$
$$= \frac{1}{\|C_p\|}\epsilon\lambda C^tP^tC_p \quad (\text{since } P = P^t) = \frac{1}{\|C_p\|}\epsilon\lambda (PC)^tC_p = \frac{1}{\|C_p\|}\epsilon\lambda C_p^tC_p \quad (\text{since } PC = C_p)$$
$$= \epsilon\lambda\|C_p\| > 0.$$

For $k \geq 1$,
$$C^tX^k - C^tX^{k+1} = \epsilon\lambda C^tY = \frac{1}{\|PD_kC_p\|}\epsilon\lambda C^tPD_kC_p = \frac{1}{\|PD_kC_p\|}\epsilon\lambda C_p^tD_kC_p$$
$$= \frac{1}{\|PD_kC_p\|}\epsilon\lambda\left(\sqrt{D_k}\,C_p\right)^t\left(\sqrt{D_k}\,C_p\right),$$
where $\sqrt{D_k} = \mathrm{diag}(\sqrt{d_{ii}})$; $\sqrt{d_{ii}}$ is real because $d_{ii} > 0$. Hence
$$C^tX^k - C^tX^{k+1} = \frac{\epsilon\lambda\|\sqrt{D_k}\,C_p\|^2}{\|PD_kC_p\|} > 0.$$

Therefore $C^tX^k$ is a decreasing sequence. Since P2 is assumed to have a bounded solution, it follows that $C^tX^k$ is bounded below. Hence $C^tX^k$ converges.
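The monotone decrease of $C^tX^k$ established above can be observed on a tiny instance with the hypothetical `algorithm_a2` sketch given earlier; the instance is illustrative only:

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0]])
b = np.array([3.0])
c = np.array([1.0, 2.0, 3.0])
x0 = np.ones(3)                  # feasible: A x0 = b, x0 > 0

x = algorithm_a2(A, b, c, x0)    # sketch from section 3
print(c @ x0, ">=", c @ x)       # objective value never increases
```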

3.3 Complexity of algorithm A1

Let $L$ be such that $2^L$ is numerical infinity and $2^{-L}$ is numerical zero.

Theorem 2. Algorithm A1 converges to the solution of P1 in $O(L)$ steps.

Proof. Without loss of generality, we assume $X^0 = (1\ 1\ 1\ \ldots\ 1)^t$. Then
$$X^1 = X^0 - \epsilon\lambda_1 C, \qquad \lambda_1 = \min\left\{\frac{1}{C_i} : C_i > 0\right\}.$$
Let $\lambda_1 = 1/C_l$. Then
$$X_i^1 = 1 - \epsilon\frac{C_i}{C_l}.$$
Consider
$$X_i^2 = X_i^1 - \epsilon\lambda_2 X_i^1C_i,$$
$$\lambda_2 = \min\left\{\frac{1}{C_i} : C_i > 0\right\} = \lambda_1 \quad \text{since } X_i^1 > 0.$$
Therefore
$$X_i^2 = X_i^1\left(1 - \epsilon\frac{C_i}{C_l}\right) = \left(1 - \epsilon\frac{C_i}{C_l}\right)^2.$$
In general,
$$X_i^k = \left(1 - \epsilon\frac{C_i}{C_l}\right)^k.$$
If $C_i < 0$ then $X_i^k$ converges to $2^L$ for
$$k = \frac{L}{\log_2\left(1 - \epsilon\frac{C_i}{C_l}\right)}$$
because
$$\left(1 - \epsilon\frac{C_i}{C_l}\right)^k \geq 2^L \iff k\log_2\left(1 - \epsilon\frac{C_i}{C_l}\right) \geq L \iff k \geq \frac{L}{\log_2\left(1 - \epsilon\frac{C_i}{C_l}\right)}.$$
Similarly, if $C_i > 0$ then $X_i^k$ converges to $2^{-L}$ for
$$k = \frac{-L}{\log_2\left(1 - \epsilon\frac{C_i}{C_l}\right)}.$$
Therefore, Algorithm A1 converges to the solution in $O(L)$ steps.

4. Conclusions

Complexity of Algorithm A2 is being studied. Observe that $\lambda_1 = \lambda_2 = \cdots$ in Algorithm A1. This observation may be useful in establishing the complexity of Algorithm A2.

We feel that the path followed by our algorithm is the same as that of Karmarkar's algorithm. Therefore, Algorithm A2 takes at most $O(n)$ steps to reach the optimum, and hence the complexity of Algorithm A2 is $O(n^3)$. In practice the calculations need to be done in high precision.

Modifying the direction vector $Y = PD_kC_p/\|PD_kC_p\|$ in Step 9 of Algorithm A2 to $Y = PD_k^{\gamma_k}C_p/\|PD_k^{\gamma_k}C_p\|$ by introducing a real parameter $\gamma_k$ yields another algorithm. The resulting algorithm will be better in performance and can handle practical problems with the existing precision if optimal values of $\gamma_k$ can be computed. Further details on computing $\gamma_k$ will be discussed in a future correspondence.
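A minimal sketch of this modified Step 9, reusing the names from the earlier `algorithm_a2` sketch and treating $\gamma_k$ as a user-supplied value (the paper defers how to compute it):

```python
import numpy as np

def direction_gamma(P, d, cp, gamma):
    """Modified Step 9: Y = P D_k^gamma Cp / ||P D_k^gamma Cp||,
    with the diagonal D_k^gamma applied elementwise as d**gamma."""
    v = P @ (d ** gamma * cp)
    return v / np.linalg.norm(v)
```

Setting `gamma = 1` recovers the original Step 9.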


Acknowledgements

The author thanks Dr Eric A Lord for the discussions and Dr S K Sen for getting him a research associateship in SERC for pursuing this research work.

References

[1] Khachiyan L G, A polynomial algorithm in linear programming, Doklady Akademii Nauk SSSR 244:5 (1979), pp. 1093-1096; translated in Soviet Mathematics Doklady 20:1 (1979), pp. 191-194

[2] Karmarkar N, A new polynomial-time algorithm for linear programming, Tech. Rep., AT&T Bell Labs., New Jersey (1984)

[3] Venkaiah V Ch, An efficient algorithm for linear programming, Tech. Rep. TR\SERC\KBCS\89-003, SERC, IISc, Bangalore 560012 (1989)
