Operations Research Letters 27 (2000) 149–152
www.elsevier.com/locate/dsw
Why the logarithmic barrier function in convex and
linear programming?
Jean B. Lasserre
∗LAAS-CNRS, 7 Av. du Colonel Roche, 31077 Toulouse cedex 4, France
Received 1 December 1999; received in revised form 1 September 2000; accepted 6 September 2000
Abstract
We provide a simple interpretation of the use of the logarithmic barrier function in convex and linear programming. © 2000 Elsevier Science B.V. All rights reserved.
MSC: 90C05; 90C25
Keywords: Convex programming; Linear programming; Interior point methods; Logarithmic barrier function
1. Introduction
The logarithmic barrier function (LBF) in convex and linear programming has become more and more popular in view of its good performance. It is, in fact, a particular choice among many others in the class of interior penalty functions. However, apart from its a posteriori numerical efficiency, and the so-called "self-concordance" property (see, e.g. [2,4,5]), there has been so far no clue on where the LBF comes from.
We shall demonstrate here that, surprisingly enough, this function can be viewed as an (a priori) "naive" approximation in the interior of the feasible set. Given a measurable function $f:\mathbb{R}^n\to\mathbb{R}$ and a Borel set $\Omega\subset\mathbb{R}^n$ with non-empty interior, the basic ingredient
∗Fax: +33-561-336936.
E-mail address: lasserre@laas.fr (J.B. Lasserre).
is the well-known approximation
$$\sup_{x\in\Omega} f(x) \approx \frac{1}{p}\ln\int_\Omega e^{p f(x)}\,dx \qquad (1.1)$$
(for large $p$) of the global maximum of $f$ over the set $\Omega$ (see e.g. Hiriart-Urruty [6] and references therein). If we consider a convex program $\min_x\{f_0(x) \mid f_i(x)\le 0\}$ with all $f_k:\mathbb{R}^n\to\mathbb{R}$ convex, $k=0,1,\dots,m$, then applying the approximation scheme (1.1) to the Lagrangian
$$H(\lambda,x) := f_0(x) + \sum_{i=1}^m \lambda_i f_i(x)$$
in
$$\sup_{\lambda\ge 0} H(\lambda,x) = f_0(x) + \sup_{\lambda\ge 0}\left[\sum_{i=1}^m \lambda_i f_i(x)\right]$$
(which is nothing less than $f_0$ on the feasible set) yields the LBF with controlling parameter $\mu := p^{-1}$
(see also [8] for other approximation schemes of convex functions).
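Approximation (1.1) is easy to check numerically. The sketch below (a hypothetical one-dimensional example; the function, interval, and midpoint-rule quadrature are ours, not the paper's) shows $p^{-1}\ln\int_\Omega e^{pf}\,dx$ approaching $\sup_\Omega f$ as $p$ grows:

```python
import math

# Approximation (1.1): sup_{x in Omega} f(x) ~ (1/p) ln \int_Omega e^{p f(x)} dx.
# Illustrated on Omega = [0, 2] with f(x) = -(x - 1)^2, whose supremum is 0 at x = 1.
def smooth_sup(f, a, b, p, n=20000):
    """Midpoint-rule estimate of (1/p) * ln of the integral of e^{p f} over [a, b]."""
    h = (b - a) / n
    vals = [p * f(a + (i + 0.5) * h) for i in range(n)]
    m = max(vals)                                  # log-sum-exp stabilization
    s = sum(math.exp(v - m) for v in vals)
    return (m + math.log(s * h)) / p

f = lambda x: -(x - 1.0) ** 2
for p in (1, 10, 100, 1000):
    print(p, smooth_sup(f, 0.0, 2.0, p))           # tends to sup f = 0 as p grows
```

Subtracting the maximum before exponentiating is the usual log-sum-exp stabilization; it keeps the computation safe even when $pf$ takes large values.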
The approximation (1.1) could be considered as "naive" since the exact value $0 = \sup_{\lambda_i\ge 0}\lambda_i f_i(x)$ is replaced by $p^{-1}\ln\int_0^\infty e^{p\lambda_i f_i}\,d\lambda_i$. In addition, so far, there has not been any convincing numerical report of the efficiency of (1.1) when used directly in global optimization (see e.g. [6]).
In fact, if the LBF had been presented this way, one might have suspected that it would not yield an efficient method.
However, it is indeed well suited. For instance, in linear programming, at the (unique) minimizer of the primal LBF, one retrieves as multipliers the minimizer of the dual LBF (see e.g. [4]). In fact, we show that if we use the approximation (1.1) in Fenchel duality, one retrieves the dual LBF, and vice versa.
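This primal–dual fact can be sketched numerically on a hypothetical one-variable LP (all data invented; crude grid search stands in for a Newton-type solver). The multiplier $\mu/(ax_\mu-b)$ read off at the minimizer $x_\mu$ of the primal LBF matches the maximizer of the dual LBF:

```python
import math

a, b, c = 2.0, 3.0, 5.0    # min{ c x : a x >= b, x >= 0 } and its dual max{ b lam : a lam <= c, lam >= 0 }
mu = 0.01                  # barrier parameter

# primal LBF: c x - mu ln(a x - b) - mu ln x, minimized by grid search over x > b/a
xs = [b / a + 1e-6 + k * 2.0 / 200000 for k in range(200001)]
x_mu = min(xs, key=lambda x: c * x - mu * math.log(a * x - b) - mu * math.log(x))

# multiplier read off at the primal minimizer
lam_from_primal = mu / (a * x_mu - b)

# dual LBF: b lam + mu ln(c - a lam) + mu ln lam, maximized by grid search over (0, c/a)
ls = [1e-6 + k * (c / a - 2e-6) / 200000 for k in range(200001)]
lam_mu = max(ls, key=lambda l: b * l + mu * math.log(c - a * l) + mu * math.log(l))

print(x_mu, lam_from_primal, lam_mu)   # lam_from_primal and lam_mu coincide up to grid error
```

The agreement here is exact in theory: the first-order conditions of the two barrier problems are linked by $\lambda = \mu/(ax-b)$, which is the central-path relation the paper alludes to.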
The correspondence between the Laplace and Fenchel transforms via exponentials and logarithms in the Cramer transform has already been used to establish nice parallels between optimization and probability (see e.g. [1,3]) via a change of algebra. The interested reader is referred to [1,7] and the many references therein.
2. The logarithmic barrier function
We first consider the general convex programming problem and then specialize to linear programming.
2.1. Convex programming
Consider the general convex programming problem (P):
$$\text{(P)} \mapsto \min\{f_0(x) \mid f_i(x)\le 0,\ i=1,2,\dots,m\}, \qquad (2.1)$$
where $f_i:\mathbb{R}^n\to\mathbb{R}$ are convex functions, $i=0,1,\dots,m$.
Let $h:\mathbb{R}^m\to\mathbb{R}\cup\{-\infty,+\infty\}$ be the optimal value of the parametrized problem
$$y \mapsto h(y) := \inf\{f_0(x) \mid f_i(x)\le y_i,\ i=1,2,\dots,m\}. \qquad (2.2)$$
It is assumed that Slater's condition holds true, that is, there is some $x_0$ such that
$$f_i(x_0) < 0, \qquad i=1,2,\dots,m. \qquad (2.3)$$
Following notation as in [2], the LBF associated with problem (P) is just
$$x \mapsto \varphi(x,\mu) := f_0(x) - \mu\sum_{i=1}^m \ln(-f_i(x)), \qquad (2.4)$$
where $\mu > 0$ is the barrier parameter. Of course, $\varphi(x,\mu)$ is defined only on the set of points in the interior of the feasible set of (P). Most of today's interior point methods are based on this function.
Whereas $\varphi(x,\mu)$ could be just viewed as a particular choice among many in the family of "interior" penalty functions, there has been so far no explanation of where it comes from.
The purpose of this note is to show that, surprisingly enough, $\varphi$ is in fact an approximation of $h(0)$ in an apparently very naive way.
Duality in convex programming implies that
$$\sup_{\lambda\ge 0}\,\inf_x \left\{f_0(x) + \sum_{i=1}^m \lambda_i f_i(x)\right\} = \inf_x\,\max_{\lambda\ge 0} \left\{f_0(x) + \sum_{i=1}^m \lambda_i f_i(x)\right\}, \qquad (2.5)$$
the "interesting" part being the left-hand side of (2.5), since the right-hand side is just a rephrasing of (2.1). But in fact, it is this right-hand side which, when approximated via (1.1), yields the LBF.
Indeed, whenever $x$ is feasible for (P), one has
$$f_0(x) = \max_{\lambda\ge 0} \left\{f_0(x) + \sum_{i=1}^m \lambda_i f_i(x)\right\} = f_0(x) + \max_{\lambda\ge 0} \left\{\sum_{i=1}^m \lambda_i f_i(x)\right\},$$
or, equivalently,
$$f_0(x) = f_0(x) + \sum_{i=1}^m \max_{\lambda_i\ge 0} \lambda_i f_i(x),$$
and we show below that in fact
$$\varphi(x,\mu) = -m\mu\ln\mu + f_0(x) + \mu\sum_{i=1}^m \ln\int_0^\infty e^{\lambda f_i(x)/\mu}\,d\lambda.$$
In other words, the term $\max_{\lambda_i\ge 0}\lambda_i f_i(x)$, which is exactly equal to 0, is "approximated" by
$$\ln\left[\int_0^\infty \left(e^{\lambda f_i(x)}\right)^p d\lambda\right]^{1/p},$$
which for large $p$ (or small $\mu$) is indeed an approximation. Developing yields
$$\int_0^\infty e^{p\lambda f_i(x)}\,d\lambda = -\frac{1}{p f_i(x)}, \qquad i=1,\dots,m,$$
so that, up to a constant, minimizing $\varphi$ reduces to minimizing
$$f_0(x) - \frac{1}{p}\sum_{i=1}^m \ln(-f_i(x)), \qquad (2.6)$$
which is exactly the LBF (2.4) with $\mu = p^{-1}$.
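The integral identity for $\varphi(x,\mu)$ derived above can be verified directly: for $f_i(x)<0$ one has $\int_0^\infty e^{\lambda f_i(x)/\mu}\,d\lambda = -\mu/f_i(x)$, so the expression collapses to the LBF (2.4). A small check (the values of $f_0$ and the $f_i$ at an interior point are made up):

```python
import math

mu = 0.05
f0 = 3.0                       # objective value at some interior point
fi = [-1.5, -0.2, -4.0]        # constraint values, all strictly negative
m = len(fi)

# Right-hand side: -m*mu*ln(mu) + f0 + mu * sum_i ln \int_0^inf e^{lam*fi/mu} dlam,
# using the closed form \int_0^inf e^{lam*c/mu} dlam = -mu/c for c < 0.
rhs = -m * mu * math.log(mu) + f0 + mu * sum(math.log(-mu / c) for c in fi)

# Left-hand side: the logarithmic barrier function (2.4).
lbf = f0 - mu * sum(math.log(-c) for c in fi)

print(rhs, lbf)   # the two agree
```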
In fact, this is more than just a coincidence and we now illustrate in linear programming how the above approximation is well suited for duality via Laplace and Legendre–Fenchel transforms.
2.2. Linear programming
We now restrict to the linear programming case and show how using this approximation, the LBF of the dual can be obtained.
Consider the LP problem
$$\text{(P)} \mapsto \min\{c'x \mid Ax\ge b,\ x\ge 0\}, \qquad (2.7)$$
where $A$ is an $(m,n)$ matrix. Again we assume that Slater's condition holds at some point $x_0$, i.e., there is some $x_0\ge 0$ such that $Ax_0 > b$.
Let $h:\mathbb{R}^m\to\mathbb{R}\cup\{-\infty,+\infty\}$ be defined as
$$y \mapsto h(y) := \min\{c'x \mid Ax\ge y,\ x\ge 0\}. \qquad (2.8)$$
It is well known that $h$ is convex and its Fenchel transform $h^*$ is
$$\lambda \mapsto h^*(\lambda) := \sup_y\{\lambda'y - h(y)\}. \qquad (2.9)$$
Now use the approximation (1.1) for the "$\min_x$" in (2.8) and the same approximation for the "$\sup_y$". It yields
$$h^*(\lambda) \approx \frac{1}{p}\ln \int_{\mathbb{R}^m} \int_{\{x\ge 0,\ Ax\ge y\}} e^{p\lambda'y}\, e^{-pc'x}\,dx\,dy.$$
Interchanging the integration, we obtain
$$h^*(\lambda) \approx -\frac{1}{p}\sum_{i=1}^n \ln\bigl(p\,(c-A'\lambda)_i\bigr) - \frac{1}{p}\sum_{i=1}^m \ln(p\lambda_i), \qquad (2.10)$$
with everything well defined provided $\lambda > 0$ and $A'\lambda < c$.
Hence, if we now express $h(b)$ via
$$h(b) = (h^*)^*(b) = \sup_\lambda\{b'\lambda - h^*(\lambda)\}$$
and use instead the above approximation (2.10) of $h^*(\lambda)$, one gets
$$h(b) \approx \sup_\lambda \left\{b'\lambda + \frac{1}{p}\sum_{i=1}^n \ln(c-A'\lambda)_i + \frac{1}{p}\sum_{i=1}^m \ln(\lambda_i)\right\}, \qquad (2.11)$$
since we can remove the constant $(m+n)p^{-1}\ln(p^{-1})$ (note also that the latter constant vanishes as $p\to\infty$). We recognize in (2.11) the LBF associated with the dual (D) of (P).
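As a sanity check on (2.11), take a hypothetical one-variable LP $\min\{cx \mid ax\ge b,\ x\ge 0\}$ with $a,b,c>0$, for which $h(b)=cb/a$, and maximize the right-hand side of (2.11) by grid search over the dual-feasible interval $(0,c/a)$ (all numbers illustrative):

```python
import math

a, b, c = 2.0, 3.0, 5.0        # min{ c x : a x >= b, x >= 0 }, optimum h(b) = c*b/a
p = 1000.0

def dual_lbf(lam):
    # objective of (2.11): b*lam + (1/p) ln(c - a*lam) + (1/p) ln(lam)
    return b * lam + (1.0 / p) * math.log(c - a * lam) + (1.0 / p) * math.log(lam)

# crude grid search over the dual-feasible interval (0, c/a)
best = max(dual_lbf(1e-6 + k * (c / a - 2e-6) / 100000) for k in range(100001))
print(best, c * b / a)         # best approaches h(b) for large p
```

For this data the barrier maximum sits a few thousandths below $h(b)=cb/a$, the gap shrinking as $p$ grows, as the removed constant argument suggests.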
In fact, from the above we have also shown that
$$-\frac{1}{p}\sum_{i=1}^n \ln(c-A'\lambda)_i - \frac{1}{p}\sum_{i=1}^m \ln(\lambda_i)$$
approximates the $p^{-1}\ln$ of the Laplace transform of $e^{-ph(\cdot)}$. Thus, as $h(b)$ is the Fenchel transform at $b$ of $h^*(\cdot)$, maximizing the LBF in (2.11) is approximating the Cramer transform of $e^{-ph(\cdot)}$ at $b$. The controlling parameter $\mu$ that appears in the LBF comes from the approximation of "$\sup f$" by "$\mu\ln\int e^{f/\mu}$".
References
[1] F. Baccelli, G. Cohen, G.J. Olsder, J.P. Quadrat, Synchronization and Linearity, Wiley, New York, 1992.
[2] R. Cominetti, J. San Martín, Asymptotic analysis of the exponential penalty trajectory in linear programming, Math. Progr. 67 (1994) 169–187.
[3] P. Del Moral, G. Salut, Maslov optimization theory, LAAS report 94461, 1994.
[4] D. den Hertog, Interior Point Approach to Linear, Quadratic and Convex Programming, Kluwer, Dordrecht, 1994.
[5] O. Güler, L. Tunçel, Characterization of the barrier parameter of homogeneous convex cones, Math. Progr. 81 (1998) 55–76.
[6] J.B. Hiriart-Urruty, Conditions for global optimality, in: R. Horst, P.M. Pardalos (Eds.), Handbook of Global Optimization, Kluwer, Dordrecht, 1994.
[7] V.P. Maslov, Méthodes Opératorielles, Éditions Mir, Moscow, 1973 (French translation, 1987).