Minmax regret solutions for minimax optimization problems
with uncertainty
Igor Averbakh
Mathematics Department, MS-9063, Western Washington University, Bellingham, Washington 98225, USA
Received 1 June 1999; received in revised form 1 January 2000
Abstract
We propose a general approach for finding minmax regret solutions for a class of combinatorial optimization problems with an objective function of minimax type and uncertain objective function coefficients. The approach is based on reducing a problem with uncertainty to a number of problems without uncertainty. The method is illustrated on bottleneck combinatorial optimization problems, minimax multifacility location problems and maximum weighted tardiness scheduling problems with uncertainty. © 2000 Published by Elsevier Science B.V. All rights reserved.
Keywords: Optimization under uncertainty; Polynomial algorithms; Minmax regret optimization
1. Introduction
Combinatorial optimization problems with uncertainty in input data have attracted significant research effort because of their practical importance. Two ways of modeling uncertainty are usually used: the stochastic approach and the worst-case approach.
In the stochastic approach, uncertainty is modelled by assuming some probability distribution over the space of all possible scenarios (where a scenario is a specific realization of all parameters of the problem), and the objective is to find a solution with good probabilistic performance. Models of this type are handled using stochastic programming techniques [15].
Corresponding author. Fax: +1-360-650-7788.
E-mail address: averbakh@cc.wwu.edu (I. Averbakh).
In the worst-case approach, the set of possible scenarios is described deterministically, and the objective is to find a solution that performs reasonably well under all scenarios, i.e. that has the best worst-case performance and hedges against the most hostile scenario. The performance of a feasible solution under a specific scenario may be either the objective function value, or the absolute (or relative) deviation of the objective function value from the best possible one under this scenario (the "regret", or "opportunity loss"). Solutions that optimize the worst-case performance are called robust solutions in [18]; we will use this terminology, although other robustness concepts also appear in the literature.
Uncertainty in the parameters of a model can have different origins in different application areas. It may be due to incomplete information, to fluctuations inherent in the system, or to unpredictable changes in the future; when quantitative parameters have a subjective nature (e.g. priorities of customers in location models or priorities of jobs in scheduling models), it may be convenient for a decision maker to specify intervals instead of specific values for parameters and adjust this input interactively, to get more intuition about the model. We refer the reader to [18] for a discussion of sources of uncertainty in optimization models.
Minmax regret optimization models have received increasing attention over the last decade, and by now this is a well-established area of research. A comprehensive treatment of the state of the art in minmax regret discrete optimization as of 1997, with extensive references, can be found in the book [18]. We also refer the reader to [18] for a comprehensive discussion of the motivation for the minmax regret approach and its advantages; in this paper, we discuss only technical issues.
To put minmax regret optimization in a broader context, we note that concepts similar to minmax regret have been used in areas such as competitive analysis of online algorithms, machine learning, and information theory. In competitive analysis of online algorithms, one considers problems in which information arrives one piece at a time, and decisions must be made based only on the input received so far, without knowing what will occur in the future. In these problems one typically looks at the "competitive ratio" of an algorithm, which is the worst-case ratio of the cost of the solution obtained by the algorithm to the cost of the optimal solution (in hindsight) given the sequence of events that occurred. For a recent textbook on competitive analysis of online algorithms, see [5]. In machine learning, one often attempts to bound the difference between the loss of a learning algorithm and the loss of the best solution in hindsight from some set of candidates determined in advance (see, e.g., [7]). In information theory, minmax regret and regret ratios are used to study notions such as universal prediction, universal portfolios, etc. (see, e.g., [11]).
We focus on finding minmax regret solutions for a class of combinatorial optimization problems with a minimax type of objective function and interval structure of uncertainty (i.e. each parameter can take on any value from the corresponding interval of uncertainty). The class of problems that we consider is quite general; it includes many combinatorial optimization problems with minimax objectives. The main idea of our approach is to reduce a problem with uncertainty to a number of problems without uncertainty; then, an efficient algorithm for the problem without uncertainty can be used to obtain an efficient algorithm for the problem with uncertainty. The approach is illustrated on such examples as bottleneck combinatorial optimization problems, minimax multifacility location problems, and maximum weighted tardiness scheduling problems with uncertainty. Low-order polynomial complexity bounds are given for the minmax regret versions of the bottleneck spanning tree problem, the maximum capacity path problem, and bottleneck Steiner subnetwork problems with k-connectivity constraints (k ≤ 3).
The paper is organized as follows. In Section 2, we study the problems in their most general form. In Section 3, we specialize the approach to bottleneck combinatorial optimization problems. In Sections 4 and 5, we discuss how to apply the approach to minimax multifacility location problems and maximum weighted tardiness scheduling problems, respectively.
2. Notation and general framework
Let A be a set (solution space); elements of A will be called feasible solutions. Suppose that m functions f_i(X), i ∈ [1:m], are defined on A; we assume that f_i(X) ≥ 0 for any X ∈ A and i ∈ [1:m]. A vector S = {w_i^S, i ∈ [1:m]} of m positive real numbers will be called a scenario and will represent an assignment of weights w_i^S to the functions f_i(X). When the choice of a scenario is clear from the context or is irrelevant, we drop the superscript S and denote the corresponding weight just w_i. For a scenario S and a feasible solution X ∈ A, let us define F(S, X) = max_{i∈[1:m]} {w_i^S f_i(X)}. Consider the following generic minimax optimization problem:

Problem OPT(S). Minimize {F(S, X) | X ∈ A}.

Let F*(S) denote the optimal objective function value for Problem OPT(S) (we assume that A is either a finite set or a compact metric space, and in the latter case the functions f_i(X) are assumed to be continuous, so that the minimum is attained).
Most combinatorial optimization problems with minimax objective functions can be represented as special cases of Problem OPT(S). In this paper, we consider in detail the following four examples: bottleneck combinatorial optimization problems, the distance-constrained multicenter location problem with mutual communication, the weighted p-center location problem, and the maximum weighted tardiness scheduling problem.
Everywhere in the paper, for notational convenience, for any ratio a/b of nonnegative numbers we assume that a/b = +∞ when b = 0 and a ≠ 0, and a/b = 0 when a = 0 and b = 0.
For an X ∈ A, the value F(S, X) − F*(S) is called the absolute regret for the feasible solution X under scenario S, and the value [F(S, X) − F*(S)]/F*(S) is called the relative regret for X under scenario S.
Suppose that a set SS of possible scenarios is fixed. For any X_1, X_2 ∈ A, let us define the values REGR1(X_1, X_2) and REGR2(X_1, X_2) as

REGR1(X_1, X_2) = max_{S∈SS} (F(S, X_1) − F(S, X_2)),   (1)

REGR2(X_1, X_2) = max_{S∈SS} [F(S, X_1) − F(S, X_2)]/F(S, X_2).   (2)
For an X ∈ A, let us define the worst-case absolute regret Z1(X) and the worst-case relative regret Z2(X) as follows:

Z1(X) = max_{S∈SS} {F(S, X) − F*(S)},   (3)

Z2(X) = max_{S∈SS} [F(S, X) − F*(S)]/F*(S).   (4)
Alternative ways to represent Z1(X) and Z2(X) are

Z1(X) = max_{S∈SS} max_{X′∈A} {F(S, X) − F(S, X′)},   (5)

Z2(X) = max_{S∈SS} max_{X′∈A} [F(S, X) − F(S, X′)]/F(S, X′),   (6)

Z1(X) = max_{X′∈A} REGR1(X, X′),   (7)

Z2(X) = max_{X′∈A} REGR2(X, X′).   (8)
An optimal solution to the right-hand side of (3) (of (4)) is called an absolute (relative) worst-case scenario for X. An optimal solution to the right-hand side of (7) (of (8)) is called an absolute (relative) worst-case alternative for X. Also, we define the worst-case objective function value for X as

Z0(X) = max_{S∈SS} F(S, X).   (9)
The following problems are considered in the paper:
Problem ROB-0. Find X ∈ A that minimizes Z0(X).

Problem ROB-1. Find X ∈ A that minimizes Z1(X).

Problem ROB-2. Find X ∈ A that minimizes Z2(X).

Following [18], we call Problems ROB-0, ROB-1, and ROB-2 the robust versions of Problem OPT(S).
We say that two optimization problems are equivalent if their sets of optimal solutions and their optimal objective function values are equal.
We assume that the set of scenarios SS has the following structure: for each weight w_i, a lower bound w_i^− and an upper bound w_i^+ are given, w_i^+ ≥ w_i^− > 0, and the weight w_i can take on any value in the corresponding interval of uncertainty [w_i^−, w_i^+]. Therefore, SS is the Cartesian product of the corresponding intervals of uncertainty [w_i^−, w_i^+], i ∈ [1:m].
Observation 1. To solve Problem ROB-0, it is sufficient to solve Problem OPT(S^+), where S^+ is the scenario assigning to each weight w_i the corresponding upper bound w_i^+.
According to Observation 1, Problem ROB-0 can be solved straightforwardly with the same order of complexity as Problem OPT(S). In the remainder of the paper, we focus on Problems ROB-1 and ROB-2 (“minmax regret” problems).
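Observation 1 can be checked directly on a small finite instance. The sketch below is our own illustration (the instance data and function names are not from the paper): it enumerates the corner scenarios of the box SS and confirms that the worst-case objective Z0 is always attained at the all-upper-bounds scenario S^+.

```python
from itertools import product

# Toy instance: three feasible solutions, m = 2 functions; each row fx gives
# (f_1(X), f_2(X)) for one solution X, and each weight w_i ranges over the
# interval [w_lo[i], w_hi[i]].
solutions = [(1.0, 3.0), (2.0, 2.0), (4.0, 0.5)]
w_lo, w_hi = (0.5, 1.0), (2.0, 1.5)

def F(w, fx):
    """Objective F(S, X) = max_i w_i * f_i(X) for scenario w."""
    return max(wi * fi for wi, fi in zip(w, fx))

# Each term w_i * f_i(X) is nondecreasing in w_i, so the maximum of F(S, X)
# over the box SS is attained at a corner, and in fact at S+ = w_hi.
for fx in solutions:
    worst = max(F(w, fx) for w in product(*zip(w_lo, w_hi)))
    assert abs(worst - F(w_hi, fx)) < 1e-12   # Z0(X) = F(S+, X)

# Hence ROB-0 reduces to the single deterministic problem OPT(S+):
x_rob0 = min(solutions, key=lambda fx: F(w_hi, fx))
```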
Notice that Problem OPT(S) is a special case of Problems ROB-1 and ROB-2 (corresponding to a set SS consisting of a single scenario). To get a better intuition about the minmax regret problems, the following interpretation is useful. For an ε ≥ 0 and a scenario S, an X ∈ A is called an ε-optimal solution to Problem OPT(S) if F(S, X) − F*(S) ≤ ε. Let X_ε(S) denote the set of all ε-optimal solutions to Problem OPT(S). It is reasonable to look for a solution that is ε-optimal (for a given ε ≥ 0) for all possible scenarios, that is, to look for an X ∈ ∩_{S∈SS} X_ε(S). For some values of ε such a solution exists, but for some (smaller) values of ε it does not; suppose that we want the smallest ε for which there is a solution that is ε-optimal for all scenarios. Then, the solution X* obtained by solving Problem ROB-1 will be ε-optimal for all scenarios S ∈ SS for any ε ≥ Z1(X*); also, for any ε < Z1(X*) we have ∩_{S∈SS} X_ε(S) = ∅, and for any ε ≥ Z1(X*) we have ∩_{S∈SS} X_ε(S) ≠ ∅. So, the value Z1(X*) has the interpretation of the minimum possible ε such that there exists a solution ε-optimal for Problem OPT(S) for all scenarios S ∈ SS; this value can be used as a measure of uncertainty. Problem ROB-2 has a similar interpretation if we change the definition of ε-optimality and call an X ∈ A an ε-optimal solution to Problem OPT(S) if [F(S, X) − F*(S)]/F*(S) ≤ ε.
Let S_i denote the scenario where w_i = w_i^+ and w_j = w_j^− for all j ≠ i, i.e. scenario S_i assigns to weight w_i the corresponding upper bound and to all other weights the corresponding lower bounds. For simplicity of presentation of results for Problem ROB-2, we assume F*(S_i) > 0 for all i ∈ [1:m] (this assumption can be ignored in the context of Problem ROB-1).

For any X ∈ A, let

h1_i(X) = max{w_i^+ f_i(X) − F*(S_i), 0},   (10)

h2_i(X) = [w_i^+ f_i(X) − F*(S_i)]/F*(S_i).   (11)
Consider the following problems:
Problem B1. Minimize {max_{i∈[1:m]} h1_i(X) | X ∈ A}.

Problem B2. Minimize {max_{i∈[1:m]} h2_i(X) | X ∈ A}.
Theorem 1.
(a) For any X ∈ A, Z1(X) = max_{i∈[1:m]} h1_i(X);
(b) For any X ∈ A, Z2(X) = max_{i∈[1:m]} h2_i(X).
Proof. See the appendix.
Corollary 1. Problems B1 and B2 are equivalent to Problems ROB-1 and ROB-2, respectively.
Corollary 2. For any X ∈ A, there is an absolute (relative) worst-case scenario for X that belongs to SS′ = {S_i, i ∈ [1:m]}. Therefore, the sets of optimal solutions for Problems ROB-1 and ROB-2 will not change if we replace SS with SS′.
Proof. See the proof of Theorem 1 in the appendix.
Let σ_i = w_i^+/F*(S_i), i ∈ [1:m]. Let Ŝ be the scenario that assigns weights σ_i to the functions f_i(X), i.e. Ŝ = {σ_i, i ∈ [1:m]}.

Corollary 3. The set of optimal solutions to Problem OPT(Ŝ) is equal to the set of optimal solutions to Problem ROB-2; i.e. to solve Problem ROB-2 it is sufficient to solve Problem OPT(Ŝ).

Proof. It is sufficient to notice that h2_i(X) = σ_i f_i(X) − 1, and to use Theorem 1(b).
According to Corollary 3, it is possible to solve Problem ROB-2 with complexity O(m · (complexity of Problem OPT(S))), by solving the m problems OPT(S_i), i ∈ [1:m], to obtain the values F*(S_i), i ∈ [1:m], and then solving Problem OPT(Ŝ). As we will see later, in many cases it is possible to avoid solving all problems OPT(S_i), i ∈ [1:m], by using the structure of the specific combinatorial problem at hand.
Corollary 3 and Observation 1 show that it is possible to obtain efficient algorithms for Problems ROB-0 and ROB-2 immediately from an efficient algorithm for Problem OPT(S). This is not necessarily the case for Problem ROB-1, since, in general, Problem B1 may not have the same structure as Problem OPT(S). However, we will see that quite often Problem B1 can be reformulated in terms of Problem OPT(S), or an algorithm for Problem OPT(S) can be modified to handle Problem B1 using the structure of the specific combinatorial optimization problem under consideration.
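For a finite solution set, the reductions above can be exercised by brute force. The following sketch (our own illustrative data and names, not from the paper) computes Z1(X) via Theorem 1(a) from the m extreme scenarios S_i, solves ROB-2 via Corollary 3 by a single call to OPT(Ŝ), and cross-checks Theorem 1(a) against direct enumeration of the corner scenarios of the box SS.

```python
from itertools import product

# Toy instance: A = three solutions, m = 3 functions; fmat[X] = (f_1(X), ...).
fmat = [(1.0, 3.0, 0.5), (2.0, 2.0, 1.0), (4.0, 0.5, 2.0)]
w_lo, w_hi = (0.5, 1.0, 0.2), (2.0, 1.5, 0.6)
m = len(w_lo)

def F(w, fx):
    """F(S, X) = max_i w_i * f_i(X)."""
    return max(wi * fi for wi, fi in zip(w, fx))

def opt_value(w):
    """F*(S): optimal value of the generic minimax problem OPT(S)."""
    return min(F(w, fx) for fx in fmat)

def scenario_Si(i):
    """S_i: weight i at its upper bound, all other weights at lower bounds."""
    return tuple(w_hi[j] if j == i else w_lo[j] for j in range(m))

Fstar = [opt_value(scenario_Si(i)) for i in range(m)]

def Z1(fx):
    """Theorem 1(a): Z1(X) = max_i h1_i(X), h1_i(X) = max{w_i+ f_i(X) - F*(S_i), 0}."""
    return max(max(w_hi[i] * fx[i] - Fstar[i], 0.0) for i in range(m))

# Corollary 3: ROB-2 is solved by OPT(Ŝ) with weights σ_i = w_i+ / F*(S_i).
sigma = tuple(w_hi[i] / Fstar[i] for i in range(m))
x_rob2 = min(fmat, key=lambda fx: F(sigma, fx))

# Cross-check Theorem 1(a): by Corollary 2 the maximum of F(S, X) - F*(S)
# over SS is attained among the scenarios S_i, which are corners of the box,
# so enumerating all corners must reproduce Z1(X).
for fx in fmat:
    brute = max(F(w, fx) - opt_value(w) for w in product(*zip(w_lo, w_hi)))
    assert abs(brute - Z1(fx)) < 1e-12
```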
3. Robust bottleneck combinatorial optimization problems
Let E = {e_1, ..., e_m} be a finite set (called the "ground set"), and let A ⊂ 2^E be a set of feasible subsets of E. Let us define functions f_i(X), i ∈ [1:m], on A as follows: for an X ∈ A,

f_i(X) = 0 if e_i ∉ X, and f_i(X) = 1 if e_i ∈ X.   (12)
For any scenario S = (w_1^S, ..., w_m^S), Problem OPT(S) in this setting will be referred to as Problem BOTTLE(S). Problems ROB-0, ROB-1 and ROB-2 in this setting will be referred to as Problems ROBBOT-0, ROBBOT-1 and ROBBOT-2, respectively. Problem BOTTLE(S) is the generic bottleneck combinatorial optimization problem and has the following interpretation: elements e_i of the ground set E have weights w_i^S, and it is required to find a feasible subset that minimizes the weight of its heaviest element. For example, E can be the set of edges of a network; then if A is the set of paths from a specified source node to a specified sink node, Problem BOTTLE(S) is the bottleneck path problem [25]; if A is the set of spanning trees, Problem BOTTLE(S) is the bottleneck spanning tree problem [6].
Consider Problem ROBBOT-1. According to Corollary 1 from Theorem 1, this problem can be replaced with Problem B1. Notice that in our case, according to (10) and (12),

h1_i(X) = max{w_i^+ f_i(X) − F*(S_i), 0} = max{(w_i^+ − F*(S_i)), 0} · f_i(X),

i ∈ [1:m]. Therefore, Problem B1 is nothing but Problem BOTTLE(S′), where scenario S′ is defined by assigning the weights w_i = max{(w_i^+ − F*(S_i)), 0} to elements e_i, i ∈ [1:m]. Thus, Problems ROBBOT-1 and ROBBOT-2 can be solved by the following
Algorithm 1.
Begin
Step 1. Obtain the values F*(S_i), i ∈ [1:m].
Step 2. Solve Problem BOTTLE(S′) and Problem BOTTLE(Ŝ), where scenario S′ is defined above, and scenario Ŝ was defined in the previous section.
End
An optimal solution for Problem BOTTLE(S′) (Problem BOTTLE(Ŝ)) is an optimal solution for Problem ROBBOT-1 (Problem ROBBOT-2). If Step 1 of Algorithm 1 is implemented by solving the m problems BOTTLE(S_i), i ∈ [1:m], then the complexity of Algorithm 1 is O(m · (complexity of solving Problem BOTTLE(S))). However, it is not necessary to solve all problems BOTTLE(S_i), i ∈ [1:m]. Step 1 of Algorithm 1 can be implemented using the following procedure.
Procedure 1. Let S_0 denote the scenario where each e_i ∈ E has the weight equal to the corresponding lower bound w_i^−.
Begin
1. Solve Problem BOTTLE(S_0) and obtain an optimal solution X_0 and the value F*(S_0).
2. For each e_i ∈ E such that e_i ∉ X_0, set F*(S_i) = F*(S_0).
3. For each e_i ∈ X_0 such that w_i^+ ≤ F*(S_0), set F*(S_i) = F*(S_0).
4. For each e_i ∈ X_0 such that w_i^+ > F*(S_0), solve Problem BOTTLE(S_i) to obtain the value F*(S_i).
End
In Procedure 1, Problem BOTTLE(S) is solved at most |X_0| + 1 times; therefore, the complexity of Procedure 1 is O(m + |X_0| · (complexity of Problem BOTTLE(S))), which is also the (improved) complexity of Algorithm 1.
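As a concrete illustration (our own sketch, not code from the paper), Algorithm 1 with Procedure 1 can be written down for the robust bottleneck spanning tree problem, using the fact that a minimum spanning tree also minimizes the maximum edge weight, so Kruskal's algorithm serves as the solver for BOTTLE(S):

```python
def bottleneck_tree(n, edges, w):
    """BOTTLE(S) for spanning trees via Kruskal: an MST also minimizes the
    maximum edge weight.  Returns (tree edge indices, bottleneck value)."""
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a
    tree, value = [], 0.0
    for e in sorted(range(len(edges)), key=lambda e: w[e]):
        u, v = edges[e]
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append(e)
            value = max(value, w[e])
    return tree, value

def robust_bottleneck_tree(n, edges, w_lo, w_hi):
    """Algorithm 1 with Procedure 1: an optimal solution of ROBBOT-1."""
    m = len(edges)
    # Procedure 1: the values F*(S_i) without solving all m bottleneck problems.
    X0, F0 = bottleneck_tree(n, edges, w_lo)       # scenario S_0 = lower bounds
    Fstar = [F0] * m                               # steps 2 and 3
    for e in X0:
        if w_hi[e] > F0:                           # step 4: only these re-solve
            w = list(w_lo); w[e] = w_hi[e]         # scenario S_e
            _, Fstar[e] = bottleneck_tree(n, edges, w)
    # Scenario S': weights max{w_e+ - F*(S_e), 0}; BOTTLE(S') solves ROBBOT-1.
    w_prime = [max(w_hi[e] - Fstar[e], 0.0) for e in range(m)]
    return bottleneck_tree(n, edges, w_prime)

# Small example: 4 nodes, 5 edges with interval weights.
edges = [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)]
w_lo = [1.0, 2.0, 1.0, 4.0, 2.0]
w_hi = [3.0, 2.5, 1.5, 6.0, 5.0]
tree, regret = robust_bottleneck_tree(4, edges, w_lo, w_hi)
```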
Let us state the complexity of Algorithm 1 for several specific bottleneck combinatorial optimization problems.
(a) Robust bottleneck spanning tree problem. Set E is the set of edges of an undirected network G = (V, E), |V| = n, |E| = m; A = {X ∈ 2^E | edges in X comprise a spanning tree of G}. Since Problem BOTTLE(S) can be solved in O(m) time in this case [6] and |X_0| ≤ n − 1, we obtain an O(nm) order of complexity for Algorithm 1.
(b) Robust bottleneck path problem. Set E is as in the previous example; A = {X ∈ 2^E | edges in X comprise a path from a given source node s to a given sink node t}. Since Problem BOTTLE(S) can be solved in O(m) time in this case [25] and |X_0| ≤ n − 1, we obtain an O(nm) order of complexity for Algorithm 1. Bottleneck path problems (also known as maximum capacity path problems) have applications in reliability theory [20], bicriteria optimization [14] and service routing [4].
(c) Robust bottleneck Steiner subnetwork problems with k-connectivity constraints. Set E is as in the previous examples. A connected network is called k-edge-connected, k ≥ 1, if it is connected and deleting any k − 1 or fewer edges (without their endpoints) does not disconnect it; it is called k-node-connected if deleting any k − 1 or fewer nodes (with their incident edges) does not disconnect it. Here A = {X ∈ 2^E | edges in X comprise a subnetwork which connects a given set T ⊂ V of terminal nodes and which is k-node-connected (or k-edge-connected)}. Bottleneck Steiner subnetwork problems have applications in telecommunication systems and VLSI; k-connectivity requirements arise in connection with reliability issues, where it is necessary to take into account the possibility of failure of several edges or nodes. For k = 2, the problem without uncertainty (i.e. Problem BOTTLE(S)) has been studied in [23,8,26]; for a general k ≥ 1 the problem has been studied in [1]. If k ≤ 3, Problem BOTTLE(S) can be solved in O(m + n log n) time (O(m) time if k = 1), obtaining a solution of cardinality not greater than k(n − 1) [26,1]; therefore, |X_0| ≤ k(n − 1), and we obtain an O(nm + n² log n) order of complexity for Algorithm 1 (O(nm) if k = 1).
4. Minmax regret minimax multifacility location problems
The minmax regret approach was first applied to a location problem by Kouvelis et al. [17] (see also [18]); they obtained polynomial algorithms for the minmax regret 1-median problem on a tree with uncertainty in the weights of nodes. For the same problem on a tree, Chen and Lin [9] and subsequently Averbakh and Berman [2] developed algorithms with improved orders of complexity; Averbakh and Berman [2] also presented a polynomial algorithm for the problem on a general network. Vairaktarakis and Kouvelis [28] considered the 1-median problem on a tree with a minmax-regret objective where both uncertainty and dynamic evolution of node demands and transportation costs are present. Labbe et al. [19] study sensitivity analysis issues for the 1-median problem on a tree. In this section, we apply the general methodology of Section 2 to minmax-regret multicenter location models.
Let G = (V, E) be a transportation network with V = (v_1, ..., v_n) the set of nodes and E the set of edges. G will also denote the set of all points of the network. For any a, b ∈ G, d(a, b) denotes the shortest distance between a and b. Nodes of the network represent customers; the customers need service from facilities that should be located on the network. It is required to locate p facilities, 1 ≤ p < n. Background information on facility location theory can be found in [22].
4.1. Minmax regret multicenter problems with mutual communication
In this model, the facilities are assumed to be distinguishable, i.e. they provide different types of service. A customer may need service from several (or all) facilities, and the facilities themselves may need service from each other. Given are np + p(p − 1)/2 numbers q_ij, r_tk, i ∈ [1:n], j ∈ [1:p], 1 ≤ t < k ≤ p; q_ij is a nonnegative upper bound on the distance between the customer located at v_i and facility j; r_tk is a nonnegative upper bound on the distance between facilities t and k. A vector X = (x_1, ..., x_p) ∈ G^p is called a location vector; the kth component x_k of X represents the location of the kth facility. A location vector X is called a feasible location vector if it satisfies the constraints

d(v_i, x_j) ≤ q_ij, i ∈ [1:n], j ∈ [1:p],   (13)

d(x_t, x_k) ≤ r_tk, 1 ≤ t < k ≤ p.   (14)
Let A be the set of all feasible location vectors. A scenario S represents an assignment of interaction weights for all "customer–facility" and "facility–facility" pairs. Specifically, a scenario S is a set of the following m = np + p(p − 1)/2 weights:

w_ij^S — interaction weight between customer v_i and facility j;
u_tk^S — interaction weight between facilities t and k,

i ∈ [1:n], j ∈ [1:p], 1 ≤ t < k ≤ p. Define m functions f_(v_i, j)(X), f_(t, k)(X) as follows: f_(v_i, j)(X) = d(v_i, x_j), f_(t, k)(X) = d(x_t, x_k). Problem OPT(S) in this setting is the problem of finding a feasible location vector that minimizes the maximum of weighted distances between customers and facilities and between pairs of facilities; it is known as the Distance-Constrained Multicenter Problem with Mutual Communication [12,10] and will be referred to as Problem MMC(S). Problems ROB-1 and ROB-2 in this setting will be referred to as Problems ROBMMC-1 and ROBMMC-2, respectively.
We assume that upper bounds w_ij^+, u_tk^+ and lower bounds w_ij^−, u_tk^− are given for the interaction weights w_ij^S, u_tk^S, i.e. SS is the Cartesian product of the m intervals of uncertainty [w_ij^−, w_ij^+], [u_tk^−, u_tk^+]. Let S_(v_i, j) denote the scenario where weight w_ij is equal to the upper bound w_ij^+ and all other interaction weights are equal to the corresponding lower bounds. Let S_(t, k) denote the scenario where weight u_tk is equal to the upper bound u_tk^+ and all other interaction weights are equal to the corresponding lower bounds.
We will use the following Distance-Constrained Feasibility Problem, which has been extensively studied in the literature [13,27]:

Problem DCF. Find X = (x_1, ..., x_p) ∈ G^p that satisfies the constraints

d(v_i, x_j) ≤ c′_ij, i ∈ [1:n], j ∈ [1:p],
d(x_t, x_k) ≤ c″_tk, 1 ≤ t < k ≤ p,

where c′_ij, c″_tk are nonnegative constants.
Consider the feasibility version of Problem ROBMMC-1.

Problem C1. For a given ε ≥ 0, find X ∈ A such that Z1(X) ≤ ε.

According to Theorem 1 and its corollaries, Problem C1 can be reformulated as follows:

Problem D1. For a given ε ≥ 0, find X ∈ A satisfying the constraints

w_ij^+ d(v_i, x_j) − F*(S_(v_i, j)) ≤ ε, i ∈ [1:n], j ∈ [1:p],   (15)

u_tk^+ d(x_t, x_k) − F*(S_(t, k)) ≤ ε, 1 ≤ t < k ≤ p.   (16)
Let ε* be the smallest value of ε such that Problem D1 is feasible; ε* is the optimal objective function value for Problem ROBMMC-1. Given the value ε*, an optimal solution for Problem ROBMMC-1 can be found by solving Problem D1 with ε = ε*. Problem D1 is feasible for any ε ≥ ε* and infeasible for any ε < ε*. Problem D1 is equivalent to Problem DCF with

c′_ij = min{q_ij, (ε + F*(S_(v_i, j)))/w_ij^+},

c″_tk = min{r_tk, (ε + F*(S_(t, k)))/u_tk^+}.
The feasibility version of Problem MMC(S) is also an instance of Problem DCF [12].
Problem DCF is NP-hard in the general case [16], but has a number of important polynomially solvable special cases; for example, if the network G is a tree [13], or if the interaction between the facilities has a special structure [10,27]. Problem MMC(S) is usually solved by converting an algorithm for its feasibility version (Problem DCF) into an algorithm for the optimization version [12]. The same approach can be used for Problem ROBMMC-1. If all values F*(S_(v_i, j)), F*(S_(t, k)) are known, a polynomial combinatorial algorithm that solves Problem DCF (and, therefore, Problem D1) can be converted into a polynomial algorithm for Problem ROBMMC-1 using binary search over ε with rational representation of the data, or using Megiddo's parametric approach [21,12]; the methodology of such conversions (using either binary search or the parametric approach) has been developed in [12], where implementation details are given in the context of problems on trees (in [12], a strongly polynomial algorithm for Problem DCF on a tree from [13] is used to obtain polynomial and strongly polynomial algorithms for Problem MMC(S) on trees). If the same conversion technique is used for Problems MMC(S) and ROBMMC-1, we obtain an algorithm for solving Problem ROBMMC-1 with computational effort roughly equivalent to solving np + p(p − 1)/2 + 1 Problems MMC(S) (including np + p(p − 1)/2 problems to obtain the values F*(S_(v_i, j)), F*(S_(t, k))). Problem ROBMMC-2 can be solved with the same order of complexity, according to Corollary 3 from Theorem 1. So, whenever Problem MMC(S) can be solved in polynomial time, we obtain polynomial algorithms for Problems ROBMMC-1 and ROBMMC-2 (for example, when G is a tree).
4.2. Minmax regret weighted p-center problems
In this model, it is assumed that the facilities are identical, and each customer is served by the closest facility. Let the set A be the set of all p-element subsets of G; X ∈ A represents chosen locations for the facilities and is called a location variant. Let m = n. Define functions f_i(X), i ∈ [1:m], as f_i(X) = d(v_i, X), where d(v_i, X) = min{d(v_i, x) | x ∈ X} is the distance between node v_i and the closest point of X. For any scenario S = (w_1^S, ..., w_m^S), Problem OPT(S) in this setting will be referred to as Problem PC(S). Problem PC(S) is the classical weighted p-center problem; the weights w_i^S, i ∈ [1:m], can be interpreted as priorities of the nodes (customers) v_i, i ∈ [1:m]. Problems ROB-1 and ROB-2 in this setting will be referred to as Problems ROBPC-1 and ROBPC-2, respectively.
Problem ROBPC-1 has been studied in [3]. The approach of [3] can be viewed as an application of the general methodology suggested in the current paper: reducing a problem with uncertainty to a similar problem without uncertainty using Theorem 1 and its Corollary 1. Namely, in [3] Problem ROBPC-1 is reduced to Problem PC(S) on an auxiliary network G′, which is obtained from the original network G by appending a "dummy" edge to each node. The auxiliary network G′ preserves the main structural properties of the original network G (e.g. if G is a tree, so is G′); see details in [3].
Problem ROBPC-2 is not studied in [3]; according to Corollary 3 from Theorem 1, it can be solved with the order of complexity O(m · (complexity of Problem PC(S))), which is polynomial whenever Problem PC(S) is polynomially solvable (e.g. when G is a tree, or when G is a general network and p = 1).
5. Minmax regret maximum weighted tardiness scheduling problems
Suppose that m jobs have to be processed on a machine; the machine cannot process more than one job at any time. Let d_i, p_i denote the due date and the processing time of job i, respectively. Processing of the jobs starts at time 0. Let A be the set of all permutations of {1, 2, ..., m}; an X ∈ A represents a sequence of processing the jobs. If there are precedence constraints for the jobs, then A is the set of permutations satisfying the precedence constraints; all the subsequent results hold also for the case of precedence constraints. We assume that the jobs are processed according to a nondelay schedule, that is, between time 0 and the completion of all jobs the machine is never idle. Therefore, a sequence X ∈ A uniquely defines the completion times C_i = C_i(X) of all the jobs i ∈ [1:m]. The tardiness of job i is defined as T_i(X) = max{C_i(X) − d_i, 0}. If we define f_i(X) = T_i(X), i ∈ [1:m], then for any scenario S = (w_1^S, ..., w_m^S), Problem OPT(S) in this setting is the maximum weighted tardiness scheduling problem and will be referred to as Problem WT(S). Problems ROB-1 and ROB-2 in this setting will be referred to as Problems ROBWT-1 and ROBWT-2, respectively. Since w_i^S f_i(X) is a nondecreasing function of C_i, Problem WT(S) can be solved in O(m²) time using the backward dynamic programming algorithm from [24, p. 33] (even with precedence constraints).
Consider Problem ROBWT-1. According to Corollary 1 from Theorem 1, this problem can be replaced with Problem B1. The functions h1_i(X) are nondecreasing functions of C_i; therefore, given the values F*(S_i), i ∈ [1:m], Problem B1 can also be solved in O(m²) time using the backward dynamic programming algorithm from [24, p. 33]. Since for any i ∈ [1:m] the value F*(S_i) can be obtained in O(m²) time using the same algorithm, we obtain that Problem B1 (and therefore Problem ROBWT-1) can be solved in O(m³) time. With Corollary 3 from Theorem 1, we obtain that Problem ROBWT-2 can also be solved in O(m³) time. The results hold without any changes if instead of T_i(X) we consider any other nondecreasing functions of C_i.
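The backward algorithm and the resulting O(m³) method for ROBWT-1 can be sketched as follows. This is our own illustration (instance data and names are not from the paper); the backward step schedules, among the remaining jobs, the one that is cheapest to complete last, which is optimal for minimizing the maximum of nondecreasing cost functions of the completion times.

```python
def min_max_cost(p, costs):
    """Backward algorithm: sequence the jobs to minimize max_i g_i(C_i), where
    each costs[i] is a nondecreasing function g_i of the completion time.
    Returns (sequence, optimal value)."""
    jobs = set(range(len(p)))
    t = sum(p)                        # completion time of the last position
    seq, value = [], 0.0
    while jobs:
        j = min(jobs, key=lambda i: costs[i](t))   # cheapest job to finish last
        value = max(value, costs[j](t))
        seq.append(j)
        jobs.remove(j)
        t -= p[j]
    seq.reverse()
    return seq, value

def tardiness_cost(w, d):
    """g_i(C) = w_i * max(C - d_i, 0): weighted tardiness of one job."""
    return lambda C, w=w, d=d: w * max(C - d, 0.0)

def robwt1(p, d, w_lo, w_hi):
    """ROBWT-1: obtain F*(S_i) from the m extreme scenarios S_i, build the
    functions h1_i, and run the same backward algorithm once more (Problem B1)."""
    m = len(p)
    Fstar = []
    for i in range(m):
        costs = [tardiness_cost(w_hi[j] if j == i else w_lo[j], d[j])
                 for j in range(m)]
        Fstar.append(min_max_cost(p, costs)[1])
    h1 = [lambda C, i=i: max(w_hi[i] * max(C - d[i], 0.0) - Fstar[i], 0.0)
          for i in range(m)]
    return min_max_cost(p, h1)

# Three jobs with interval weights.
p = [3.0, 2.0, 4.0]
d = [4.0, 6.0, 5.0]
w_lo = [1.0, 1.0, 2.0]
w_hi = [2.0, 1.5, 3.0]
seq, regret = robwt1(p, d, w_lo, w_hi)
```

Each call to `min_max_cost` takes O(m²) elementary steps, and `robwt1` makes m + 1 such calls, matching the O(m³) bound stated above.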
For an analysis of a number of other minmax regret scheduling problems, see [18].
Appendix
Proof of Theorem 1. First, let us prove part (a). Consider an arbitrary X ∈ A. Let S′ be an absolute worst-case scenario for X, and let X′ be an optimal solution for Problem OPT(S′). Let j ∈ [1:m] be an index such that F(S′, X) = w_j^{S′} f_j(X). The value F(S′, X) − F(S′, X′) cannot decrease if in scenario S′ we increase w_j^{S′} to w_j^+ and decrease w_i^{S′} to w_i^− for any i ≠ j; it cannot increase either, since S′ is an absolute worst-case scenario for X. Therefore, scenario S_j is also an absolute worst-case scenario for X, X′ is an optimal solution for Problem OPT(S_j), and Z1(X) = F(S_j, X) − F*(S_j) = w_j^+ f_j(X) − F*(S_j) = h1_j(X). Since h1_i(X) ≤ Z1(X) for any i ∈ [1:m], part (a) of the theorem is proven.
Let us now prove part (b). Consider an arbitrary X ∈ A. Let S″ be a relative worst-case scenario for X, and let X″ be an optimal solution for Problem OPT(S″). Let j ∈ [1:m] be an index such that F(S″, X) = w_j^{S″} f_j(X) (notice that F(S″, X″) ≥ w_j^{S″} f_j(X″) and w_j^{S″} > 0). Therefore, the value [F(S″, X) − F(S″, X″)]/[F(S″, X″)] cannot decrease if in scenario S″ we increase w_j^{S″} and decrease w_i^{S″} for any i ≠ j. It cannot increase either, since S″ is a relative worst-case scenario for X. Therefore, scenario S_j is also a relative worst-case scenario for X, X″ is an optimal solution for Problem OPT(S_j), and

Z2(X) = [F(S_j, X) − F*(S_j)]/F*(S_j) = [w_j^+ f_j(X) − F*(S_j)]/F*(S_j) = h2_j(X).

Since h2_i(X) ≤ Z2(X) for any i ∈ [1:m], part (b) of the theorem is proven.
References
[1] I. Averbakh, O. Berman, Bottleneck Steiner subnetwork problems with k-connectivity constraints, INFORMS J. Comput. 8 (4) (1996) 361–366.
[2] I. Averbakh, O. Berman, Minmax regret median location on a network under uncertainty, INFORMS J. Comput., to appear.
[3] I. Averbakh, O. Berman, Minmax regret p-center location on a network with demand uncertainty, Working paper, University of Toronto, Faculty of Management, Toronto, 1996.
[4] O. Berman, G.Y. Handler, Optimal minimax path of a single service unit on a network to nonservice destinations, Transp. Sci. 21 (1987) 115–122.
[5] A. Borodin, R. El-Yaniv, Online Computation and Competitive Analysis, Cambridge University Press, Cambridge, 1998.
[6] P.M. Camerini, The minimax spanning tree problem and some extensions, Inform. Process. Lett. 7 (1978) 10–14.
[7] N. Cesa-Bianchi, Y. Freund, D. Helmbold, D. Haussler, R. Schapire, M. Warmuth, How to use expert advice, J. Assoc. Comput. Mach. 44 (3) (1997) 427–485.
[8] M.S. Chang, C.Y. Tang, R.C.T. Lee, Solving the Euclidean bottleneck biconnected edge subgraph problem by 2-relative neighborhood graphs, Discrete Appl. Math. 39 (1992) 1–12.
[9] B. Chen, C. Lin, Robust one-median location problem, Networks 31 (1998) 93–103.
[10] D. Chhajed, T. Lowe, m-median and m-center problems with mutual communication: solvable special cases, Oper. Res. 40 (1992) S56–S66.
[11] T.M. Cover, Universal portfolios, Math. Finance 1 (1) (1991) 1–29.
[12] E. Erkut, R.L. Francis, A. Tamir, Distance constrained multifacility minimax location problems on tree networks, Networks 22 (1992) 37–54.
[13] R.L. Francis, T.J. Lowe, H.D. Ratliff, Distance constraints for tree network multifacility location problems, Oper. Res. 26 (1978) 571–596.
[14] P. Hansen, Bicriterion path problems, Lecture Notes in Economics and Mathematical Systems, Vol. 177, Springer, Berlin, 1980, pp. 109–127.
[15] P. Kall, S. Wallace, Stochastic Programming, Wiley, New York, 1994.
[16] A. Kolen, Tree network and planar location theory, Center for Mathematics and Computer Science, Amsterdam, The Netherlands, 1986.
[17] P. Kouvelis, G. Vairaktarakis, G. Yu, Robust 1-median location on a tree in the presence of demand and transportation cost uncertainty, Working paper 93/94-3-4, Department of Management Science and Information Systems, Graduate School of Business, The University of Texas at Austin.
[18] P. Kouvelis, G. Yu, Robust Discrete Optimization and Its Applications, Kluwer Academic Publishers, Boston, 1997.
[19] M. Labbe, J.-F. Thisse, R.E. Wendell, Sensitivity analysis in minisum facility location problems, Oper. Res. 39 (6) (1991) 961–969.
[20] E.L. Lawler, Combinatorial Optimization: Networks and Matroids, Holt, Rinehart and Winston, New York, 1976.
[21] N. Megiddo, Combinatorial optimization with rational objective functions, Math. Oper. Res. 4 (1979) 414–424.
[22] P.B. Mirchandani, R.L. Francis (Eds.), Discrete Location Theory, Wiley, New York, 1990.
[23] R.G. Parker, R.L. Rardin, Guaranteed performance heuristics for the bottleneck traveling salesman problem, Oper. Res. Lett. 2 (1984) 269–272.
[24] M. Pinedo, Scheduling, Prentice-Hall, New Jersey, 1995.
[25] A.P. Punnen, A linear time algorithm for the maximum capacity path problem, Eur. J. Oper. Res. 53 (1991) 402–404.
[26] A.P. Punnen, K.P.K. Nair, A fast and simple algorithm for the bottleneck biconnected spanning subgraph problem, Inform. Process. Lett. 50 (1994) 283–286.
[27] B. Tansel, G. Yesilkokcen, Polynomially solvable cases of multifacility distance constraints on cyclic networks, ISOLDE-6 Proceedings, 1993, pp. 269–273.