Chapter IV: Mathematical Optimization for the Discrete Problem
4.4 Special Cases
Subtracting $\frac{1}{2} q_1^\top q_1 + \frac{1}{2} q_2^\top q_2$ from both sides, we find
$$0 = -\frac{1}{4} q_1^\top q_1 + \frac{1}{4} q_1^\top q_2 + \frac{1}{4} q_2^\top q_1 - \frac{1}{4} q_2^\top q_2 = -\frac{1}{4} (q_1 - q_2)^\top (q_1 - q_2).$$
This implies $q_1 = q_2$. Hence, the solution of (4.5) is unique.
Due to exactness, any solution $q_1, \dots, q_n$ of (4.4) corresponds to an optimal solution
$$P = \begin{pmatrix} I_d & Q \\ Q^\top & Q^\top Q \end{pmatrix}$$
of (4.5) via $Q = \begin{pmatrix} q_1 & \cdots & q_n \end{pmatrix}$. Consequently, the solution of (4.4) is also unique. $\square$
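The rank-$d$ structure behind this correspondence is easy to check numerically: for any $Q$, the block matrix $P = \begin{pmatrix} I_d & Q \\ Q^\top & Q^\top Q \end{pmatrix}$ is positive semidefinite of rank exactly $d$. A minimal numpy sketch with illustrative data (not taken from the text):

```python
import numpy as np

d, n = 2, 3
rng = np.random.default_rng(0)
Q = rng.standard_normal((d, n))          # columns play the role of q_1, ..., q_n

# Assemble the block matrix P = [[I_d, Q], [Q^T, Q^T Q]].
P = np.block([[np.eye(d), Q], [Q.T, Q.T @ Q]])

# P = [I_d, Q]^T [I_d, Q], so it is PSD with rank exactly d.
eigs = np.linalg.eigvalsh(P)
assert eigs.min() > -1e-10               # PSD up to numerical tolerance
assert np.linalg.matrix_rank(P, tol=1e-8) == d
```

Since $P = \begin{pmatrix} I_d & Q \end{pmatrix}^\top \begin{pmatrix} I_d & Q \end{pmatrix}$, the rank equals the rank of $\begin{pmatrix} I_d & Q \end{pmatrix}$, which is $d$.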
We have shown that $q_i \in H_s \cap B_s$ for $i > s$.
We claim that $|p_i - q_i| \ge \inf_{z \in H_s \cap B_s} |p_i - z| = |p_i - q_s|$ for $i > s$. To see this, consider the problem
$$\text{minimize } |p_i - z|^2 \quad \text{s.t.} \quad \Bigl| z - \frac{p_s + q_s}{2} \Bigr|^2 \le \Bigl| \frac{p_s - q_s}{2} \Bigr|^2, \qquad (q_s - p_s) \cdot (z - q_s) \le 0.$$
The Lagrangian is
$$\begin{aligned}
L(z, \lambda_1, \lambda_2) &= |p_i - z|^2 + \lambda_1 \Bigl( \Bigl| z - \frac{p_s + q_s}{2} \Bigr|^2 - \Bigl| \frac{p_s - q_s}{2} \Bigr|^2 \Bigr) + \lambda_2 (q_s - p_s) \cdot (z - q_s) \\
&= |p_i - z|^2 + \lambda_1 (p_s - z) \cdot (q_s - z) + \lambda_2 (q_s - p_s) \cdot (z - q_s) \\
&= (1 + \lambda_1) |z|^2 + \bigl[ -2 p_i - \lambda_1 (p_s + q_s) + \lambda_2 (q_s - p_s) \bigr] \cdot z + |p_i|^2 + \lambda_1\, p_s \cdot q_s - \lambda_2 (q_s - p_s) \cdot q_s,
\end{aligned}$$
where $\lambda_1, \lambda_2 \ge 0$. Then
$$\nabla_z L(z, \lambda_1, \lambda_2) = \lambda_1 (2z - p_s - q_s) + \lambda_2 (q_s - p_s) + 2z - 2 p_i.$$
It is straightforward to verify that $z = q_s$ and $\lambda_1 = \lambda_2 = \dfrac{p_i - q_s}{q_s - p_s}$ satisfy $\nabla_z L(z, \lambda_1, \lambda_2) = 0$.
Note that $z = q_s$ is primal feasible. Also, since $p_s \le q_s < p_i$, we have $\lambda_1, \lambda_2 \ge 0$, so the multipliers are dual feasible. We have thus found a primal-dual pair satisfying the Karush-Kuhn-Tucker conditions. Hence, $q_s$ is the minimizer.
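In the one-dimensional picture, the minimization above can be sanity-checked by brute force. With illustrative numbers $p_s < q_s < p_i$ (the values below are hypothetical, and the constraint encoding follows our reading of the problem), the feasible set is the interval $[p_s, q_s]$, and the feasible point closest to $p_i$ is $q_s$:

```python
# Brute-force check (illustrative numbers): the projection of p_i onto the
# ball with diameter [p_s, q_s], intersected with the halfspace H_s, is q_s
# whenever p_s < q_s < p_i.
p_s, q_s, p_i = 0.3, 1.0, 2.5

def feasible(z):
    in_ball = abs(z - (p_s + q_s) / 2) <= abs(p_s - q_s) / 2
    in_halfspace = (q_s - p_s) * (z - q_s) <= 0
    return in_ball and in_halfspace

# Scan a fine grid and take the feasible point closest to p_i.
grid = [k / 10000 for k in range(-20000, 40001)]
best = min((z for z in grid if feasible(z)), key=lambda z: (p_i - z) ** 2)
assert abs(best - q_s) < 1e-3
```

The grid search is of course no substitute for the KKT argument; it only confirms the geometry on one concrete instance.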
Define $\widetilde q_1, \dots, \widetilde q_n$ by $\widetilde q_i = q_i$ for $i = 1, \dots, s-1$, and $\widetilde q_i = q_s$ for $i = s, \dots, n$. We claim that they satisfy
$$(p_i - \widetilde q_i) \cdot (\widetilde q_j - \widetilde q_i) \le 0 \ \text{ for all } j \ne i, \qquad (p_i - \widetilde q_i) \cdot (-\widetilde q_i) \le 0 \ \text{ for all } i.$$
By construction, it suffices to check
$$(p_i - q_s) \cdot (q_j - q_s) \le 0 \ \text{ for } j < s \le i, \qquad (p_i - q_s) \cdot (-q_s) \le 0 \ \text{ for } i \ge s.$$
To prove the first inequality, fix $j < s \le i$. First note that $q'_j \le p_j$; this follows directly from $(p_j - q_j) \cdot (-q_j) \le 0$. Next, we have $q'_j \le q'_s$ (recall that $q'_s$ is the first coordinate of $q_s \in \mathbb{R}^2$). Indeed, suppose, to the contrary, $q'_j > q'_s$. Then $q'_s < q'_j \le p_j \le p_s$, and $|p_s - q_s|^2 = |p_s - q'_s|^2 + |\beta_s|^2 > |p_s - q_j|^2$, so $q_j$ would be a closer point in $\Gamma$ to $p_s$ than $q_s$, a contradiction. Since $j < s \le i$, $q'_j \le q'_s$, $p_s \le p_i$, and
$$(p_i - p_s) \cdot (q_j - q_s) = (p_i - p_s)(q'_j - q'_s) \le 0.$$
We already know $(p_s - q_s) \cdot (q_j - q_s) \le 0$, so adding the two inequalities gives $(p_i - q_s) \cdot (q_j - q_s) \le 0$.
For the second inequality, fix $i \ge s$. We know that $(p_s - q_s) \cdot (-q_s) = (p_s - q'_s)(-q'_s) + |\beta_s|^2 \le 0$. From this, we deduce that $q'_s \ge 0$ (if $q'_s < 0$, then from $(p_s - q'_s)(-q'_s) \le 0$ we find $p_s - q'_s \le 0$, so $0 \le p_s \le q'_s < 0$, a contradiction). Since $p_i \ge p_s$ and $q'_s \ge 0$,
$$(p_i - p_s) \cdot (-q_s) = (p_i - p_s)(-q'_s) \le 0.$$
We already know $(p_s - q_s) \cdot (-q_s) \le 0$, so adding the two inequalities gives $(p_i - q_s) \cdot (-q_s) \le 0$.
Now define $r_1, \dots, r_n$ as follows: $r_i = \widetilde q_i = q_i$ for $i < s$, and $r_i = \begin{pmatrix} q'_s \\ 0 \end{pmatrix}$ for $i \ge s$. It is straightforward to verify that they satisfy
$$(p_i - r_i) \cdot (r_j - r_i) \le 0 \ \text{ for all } j \ne i, \qquad (p_i - r_i) \cdot (-r_i) \le 0 \ \text{ for all } i.$$
By Proposition 8, there exists a convex region $\widetilde\Gamma \subseteq \mathbb{R}^2$ containing $0$ such that $r_i = \pi_{\widetilde\Gamma}(p_i)$ for $i = 1, \dots, n$. Moreover,
$$\sum_{i=1}^{n} w_i |p_i - q_i|^2 \ \ge\ \sum_{i=1}^{s-1} w_i |p_i - q_i|^2 + \sum_{i=s}^{n} w_i |p_i - q_s|^2 \ >\ \sum_{i=1}^{n} w_i |p_i - r_i|^2.$$
It follows that the convex set $\Gamma$ we started with cannot be optimal. $\square$
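The strict inequality in the last chain comes from dropping the second coordinate: for a type on the axis, flattening a point $(q'_s, \beta_s)$ to $(q'_s, 0)$ decreases the squared distance by exactly $\beta_s^2$. A small illustrative check with hypothetical numbers:

```python
# Illustrative check of the flattening step: replacing (q1, beta) by (q1, 0)
# strictly decreases the squared distance to any type p = (p, 0) on the
# axis, by exactly beta**2.
q1, beta = 0.8, 0.5          # hypothetical first and second coordinates
for p in [1.0, 1.7, 2.4]:    # hypothetical types on the axis
    d_old = (p - q1) ** 2 + beta ** 2   # |p - (q1, beta)|^2
    d_new = (p - q1) ** 2               # |p - (q1, 0)|^2
    assert abs(d_old - d_new - beta ** 2) < 1e-12
    assert d_new < d_old
```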
The situation covered by Proposition 17 is one-dimensional. We next study a special case that goes beyond one dimension: we show that when $p_1, \dots, p_n$ are linearly independent, (4.5) is exact. This results from the convenient structure of the dual problem (4.12). We will use the following lemma:
Lemma 18. Suppose $y^*_{jk}$, $z^*_j$, $V^*$ are optimal for the dual problem (4.12). Let
$$S^* = C - \sum_{j \ne k} y^*_{jk} A_{jk} - \sum_{j=1}^{n} z^*_j B_j - \begin{pmatrix} V^* & 0 \\ 0 & 0 \end{pmatrix}.$$
If $\operatorname{rank} S^* \ge n$, then (4.5) is exact.
Proof. Suppose $P^*$ is optimal for (4.5). Then $S^* \bullet P^* = 0$. Since $S^*$ and $P^*$ are both positive semidefinite, the matrix product satisfies $S^* P^* = 0$. Since $S^*$ and $P^*$ are $(d+n) \times (d+n)$, $\operatorname{rank} S^* + \operatorname{rank} P^* \le d + n$. Since $\operatorname{rank} S^* \ge n$, it must be that $\operatorname{rank} P^* \le d$. By Proposition 10, Problem (4.5) is exact. $\square$

With Lemma 18, whenever we wish to show that the SDP relaxation is exact, it suffices to show that $\operatorname{rank} S^* \ge n$. We now present the special case of interest.
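The complementarity fact used in the proof (for positive semidefinite matrices, $S \bullet P = 0$ forces $SP = 0$, so the ranks add up to at most the matrix size) can be illustrated with matrices supported on orthogonal subspaces. The data below are illustrative:

```python
import numpy as np

# Build complementary PSD matrices supported on orthogonal subspaces.
rng = np.random.default_rng(1)
U = np.linalg.qr(rng.standard_normal((5, 5)))[0]   # random orthonormal basis
S = U[:, :3] @ U[:, :3].T                          # rank-3 PSD projector
P = U[:, 3:] @ U[:, 3:].T                          # rank-2 PSD projector

assert abs(np.trace(S @ P)) < 1e-10                # S . P = 0
assert np.allclose(S @ P, 0)                       # hence S P = 0
r = np.linalg.matrix_rank
assert r(S) + r(P) <= 5                            # rank S + rank P <= size
```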
Proposition 19. Suppose $p_1, \dots, p_n$ are linearly independent. Then for any $w_1, \dots, w_n > 0$, problem (4.5) with parameters $(p, w)$ is exact.
Proof. Suppose $y^*_{jk}$, $z^*_j$, $V^*$ are optimal for (4.12). Let
$$S^* = C - \sum_{j \ne k} y^*_{jk} A_{jk} - \sum_{j=1}^{n} z^*_j B_j - \begin{pmatrix} V^* & 0 \\ 0 & 0 \end{pmatrix}.$$
Then $S^*$ takes the form
$$S^* = \begin{pmatrix}
\sum_{i=1}^{n} \dfrac{w_i\, p_i p_i^\top}{4} - V^* & c_1 & c_2 & \cdots & c_n \\
c_1^\top & \sigma_{11} & \sigma_{12} & \cdots & \sigma_{1n} \\
c_2^\top & \sigma_{21} & \sigma_{22} & \cdots & \sigma_{2n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
c_n^\top & \sigma_{n1} & \sigma_{n2} & \cdots & \sigma_{nn}
\end{pmatrix},$$
where
$$c_j = -\frac{w_j\, p_j}{2} + \sum_{k \ne j} y^*_{jk} \frac{p_j}{2} + z^*_j \frac{p_j}{2} - \sum_{k \ne j} y^*_{jk} \frac{p_k}{2}, \qquad j = 1, \dots, n,$$
$$\sigma_{jj} = w_j - \sum_{k \ne j} y^*_{jk} - z^*_j, \qquad j = 1, \dots, n, \qquad\qquad \sigma_{jk} = -\frac{1}{2} y^*_{jk} - \frac{1}{2} y^*_{kj}, \qquad j \ne k.$$
We show that $c_1, c_2, \dots, c_n$ are linearly independent. Write
$$c_j = \Bigl( -w_j + \sum_{k \ne j} y^*_{jk} + z^*_j \Bigr) \frac{p_j}{2} - \sum_{k \ne j} y^*_{jk} \frac{p_k}{2}.$$
Since $p_1, \dots, p_n$ are linearly independent, it suffices to show that
$$\begin{pmatrix}
-w_1 + \sum_{k \ne 1} y^*_{1k} + z^*_1 & -y^*_{12} & \cdots & -y^*_{1n} \\
-y^*_{21} & -w_2 + \sum_{k \ne 2} y^*_{2k} + z^*_2 & \cdots & -y^*_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
-y^*_{n1} & -y^*_{n2} & \cdots & -w_n + \sum_{k \ne n} y^*_{nk} + z^*_n
\end{pmatrix}$$
is nonsingular. Since $y^*_{jk} \le 0$ and $z^*_j \le 0$, this matrix is strictly diagonally dominant, hence indeed nonsingular.
Having established the linear independence of $c_1, \dots, c_n$, we have shown that $\operatorname{rank} S^* \ge n$. By Lemma 18, Problem (4.5) is exact. $\square$
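The diagonal-dominance step of the proof is easy to replicate on illustrative data: with $w_j > 0$ and $y^*_{jk}, z^*_j \le 0$, each diagonal entry dominates its row in absolute value, and the Levy-Desplanques theorem gives nonsingularity. The values below are hypothetical:

```python
import numpy as np

# Strict diagonal dominance on an illustrative instance of the matrix in
# the proof (assumptions: w_j > 0, y_jk <= 0, z_j <= 0).
w = np.array([1.0, 0.5, 2.0])
z = np.array([-0.3, -0.1, 0.0])
y = np.array([[0.0, -0.2, -0.4],
              [-0.1, 0.0, -0.5],
              [-0.6, -0.3, 0.0]])   # y[j, k] plays the role of y*_{jk}

M = -y.copy()                       # off-diagonal entries are -y*_{jk}
for j in range(3):
    M[j, j] = -w[j] + y[j].sum() + z[j]

for j in range(3):
    off = sum(abs(M[j, k]) for k in range(3) if k != j)
    assert abs(M[j, j]) > off       # strict diagonal dominance, row-wise
assert abs(np.linalg.det(M)) > 1e-12   # hence nonsingular
```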
This result can be partially extended to the case $d \ge n$.
Lemma 20. Let $v(p, w)$ be the optimal value of problem (4.5) with parameters $(p, w)$. Then for any pair $p, p' \in \mathbb{R}^{d \times n}$,
$$|v(p, w) - v(p', w)| \le \frac{9}{4} \sum_{i=1}^{n} w_i \bigl[ 2 \max\{ |p_i|, |p'_i| \}\, |p_i - p'_i| + |p_i - p'_i|^2 \bigr]. \tag{4.14}$$
In particular, if $p^m \in \mathbb{R}^{d \times n}$ converges to $p$, then $\lim_{m \to \infty} v(p^m, w) = v(p, w)$.
Proof. Put $b_i = p_i / 2$. By Theorem 15, there exist $m \ge 0$ and a closed convex set $\Gamma \subseteq \mathbb{R}^{d+m}$ such that $0 \in \Gamma$ and $v(p, w) = \sum_{i=1}^{n} w_i |b_i - \pi_\Gamma(p_i)|^2$, identifying $p_i$ with $(p_i, 0) \in \mathbb{R}^{d+m}$. The map $\pi_\Gamma \colon \mathbb{R}^{d+m} \to \mathbb{R}^{d+m}$ is contractive, that is, $|\pi_\Gamma(x) - \pi_\Gamma(y)| \le |x - y|$ (see [13]). Consequently,
$$\begin{aligned}
|b'_i - \pi_\Gamma(p'_i)| &= \bigl| b_i - \pi_\Gamma(p_i) - [b_i - \pi_\Gamma(p_i)] + [b'_i - \pi_\Gamma(p'_i)] \bigr| \\
&\le |b_i - \pi_\Gamma(p_i)| + |b_i - b'_i| + |\pi_\Gamma(p_i) - \pi_\Gamma(p'_i)| \\
&\le |b_i - \pi_\Gamma(p_i)| + \frac{1}{2} |p_i - p'_i| + |p_i - p'_i|,
\end{aligned}$$
and, using $|b_i| = |p_i|/2$ together with $|\pi_\Gamma(p_i)| \le |p_i|$ (which holds since $0 \in \Gamma$),
$$\begin{aligned}
|b'_i - \pi_\Gamma(p'_i)|^2 &\le |b_i - \pi_\Gamma(p_i)|^2 + 3\, |b_i - \pi_\Gamma(p_i)|\, |p_i - p'_i| + \frac{9}{4} |p_i - p'_i|^2 \\
&\le |b_i - \pi_\Gamma(p_i)|^2 + \frac{9}{2} |p_i|\, |p_i - p'_i| + \frac{9}{4} |p_i - p'_i|^2.
\end{aligned}$$
Upon summation over $i$, we find
$$v(p', w) \le \sum_{i=1}^{n} w_i |b'_i - \pi_\Gamma(p'_i)|^2 \le v(p, w) + \frac{9}{4} \sum_{i=1}^{n} w_i \bigl[ 2 |p_i|\, |p_i - p'_i| + |p_i - p'_i|^2 \bigr].$$
Inequality (4.14) follows from reversing the roles of $p$ and $p'$. $\square$

Corollary 21. When $d \ge n$, Problem (4.5) admits a solution of rank $d$.
Proof. Since $d \ge n$, there exists a sequence $(p^m_1, \dots, p^m_n) \to (p_1, \dots, p_n)$ such that for each $m$, the vectors $p^m_1, \dots, p^m_n$ are linearly independent. Let $C^m$, $A^m_{jk}$, $B^m_j$ be the coefficient matrices for Problem (4.5) with input $(p^m_1, \dots, p^m_n)$. Let
$$P^m = \begin{pmatrix}
I_d & q^m_1 & \cdots & q^m_n \\
(q^m_1)^\top & y^m_{11} & \cdots & y^m_{1n} \\
\vdots & \vdots & \ddots & \vdots \\
(q^m_n)^\top & y^m_{n1} & \cdots & y^m_{nn}
\end{pmatrix}$$
be the optimal solution for Problem (4.5) with input $p^m$. By Proposition 19, $\operatorname{rank} P^m = d$, and there exists a closed convex set $\Gamma^m \subseteq \mathbb{R}^d$ such that $0 \in \Gamma^m$ and $q^m_i = \pi_{\Gamma^m}(p^m_i)$. We know that $|q^m_i| \le |p^m_i|$. Moreover, since $\operatorname{rank} P^m = d$, we must have $|y^m_{jk}| = |q^m_j \cdot q^m_k| \le |q^m_j|\, |q^m_k|$. This analysis shows that the sequence $\{P^m\}_{m=1}^{\infty}$ is bounded. After passing to a subsequence, $P^m$ converges to some matrix $P$. Note that $P$ is feasible for Problem (4.5) with input $p$, and that $\operatorname{rank} P = d$. Also, $\lim_{m \to \infty} P^m \bullet C^m = P \bullet C$. By Lemma 20, $P$ is optimal. $\square$

Remark 8. This result is weaker than Proposition 19 because it does not imply uniqueness of the solution to (4.5). It guarantees that an optimal solution of rank $d$ exists, but there may be additional solutions of higher rank.
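Two elementary projection facts carry the estimates in the proof of Lemma 20: $\pi_\Gamma$ is 1-Lipschitz, and $|\pi_\Gamma(x)| \le |x|$ whenever $0 \in \Gamma$. Both can be spot-checked for $\Gamma$ the closed unit disc, where the projection has the explicit form $\pi_\Gamma(x) = x / \max\{1, |x|\}$:

```python
import math, random

def proj_disc(x):
    """Projection onto the closed unit disc in R^2 (contains the origin)."""
    r = math.hypot(x[0], x[1])
    s = 1.0 if r <= 1.0 else 1.0 / r
    return (s * x[0], s * x[1])

def norm(v):
    return math.hypot(v[0], v[1])

random.seed(0)
for _ in range(1000):
    x = (random.uniform(-3, 3), random.uniform(-3, 3))
    y = (random.uniform(-3, 3), random.uniform(-3, 3))
    px, py = proj_disc(x), proj_disc(y)
    # 1-Lipschitz: |pi(x) - pi(y)| <= |x - y|
    assert norm((px[0] - py[0], px[1] - py[1])) \
        <= norm((x[0] - y[0], x[1] - y[1])) + 1e-12
    # 0 in Gamma implies |pi(x)| <= |x|
    assert norm(px) <= norm(x) + 1e-12
```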
Stated in terms of the fund menu problem: if there are at least as many assets as types, then the manager cannot strictly increase the aggregate fee by introducing additional independent assets with zero mean return.
BIBLIOGRAPHY
[1] Aharon Ben-Tal and Arkadi Nemirovski. Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications. SIAM, 2001.
[2] Carl M. Bender et al. "What is the optimal shape of a city?" In: Journal of Physics A: Mathematical and General 37.1 (2003), p. 147.
[3] Mats Bengtsson and Björn Ottersten. "Optimum and suboptimum transmit beamforming". In: Handbook of Antennas in Wireless Communications. CRC Press, 2018, pp. 18-1.
[4] Samuel Burer and Yinyu Ye. "Exact semidefinite formulations for a class of (random and non-random) nonconvex quadratic programs". In: Mathematical Programming 181.1 (2020), pp. 1–17.
[5] Guillaume Carlier, Ivar Ekeland, and Nizar Touzi. "Optimal derivatives design for mean-variance agents under adverse selection". In: Mathematics and Financial Economics 1.1 (2007), pp. 57–80.
[6] Jakša Cvitanić and Julien Hugonnier. "Optimal fund menus". In: Mathematical Finance 32.2 (2022), pp. 455–516.
[7] Georg Faber. "Beweis, dass unter allen homogenen Membranen von gleicher Fläche und gleicher Spannung die kreisförmige den tiefsten Grundton gibt". 1923.
[8] Michel X. Goemans and David P. Williamson. "Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming". In: Journal of the ACM 42.6 (1995), pp. 1115–1145.
[9] Sunyoung Kim and Masakazu Kojima. "Exact solutions of some nonconvex quadratic optimization problems via SDP and SOCP relaxations". In: Computational Optimization and Applications 26.2 (2003), pp. 143–154.
[10] Antoine Lemenant and Edoardo Mainini. "On convex sets that minimize the average distance". In: ESAIM: Control, Optimisation and Calculus of Variations 18.4 (2012), pp. 1049–1072.
[11] Michael Mussa and Sherwin Rosen. "Monopoly and product quality". In: Journal of Economic Theory 18.2 (1978), pp. 301–317.
[12] Jean-Charles Rochet and Philippe Choné. "Ironing, sweeping, and multidimensional screening". In: Econometrica (1998), pp. 783–826.
[13] Rolf Schneider. Convex Bodies: The Brunn-Minkowski Theory. Vol. 151. Cambridge University Press, 2014.
[14] Anthony Man-Cho So and Yinyu Ye. "Theory of semidefinite programming for sensor network localization". In: Mathematical Programming 109.2 (2007), pp. 367–384.
[15] V. A. Yakubovich. "S-procedure in nonlinear control theory". In: Vestnik Leningrad Univ. Math. 4 (1997), pp. 73–93.
[16] Fuzhen Zhang. The Schur Complement and Its Applications. Vol. 4. Springer Science & Business Media, 2006.
A p p e n d i x A
GAMMA CONVERGENCE
We provide justification for focusing on the finite case of minimizing $\sum_{i=1}^{n} w_i \bigl| \frac{p_i}{2} - \pi_\Gamma(p_i) \bigr|^2$.
Consider a bounded region $\Omega \subseteq \mathbb{R}^d$ and a finite measure $\mu$. We may discretize $\mu$ by partitioning $\mathbb{R}^d$ into dyadic cubes and forming a weighted sum of point masses at their centers. Formally, let $\mathcal{D}_m$ be the collection of dyadic cubes in $\mathbb{R}^d$ of side length $2^{-m}$. For each $Q_i \in \mathcal{D}_m$, let $c_i$ be the center of $Q_i$. Define
$$\mu_m := \sum_{Q_i \in \mathcal{D}_m} \mu(\Omega \cap Q_i)\, \delta_{c_i}. \tag{A.1}$$
Let $\mathcal{C}$ be the collection of closed convex subsets of $\mathbb{R}^d$ containing the origin, and let
$$\mathcal{F}_m(\Gamma) := \int \Bigl| \frac{p}{2} - \pi_\Gamma(p) \Bigr|^2 \, d\mu_m.$$
Loosely speaking, as the dyadic cube partition gets finer, the discretized problem $\operatorname{minimize}_{\Gamma \in \mathcal{C}} \mathcal{F}_m(\Gamma)$ "approximates" the target problem $\operatorname{minimize}_{\Gamma \in \mathcal{C}} \mathcal{F}(\Gamma)$ better and better. The precise formulation of such approximation (often known as Gamma-convergence in the calculus of variations) is given by the following result:
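The discretization (A.1) is straightforward to implement. The sketch below takes $\mu$ to be Lebesgue measure on $\Omega = [0,1]^2$ (an illustrative assumption, not from the text); the dyadic cubes then tile $\Omega$ exactly, and the atoms' masses sum to $\mu(\Omega) = 1$:

```python
# Discretize mu = Lebesgue measure on Omega = [0,1]^2 by dyadic cubes of
# side 2**-m, as in (A.1): each cube carries mass mu(Omega & Q_i), placed
# at its center c_i.
def dyadic_discretization(m):
    h = 2.0 ** -m
    atoms = []                              # (center, mass) pairs
    for i in range(2 ** m):
        for j in range(2 ** m):
            center = ((i + 0.5) * h, (j + 0.5) * h)
            atoms.append((center, h * h))   # each cube lies inside Omega
    return atoms

mu3 = dyadic_discretization(3)
assert len(mu3) == 64
assert abs(sum(mass for _, mass in mu3) - 1.0) < 1e-12   # total mass = mu(Omega)
```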
Proposition 22. Suppose $\Omega \subseteq \mathbb{R}^d$ is bounded, and $\mu$ is a finite measure on $\Omega$. Define $\mu_m$ as in (A.1). Then there exists a sequence of minimizers $\{\Gamma_m\}$ for the problems $\operatorname{minimize}_{\Gamma \in \mathcal{C}} \mathcal{F}_m(\Gamma)$ having an accumulation point with respect to the Hausdorff distance $d_H$. Moreover, any such accumulation point is an optimal solution of $\operatorname{minimize}_{\Gamma \in \mathcal{C}} \mathcal{F}(\Gamma)$.
Lemma 23. Suppose $\{\Gamma_m\}$ is a sequence in $\mathcal{C}$ such that $\Gamma_m \to \Gamma$ in $d_H$. Then $\mathcal{F}_m(\Gamma_m) \to \mathcal{F}(\Gamma)$.
Proof. $|\mathcal{F}_m(\Gamma_m) - \mathcal{F}(\Gamma)| \le |\mathcal{F}_m(\Gamma_m) - \mathcal{F}(\Gamma_m)| + |\mathcal{F}(\Gamma_m) - \mathcal{F}(\Gamma)|$. By Proposition 6, the second term on the right tends to $0$ as $m \to \infty$.
Now consider the first term. On each dyadic cube $Q_i$,
$$\int_{Q_i} \Bigl| \frac{p}{2} - \pi_{\Gamma_m}(p) \Bigr|^2 \, d\mu_m = \int_{Q_i} \Bigl| \frac{c_i}{2} - \pi_{\Gamma_m}(c_i) \Bigr|^2 \, d\mu.$$
Since $|\pi_{\Gamma_m}(p)| \le |p|$ and $|\pi_{\Gamma_m}(c_i) - \pi_{\Gamma_m}(p)| \le |c_i - p|$,
$$\biggl| \int_{Q_i} \Bigl| \frac{c_i}{2} - \pi_{\Gamma_m}(c_i) \Bigr|^2 \, d\mu - \int_{Q_i} \Bigl| \frac{p}{2} - \pi_{\Gamma_m}(p) \Bigr|^2 \, d\mu \biggr| \lesssim \int_{Q_i} (|c_i| + |p|)\, |c_i - p| \, d\mu.$$
By the dominated convergence theorem, as $m \to \infty$,
$$\sum_{Q_i \in \mathcal{D}_m} \int_{Q_i} (|c_i| + |p|)\, |c_i - p| \, d\mu \to 0. \qquad \square$$
Proof of Proposition 22. Since $\Omega$ is bounded, all $\mu_m$ are supported in a large enough ball, and we may assume that the $\Gamma_m$ are contained in this ball. By the Blaschke selection theorem, $\{\Gamma_m\}$ contains a subsequence that converges in the Hausdorff distance. Suppose $\Gamma$ is any accumulation point of $\{\Gamma_m\}$. By Lemma 23, $\mathcal{F}_m(\Gamma_m) \to \mathcal{F}(\Gamma)$. Moreover, if $\Gamma' \in \mathcal{C}$, then
$$\mathcal{F}(\Gamma') = \lim_{m \to \infty} \mathcal{F}_m(\Gamma') \ge \lim_{m \to \infty} \mathcal{F}_m(\Gamma_m) = \mathcal{F}(\Gamma).$$
Consequently, $\Gamma$ is a minimizer of $\mathcal{F}(\cdot)$. $\square$
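The convergence $\mathcal{F}_m(\Gamma) \to \mathcal{F}(\Gamma)$ for a fixed set can also be observed numerically. Take $\Gamma$ the closed unit disc and $\mu$ Lebesgue measure on $\Omega = [1,2]^2$ (illustrative choices, not from the text); on $\Omega$ every point has $|p| \ge 1$, so $\pi_\Gamma(p) = p/|p|$ and the integrand simplifies to $(|p|/2 - 1)^2$:

```python
import math

def f(x, y):
    # |p/2 - pi_Gamma(p)|^2 for Gamma the unit disc; on Omega = [1,2]^2
    # we have |p| >= 1, so pi_Gamma(p) = p/|p| and the integrand is
    # (|p|/2 - 1)^2.
    r = math.hypot(x, y)
    return (r / 2 - 1) ** 2

def F_m(m):
    # F_m(Gamma): integrate f against the dyadic discretization of
    # Lebesgue measure on [1,2]^2 (cube centers, cube masses).
    h = 2.0 ** -m
    total = 0.0
    for i in range(2 ** m):
        for j in range(2 ** m):
            total += f(1 + (i + 0.5) * h, 1 + (j + 0.5) * h) * h * h
    return total

levels = [F_m(m) for m in (2, 4, 6, 8)]
# Successive dyadic refinements settle down toward a common limit F(Gamma).
assert abs(levels[2] - levels[3]) < 1e-3
assert abs(levels[2] - levels[3]) < abs(levels[0] - levels[3])
```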
A p p e n d i x B
NUMERICAL RESULTS
We examine the numerical solutions of Problem (4.5) in several cases of interest. Each case consists of points $p_1, \dots, p_n$ in $\mathbb{R}^2$ with uniform weights $w_1 = \cdots = w_n = 1$. We plot the types $p_1, \dots, p_n$ (blue), the optimal convex set $\Gamma$ (shaded region), as well as the projection points $\pi_\Gamma(p_1), \dots, \pi_\Gamma(p_n)$.
In addition to the plots, we will be interested in the numerical value of $\lambda_{d+1}(S^*)$, the $(d+1)$-st smallest eigenvalue of the optimal dual slack matrix
$$S^* = C - \sum_{j \ne k} y^*_{jk} A_{jk} - \sum_{j=1}^{n} z^*_j B_j - \begin{pmatrix} V^* & 0 \\ 0 & 0 \end{pmatrix}$$
for Problem (4.12). A strictly positive value implies $\operatorname{rank} S^* \ge n$. By Lemma 18, this allows us to verify, a posteriori, that the numerical SDP solution is exact for the complete problem (4.4).
The numerical solutions are computed with the Python package CVXPY.