Chapter 2: Contraction Theory
2.4 Contraction Theory for Discrete-time Systems
The results presented so far can be readily extended to discrete-time nonlinear systems.
2.4.I Deterministic Perturbation
Let us consider the following nonlinear system with a bounded deterministic perturbation d_k : R^n × N → R^n, with d̄ ∈ R_{≥0} s.t. d̄ = sup_{x,k} ‖d_k(x, k)‖:

x(k+1) = f_k(x(k), k) + d_k(x(k), k)    (2.50)

where k ∈ N, x : N → R^n is the discrete system state, and f_k : R^n × N → R^n is a smooth function. Although this thesis focuses mainly on continuous-time nonlinear systems, let us briefly discuss contraction theory for (2.50) to show that the techniques of the subsequent chapters are also applicable to discrete-time nonlinear systems.
Let q_0(k) and q_1(k) be solution trajectories of (2.50) with d_k = 0 and d_k ≠ 0, respectively. Then a virtual system of q(μ, k), parameterized by μ ∈ [0, 1], which has q(μ=0, k) = q_0(k) and q(μ=1, k) = q_1(k) as its particular solutions, can be expressed as follows:

q(μ, k+1) = f_k(q(μ, k), k) + μ d_k(q_1(k), k).    (2.51)

The discrete version of robust contraction in Theorem 2.4 is given in the following theorem.
Theorem 2.8. Let x_k = x(k) and q_k = q(μ, k) for any k ∈ N. If there exists a uniformly positive definite matrix M_k(x_k, k) = Θ_k(x_k, k)^⊤ Θ_k(x_k, k) ≻ 0, ∀x_k, k, where Θ_k defines a smooth coordinate transformation of δx_k, i.e., δz_k = Θ_k(x_k, k) δx_k, s.t. either of the following equivalent conditions holds for ∃α ∈ (0, 1), ∀x_k, k:

‖Θ_{k+1}(x_{k+1}, k+1) (∂f_k/∂x_k) Θ_k(x_k, k)^{−1}‖ ≤ α    (2.52)

(∂f_k/∂x_k)^⊤ M_{k+1}(x_{k+1}, k+1) (∂f_k/∂x_k) ⪯ α² M_k(x_k, k),    (2.53)

then we have the following bound, as long as m̲I ⪯ M_k(x_k, k) ⪯ m̄I, ∀x_k, k, as in (2.26):

‖q_1(k) − q_0(k)‖ ≤ (V_ℓ(0)/√m̲) α^k + (d̄(1 − α^k)/(1 − α)) √(m̄/m̲)    (2.54)

where V_ℓ(k) = ∫_{q_0}^{q_1} ‖Θ_k(q_k, k) δq_k‖ as in (2.22), for the unperturbed trajectory q_0, the perturbed trajectory q_1, and the virtual state q_k = q(μ, k) given in (2.51).
Proof. If (2.52) or (2.53) holds, we have that

V_ℓ(k+1) ≤ ∫₀¹ ‖Θ_{k+1}(∂_{q_k} f_k(q_k, k) ∂_μ q_k + d_k(q_1(k), k))‖ dμ
≤ α ∫₀¹ ‖Θ_k(q_k, k) ∂_μ q_k‖ dμ + d̄√m̄ = α V_ℓ(k) + d̄√m̄

where Θ_{k+1} = Θ_{k+1}(q_{k+1}, k+1), ∂_{q_k} f_k(q_k, k) = ∂f_k/∂q_k, and ∂_μ q_k = ∂q_k/∂μ. Applying this inequality iteratively results in (2.54).
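As a numerical sanity check of the bound (2.54), the sketch below simulates a hypothetical contracting scalar map (not an example from the thesis): f_k(x, k) = 0.5 sin x under the identity metric, so that m̲ = m̄ = 1 and the Lipschitz constant α = 0.5 plays the role of the contraction rate in (2.52)–(2.53).

```python
import numpy as np

# Hypothetical scalar example (not from the thesis): f_k(x, k) = 0.5*sin(x),
# so |df_k/dx| <= 0.5 =: alpha with the identity metric (m_low = m_up = 1),
# and a bounded disturbance with d_bar = 0.1.
alpha, d_bar = 0.5, 0.1

def f(x):
    return 0.5 * np.sin(x)

rng = np.random.default_rng(0)
x0, x1 = 2.0, -1.0          # unperturbed / perturbed trajectories
v0 = abs(x1 - x0)           # V_l(0) with Theta_k = I
for k in range(1, 31):
    x0 = f(x0)
    x1 = f(x1) + d_bar * rng.uniform(-1.0, 1.0)  # ||d_k|| <= d_bar
    bound = v0 * alpha**k + d_bar * (1 - alpha**k) / (1 - alpha)
    assert abs(x1 - x0) <= bound + 1e-12         # inequality (2.54)
print("distance", abs(x1 - x0), "<= bound (2.54)", bound)
```

Since Θ_k = I here, V_ℓ(0) reduces to the initial distance |x_1(0) − x_0(0)|, and the asserted inequality is exactly (2.54).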
Theorem 2.8 can be used with Theorem 2.4 for stability analysis of hybrid nonlinear systems [33]–[35], or with Theorem 2.5 for stability analysis of discrete-time stochastic nonlinear systems [6], [8], [35]. For example, it is shown in [6] that if the time interval used in discretizing (2.1) as (2.50) is sufficiently small, contraction of discrete-time systems with stochastic perturbation reduces to that of continuous-time systems, as follows.
2.4.II Stochastic Perturbation
Let us also present a discrete-time version of Theorem 2.5, which can be used extensively for proving the stability of discrete-time and hybrid stochastic nonlinear systems, along with known results for deterministic systems [33], [34]. Consider the discrete-time nonlinear system with stochastic perturbation modeled by the stochastic difference equation

x(k+1) = f_k(x(k), k) + G_k(x(k), k) w(k)    (2.55)

where G_k : R^n × N → R^{n×d} is a matrix-valued function and w(k) is a d-dimensional sequence of zero-mean uncorrelated normalized Gaussian random variables. Consider the following two systems with trajectories ξ_0(k) and ξ_1(k), driven by two independent stochastic perturbations w_0(k) and w_1(k):

ξ_i(k+1) = f_k(ξ_i(k), k) + G_{i,k}(ξ_i(k), k) w_i(k),  i = 0, 1.    (2.56)

Similar to (2.36), a virtual system of q(μ, k), parameterized by μ ∈ [0, 1], which has q(μ=0, k) = ξ_0(k) and q(μ=1, k) = ξ_1(k) as its particular solutions, can be given as follows:

q(μ, k+1) = f_k(q(μ, k), k) + G_k(μ, ξ_0(k), ξ_1(k), k) w(k)    (2.57)

where G_k(μ, ξ_0(k), ξ_1(k), k) = [(1 − μ) G_{0,k}(ξ_0(k), k), μ G_{1,k}(ξ_1(k), k)] and w(k) = [w_0(k)^⊤, w_1(k)^⊤]^⊤. The following theorem analyzes stochastic incremental stability for the discrete-time nonlinear systems (2.56); it differs from [26], [35] in that stability is studied in a differential sense and the Riemannian metric is state- and time-dependent.
Theorem 2.9. Suppose that (2.53) holds for the unperturbed dynamics of the discrete-time system (2.56) with α² = 1 − γ_d, and that ∃m̲, m̄ ∈ R_{>0} and ḡ_{0d}, ḡ_{1d} ∈ R_{≥0} s.t. m̲I ⪯ M_k(x, k) ⪯ m̄I, ∀x, k, sup_{x,k} ‖G_{0,k}(x, k)‖_F = ḡ_{0d}, and sup_{x,k} ‖G_{1,k}(x, k)‖_F = ḡ_{1d}. Suppose also that ∃γ_2 ∈ (0, 1) s.t. γ_2 ≤ 1 − (m̄/m̲)(1 − γ_d), where γ_d is the contraction rate. Consider the generalized squared length with respect to a Riemannian metric M_k(q(μ, k), k) defined as

V_{sℓ}(q, δq, k) = ∫_{ξ_0}^{ξ_1} δq^⊤ M_k(q(μ, k), k) δq = ∫₀¹ (∂q/∂μ)^⊤ M_k(q(μ, k), k) (∂q/∂μ) dμ    (2.58)

s.t. V_{sℓ}(q, δq, k) ≥ m̲ ‖ξ_1(k) − ξ_0(k)‖². Then the mean squared distance between the two trajectories of the system (2.56) is bounded as follows:

E[‖ξ_1(k) − ξ_0(k)‖²] ≤ ((1 − γ̂_d^k)/(1 − γ̂_d)) C_d + (γ̂_d^k/m̲) E[V_{sℓ}(0)]    (2.59)

where V_{sℓ}(0) = V_{sℓ}(q(0), δq(0), 0), C_d = (m̄/m̲)(ḡ_{0d}² + ḡ_{1d}²), and γ̂_d = 1 − γ_2 ∈ (0, 1).
Proof. Let q_k = q(μ, k), w_k = w(k), v_k = V_{sℓ}(q(μ, k), δq(μ, k), k), and M_k = M_k(q(μ, k), k) for any k ∈ N, for notational simplicity. Using the assumed bounds along with (2.53) (α² = 1 − γ_d) and (2.57), we have, for ∀ℓ ∈ N, that

v_{ℓ+1} ≤ m̄ ∫₀¹ ‖(∂f_ℓ/∂q_ℓ)(∂q_ℓ/∂μ) + (∂G_ℓ/∂μ) w_ℓ‖² dμ    (2.60)
≤ (m̄/m̲)(1 − γ_d) ∫₀¹ (∂q_ℓ/∂μ)^⊤ M_ℓ (∂q_ℓ/∂μ) dμ
+ m̄ ∫₀¹ (2 (∂q_ℓ/∂μ)^⊤ (∂f_ℓ/∂q_ℓ)^⊤ (∂G_ℓ/∂μ) w_ℓ + w_ℓ^⊤ (∂G_ℓ/∂μ)^⊤ (∂G_ℓ/∂μ) w_ℓ) dμ.
Taking the conditional expected value of (2.60) when q_ℓ, δq_ℓ, and ℓ are given, we have that (see also Theorem 2 of [26])

E_{ϖ_ℓ}[v_{ℓ+1}] ≤ γ_k v_ℓ + m̄ E_{ϖ_ℓ}[∫₀¹ w_ℓ^⊤ (∂G_ℓ/∂μ)^⊤ (∂G_ℓ/∂μ) w_ℓ dμ]
≤ γ_k v_ℓ + Σ_{i=0,1} m̄ E_{ϖ_ℓ}[Tr(w_{i,ℓ} w_{i,ℓ}^⊤ G_{i,ℓ}^⊤ G_{i,ℓ})]
≤ γ_k v_ℓ + m̄ Σ_{i=0,1} Tr(G_{i,ℓ}^⊤ G_{i,ℓ})
≤ γ̂_d v_ℓ + m̲ C_d    (2.61)

where γ_k = (m̄/m̲)(1 − γ_d), and q_ℓ, δq_ℓ, and ℓ are collectively denoted as ϖ_ℓ. Here, we used the condition that ∃γ_2 ∈ (0, 1) s.t. γ_k ≤ 1 − γ_2 = γ̂_d. Taking the expectation over ϖ_{ℓ−1} in (2.61) with the tower rule E_{ϖ_{ℓ−1}}[v_{ℓ+1}] = E_{ϖ_{ℓ−1}}[E_{ϖ_ℓ}[v_{ℓ+1}]] gives us that

E_{ϖ_{ℓ−1}}[v_{ℓ+1}] ≤ γ̂_d² v_{ℓ−1} + m̲ C_d + m̲ C_d γ̂_d

where γ̂_d is defined as γ̂_d = 1 − γ_2. Continuing this operation with the relation m̲ E_{ϖ_0}[‖ξ_1(ℓ+1) − ξ_0(ℓ+1)‖²] ≤ E_{ϖ_0}[v_{ℓ+1}] yields

E_{ϖ_0}[‖ξ_1(k) − ξ_0(k)‖²] − (γ̂_d^k/m̲) v_0 ≤ C_d Σ_{i=0}^{k−1} γ̂_d^i = ((1 − γ̂_d^k)/(1 − γ̂_d)) C_d

where k = ℓ + 1. Taking the expectation over ϖ_0 and rearranging terms results in (2.59).
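The bound (2.59) can also be checked by Monte Carlo simulation. The sketch below uses a hypothetical scalar linear system f_k(x, k) = 0.5x with the identity metric (so α² = 0.25, γ_d = 0.75, and γ_k = 0.25), constant diffusions ḡ_{0d} = ḡ_{1d} = 0.2, and γ_2 = 0.5, which satisfies γ_k ≤ 1 − γ_2 = γ̂_d; none of these numbers come from the thesis.

```python
import numpy as np

# Hypothetical linear example: f_k(x, k) = 0.5*x with M_k = I, so (2.53)
# holds with alpha^2 = 0.25 (gamma_d = 0.75) and gamma_k = 0.25.
# Pick gamma_2 = 0.5, so hat_gamma_d = 0.5 >= gamma_k.
g, hat_gamma = 0.2, 0.5
C_d = g**2 + g**2                                  # (m_up/m_low)(g0d^2 + g1d^2), m_up = m_low = 1
rng = np.random.default_rng(1)
N, K = 20000, 20                                   # Monte Carlo samples, horizon
xi0 = np.full(N, 1.0)
xi1 = np.full(N, -0.5)
v0 = (xi1[0] - xi0[0])**2                          # V_sl(0) with M_k = I (deterministic start)
for k in range(1, K + 1):
    xi0 = 0.5 * xi0 + g * rng.standard_normal(N)   # independent noises w_0(k), w_1(k)
    xi1 = 0.5 * xi1 + g * rng.standard_normal(N)
mse = np.mean((xi1 - xi0)**2)
bound = (1 - hat_gamma**K) / (1 - hat_gamma) * C_d + hat_gamma**K * v0
assert mse <= bound                                # inequality (2.59)
print("Monte Carlo E||xi1 - xi0||^2 =", mse, "<= bound (2.59) =", bound)
```

Because γ̂_d is chosen strictly larger than γ_k here, the bound holds with a visible margin over the Monte Carlo estimate.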
2.4.III Connection between Continuous and Discrete Stochastic Contraction Theory
Let us now consider the case where the time interval Δt = t_{k+1} − t_k used for discretization is sufficiently small, i.e., Δt ≫ (Δt)². Then the continuous-time stochastic system (2.29) can be discretized as

x(k+1) = x(k) + ∫_{t_k}^{t_{k+1}} (f(x(t), t) dt + G(x(t), t) dW(t))
= x(k) + f(x(k), t_k)Δt + G(x(k), t_k)ΔW(k) + O(Δt²)

where x(k) = x(t_k), ΔW(k) = √Δt w(k), and w(k) is a d-dimensional sequence of zero-mean uncorrelated normalized Gaussian random variables. When Δt ≫ (Δt)², f_k(x(k), k) and G_k(x(k), k) in (2.55) can thus be approximated as f_k(x(k), k) ≈ x(k) + f(x(k), t_k)Δt and G_k(x(k), k) ≈ √Δt G(x(k), t_k). In this situation, we have the following theorem, which connects the stochastic incremental stability of discrete-time systems with that of continuous-time systems.
Theorem 2.10. Suppose that (2.61) in Theorem 2.9 holds with γ̂_d = 1 − γ_2 ∈ (0, 1). Then the expected value of v_{k+1} up to first order in Δt is given as E_{ϖ_k}[v_{k+1}] = v_k + Δt ℒv_k, where v_k = V_{sℓ}(q(μ, k), δq(μ, k), k) for V_{sℓ} of (2.58) and ℒ is the infinitesimal differential generator. Furthermore, the following inequality holds:

ℒV_{sℓ}(q_k, δq_k, t_k) ≤ −(γ_2/Δt) V_{sℓ}(q_k, δq_k, t_k) + m̲ Ĉ_d    (2.62)

where q_k = q(μ, k), and Ĉ_d is a positive constant given as

Ĉ_d = C_d/Δt = (m̄/(m̲Δt))(ḡ_{0d}² + ḡ_{1d}²) = (m̄/m̲)(ḡ_0² + ḡ_1²)    (2.63)

with ḡ_0 and ḡ_1 defined in Theorem 2.5.
Proof. Let M_k = M_k(q(μ, k), k). Up to first order in Δt, M_{k+1} is written as

M_{k+1} = M_k + (∂M_k/∂t_k)Δt + Σ_{i=1}^n (∂M_k/∂(q_k)_i)(f_{k,d}Δt + G_{k,d}ΔW_k)_i
+ (1/2) Σ_{i=1}^n Σ_{j=1}^n (∂²M_k/(∂(q_k)_i ∂(q_k)_j))(G_{k,d}ΔW_k)_i (G_{k,d}ΔW_k)_j + O(Δt²)    (2.64)

where f_{k,d} and G_{k,d} are defined as f_{k,d} = f(q_k, t_k) and G_{k,d} = G(q_k, t_k) for notational simplicity, and the subscripts i and j denote the corresponding vectors' i-th and j-th elements. Similarly, ∂q_{k+1}/∂μ up to first order in Δt can be computed as

∂q_{k+1}/∂μ = ∂q_k/∂μ + (∂f_{k,d}/∂q_k)(∂q_k/∂μ)Δt + (∂G_{k,d}/∂μ)ΔW_k + O(Δt²).    (2.65)
Substituting (2.64) and (2.65) into E_{ϖ_k}[v_{k+1}] yields

E_{ϖ_k}[v_{k+1}] = E_{ϖ_k}[∫₀¹ (∂q_{k+1}/∂μ)^⊤ M_{k+1} (∂q_{k+1}/∂μ) dμ] = v_k + (V_{sℓ,t} + V_{sℓ,s})Δt + O(Δt^{3/2})

where V_{sℓ,t} and V_{sℓ,s} are given by

V_{sℓ,t} = ∫₀¹ (∂q_k/∂μ)^⊤ ((∂f_{k,d}/∂q_k)^⊤ M_k + Ṁ_k + M_k (∂f_{k,d}/∂q_k)) (∂q_k/∂μ) dμ

with Ṁ_k = ∂M_k/∂t_k + Σ_{i=1}^n (∂M_k/∂(q_k)_i)(f_{k,d})_i, and

V_{sℓ,s} = ∫₀¹ Σ_{i=1}^n Σ_{j=1}^n ((M_k)_{ij} ((∂G_{k,d}/∂μ)(∂G_{k,d}/∂μ)^⊤)_{ij}
+ 2 ((∂M_k/∂(q_k)_i)(∂q_k/∂μ))_j (G_{k,d}(∂G_{k,d}/∂μ)^⊤)_{ij}
+ (1/2)(∂q_k/∂μ)^⊤ (∂²M_k/(∂(q_k)_i ∂(q_k)_j))(∂q_k/∂μ)(G_{k,d}G_{k,d}^⊤)_{ij}) dμ.
We note that the properties of w(k) as a d-dimensional sequence of zero-mean uncorrelated normalized Gaussian random variables are used to derive these relations. Since V_{sℓ,t} + V_{sℓ,s} = ℒv_k, where ℒ is the infinitesimal differential generator, we have E_{ϖ_k}[v_{k+1}] = v_k + Δt ℒv_k. Thus, the condition E_{ϖ_k}[v_{k+1}] ≤ (1 − γ_2)v_k + m̲C_d given by (2.61) in Theorem 2.9 reduces to the following inequality:

ℒV_{sℓ}(q_k, δq_k, t_k) ≤ −(γ_2/Δt) V_{sℓ}(q_k, δq_k, t_k) + m̲ (C_d/Δt).    (2.66)

Finally, (2.66) with the relations Ĉ_d = C_d/Δt and G_k(q_k, k) = √Δt G(q_k, t_k) results in (2.62) and (2.63).
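The scaling G_k ≈ √Δt G used throughout this subsection can be illustrated numerically: with ΔW(k) = √Δt w(k), the discrete noise increment G_k w(k) has covariance Δt G G^⊤. A minimal sketch with a hypothetical constant diffusion matrix (the matrix and Δt are illustrative, not from the thesis):

```python
import numpy as np

# With Delta_t small, x(k+1) ~ x(k) + f(x(k), t_k)*Delta_t
#                              + sqrt(Delta_t)*G(x(k), t_k)*w(k),
# so G_k = sqrt(Delta_t)*G and G_k w(k) has covariance Delta_t*G*G^T.
dt = 1e-2
G = np.array([[0.3, 0.0],
              [0.1, 0.2]])                  # hypothetical constant diffusion
rng = np.random.default_rng(2)
w = rng.standard_normal((200000, 2))        # normalized Gaussian sequence w(k)
inc = (np.sqrt(dt) * G @ w.T).T             # discrete increments G_k w(k)
cov = inc.T @ inc / len(inc)                # sample covariance
assert np.allclose(cov, dt * G @ G.T, atol=2e-4)
```

The sample covariance matches Δt G G^⊤ up to Monte Carlo error, which is the mechanism by which ḡ_{id} = √Δt ḡ_i and hence Ĉ_d = C_d/Δt in (2.63).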
For example, in practical control applications, we use the same control input at t = t_k over a finite time interval t ∈ [t_k, t_{k+1}). Theorems 2.5 and 2.10 indicate that if Δt is sufficiently small, a discrete-time stochastic controller can be viewed as its continuous-time counterpart with contraction rate 2γ_1 = γ_2/Δt. We will illustrate how to select the sampling period Δt large enough without deteriorating the control performance, as demonstrated in [6].
We finally remark that the steady-state upper bounds of (2.27) in Theorem 2.4, (2.39) in Theorem 2.5, and (2.54) in Theorem 2.8 are all functions of m̄/m̲. This property is to be used extensively in Chapter 4 for designing a convex optimization-based control and estimation synthesis algorithm via contraction theory.
References
[1] W. Lohmiller and J.-J. E. Slotine, "On contraction analysis for nonlinear systems," Automatica, vol. 34, no. 6, pp. 683–696, 1998, ISSN: 0005-1098.
[2] J.-J. E. Slotine, "Modular stability tools for distributed computation and control," Int. J. Adapt. Control Signal Process., vol. 17, no. 6, pp. 397–416, 2003.
[3] W. Wang and J.-J. E. Slotine, "On partial contraction analysis for coupled nonlinear oscillators," Biol. Cybern., vol. 92, no. 1, pp. 38–53, Jan. 2005, ISSN: 0340-1200.
[4] Q. Pham, N. Tabareau, and J.-J. E. Slotine, "A contraction theory approach to stochastic incremental stability," IEEE Trans. Autom. Control, vol. 54, no. 4, pp. 816–820, Apr. 2009.
[5] S.-J. Chung, S. Bandyopadhyay, I. Chang, and F. Y. Hadaegh, "Phase synchronization control of complex networks of Lagrangian systems on adaptive digraphs," Automatica, vol. 49, no. 5, pp. 1148–1161, 2013.
[6] H. Tsukamoto and S.-J. Chung, "Robust controller design for stochastic nonlinear systems via convex optimization," IEEE Trans. Autom. Control, vol. 66, no. 10, pp. 4731–4746, 2021.
[7] A. P. Dani, S.-J. Chung, and S. Hutchinson, "Observer design for stochastic nonlinear systems via contraction-based incremental stability," IEEE Trans. Autom. Control, vol. 60, no. 3, pp. 700–714, Mar. 2015.
[8] H. Tsukamoto and S.-J. Chung, "Convex optimization-based controller design for stochastic nonlinear systems using contraction analysis," in IEEE Conf. Decis. Control, Dec. 2019, pp. 8196–8203.
[9] H. K. Khalil, Nonlinear Systems, 3rd ed. Upper Saddle River, NJ: Prentice-Hall, 2002.
[10] D. E. Kirk, Optimal Control Theory: An Introduction. Dover Publications, Apr. 2004, ISBN: 0486434842.
[11] D. Angeli, "A Lyapunov approach to incremental stability properties," IEEE Trans. Autom. Control, vol. 47, no. 3, pp. 410–421, Mar. 2002.
[12] J. Jouffroy and J.-J. E. Slotine, "Methodological remarks on contraction theory," in IEEE Conf. Decis. Control, vol. 3, Dec. 2004, pp. 2537–2543.
[13] W. J. Rugh, Linear Systems Theory. USA: Prentice-Hall, Inc., 1996, ISBN: 0134412052.
[14] J.-J. E. Slotine and W. Li, Applied Nonlinear Control. Upper Saddle River, NJ: Pearson, 1991.
[15] H. Robbins and S. Monro, "A stochastic approximation method," Ann. Math. Statist., vol. 22, no. 3, pp. 400–407, 1951.
[16] P. M. Wensing and J.-J. E. Slotine, "Beyond convexity – Contraction and global convergence of gradient descent," PLOS ONE, vol. 15, pp. 1–29, Aug. 2020.
[17] S.-I. Amari, "Natural gradient works efficiently in learning," Neural Comput., vol. 10, no. 2, pp. 251–276, 1998.
[18] C. Udriste, Convex Functions and Optimization Methods on Riemannian Manifolds. Springer Science & Business Media, 1994, vol. 297.
[19] H. Tsukamoto and S.-J. Chung, "Neural contraction metrics for robust estimation and control: A convex optimization approach," IEEE Control Syst. Lett., vol. 5, no. 1, pp. 211–216, 2021.
[20] H. Tsukamoto, S.-J. Chung, and J.-J. E. Slotine, "Neural stochastic contraction metrics for learning-based control and estimation," IEEE Control Syst. Lett., vol. 5, no. 5, pp. 1825–1830, 2021.
[21] I. R. Manchester and J.-J. E. Slotine, "Control contraction metrics: Convex and intrinsic criteria for nonlinear feedback design," IEEE Trans. Autom. Control, vol. 62, no. 6, pp. 3046–3053, Jun. 2017.
[22] R. A. Horn and C. R. Johnson, Matrix Analysis, 2nd ed. Cambridge University Press, 2012, ISBN: 0521548233.
[23] S. Singh, A. Majumdar, J.-J. E. Slotine, and M. Pavone, "Robust online motion planning via contraction theory and convex optimization," in IEEE Int. Conf. Robot. Automat., May 2017, pp. 5883–5890.
[24] L. Arnold, Stochastic Differential Equations: Theory and Applications. Wiley, 1974.
[25] H. J. Kushner, Stochastic Stability and Control. New York: Academic Press, 1967.
[26] T.-J. Tarn and Y. Rasis, "Observers for nonlinear stochastic systems," IEEE Trans. Autom. Control, vol. 21, no. 4, pp. 441–448, Aug. 1976.
[27] M. Zakai, "On the ultimate boundedness of moments associated with solutions of stochastic differential equations," SIAM J. Control, vol. 5, no. 4, pp. 588–593, 1967.
[28] G. R. Grimmett and D. R. Stirzaker, Probability and Random Processes, 3rd ed. United Kingdom: Oxford University Press, 2001.
[29] E. Mazumdar, T. Westenbroek, M. I. Jordan, and S. Shankar Sastry, "High confidence sets for trajectories of stochastic time-varying nonlinear systems," in IEEE Conf. Decis. Control, 2020, pp. 4275–4280.
[30] S. Han and S.-J. Chung, Incremental nonlinear stability analysis for stochastic systems perturbed by Lévy noise, arXiv:2103.13338, Mar. 2021.
[31] J.-J. E. Slotine and W. Lohmiller, "Modularity, evolution, and the binding problem: A view from stability theory," Neural Netw., vol. 14, no. 2, pp. 137–145, 2001.
[32] S.-J. Chung and J.-J. E. Slotine, "Cooperative robot control and concurrent synchronization of Lagrangian systems," IEEE Trans. Robot., vol. 25, no. 3, pp. 686–700, Jun. 2009.
[33] J.-J. E. Slotine, W. Wang, and K. El Rifai, "Contraction analysis of synchronization in networks of nonlinearly coupled oscillators," in Int. Symp. Math. Theory Netw. Syst., Jul. 2004.
[34] W. Lohmiller and J.-J. E. Slotine, "Nonlinear process control using contraction theory," AIChE J., vol. 46, pp. 588–596, Mar. 2000.
[35] Q. Pham, "Analysis of discrete and hybrid stochastic systems by nonlinear contraction theory," in Int. Conf. Control Automat. Robot. Vision, Dec. 2008, pp. 1054–1059.
Chapter 3
ROBUST NONLINEAR CONTROL AND ESTIMATION VIA CONTRACTION THEORY
[1] H. Tsukamoto and S.-J. Chung, "Robust controller design for stochastic nonlinear systems via convex optimization," IEEE Trans. Autom. Control, vol. 66, no. 10, pp. 4731–4746, 2021.
[2] H. Tsukamoto and S.-J. Chung, "Convex optimization-based controller design for stochastic nonlinear systems using contraction analysis," in IEEE Conf. Decis. Control, Dec. 2019, pp. 8196–8203.
As shown in Theorem 2.4 for deterministic disturbances and in Theorem 2.5 for stochastic disturbances, contraction theory provides explicit bounds on the distance between any pair of perturbed system trajectories. This property is useful in designing robust and optimal feedback controllers for nonlinear systems, such as H∞ control [1]–[11], which attempts to minimize the system's L₂ gain for optimal disturbance attenuation.
Most of such feedback control and estimation schemes are, however, based on the assumption that we know a Lyapunov function candidate. This chapter thus delineates one approach to solve a nonlinear optimal feedback control problem via contraction theory [12], [13], thereby proposing one explicit way to construct a Lyapunov function and contraction metric for general nonlinear systems for the sake of robustness. This approach is also utilizable for optimal state estimation problems as shall be seen in Chapter 4.
We consider the following smooth nonlinear systems, perturbed either by a bounded deterministic disturbance d_c(x, t) with sup_{x,t} ‖d_c(x, t)‖ = d̄_c ∈ R_{≥0}, or by Gaussian white noise driven by a Wiener process W(t), with sup_{x,t} ‖G_c(x, t)‖_F = ḡ_c ∈ R_{≥0}:

ẋ = f(x, t) + B(x, t)u + d_c(x, t)    (3.1)

dx = (f(x, t) + B(x, t)u) dt + G_c(x, t) dW(t)    (3.2)

ẋ_d = f(x_d, t) + B(x_d, t)u_d    (3.3)

where x : R_{≥0} → R^n is the system state, u ∈ R^m is the system control input, f : R^n × R_{≥0} → R^n and B : R^n × R_{≥0} → R^{n×m} are known smooth functions, d_c : R^n × R_{≥0} → R^n and G_c : R^n × R_{≥0} → R^{n×w} are unknown bounded functions representing external disturbances, and W : R_{≥0} → R^w is a w-dimensional Wiener process. Also, for (3.3), x_d : R_{≥0} → R^n and u_d : R_{≥0} → R^m denote the desired target state and control input trajectories, respectively.
Remark 3.1. We consider the control-affine nonlinear systems (3.1)–(3.3) in Chapters 3, 4, and 6–8.1. This is primarily because controller design techniques for control-affine nonlinear systems are less complicated than those for control non-affine systems (which often yield u only implicitly, as the solution of an implicit equation in u [14], [15]), but they are still utilizable even for the latter, e.g., by treating u̇ as another control input (see Example 3.1), or by solving the resulting implicit equation iteratively with a discrete-time controller (see Example 3.2 and Remark 3.3).
Example 3.1. By using u̇ instead of u in (3.1) and (3.2), a control non-affine system ẋ = f(x, u, t) can be rewritten as

d/dt [x; u] = [f(x, u, t); 0] + [0; I] u̇

which can be viewed as a control-affine nonlinear system with the state [x^⊤, u^⊤]^⊤ and control u̇.
Example 3.2. One drawback of the technique in Example 3.1 is that we have to control u̇ instead of u, which could be difficult in practice. In this case, we can instead decompose a control non-affine nonlinear system into control-affine and non-affine parts:

ẋ = f(x, u, t) = f_s(x, t) + B_s(x, t)u + r(x, u, t)

where r(x, u, t) = f(x, u, t) − f_s(x, t) − B_s(x, t)u. The controller u can now be designed implicitly as

B_s(x, t)u = B_s(x, t)u* − r(x, u, t)    (3.4)

where u* is a stabilizing controller for the control-affine system ẋ = f_s(x, t) + B_s(x, t)u*. Since solving such an implicit equation as (3.4) in real time could be unrealistic in practice, we will derive a learning-based approach that solves it iteratively for unknown r(x, u, t), without deteriorating its stability performance (see Lemma 8.2 and Theorem 8.4 of Chapter 8).
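A minimal sketch of the fixed-point iteration idea behind (3.4), assuming a hypothetical scalar system with unit input matrix and residual r(x, u, t) = 0.3 sin u, so that the iteration is a contraction in u (all names and numbers here are illustrative assumptions, not the thesis' method in Chapter 8):

```python
import numpy as np

# Hypothetical scalar instance of (3.4) with B_s = 1:
#   u = u_star - r(x, u, t),
# where r(x, u, t) = 0.3*sin(u) has Lipschitz constant 0.3 < 1 in u,
# so simple fixed-point iteration converges.
def r(x, u, t):
    return 0.3 * np.sin(u)

def solve_implicit_u(u_star, x, t, iters=50):
    u = u_star                        # warm start from the affine controller u*
    for _ in range(iters):
        u = u_star - r(x, u, t)       # fixed-point iteration of (3.4)
    return u

u = solve_implicit_u(u_star=1.2, x=0.0, t=0.0)
assert abs(u - (1.2 - r(0.0, u, 0.0))) < 1e-10    # (3.4) is satisfied
```

The residual of the implicit equation shrinks geometrically at the contraction rate of r in u, which is what makes an iterative (discrete-time) implementation viable.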
3.1 Overview of Nonlinear Control and Estimation
We briefly summarize the advantages and disadvantages of existing nonlinear feedback control and state estimation schemes, so that one can identify which strategy is appropriate for a given study and refer to the relevant parts of this thesis.
Table 3.1: Comparison between the SDC and CCM formulations (note that γ(μ=0, t) = x_d and γ(μ=1, t) = x).

SDC (Theorem 4.2) [12], [13], [16]–[18]:
– Control law: u = u_d − K(x, x_d, u_d, t)(x − x_d), or u = u_d − K(x, t)(x − x_d)
– Computation: evaluates K(x, x_d, u_d, t) for a given (x, x_d, u_d, t), as in LTV systems
– Generality: captures nonlinearity by (multiple) SDC matrices
– Contraction: depends on (x, x_d, u_d, t) or (x, t) (partial contraction)

CCM (Theorem 4.6) [19], [20]:
– Control law: u = u_d + ∫₀¹ k(γ(μ, t), ∂_μ γ(μ, t), u, t) dμ
– Computation: computes geodesics γ for a given (x, x_d, t) and integrates k
– Generality: handles general differential dynamics
– Contraction: depends on (x, t) (contraction)
3.1.I Systems with Known Lyapunov Functions
As discussed in Sec. 1.2, there are several nonlinear systems equipped with a known contraction metric/Lyapunov function, such as Lagrangian systems [21, p. 392], whose inertia matrix H(q) defines a contraction metric (see Example 2.6), or the nonlinear SLAM problem [18], [22] with virtual synthetic measurements, which can be reduced to an LTV estimation problem [22]. Once we have a contraction metric/Lyapunov function, stabilizing control and estimation laws can be easily derived by using, e.g., [23]–[25]. Thus, readers dealing primarily with such nonlinear systems may skip this chapter and proceed to Part II of this thesis (Chapters 5–8) on learning-based and data-driven control using contraction theory. Note that these known contraction metrics are not necessarily optimal, and the techniques to be derived in Chapters 3 and 4 are for obtaining contraction metrics with an optimal disturbance attenuation property [12], [13].
3.1.II Linearization of Nonlinear Systems
If a contraction metric of a given nonlinear system is unknown, we could linearize it and apply methodologies inspired by LTV systems theory, such as H∞ control [6]–[11], the iterative Linear Quadratic Regulator (iLQR) [26], [27], or the Extended Kalman Filter (EKF). Their stability is typically analyzed by decomposing f(x, t) as f(x, t) = Ax + (f(x, t) − Ax), assuming that the nonlinear part f(x, t) − Ax is bounded, or by finding a local contraction region for the sake of local exponential stability as in [16], [28]. Since the decomposition f(x, t) = Ax + (f(x, t) − Ax) allows applying the result of Theorem 2.4, we could exploit the techniques in Chapters 3 and 4 to provide formal robustness and optimality guarantees for the LTV systems-type approaches. For systems whose nonlinear part f(x, t) − Ax is not necessarily bounded, Sec. 8.2.II elucidates how contraction theory can be used to stabilize them with the learned dynamics for control synthesis.
3.1.III State-Dependent Coefficient (SDC) Formulation
It is shown in [12], [13], [16]–[18] that SDC-based control and estimation [29]–[32], which capture nonlinearity using a state-dependent matrix A(x, t) s.t. f(x, t) = A(x, t)x (e.g., we have A(x, t) = cos x for f(x, t) = x cos x), result in exponential boundedness of system trajectories for both deterministic and stochastic systems, due to Theorems 2.4 and 2.5 [16]. Because of the extended linear form of the SDC formulation (see Table 3.1), the results to be presented in Chapters 3 and 4 based on it are applicable to linearized dynamics that can be viewed as an LTV system, with some modifications (see Remark 3.2).
This idea is slightly generalized in [17] to explicitly consider incremental stability with respect to a target trajectory (e.g., x_d for control and x for estimation) instead of using A(x, t)x = f(x, t). Let us derive the following lemma for this purpose [12], [13], [17], [18], [32].
Lemma 3.1. Let f : R^n × R_{≥0} → R^n and B : R^n × R_{≥0} → R^{n×m} be piecewise continuously differentiable functions. Then there exists a matrix-valued function A : R^n × R^n × R^m × R_{≥0} → R^{n×n} s.t., ∀s ∈ R^n, s̄ ∈ R^n, ū ∈ R^m, and t ∈ R_{≥0},

A(s, s̄, ū, t)e = f(s, t) + B(s, t)ū − f(s̄, t) − B(s̄, t)ū

where e = s − s̄, and one such A is given as follows:

A(s, s̄, ū, t) = ∫₀¹ (∂f̄/∂s)(cs + (1 − c)s̄, ū, t) dc    (3.5)

where f̄(s, ū, t) = f(s, t) + B(s, t)ū. We call A an SDC matrix if it is constructed to satisfy the controllability (or observability, for estimation) condition. Furthermore, the choice of A is not unique for n ≥ 2, where n is the number of states, and a convex combination of such non-unique SDC matrices also verifies the extended linearization, as follows:

f(s, t) + B(s, t)ū − f(s̄, t) − B(s̄, t)ū = A(ϱ, s, s̄, ū, t)(s − s̄) = Σ_{i=1}^{s_A} ϱ_i A_i(s, s̄, ū, t)(s − s̄)    (3.6)

where ϱ = (ϱ_1, · · ·, ϱ_{s_A}), Σ_{i=1}^{s_A} ϱ_i = 1, ϱ_i ≥ 0, and each A_i satisfies the relation f̄(s, ū, t) − f̄(s̄, ū, t) = A_i(s, s̄, ū, t)(s − s̄).

Proof. The first statement on (3.5) follows from the integral relation

(∫₀¹ (∂f̄/∂s)(cs + (1 − c)s̄, ū, t) dc)(s − s̄) = f̄(s, ū, t) − f̄(s̄, ū, t).

If there are multiple SDC matrices A_i, we clearly have ϱ_i A_i(s, s̄, ū, t)(s − s̄) = ϱ_i(f̄(s, ū, t) − f̄(s̄, ū, t)), ∀i, and therefore the relation Σ_{i=1}^{s_A} ϱ_i = 1, ϱ_i ≥ 0 gives (3.6).
Example 3.3. Let us illustrate how Lemma 3.1 can be used in practice, taking the following nonlinear system as an example:

ẋ = [x₂, −x₁x₂]^⊤ + [0, cos x₁]^⊤ u    (3.7)

where x = [x₁, x₂]^⊤. If we use (s, s̄, ū) = (x, x_d, u_d) in Lemma 3.1 for a given target trajectory (x_d, u_d) that satisfies (3.7), evaluating the integral of (3.5) gives

A₁(x, x_d, u_d, t) = [0, 1; −(x₂ + x_{2d})/2 + u_d (cos x₁ − cos x_{1d})/(x₁ − x_{1d}), −(x₁ + x_{1d})/2]    (3.8)

due to the relation ∂f̄/∂s = [0, 1; −s₂, −s₁] + [0, 0; −u_d sin s₁, 0] for f̄(s, u_d, t) = f(s, t) + B(s, t)u_d, where x_d = [x_{1d}, x_{2d}]^⊤. Note that we have

(cos x₁ − cos x_{1d})/(x₁ − x_{1d}) = −sin((x₁ + x_{1d})/2) sinc((x₁ − x_{1d})/2)

and thus A₁(x, x_d, u_d, t) is defined for all x, x_d, u_d, and t. The SDC matrix (3.8) indeed verifies A₁(x, x_d, u_d, t)(x − x_d) = f̄(x, u_d, t) − f̄(x_d, u_d, t).

We can see that the following is also an SDC matrix of the nonlinear system (3.7):

A₂(x, x_d, u_d, t) = [0, 1; −x₂ + u_d (cos x₁ − cos x_{1d})/(x₁ − x_{1d}), −x_{1d}].    (3.9)

Therefore, the convex combination of A₁ in (3.8) and A₂ in (3.9), A = ϱ₁A₁ + ϱ₂A₂ with ϱ₁ + ϱ₂ = 1, ϱ₁, ϱ₂ ≥ 0, is also an SDC matrix due to Lemma 3.1.
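The SDC identities of Example 3.3 are easy to verify numerically. The sketch below checks, at an arbitrary sample point with x₁ ≠ x_{1d}, that both SDC matrices and any convex combination of them satisfy A(x, x_d, u_d, t)(x − x_d) = f̄(x, u_d, t) − f̄(x_d, u_d, t); the sample point values are illustrative.

```python
import numpy as np

# f_bar(x, u_d) = f(x, t) + B(x, t)*u_d for the system (3.7)
def f_bar(x, u_d):
    return np.array([x[1], -x[0] * x[1] + np.cos(x[0]) * u_d])

def A1(x, xd, u_d):                                  # SDC matrix (3.8)
    c = u_d * (np.cos(x[0]) - np.cos(xd[0])) / (x[0] - xd[0])
    return np.array([[0.0, 1.0],
                     [-(x[1] + xd[1]) / 2 + c, -(x[0] + xd[0]) / 2]])

def A2(x, xd, u_d):                                  # SDC matrix (3.9)
    c = u_d * (np.cos(x[0]) - np.cos(xd[0])) / (x[0] - xd[0])
    return np.array([[0.0, 1.0],
                     [-x[1] + c, -xd[0]]])

x, xd, u_d = np.array([1.0, -2.0]), np.array([0.3, 0.5]), 0.7
rhs = f_bar(x, u_d) - f_bar(xd, u_d)
for rho in (0.0, 0.3, 1.0):                          # rho*A1 + (1-rho)*A2
    A = rho * A1(x, xd, u_d) + (1 - rho) * A2(x, xd, u_d)
    assert np.allclose(A @ (x - xd), rhs)            # extended linearization holds
```

All three convex combinations reproduce f̄(x, u_d, t) − f̄(x_d, u_d, t) exactly, illustrating the degrees of freedom in ϱ mentioned below Lemma 3.1.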
The major advantage of the formalism in Lemma 3.1 lies in its systematic connection to LTV systems based on uniform controllability and observability, adequately accounting for the nonlinear nature of the underlying dynamics through A(ϱ, x, x_d, u_d, t) for global stability, as shall be seen in Chapters 3 and 4. Since A also depends on (x_d, u_d) in this case, unlike the original SDC matrix, we could consider contraction metrics using a positive definite matrix M(x, x_d, u_d, t) instead of M(x, t) in Definition 2.3, to improve the representation power of M at the expense of computational efficiency. Another interesting point is that the non-uniqueness of A in Lemma 3.1 for n ≥ 2 creates additional degrees of freedom in selecting the coefficients ϱ, which can also be treated as decision variables in constructing optimal contraction metrics, as proposed in [12], [13], [18].
We focus mostly on the generalized SDC formulation in Chapters 3 and 4, as it yields optimal control and estimation laws with global stability [17] while keeping the analysis simple enough to be understood as in LTV systems theory.
Remark 3.2. This does not mean that contraction theory works only for SDC-parameterized nonlinear systems; rather, it can be used with the other techniques discussed in Sec. 3.1. For example, due to the extended linear form given in Table 3.1, the results to be presented in Chapters 3 and 4 based on the SDC formulation are applicable to linearized dynamics that can be viewed as an LTV system with some modifications, regarding the dynamics modeling error term as an external disturbance as in Sec. 3.1.II. Also, the original SDC formulation with respect to a fixed point (e.g., (s, s̄, ū) = (x, 0, 0) in Lemma 3.1) can still be used to obtain contraction conditions independent of a target trajectory (x_d, u_d) (see Theorem 3.2 for details).
3.1.IV Control Contraction Metric (CCM) Formulation
We could also consider using the partial derivative of f of the dynamical system directly for control synthesis, through differential state feedback δu = k(x, δx, u, t). This idea, formulated as the concept of a CCM [3], [14], [15], [19], [20], [33], constructs contraction metrics with global stability guarantees independently of target trajectories, achieving greater generality while requiring added computation in evaluating integrals involving minimizing geodesics. Similar to the CCM, we could design a state estimator using a general formulation based on geodesic distances between trajectories [34], [35]. These approaches are well compatible with the convex optimization-based schemes in Chapter 4, and hence will be discussed in Sec. 4.3.
The differences between the SDC and CCM formulations are summarized in Table 3.1. Considering such trade-offs helps determine which form of the control law is the best fit when using contraction theory for nonlinear stabilization.
Remark 3.3. For control non-affine nonlinear systems, we could find f(x, u, t) − f(x_d, u_d, t) = A(x, x_d, u, u_d, t)(x − x_d) + B(x, x_d, u, u_d, t)(u − u_d) by Lemma 3.1 on the SDC formulation and use it in Theorem 4.2, although (3.10) then has to be solved implicitly, as B depends on u in this case. A similar approach for the CCM formulation can be found in [14], [15]. As discussed in Example 3.2, designing such implicit control laws will be detailed in Lemma 8.2 and Theorem 8.4 of Sec. 8.2.II.
3.2 LMI Conditions for Contraction Metrics
We design a nonlinear feedback tracking control law, parameterized by a matrix-valued function M(x, x_d, u_d, t) (or M(x, t), see Theorem 3.2), as follows:

u = u_d − K(x, x_d, u_d, t)(x − x_d)    (3.10)
  = u_d − R(x, x_d, u_d, t)^{−1} B(x, t)^⊤ M(x, x_d, u_d, t)(x − x_d)

where R(x, x_d, u_d, t) ≻ 0 is a weight matrix on the input u, and M(x, x_d, u_d, t) ≻ 0 is a positive definite matrix (which satisfies the matrix inequality constraints for a contraction metric, to be given in Theorem 3.1). As discussed in Sec. 3.1.III, the extended linear form of the tracking control law (3.10) enables LTV systems-type approaches to Lyapunov function construction, while being general enough to capture the nonlinearity of the underlying dynamics, due to Lemma 3.2 [36].
Lemma 3.2. Consider a general feedback controller u defined as u = k(x, x_d, u_d, t) with k(x_d, x_d, u_d, t) = u_d, where k : R^n × R^n × R^m × R_{≥0} → R^m. If k is piecewise continuously differentiable, then ∃K : R^n × R^n × R^m × R_{≥0} → R^{m×n} s.t. u = k(x, x_d, u_d, t) = u_d − K(x, x_d, u_d, t)(x − x_d).

Proof. Using k(x_d, x_d, u_d, t) = u_d, u can be decomposed as u = u_d + (k(x, x_d, u_d, t) − k(x_d, x_d, u_d, t)). Since we have k(x, x_d, u_d, t) − k(x_d, x_d, u_d, t) = ∫₀¹ (∂k(cx + (1 − c)x_d, x_d, u_d, t)/∂c) dc, selecting K as

K = −∫₀¹ (∂k/∂x)(cx + (1 − c)x_d, x_d, u_d, t) dc

gives the desired relation [36].