FREQUENCY RESPONSE AND MEAN-SQUARED STABILITY
4.5 The Constant Input: Fixed-Point Iterations
In this section we will consider the random asynchronous model with a constant input, that is, the input signal will be assumed to be of the form $u_n = u$ for all $n$. Although this is a special case of what we studied in the previous sections, the constant input case offers additional perspective from the viewpoint of fixed-point iterations. It allows us to relate these results to some known classical results on asynchronous updates [32, 14, 19, 20]. It also reveals connections to eigenspace estimation, singular vector estimation, and principal component analysis in the random asynchronous context.
We start by noting that when the input is constant, it suffices to consider a single constant input of unit amplitude in the model (4.6), since a linear combination of constant inputs can be equivalently considered as a single constant input. In particular, we will consider the following type of updates:
$$
(x_{n+1})_i =
\begin{cases}
(A\,x_n)_i + (B + w_n)_i, & i \in \mathcal{T}_{n+1},\\[2pt]
(x_n)_i, & i \notin \mathcal{T}_{n+1},
\end{cases}
\qquad (4.67)
$$
where the noise term $w_n$ follows the statistics in (4.3) as before.
Regarding the expected value of the state variables in (4.67), Theorem 4.1 gives the following:
$$
x^{\mathrm{ss}}_n = (I - \bar{A})^{-1}\,\bar{B} = (I - A)^{-1}\,B = x^\star, \qquad (4.68)
$$
which shows that the steady-state component $x^{\mathrm{ss}}_n$ depends neither on the update probabilities nor on the iteration index $n$. In fact, $x^{\mathrm{ss}}_n = x^\star$ corresponds to the fixed point of the asynchronous iterations in (4.67), i.e., $x^\star = A\,x^\star + B$.
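As a quick numerical illustration (a minimal sketch; the particular matrix $A$, vector $B$, and single-index update schedule below are hypothetical choices, not from the text), the noiseless version of the recursion (4.67) can be simulated and compared against the fixed point $x^\star = (I - A)^{-1} B$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example data: a stable A (here even rho(|A|) < 1) and input B.
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])
B = np.array([1.0, 1.0])

# Fixed point of x = A x + B, cf. (4.68).
x_star = np.linalg.solve(np.eye(2) - A, B)

# Noiseless random asynchronous updates: at each iteration a single
# randomly chosen coordinate is refreshed, the other is kept, cf. (4.67).
x = np.zeros(2)
for _ in range(500):
    i = rng.integers(2)          # random update set T_{n+1} = {i}
    x[i] = (A @ x)[i] + B[i]

print(np.allclose(x, x_star, atol=1e-8))  # iterates reach the fixed point
```

Since the row sums of $|A|$ are below 1 here, the max-norm error contracts at every coordinate update, so the iterates approach $x^\star$ for any index sequence.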
Although the fixed point is determined solely by the pair $(A, B)$, the convergence of the random vector $x_n$ to the fixed point $x^\star$ is still affected by the update probabilities $P$ (as well as the matrix $A$). In particular, Corollary 4.2 shows that the condition (4.41), i.e., stability of the matrix therein, is both necessary and sufficient for the convergence of $x_n$ to $x^\star$ in the mean-squared sense when no noise is present. When there is noise, the error correlation matrix converges to $Q_n$ as given in (4.43).
Additionally, in the case of a constant input it is clear that the vector defined in (4.36) becomes zero. So, the corresponding term in (4.53) can be taken as zero, and Corollary 4.3 shows that whenever $A^H P A \prec P$ is satisfied, we have the following:
$$
\lim_{n \to \infty} \mathbb{E}\bigl[\|x_n - x^\star\|_2^2\bigr] \;\le\; \frac{\operatorname{tr}(P\,Q)}{\sigma_{\min}(P - A^H P A)}, \qquad (4.69)
$$
where the limit supremum from (4.52) is replaced with a limit since the error correlation matrix indeed converges to $Q_n$. We note that the bound in (4.69) was first presented in Corollary 3.1 together with a lower bound on the limit of the error term.
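The quantities in the bound (4.69) are straightforward to compute numerically. In this sketch (the matrices $A$ and $Q$ are hypothetical examples), a valid $P$ is obtained by solving the discrete Lyapunov equation $A^T P A - P = -I$ via vectorization, which guarantees $A^T P A \prec P$ whenever $A$ is stable:

```python
import numpy as np

# Hypothetical stable real A and noise covariance Q.
A = np.array([[0.5, 0.2],
              [0.1, 0.6]])
Q = 0.01 * np.eye(2)

# Solve the discrete Lyapunov equation P - A^T P A = I for P via
# vectorization: (I - kron(A^T, A^T)) vec(P) = vec(I).
n = A.shape[0]
lhs = np.eye(n * n) - np.kron(A.T, A.T)
P = np.linalg.solve(lhs, np.eye(n).flatten()).reshape(n, n)

gap = P - A.T @ P @ A              # equals I by construction
bound = np.trace(P @ Q) / np.min(np.linalg.eigvalsh(gap))

print(np.allclose(gap, np.eye(n)))  # A^T P A < P holds with margin I
print(bound > 0)                    # finite error floor predicted by (4.69)
```

For real $A$, $A^H = A^T$, and since $P - A^T P A$ is symmetric positive definite here, its smallest singular value coincides with its smallest eigenvalue.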
4.5.1 Comparison with the Classical Results
Asynchronous (non-random) fixed-point iterations are a well-studied problem in the literature. Theoretical analysis of the linear case can be traced back to the studies in [32, 14], which assume that only one index is updated per iteration and allow the use of past values of the iterate. The study [32] showed that the following condition is both necessary and sufficient for the convergence of the asynchronous updates:
$$
\rho(|A|) < 1, \qquad (4.70)
$$
where $|A|$ is the matrix obtained by replacing the elements of $A$ by their absolute values. In the non-random setting considered in [32, 14], the condition (4.70) is necessary in the sense that when (4.70) is violated there exists an index sequence for which the asynchronous iterations do not converge.
It can be shown that the condition (4.70) is more restrictive than the stability of the matrix $A$. It is even more restrictive than the sufficient condition given by Corollary 4.3. That is, the following holds true (Lemma 3.1):
$$
\rho(|A|) < 1 \;\Longrightarrow\; \exists\, P \ \text{s.t.}\ A^H P A \prec P. \qquad (4.71)
$$
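The gap between the two conditions is easy to exhibit numerically. In this hypothetical example, sign cancellations make $A$ stable even though $|A|$ is not, so the classical condition (4.70) fails while $A$ itself has spectral radius below 1:

```python
import numpy as np

# Hypothetical matrix: stable, but |A| is unstable due to sign cancellations.
A = np.array([[ 0.5, 0.6],
              [-0.6, 0.5]])

rho_A = np.max(np.abs(np.linalg.eigvals(A)))             # spectral radius of A
rho_absA = np.max(np.abs(np.linalg.eigvals(np.abs(A))))  # spectral radius of |A|

print(rho_A < 1)     # eigenvalues 0.5 +/- 0.6j have modulus sqrt(0.61) < 1
print(rho_absA > 1)  # |A| has eigenvalue 1.1, so (4.70) is violated
```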
So, if $A$ is unstable, there exists an index sequence for which the iterations do not converge. On the other hand, Corollary 4.2 shows that convergence may be achieved even when $A$ is unstable. Although these results appear to be contradictory, the difference lies in the notion of convergence: the condition (4.70) is necessary and sufficient for convergence under any index sequence (as in sure convergence), whereas the condition (4.41) is necessary and sufficient for mean-squared convergence. In addition, the result in [42, Corollary 3.46] implies that the condition (4.41) is sufficient for almost sure convergence as well.
As a result, we conclude that the asynchronous case is more restrictive than the synchronous case when the worst case behavior is considered. On the other hand, the asynchronous case may be less restrictive than the synchronous case when the statistical behavior is considered.
4.5.2 The Case of Zero Input
The asynchronous model (4.13) shows an interesting behavior when the input signal is identically zero, i.e., $u_n = 0$ for all $n$, which can be equivalently represented by taking $B = 0$. In this case the state recursions reduce to the following form:
$$
(x_{n+1})_i =
\begin{cases}
(A\,x_n)_i + (w_n)_i, & i \in \mathcal{T}_{n+1},\\[2pt]
(x_n)_i, & i \notin \mathcal{T}_{n+1},
\end{cases}
\qquad (4.72)
$$
and the fixed point becomes $x^\star = 0$. So, under the stability condition (4.41) the state variables converge to zero in the mean-squared sense (or reach an error floor determined by $Q_n$).
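A short simulation makes the noise floor visible. In this sketch (the scalar-diagonal $A$ and the noise level are hypothetical choices), the zero-input recursion is run with additive noise and the empirical mean-squared norm of the state settles at a small but nonzero level:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stable A; zero input (B = 0) with additive noise, cf. (4.72).
A = 0.5 * np.eye(2)
sigma = 0.1                      # assumed noise standard deviation

x = np.ones(2)
sq_norms = []
for _ in range(5000):
    i = rng.integers(2)                           # random single-index update
    x[i] = (A @ x)[i] + sigma * rng.standard_normal()
    sq_norms.append(x @ x)

floor = np.mean(sq_norms[2500:])   # empirical mean-squared error floor
print(0 < floor < 0.1)             # nonzero floor set by the noise statistics
```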
It is important to note that the existence of the fixed point $x^\star$ in (4.68) requires $A$ not to have an eigenvalue equal to 1, so that $I - A$ is invertible. This requirement is satisfied implicitly by the stability condition (4.41), since $A$ having an eigenvalue equal to 1 implies $\rho(\bar{A}) \ge 1$, and thus the matrix in (4.41) has spectral radius at least 1.
When the matrix $A$ has an eigenvalue equal to 1, there are infinitely many fixed points (as opposed to the unique one in (4.68)), and they correspond to the eigenspace of $A$ associated with the eigenvalue 1. Nevertheless, the recursions in (4.72) can still be stable, and the random vector $x_n$ can converge to a point in this eigenspace (an eigenvector).
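To illustrate with a hypothetical toy example (not from the text), take a symmetric $A$ with eigenvalues 1 and 0.2; the eigenspace for the eigenvalue 1 is spanned by $[1, 1]^T$, and the noiseless asynchronous recursion drives the iterate into that eigenspace:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical A with eigenvalue 1 (eigenvector [1, 1]) and eigenvalue 0.2.
A = np.array([[0.6, 0.4],
              [0.4, 0.6]])

x = np.array([3.0, -1.0])
for _ in range(300):
    i = rng.integers(2)      # noiseless single-index update, cf. (4.72)
    x[i] = (A @ x)[i]

# The iterate lands in the eigenspace of eigenvalue 1 (a consensus vector).
print(abs(x[0] - x[1]) < 1e-12)
print(np.allclose(A @ x, x))
```

Here each update shrinks the difference $x_1 - x_2$ by the factor 0.6 regardless of which index is chosen, so the iterate converges to a fixed point $A x = x$ whose value depends on the realized index sequence.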
This convergence behavior is studied in Chapter 2, where random asynchronous recursions are used to obtain spectral clustering in autonomous networks. Furthermore, Section 2.8 used the model (4.72) for distributed asynchronous computation of the dominant singular vectors of a given data matrix.