Figure 4.3: Comparison between the inputs (a) $u_k = e^{j\omega k}$ and (b) $u_k = \cos(\omega k)$ with frequency $\omega = 2\pi/100$. The first row shows the input signal ($\mathrm{Re}\{u_k\}$ and $\mathrm{Im}\{u_k\}$ versus $k$), and the second row shows the trace of the error correlation matrix ($\mathrm{tr}(\mathbf{Q}_k)$ and $\mathrm{tr}(\mathbf{Q}^r_k)$ versus $k$), obtained by averaging over $10^6$ independent runs.
simplicity, in the rest of this section we assume that the input signal is $u_k = e^{j\omega k}$. So, we have $\|\mathbf{x}_k^{\mathrm{ss}}\| = \|(e^{j\omega}\,\mathbf{I} - \bar{\mathbf{A}})^{-1}\bar{\mathbf{B}}\|$, and the randomization error is given in (4.49).
Thus, $\mathrm{SRR}_k$ remains constant as a function of the iterations. However, the value of $\mathrm{SRR}_k$ depends on the input frequency $\omega$ as well as the update probabilities $\mathbf{P}$. In order to demonstrate this point, we compute the SRR numerically for the system in (4.27) for different values of $\omega$ and $p$. These results are presented in Figure 4.4. Generally speaking, the SRR tends to be larger as the input signal varies more or as the state variables are updated less frequently.
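To make this computation concrete, the sketch below estimates an SRR-like quantity by Monte Carlo simulation: it forms the steady-state signal power $\|\mathbf{x}_k^{\mathrm{ss}}\|^2 = \|(e^{j\omega}\mathbf{I}-\bar{\mathbf{A}})^{-1}\bar{\mathbf{B}}\|^2$ together with a simulation-based estimate of the steady-state error power $\mathrm{tr}(\mathbf{Q}_k)$, and reports their ratio. The system matrices are placeholders (the matrix of (4.27) is not reproduced here), and the precise definition of SRR in (4.49) may differ from the ratio used in this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder system (NOT the example in (4.27)); any modest stable pair works for this sketch.
A = np.array([[0.5, 0.3],
              [-0.2, 0.6]])
B = np.array([1.0, 0.5])
N = A.shape[0]

p, omega = 0.5, 2 * np.pi / 100             # update probability (P = p*I) and input frequency
Abar = np.eye(N) + p * (A - np.eye(N))      # average state transition matrix
Bbar = p * B                                # average input matrix (P B)

# Sinusoidal steady state of the averaged recursion: E[x_k] -> x_ss e^{j w k}.
x_ss = np.linalg.solve(np.exp(1j * omega) * np.eye(N) - Abar, Bbar)
signal_power = np.linalg.norm(x_ss) ** 2

# Monte Carlo estimate of the steady-state randomization error power tr(Q_k).
runs, K, K_tail = 2000, 400, 100
err_power = 0.0
X = np.zeros((runs, N), dtype=complex)      # one state vector per independent run
for k in range(K):
    u = np.exp(1j * omega * k)
    mask = rng.random((runs, N)) < p        # independently selected update sets
    X = np.where(mask, X @ A.T + B * u, X)
    if k >= K - K_tail:                     # accumulate error power at late iterations
        err = X - x_ss * np.exp(1j * omega * (k + 1))
        err_power += np.mean(np.sum(np.abs(err) ** 2, axis=1))
err_power /= K_tail

ratio = signal_power / err_power
print(f"signal power {signal_power:.4f}, tr(Q) estimate {err_power:.4f}, "
      f"ratio {ratio:.3f} ({10 * np.log10(ratio):.2f} dB)")
```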
We first note that the matrix $\mathbf{A}$ given in (4.27) has $\rho(\mathbf{A}) > 1$. However, for the case of $\mathbf{P} = p\,\mathbf{I}$, we have numerically verified that the stability condition (4.41) is satisfied for $0 < p \leq 0.9542$. This is why the case of $p > 0.9542$ is excluded in Figure 4.4.
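Such a threshold can be located numerically by sweeping $p$ and checking the spectral radius of the matrix governing the vectorized error-correlation recursion (cf. (4.77) with $\mu/N = p$). The sketch below does this for a stand-in matrix with $\rho(\mathbf{A}) > 1$; since the actual matrix of (4.27) is not reproduced here, the stable range printed for the stand-in will differ from $0 < p \leq 0.9542$ and may even be empty.

```python
import numpy as np

def corr_update_matrix(A, p):
    """Matrix governing vec(Q_k) for P = p*I, cf. (4.77) with mu/N = p."""
    N = A.shape[0]
    I = np.eye(N)
    Abar = I + p * (A - I)                           # average state transition matrix
    J = np.zeros((N * N, N * N))
    idx = np.arange(N) * (N + 1)
    J[idx, idx] = 1.0                                # J vec(M) = vec(M o I)
    return np.kron(np.conj(Abar), Abar) + p * (1 - p) * J @ np.kron(np.conj(A) - I, A - I)

def rho(M):
    return np.max(np.abs(np.linalg.eigvals(M)))

# Stand-in matrix with rho(A) > 1 (NOT the matrix of (4.27), whose reported threshold is 0.9542).
A = np.array([[0.4, 1.1],
              [-0.9, 0.5]])
print("rho(A) =", round(rho(A), 4))                  # > 1: the synchronous iteration is unstable

grid = np.linspace(0.05, 0.95, 19)
radii = {round(p, 2): round(rho(corr_update_matrix(A, p)), 4) for p in grid}
stable = [p for p, r in radii.items() if r < 1]
print("rho of the recursion matrix over the grid:", radii)
print("mean-squared stable p on the grid        :", stable if stable else "none for this stand-in")
```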
Heuristically speaking, Figure 4.4 shows that the state variables in the randomized case are trying to "keep up with the input signal." As the input signal varies faster (larger values of $\omega$), or the state variables are updated more slowly (smaller values of $p$), the randomization error tends to be larger. However, $\mathrm{SRR}_k$ is not monotonic in the input frequency $\omega$ or the update probabilities $\mathbf{P}$ in general. This complicated behavior follows from the fact that the signal power and the randomization error change simultaneously with the input frequency and the update probabilities.

Figure 4.4: SRR of the example in (4.27) for different values of the input frequency $\omega$ and $\mathbf{P} = p\,\mathbf{I}$. The plot is in dB scale, i.e., $10\log_{10}(\mathrm{SRR})$ is plotted. The dotted black curve corresponds to the 0 dB level.
4.6.3 The Distribution of the Random Variables
Due to the random selection of the indices, the state vector $\mathbf{x}_k$ in the asynchronous model (4.13) is a discrete random vector with at most $2^{Nk}$ distinct values, as there are $2^N$ different ways of selecting an update set in any iteration. We note that the first and second order statistics of $\mathbf{x}_k$ are described previously in Theorem 4.1 and Corollary 4.2, respectively. In this section, we will consider the distribution of the random vector $\mathbf{x}_k$ (as well as the output $y_k$).
One can consider approximating $\mathbf{x}_k$ with a multivariate complex Gaussian random vector with mean $\mathbf{x}_k^{\mathrm{ss}}$ and covariance $\mathbf{Q}_k$, due to the independent selection of the indices and the central limit theorem. Contrary to this anticipation, we have numerically observed that the random vector $\mathbf{x}_k$ does not have a Gaussian distribution in general, as we demonstrate next.
In this regard, we consider the system in (4.27) with the input $u_k = \cos(2\pi k/100)$, and we take $\mathbf{P} = p\,\mathbf{I}$ and $\mathbf{D} = \mathbf{0}$. Since the output is a scalar real random variable in this case, we will focus on $y_k$ for the sake of simplicity. In particular, we present the empirical distribution of $y_k$ for several different values of $k$ and for the probabilities $p \in \{0.3,\, 0.6,\, 0.9\}$ in Figure 4.5.
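A minimal version of this experiment is sketched below: it runs the asynchronous recursion with $u_k = \cos(2\pi k/100)$ over many independent runs and summarizes the empirical distribution of $y_k$ at a fixed $k$. The matrices $\mathbf{A}$, $\mathbf{B}$, $\mathbf{C}$ are placeholders rather than the system in (4.27), and the sample size is far smaller than the $10^9$ samples used for Figure 4.5.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder system matrices (NOT the example in (4.27)).
A = np.array([[0.5, 0.3],
              [-0.2, 0.6]])
B = np.array([1.0, 0.5])
C = np.array([1.0, -1.0])                  # scalar output y_k = C x_k (D = 0)
N, p, K, runs = A.shape[0], 0.3, 150, 20000

X = np.zeros((runs, N))                    # one state vector per independent run
for k in range(K):
    u = np.cos(2 * np.pi * k / 100)
    mask = rng.random((runs, N)) < p       # random index selection with P = p*I
    X = np.where(mask, X @ A.T + B * u, X)
samples = X @ C                            # empirical samples of y_K

m, s = samples.mean(), samples.std()
skew = np.mean(((samples - m) / s) ** 3)
kurt = np.mean(((samples - m) / s) ** 4) - 3
print(f"y_{K}: mean {m:.3f}, std {s:.3f}, skewness {skew:.3f}, excess kurtosis {kurt:.3f}")

# Coarse text histogram of the empirical distribution.
counts, edges = np.histogram(samples, bins=15)
for n, lo in zip(counts, edges[:-1]):
    print(f"{lo:8.3f} | " + "#" * int(40 * n / counts.max()))
```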
Figure 4.5: Empirical distributions of the output random variable $y_k$ for $k \in \{100, 120, 130, 140, 150\}$ and the update probabilities (a) $p = 0.3$, (b) $p = 0.6$, (c) $p = 0.9$. Dotted lines denote the means of the corresponding random variables. Distributions are obtained via $10^9$ independent samples of the random variables.
Numerical results in Figure 4.5 show that the distributions are not necessarily symmetric with respect to their means, and they can be multimodal. So, it is clear that the random variable $y_k$ does not have a Gaussian distribution in general.
In addition to the mean and the variance being functions of the iteration index $k$ (see Theorem 4.1 and Corollary 4.2), Figure 4.5 shows that the overall "shape" of the distributions also changes with the iterations. In particular, Figures 4.5b and 4.5c show that the distributions can be multimodal or unimodal depending on $k$. Similarly, the update probabilities also affect the distributions. As discussed previously in Section 4.6.2, the distributions tend to be "narrower" as the update probabilities get larger. However, the precise relationship between the update probabilities and the distributions is not known at this point.
4.6.4 Effect of the Stochastic Model
Although linear systems show an unexpected behavior under randomized asynchronicity, the precise details of the stochastic model are also important in determining mean-squared stability. The results presented in this chapter are valid only for the stochastic index selection model described in Section 4.2.1, and the stability condition can be different under different models. In this section we demonstrate this point.
We start by considering the model in Section 4.2.1 with all the indices being updated with probability $\mu/N$, so that $\mu$ uniformly randomly selected indices are updated per iteration on average, that is, $\mathbb{E}[|\mathcal{T}_k|] = \mu$ for all $k$. More precisely,
$$\mathbf{P} = \frac{\mu}{N}\,\mathbf{I} \;\;\Longrightarrow\;\; \bar{\mathbf{A}} = \mathbf{I} + \frac{\mu}{N}\,(\mathbf{A} - \mathbf{I}). \qquad (4.76)$$
In this case, the matrix $\mathcal{B}$ in (4.38) has the following form:
$$\mathcal{B} = \bar{\mathbf{A}}^{*} \otimes \bar{\mathbf{A}} \;+\; \frac{\mu(N-\mu)}{N^{2}}\,\mathbf{J}\,\big( (\mathbf{A}^{*}-\mathbf{I}) \otimes (\mathbf{A}-\mathbf{I}) \big). \qquad (4.77)$$
We now consider a slightly different model where we update exactly $\mu$ indices per iteration. So, we have $|\mathcal{T}_k| = \mu$ for all $k$, and the set $\mathcal{T}_k$ is selected uniformly at random among all possible $\binom{N}{\mu}$ subsets of size $\mu$. In this case the average state transition matrix $\bar{\mathbf{A}}$ still has the form in (4.76). However, it can be shown (using the identity (2.2)) that the error correlation matrix evolves according to the following function:
$$g'(\mathbf{X}) = \bar{\mathbf{A}}\,\mathbf{X}\,\bar{\mathbf{A}}^{H} \;-\; \frac{\mu(N-\mu)}{N^{2}(N-1)}\,(\mathbf{A}-\mathbf{I})\,\mathbf{X}\,(\mathbf{A}^{H}-\mathbf{I}) \;+\; \frac{\mu(N-\mu)}{N(N-1)}\,\Big( (\mathbf{A}-\mathbf{I})\,\mathbf{X}\,(\mathbf{A}^{H}-\mathbf{I}) \Big) \circ \mathbf{I}. \qquad (4.78)$$
By vectorizing both sides of (4.78), the matrix representation of the function $g'(\cdot)$ can be found as follows:
$$\mathcal{B}' = \bar{\mathbf{A}}^{*} \otimes \bar{\mathbf{A}} \;+\; \frac{\mu(N-\mu)}{N(N-1)}\,\Big( \mathbf{J} - \tfrac{1}{N}\,\mathbf{I} \Big)\big( (\mathbf{A}^{*}-\mathbf{I}) \otimes (\mathbf{A}-\mathbf{I}) \big). \qquad (4.79)$$
It is clear that $\mathcal{B}$ and $\mathcal{B}'$ are different from each other, although the difference $\mathcal{B} - \mathcal{B}'$ approaches zero as $N$ gets larger. More importantly, there is no clear relation between $\rho(\mathcal{B})$ and $\rho(\mathcal{B}')$. Thus, random asynchronous updates may be stable under one stochastic model, but the iterations may become unstable under the other, even though both models update $\mu$ uniformly selected random indices per iteration on average.
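The following sketch makes this comparison concrete: it builds $\mathcal{B}$ and $\mathcal{B}'$ per (4.77) and (4.79) for a stand-in matrix and prints both spectral radii for several values of $\mu$. The stand-in matrix is arbitrary (not the example in (4.27)); the point is only that the two spectral radii generally differ, so the stability verdict can depend on which index-selection model is assumed.

```python
import numpy as np

def model_matrices(A, mu):
    """Vectorized correlation-update matrices for the two index-selection models:
    B  -- each index updated independently with probability mu/N   (cf. (4.77)),
    Bp -- exactly mu indices updated, uniform over all subsets     (cf. (4.79)).
    """
    N = A.shape[0]
    I = np.eye(N)
    Abar = I + (mu / N) * (A - I)
    J = np.zeros((N * N, N * N))
    idx = np.arange(N) * (N + 1)
    J[idx, idx] = 1.0                                 # J vec(M) = vec(M o I)
    C = np.kron(np.conj(A) - I, A - I)
    K = np.kron(np.conj(Abar), Abar)
    B = K + mu * (N - mu) / N**2 * (J @ C)
    Bp = K + mu * (N - mu) / (N * (N - 1)) * ((J - np.eye(N * N) / N) @ C)
    return B, Bp

def rho(M):
    return np.max(np.abs(np.linalg.eigvals(M)))

# Stand-in matrix (NOT the example in (4.27)); substitute any system of interest.
rng = np.random.default_rng(3)
A = rng.standard_normal((5, 5)) / np.sqrt(5.0)
for mu in range(1, 5):
    B, Bp = model_matrices(A, mu)
    print(f"mu = {mu}:  rho(B) = {rho(B):.4f},  rho(B') = {rho(Bp):.4f}")
```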