IIR FILTERING ON GRAPHS WITH RANDOM NODE-ASYNCHRONOUS UPDATES
3.5 Numerical Simulations
3.5.1 An Example of a Rational Graph Filter
In this section we will consider a rational filter (3.27) constructed with the following polynomials of order 𝐿 = 3:
$$p(x) = (1-\gamma x)^3, \qquad q(x) = 1 + \sum_{n=1}^{3} \gamma^n x^n, \qquad \gamma = 0.055, \tag{3.67}$$
where the value of 𝛾 is selected in such a way that it normalizes the spectrum of G, that is, |𝛾| ‖G‖₂ < 1 is satisfied.
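As a quick sanity check, the rational response ℎ(𝜆) = 𝑝(𝜆)/𝑞(𝜆) from (3.67) can be evaluated numerically. The sketch below uses NumPy (an assumption; the thesis experiments were run in MATLAB), and the eigenvalue range [0, 18] is read off the axis of Figure 3.3a rather than stated in the text:

```python
import numpy as np

gamma = 0.055  # filter parameter from (3.67)

def p(x):
    # numerator polynomial p(x) = (1 - gamma*x)^3
    return (1.0 - gamma * x) ** 3

def q(x):
    # denominator polynomial q(x) = 1 + sum_{n=1}^{3} gamma^n x^n
    return 1.0 + sum((gamma * x) ** n for n in range(1, 4))

def h(x):
    # rational filter response h(x) = p(x)/q(x)
    return p(x) / q(x)

# sample the response over the (assumed) eigenvalue range of G
lam = np.linspace(0.0, 18.0, 100)
resp = h(lam)

print(resp[0])                    # h(0) = 1 exactly
print(np.all(np.diff(resp) < 0))  # strictly decreasing: low-pass behavior
```

Note that 𝛾 · 18 ≈ 0.99 < 1, consistent with the normalization condition |𝛾| ‖G‖₂ < 1 if ‖G‖₂ is indeed close to 18.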
The frequency response of the filter in (3.67) on the graph is visualized in Figure 3.3a, which shows that the filter has low-pass characteristics on the graph. When compared with the input signal u, the filtered signal ũ has a smaller projection onto the eigenvectors with larger eigenvalues, as shown in Figure 3.3b. Since ũ mainly contains low-frequency components (eigenvectors with small eigenvalues [160]), ũ is smoother on the graph, as visualized in Figure 3.2b.
[Figure 3.3 appears here. Panel (a): ℎ(𝜆𝑖) versus 𝜆𝑖 for 𝛾 = 0.055 and 𝛾 = 0.065, together with the polynomial in (3.72). Panel (b): graph Fourier coefficients |v𝑖ᴴu| and |v𝑖ᴴũ| versus 𝜆𝑖.]
Figure 3.3: (a) Response of the rational filter ℎ(𝜆) constructed with (3.67). (b) Magnitude of the graph Fourier transforms of u and ũ, where (𝜆𝑖, v𝑖) denotes an eigenpair of G.
We now consider the implementation of the filter (3.67) using Algorithm 5. In this regard, we first construct the direct form implementation of the corresponding digital filter as in (3.36):
$$\hat{\mathbf{A}} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -\gamma^3 & -\gamma^2 & -\gamma \end{bmatrix}, \qquad \hat{\mathbf{b}} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}, \qquad \hat{\mathbf{c}}^{\mathsf{T}} = \begin{bmatrix} -2\gamma^3 & 2\gamma^2 & -4\gamma \end{bmatrix}, \qquad \hat{d} = 1. \tag{3.68}$$
It is readily verified that the matrix Â in (3.68) has 𝐿 = 3 distinct eigenvalues given as {−𝛾, 𝑗𝛾, −𝑗𝛾}. Thus, the similarity transform T can be selected as the eigenvectors of Â as in (3.56), which corresponds to the Vandermonde matrix constructed with {−𝛾, 𝑗𝛾, −𝑗𝛾}. As a result, the corresponding realization of the filter according to (3.35) is given as follows:
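The eigenvalue claim can be double-checked numerically. A small sketch (again in NumPy, an assumption) constructs the companion matrix of (3.68) and confirms its spectrum:

```python
import numpy as np

gamma = 0.055

# direct-form (companion) state matrix from (3.68)
A_hat = np.array([
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [-gamma**3, -gamma**2, -gamma],
])

eigvals = np.linalg.eigvals(A_hat)

# expected spectrum {-gamma, j*gamma, -j*gamma}
expected = np.array([-gamma, 1j * gamma, -1j * gamma])

# compare as multisets; sorting by imaginary part is robust here
# because the three expected imaginary parts are distinct
assert np.allclose(sorted(eigvals, key=lambda z: z.imag),
                   sorted(expected, key=lambda z: z.imag))
print("spectrum:", np.round(eigvals, 4))
```

This matches the fact that the characteristic polynomial of Â is 𝑥³ + 𝛾𝑥² + 𝛾²𝑥 + 𝛾³ = (𝑥 + 𝛾)(𝑥² + 𝛾²), i.e., 𝑞(𝑥) with its coefficients reversed.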
$$\mathbf{A} = \gamma \begin{bmatrix} -1 & 0 & 0 \\ 0 & j & 0 \\ 0 & 0 & -j \end{bmatrix}, \qquad \mathbf{b} = \frac{-1}{4\gamma^2} \begin{bmatrix} -2 \\ 1+j \\ 1-j \end{bmatrix}, \qquad \mathbf{c}^{\mathsf{T}} = \gamma^3 \begin{bmatrix} -8 & 2+2j & 2-2j \end{bmatrix}. \tag{3.69}$$
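The realization in (3.69) can be verified against (3.68) by carrying out the similarity transform explicitly. In the sketch below, T takes the columns [1, 𝜆, 𝜆²]ᵀ (eigenvectors of a companion matrix; one possible convention for the Vandermonde matrix in (3.56), which may be ordered differently in the thesis):

```python
import numpy as np

gamma = 0.055
j = 1j

# direct-form realization from (3.68)
A_hat = np.array([[0, 1, 0],
                  [0, 0, 1],
                  [-gamma**3, -gamma**2, -gamma]], dtype=complex)
b_hat = np.array([0, 0, 1], dtype=complex)
c_hat = np.array([-2 * gamma**3, 2 * gamma**2, -4 * gamma], dtype=complex)

# Vandermonde similarity transform: column i is [1, lam_i, lam_i^2]^T,
# an eigenvector of the companion matrix A_hat for eigenvalue lam_i
lams = np.array([-gamma, j * gamma, -j * gamma])
T = np.vander(lams, N=3, increasing=True).T

A = np.linalg.inv(T) @ A_hat @ T   # diagonalized state matrix
b = np.linalg.inv(T) @ b_hat       # transformed input vector
c = c_hat @ T                      # transformed output vector

# expected values from (3.69)
A_ref = gamma * np.diag([-1, j, -j])
b_ref = (-1 / (4 * gamma**2)) * np.array([-2, 1 + j, 1 - j])
c_ref = gamma**3 * np.array([-8, 2 + 2 * j, 2 - 2 * j])

assert np.allclose(A, A_ref)
assert np.allclose(b, b_ref)
assert np.allclose(c, c_ref)

# since A is diagonal, its spectral norm equals its spectral radius |gamma|
assert np.isclose(np.linalg.norm(A_ref, 2), abs(gamma))
```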
Since ‖A‖₂ = 𝜌(A) = |𝛾|, we note that (3.64) is satisfied for the value of 𝛾 in (3.67); thus Algorithm 5 converges in the mean-squared sense when no input noise is present, and when there is noise, it reaches an error floor upper bounded as in (3.65).
In the first set of simulations of Algorithm 5 we consider the case of 𝜇 = 1, i.e., only one randomly selected node is updated per iteration. In order to verify the convergence numerically, we simulated independent runs of Algorithm 5 with the filter realization in (3.69) and computed the mean-squared error by averaging over 10⁴ independent runs. In order to present the effect of the measurement noise, we consider the case of 𝜎² = 10⁻¹⁶ as well as the noise-free case. Figure 3.4a presents the corresponding mean-squared errors together with the errors of 100 individual realizations in the noise-free case. Due to the random selection of the nodes, the residual itself is a random quantity, which does not decrease monotonically, as seen in Figure 3.4a. Nevertheless, the expectation of the error norm decreases monotonically until it reaches the error floor. We note that the error floor in the noise-free case corresponds to the numerical precision of the computing environment (MATLAB).
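Algorithm 5 itself operates on the graph and is not reproduced here, but the qualitative behavior (noisy individual trajectories, a monotonically decreasing averaged error, and a noise-induced floor) can be illustrated with a generic analog: a randomized single-coordinate fixed-point iteration x ← Mx + r with ‖M‖₂ < 1 and additive update noise of variance 𝜎². All dimensions and parameter values below are illustrative assumptions, not the thesis setup:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 20          # illustrative state dimension (not the graph in the text)
sigma2 = 1e-4   # update-noise variance (illustrative; the text uses 1e-16)
n_runs = 100    # independent runs to average over
n_iter = 3000   # single-coordinate updates per run

# a random contraction M (spectral norm 0.5 < 1, mirroring the role of
# condition (3.64)) and an arbitrary fixed input r
M = rng.standard_normal((N, N))
M *= 0.5 / np.linalg.norm(M, 2)
r = rng.standard_normal(N)
x_star = np.linalg.solve(np.eye(N) - M, r)  # fixed point of x = Mx + r

mse = np.zeros(n_iter)
for _ in range(n_runs):
    x = np.zeros(N)
    for k in range(n_iter):
        i = rng.integers(N)  # mu = 1: one randomly chosen coordinate
        noise = np.sqrt(sigma2) * rng.standard_normal()
        x[i] = M[i] @ x + r[i] + noise
        mse[k] += np.sum((x - x_star) ** 2)
mse /= n_runs  # averaged squared error per iteration

# the averaged error decays substantially before flattening at a floor
print(mse[0], mse[-1])
assert mse[-1] < 1e-2 * mse[0]
```

With 𝜎² = 0 the same iteration drives the error down to machine precision, mirroring the noise-free curve in Figure 3.4a.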
In order to present the effect of the noise variance on the error floor, we run Algorithm 5 for different values of 𝜎² for 𝑘_max = 4·10⁴ iterations (which ensures that the algorithm reaches an error floor, as seen in Figure 3.4a) while selecting only 𝜇 = 1 node per iteration. The error floors corresponding to different values of 𝜎², together with the upper bound in (3.65), are presented in Figure 3.4b. In addition to the upper bound (3.65) scaling linearly with the noise variance, Figure 3.4b shows that the error floor itself scales almost linearly with the noise variance as well.
Figure 3.4: (a) Squared error norm in 100 different independent realizations together with the mean squared error of Algorithm 5 with the implementation in (3.69). (b) Error floor of the algorithm as a function of the input noise together with the bound in (3.65).
We note that the filter realization in (3.69) ensures the convergence of the algorithm irrespective of the value of 𝜇. However, the convergence rate of the algorithm does depend on the value of 𝜇 in general. This point will be demonstrated in the following set of simulations, in which we use the filter realization in (3.69) and set the noise variance as 𝜎² = 10⁻¹⁶. In order to obtain a fair comparison between different values of 𝜇, we fix the total number of updates to be 25000, so the algorithm runs for ⌈25000/𝜇⌉ iterations. We run the algorithm independently 10⁵ times for each value of 𝜇 ∈ {1, ⋯, 𝑁} and present the corresponding mean-squared errors as a function of the number of updated nodes in Figure 3.5.
[Figure 3.5 appears here: mean-squared error (color scale) as a function of the total number of updated nodes 𝑘𝜇 and the number of updates per iteration 𝜇.]
Figure 3.5: The mean squared error of Algorithm 5 when more than one node is updated simultaneously, with noise variance 𝜎² = 10⁻¹⁶. The first row in the figure corresponds to Figure 3.4a.
We first point out that Figure 3.5 verifies the convergence of the algorithm for all possible values of 𝜇. More interestingly, the figure also shows that the algorithm gets faster as it becomes more asynchronous (smaller 𝜇). Equivalently, for a fixed computational budget (total number of nodes to be updated), having nodes updated randomly and asynchronously results in a smaller error than having synchronous updates. However, it is important to emphasize that the behavior shown in Figure 3.5 is not typical for the algorithm; rather, it depends on the underlying filter. Indeed, we will find a similar behavior in Section 3.5.3, but an opposite behavior later in Section 3.5.4. We also note that for the case of zero input, Section 2.3.3 of this thesis theoretically discussed the conditions under which randomized asynchronicity results in faster convergence.