IIR FILTERING ON GRAPHS WITH RANDOM NODE-ASYNCHRONOUS UPDATES
3.3 Asynchronous Rational Filters on Graphs
3.3.3 Implementation Details
In this section, we present the precise details of the proposed asynchronous update mechanism, which was outlined in the previous section, and then state the proposed method formally in Algorithm 5.
We first consider the graph shift step. In order to incorporate the underlying graph structure into the filtering operation, the graph shift step updates the local state vector as a linear combination of the state vectors of the incoming neighbors. More precisely, when the $i^{\text{th}}$ node updates its state vector, it first performs the following computation:
$$
\mathbf{x}'_i \;\leftarrow\; \sum_{j \in \mathcal{N}_{\mathrm{in}}(i)} G_{i,j}\, \mathbf{x}_j, \tag{3.33}
$$
where $\mathbf{x}'_i \in \mathbb{C}^{L}$ denotes the "graph shifted version" of the state vector $\mathbf{x}_i$. It is important to note that the computation in (3.33) can be done locally and asynchronously by the $i^{\text{th}}$ node, since the node is assumed to have the state vectors of all its incoming neighbors already available in its buffer.
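As a concrete illustration, the following Python sketch evaluates (3.33) at a single node using the most recent neighbor states stored in its buffer. This is only a minimal sketch under assumed conventions: the buffer is modeled as a dictionary keyed by in-neighbor index and the graph operator is stored as a dense NumPy array; neither choice is mandated by the algorithm.

```python
import numpy as np

def graph_shift(i, G, buffer, L):
    """Compute the graph-shifted state x'_i of node i as in (3.33).

    G      : (N x N) graph operator; only the entries G[i, j] are used.
    buffer : dict mapping each in-neighbor j to the most recently
             received copy of its state vector x_j (length-L array).
    """
    x_shifted = np.zeros(L, dtype=complex)
    for j, x_j in buffer.items():
        x_shifted += G[i, j] * x_j   # scalar weight times neighbor state
    return x_shifted
```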
In the filtering step, we use the graph shifted state vector $\mathbf{x}'_i$ to carry out a state recursion corresponding to the underlying filter. In this regard, consider the scalar IIR digital filter $h_d(z)$, whose transfer function is given as follows:
$$
h_d(z) \;=\; \frac{p(z^{-1})}{q(z^{-1})} \;=\; \sum_{n=0}^{\infty} h_n\, z^{-n}, \tag{3.34}
$$
where $p(z)$ and $q(z)$ are as in (3.28), and the $h_n$'s are the coefficients of the impulse response of the digital filter. Furthermore, we assume that the digital filter (3.34) has the following state-space description:
$$
\mathbf{A} = \mathbf{T}^{-1} \hat{\mathbf{A}}\, \mathbf{T}, \qquad
\mathbf{b} = \mathbf{T}^{-1} \hat{\mathbf{b}}, \qquad
\mathbf{c} = \hat{\mathbf{c}}\, \mathbf{T}, \qquad
d = \hat{d}, \tag{3.35}
$$
where the quadruple $(\hat{\mathbf{A}}, \hat{\mathbf{b}}, \hat{\mathbf{c}}, \hat{d})$ corresponds to the direct form description of the filter in (3.34) (see [200, Section 13.4]):
$$
\hat{\mathbf{A}} =
\begin{bmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1 \\
-q_L & -q_{L-1} & \cdots & \cdots & -q_1
\end{bmatrix},
\qquad
\hat{\mathbf{b}} =
\begin{bmatrix}
0 \\ 0 \\ \vdots \\ 0 \\ 1
\end{bmatrix},
\qquad
\hat{d} = p_0,
$$
$$
\hat{\mathbf{c}} =
\begin{bmatrix}
p_L - p_0\, q_L & \;\; p_{L-1} - p_0\, q_{L-1} & \;\;\cdots\;\; & p_1 - p_0\, q_1
\end{bmatrix}, \tag{3.36}
$$
and $\mathbf{T} \in \mathbb{C}^{L \times L}$ is an arbitrary invertible matrix.
Although the response of the filter in (3.34) does not depend on the particular selection of the matrix $\mathbf{T}$, the convergence properties of the node-asynchronous implementation of the filter on the graph do depend on this similarity transformation. We will elaborate on this point in Section 3.4.1, but the optimal choice of $\mathbf{T}$ is not known at this time.
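To make the construction in (3.35) and (3.36) concrete, the sketch below builds the direct-form quadruple $(\hat{\mathbf{A}}, \hat{\mathbf{b}}, \hat{\mathbf{c}}, \hat{d})$ from the coefficients of $p(z)$ and $q(z)$ (normalized so that $q_0 = 1$) and then applies an arbitrary invertible similarity transform $\mathbf{T}$. The coefficient ordering of the input arrays and the randomly generated $\mathbf{T}$ are illustrative assumptions only.

```python
import numpy as np

def direct_form_state_space(p, q):
    """Direct-form quadruple (A_hat, b_hat, c_hat, d_hat) of (3.36).

    p : [p_0, p_1, ..., p_L]  coefficients of p(z)
    q : [q_0, q_1, ..., q_L]  coefficients of q(z), with q_0 = 1
    """
    p = np.asarray(p, dtype=complex)
    q = np.asarray(q, dtype=complex)
    L = len(q) - 1
    A_hat = np.zeros((L, L), dtype=complex)
    A_hat[:-1, 1:] = np.eye(L - 1)           # shifted-identity (companion) structure
    A_hat[-1, :] = -q[1:][::-1]              # last row: -q_L, -q_{L-1}, ..., -q_1
    b_hat = np.zeros(L, dtype=complex)
    b_hat[-1] = 1.0
    d_hat = p[0]
    c_hat = (p[1:] - p[0] * q[1:])[::-1]     # [p_L - p_0 q_L, ..., p_1 - p_0 q_1]
    return A_hat, b_hat, c_hat, d_hat

def transformed_state_space(p, q, T):
    """Apply the similarity transformation of (3.35) with an invertible T."""
    A_hat, b_hat, c_hat, d_hat = direct_form_state_space(p, q)
    T_inv = np.linalg.inv(T)
    return T_inv @ A_hat @ T, T_inv @ b_hat, c_hat @ T, d_hat

# Example: a second-order filter and a random invertible T (illustrative values).
p, q = [1.0, 0.5, 0.25], [1.0, -0.3, 0.1]
T = np.random.default_rng(0).standard_normal((2, 2)) + np.eye(2)
A, b, c, d = transformed_state_space(p, q, T)
```

Since the transfer function $\mathbf{c}(z\mathbf{I} - \mathbf{A})^{-1}\mathbf{b} + d$ is invariant under such a transformation, any invertible $\mathbf{T}$ realizes the same filter response; only the asynchronous convergence behavior is affected, as noted above.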
Using the state-space description of the underlying filter in (3.35), the $i^{\text{th}}$ node executes the following updates locally in the filtering step:
$$
y_i \;\leftarrow\; \mathbf{c}\, \mathbf{x}'_i + d\,(u_i + w_i), \qquad
\mathbf{x}_i \;\leftarrow\; \mathbf{A}\, \mathbf{x}'_i + \mathbf{b}\,(u_i + w_i), \tag{3.37}
$$
where $w_i$ denotes the additive input noise measured by node $i$ during an update. We note that the value of $u_i$ remains the same across updates, whereas $w_i$ differs from one update to the next since it is random. If the nodes do not take measurements, one can simply assume that the measurements are noise-free and set $w_i = 0$, so that the noise covariance is zero.
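The per-node update in (3.37) can be sketched as follows. The function signature and the zero-mean Gaussian noise model are assumptions made for illustration; any noise distribution with the intended covariance could be substituted.

```python
import numpy as np

def filtering_step(x_shifted, u_i, A, b, c, d, noise_std=0.0, rng=None):
    """One local filtering update of node i, as in (3.37).

    x_shifted : graph-shifted state x'_i from (3.33), length-L array
    u_i       : the node's (fixed) sample of the input graph signal
    Returns (y_i, x_i): the node's output sample and its new state vector.
    """
    rng = np.random.default_rng() if rng is None else rng
    w_i = noise_std * rng.standard_normal()   # fresh measurement noise at every update
    v_i = u_i + w_i                           # noisy sample
    y_i = c @ x_shifted + d * v_i
    x_i = A @ x_shifted + b * v_i
    return y_i, x_i
```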
The random node-asynchronous implementation of the IIR graph filter (3.29) is summarized in Algorithm 5. In the next section we will prove that this algorithm is indeed a valid implementation of (3.29) under some conditions to be stated.
Algorithm 5 Node-Asynchronous Rational Graph Filtering
1: procedure Initialization(i)
2:    Initialize the state vector $\mathbf{x}_i \in \mathbb{C}^{L}$ as $\mathbf{x}_i = \mathbf{0}$.
3: procedure Passive Stage(i)
4:    if $\mathbf{x}_j$ is received from the node $j \in \mathcal{N}_{\mathrm{in}}(i)$ then
5:        Store the most recent value of $\mathbf{x}_j$.
6: procedure Active Stage(i)
7:    $\mathbf{x}'_i \leftarrow \sum_{j \in \mathcal{N}_{\mathrm{in}}(i)} G_{i,j}\, \mathbf{x}_j$.    ▷ graph shift
8:    $v_i \leftarrow u_i + w_i$.    ▷ noisy sample
9:    $y_i \leftarrow \mathbf{c}\, \mathbf{x}'_i + d\, v_i$.    ▷ filtering
10:   $\mathbf{x}_i \leftarrow \mathbf{A}\, \mathbf{x}'_i + \mathbf{b}\, v_i$.    ▷ filtering
11:   Broadcast $\mathbf{x}_i$ to all $j \in \mathcal{N}_{\mathrm{out}}(i)$.
Except for the initialization stage, which is executed only once, Algorithm 5 consists of two main stages, passive and active, both of which are triggered asynchronously. More precisely, the active stage is triggered randomly by a node-specific timer or condition, and the passive stage is triggered when a node receives a state vector from one of its incoming neighbors. It is also assumed that a node stores only the most recently received data.
When node $i$ enters the active stage at a random time instant, it first computes its graph-shifted state vector $\mathbf{x}'_i$ in Line 7 using the values available in its buffer. Then, the node takes a noisy measurement of the underlying graph signal $u_i$. Once the filtering recursions in Lines 9 and 10 are completed, the node broadcasts its local state vector to its outgoing neighbors and returns to the passive stage.
We emphasize that, in the presented algorithm, a node's entry into the active stage is independent of the values in its buffer. In general, a node does not wait until it has received state vectors from all of its neighbors. Between two activations (i.e., in the passive stage), some values in the buffer may be updated more than once, and some may not be updated at all; a node always uses only the most recently received values.
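Putting the pieces together, the following sketch simulates Algorithm 5 on a small random graph: at each tick one uniformly chosen node becomes active, performs the graph shift and the filtering recursion, and "broadcasts" its new state by writing it into the buffers of its out-neighbors. The random graph, the noiseless measurements, the uniform activation model, and the particular scaling of the graph operator are simplifying assumptions for illustration; the actual convergence conditions are the subject of Section 3.4.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- illustrative setup ------------------------------------------------
N, L = 20, 2                                          # nodes, filter order
G = rng.standard_normal((N, N)) * (rng.random((N, N)) < 0.2)
G = G / (1.1 * np.max(np.abs(np.linalg.eigvals(G))))  # scale the graph operator
u = rng.standard_normal(N)                            # input graph signal

# state-space description of the filter (direct form, i.e., T = I)
p = np.array([1.0, 0.5, 0.25])                        # p_0, p_1, p_2
q = np.array([1.0, -0.3, 0.1])                        # q_0, q_1, q_2
A = np.zeros((L, L)); A[:-1, 1:] = np.eye(L - 1); A[-1, :] = -q[1:][::-1]
b = np.zeros(L); b[-1] = 1.0
c = (p[1:] - p[0] * q[1:])[::-1]
d = p[0]

# --- Algorithm 5: random node-asynchronous updates ----------------------
x = np.zeros((N, L))                                  # local state vectors
buffers = [{j: x[j].copy() for j in np.flatnonzero(G[i])} for i in range(N)]
y = np.zeros(N)                                       # local output samples

for _ in range(20000):
    i = rng.integers(N)                               # node i becomes active (Line 6)
    x_shifted = np.zeros(L)
    for j, x_j in buffers[i].items():                 # graph shift (Line 7)
        x_shifted += G[i, j] * x_j
    y[i] = c @ x_shifted + d * u[i]                   # filtering (Line 9), here w_i = 0
    x[i] = A @ x_shifted + b * u[i]                   # filtering (Line 10)
    for k in np.flatnonzero(G[:, i]):                 # broadcast to out-neighbors (Line 11)
        buffers[k][i] = x[i].copy()

print("filtered graph signal y:", np.round(y, 3))
```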
Since nodes are assumed to store the most recent data of their incoming neighbors in the presented form of the algorithm, node $i$ requires a buffer of size $L \cdot |\mathcal{N}_{\mathrm{in}}(i)|$. In fact, Algorithm 5 can be implemented in such a way that each node uses a buffer of size only $2L$. One can show that this is achieved when each node broadcasts the difference in its state vector rather than the state vector itself, and the variable $\mathbf{x}'_i$ accumulates all the received differences in the passive stage. However, such an implementation may not be robust under communication failures, whereas the current form of the algorithm is shown to be robust to communication failures (see Section 3.5.2). Moreover, the current form of the algorithm is easier to model and analyze mathematically, as we shall elaborate in the next section.
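A minimal sketch of this difference-broadcasting variant is given below, under the assumption of perfectly reliable communication. Each node keeps only its own state and a running accumulator for $\mathbf{x}'_i$ ($2L$ values in total); the class structure and method names are illustrative and not part of the algorithm as analyzed.

```python
import numpy as np

class DiffNode:
    """Node with a 2L-sized memory: its state x_i and an accumulator for x'_i."""

    def __init__(self, L):
        self.x = np.zeros(L)             # local state vector x_i
        self.x_shifted = np.zeros(L)     # running value of the graph shift x'_i

    def passive_stage(self, g_ij, delta_x_j):
        # Accumulate the weighted *difference* broadcast by in-neighbor j.
        # Assuming no losses, self.x_shifted equals sum_j G_{i,j} x_j at all times.
        self.x_shifted += g_ij * delta_x_j

    def active_stage(self, u_i, A, b, c, d):
        y_i = c @ self.x_shifted + d * u_i       # as in Line 9 (noiseless)
        x_new = A @ self.x_shifted + b * u_i     # as in Line 10
        delta = x_new - self.x                   # broadcast this difference instead of x_i
        self.x = x_new
        return y_i, delta
```

A single dropped difference would permanently bias the accumulator, which is why this variant is fragile under communication failures, in contrast to the buffered form of Algorithm 5.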
Due to its random asynchronous nature, Algorithm 5 appears similar to filtering over time-varying graphs, which is studied extensively in [88]. However, random asynchronous communications differ from randomly varying graph topologies in two ways: 1) In the case of randomly varying graph topologies, the expected value of the signal depends on the "expected graph" [88, Theorem 1], whereas in the case of asynchronous communications the fixed point does not depend on the update probabilities. 2) In the case of asynchronous communications the graph signal converges in the mean-squared sense (see Section 3.4), whereas in the case of randomly varying graph topologies the signal has a nonzero variance [88, Theorem 3].
We also note that the study in [74] proposed a similar algorithm, in which nodes retrieve and aggregate information from a fixed-size subset of neighbors selected uniformly at random. However, the computational stage of [74] consists of a linear mapping followed by a sigmoidal function, whereas Algorithm 5 uses a linear update model. More importantly, aggregations are done synchronously in [74]; that is, all nodes are required to complete the necessary computations before proceeding to the next level of aggregation. On the contrary, in Algorithm 5 nodes aggregate information repetitively and asynchronously without waiting for each other.
3.4 Convergence of the Proposed Algorithm