
Signals on Networks: Random Asynchronous and Multirate Processing, and Uncertainty Principles


Academic year: 2023


Full text

The thesis presents applications of the random asynchronous model in the context of graph signal processing, including autonomous clustering of a network of agents and a node-asynchronous communication protocol that implements a given rational filter on the graph. The second part of the thesis focuses on extensions of the following topics from classical signal processing to the case of graphs: multirate processing and filter banks, and discrete uncertainty principles.

LIST OF ILLUSTRATIONS

The first row shows the input signal, and the second row shows the trace of the error correlation matrix, obtained by means of 10⁶ independent runs. 8.3: (a) Comparison of the maximum energy density that occurs in class-. (b) Magnitude response of the optimal filter for 𝐿 = 3 for different values of 𝜎.

INTRODUCTION

Notations

  • Norms
  • Operations and Products

We will use ≻ and ⪰ to denote the positive-definite (PD) and the positive semi-definite (PSD) order, respectively. Given two matrices X ∈ C^(𝑁×𝑀) and Y ∈ C^(𝑁×𝑀), we will use X ⊙ Y to denote the Hadamard product of the matrices.

A Brief Review of Graph Signal Processing

  • Basic Graph Definitions
  • Quadratic Variation
  • Notion of Frequency
  • Eigenvectors as a Fourier Basis on the Graph
  • Notion of Filtering Over Graphs
  • Polynomial Filters and the Graph Shift
  • An Asynchronous Graph Shift

Note that the vector 1 of all ones belongs to the null space of the graph Laplacian, i.e., L1 = 0. Moreover, the quadratic variation of an eigenvector of the graph Laplacian equals the corresponding eigenvalue.
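As a quick numerical check of these two facts, the following numpy sketch builds the Laplacian of a small illustrative graph (chosen here for the example, not taken from the thesis) and verifies both properties:

```python
import numpy as np

# Adjacency matrix of a small undirected graph (a 4-cycle with one chord).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [1, 1, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A          # combinatorial graph Laplacian

# The all-ones vector lies in the null space: L @ 1 = 0.
print(np.allclose(L @ np.ones(4), 0))   # True

# For each unit-norm eigenvector v, the quadratic variation v^T L v
# equals the corresponding eigenvalue.
eigvals, eigvecs = np.linalg.eigh(L)
for lam, v in zip(eigvals, eigvecs.T):
    assert np.isclose(v @ L @ v, lam)
```

The same check goes through for any undirected graph, since L is symmetric PSD with L1 = 0.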

Figure 1.1: Visualization of some of the eigenvectors of the graph Laplacian on the graph

Outline and Scope of the Thesis

  • Random Node-Asynchronous Updates (Chapter 2)
  • IIR Filtering with Random Node-Asynchronous Updates (Chapter 3) This chapter proposes a node-asynchronous implementation of rational filters on graphs.
  • Random Asynchronous Linear Systems (Chapter 4)
  • Randomized Algorithms as Switching Systems (Chapter 5)
  • Extending Classical Multirate Signal Processing Theory (Chapter 6) This chapter extends classical multirate signal processing ideas to graphs.
  • Uncertainty Principles and Sparse Eigenvectors (Chapter 7)
  • Energy Compaction Filters (Chapter 8)
  • Time Estimation for Heat Diffusion (Chapter 9)

This chapter also presents a necessary and sufficient condition for the mean-square stability of a randomized system. When the eigenvalues of the graph have a uniform distribution, the graph-dependent formulation is shown to be asymptotically equivalent to the graph-free formulation.

RANDOM NODE-ASYNCHRONOUS UPDATES ON GRAPHS

Introduction

  • Assumptions on the Graph Operator
  • Connections with Asynchronous Fixed Point Iterations
  • Connections with Gossip Algorithms and Consensus
  • Outline of the Chapter and Contributions
  • Preliminaries and Notation

In Section 2.4, we show how the geometry of the operator eigenspace plays a role in the convergence of random asynchronous updates. In Section 2.6 we consider graph operator polynomials to achieve convergence of the iteration to the other eigenvectors (corresponding to the non-unit eigenvalues) (Theorem 2.4).

Asynchronous Power Iteration

  • Randomized Asynchronous Updates

The number of nodes to update, 𝑇 = |T|, is a discrete random variable whose distribution will be shown to determine the behavior of the asynchronous updates. For this reason, 𝛿T will be referred to as the amount of asynchrony in the rest of the chapter.

Cascade of Asynchronous Updates

  • Expected Amount of Projection onto the Eigenvectors
  • Convergence in Mean-Squared Sense
  • Rate of Convergence

In the regular power method, the sign (or phase, in the complex case) of the eigenvalues is not important. In the random component-wise updates, unlike the regular power method, the signs of the eigenvalues do matter.
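A minimal simulation of such random component-wise updates, using a toy symmetric operator (constructed here for illustration, not an example from the thesis) whose dominant eigenvalue is 1 and whose remaining eigenvalues lie strictly inside the unit circle: one randomly chosen node at a time replaces its own value with the corresponding entry of A x, and the iterate still aligns with the eigenvector of eigenvalue 1.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6

# Symmetric operator with eigenvalue 1 dominant; the rest are inside the
# unit circle, so the random asynchronous iteration converges in direction.
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
lams = np.array([1.0, 0.6, 0.4, 0.2, -0.1, -0.3])
A = Q @ np.diag(lams) @ Q.T
v1 = Q[:, 0]                      # eigenvector of the unit eigenvalue

# Random asynchronous "power iteration": one node updates per step.
x = rng.standard_normal(N)
for _ in range(20000):
    i = rng.integers(N)
    x[i] = A[i] @ x

alignment = abs(x @ v1) / np.linalg.norm(x)
print(alignment)                  # close to 1
```

Unlike the synchronous power method, no normalization step is needed here, because any multiple of v1 is a fixed point of the update.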

Figure 2.1: Some realizations of the trajectories of the signal through updates for (a) the non-random synchronous case, (b) the random asynchronous case.

The Importance of the Eigenspace Geometry

Figure 2.5 visualizes the expected squared ℓ2-norm of the residual as a function of the iteration index, together with the bounds given by Theorem 2.2. Simulations show that the bounds suggested by Theorem 2.2 are of the correct order in this specific example.

Interpretation in the Context of Graph Signal Processing

In the following, we will consider the effect of a single asynchronous update on the smoothness of the signal. Some updates may be conflicting, increasing the variability of the signal across the graph.

Asynchronous Polynomial Filters on Graphs

  • The Optimal Polynomial
  • Sufficiency of Second Order Polynomials
  • Spectrum-Blind Construction of Suboptimal Polynomials
  • Inexact Spectral Information and The Use of Nonlinearity
  • Implementation of Second Order Polynomials

In this case, the problem reduces to the consensus problem, and the condition in (2.72) becomes a relaxed version of the polynomial considered in [150]. In the case of repeated eigenvalues, we assume that the repeated rows of the matrix 𝚽 are removed.

Figure 2.6: Visualization of the saturation nonlinearity defined in (2.87).

An Application: Autonomous Clustering

In the simulations, all nodes are initialized randomly, and random asynchronous iterations are performed on the constructed Laplacian polynomials. This is an interesting consequence of the fact that the filter in (2.84) produces a larger spectral gap than that in (2.75) for the amount of spectral information used.
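A centralized counterpart of this clustering idea can be sketched as follows: on a hypothetical two-cluster graph (two dense blocks joined by a single edge, chosen here for illustration), the sign pattern of the Fiedler vector, which the asynchronous iterations on Laplacian polynomials approximate in a distributed way, recovers the clusters.

```python
import numpy as np

# Toy graph: two cliques of 5 nodes each, joined by one inter-cluster edge.
N = 10
A = np.zeros((N, N))
A[:5, :5] = 1
A[5:, 5:] = 1
np.fill_diagonal(A, 0)
A[4, 5] = A[5, 4] = 1
L = np.diag(A.sum(axis=1)) - A

# Fiedler vector: eigenvector of the second-smallest Laplacian eigenvalue.
_, V = np.linalg.eigh(L)
labels = (V[:, 1] > 0).astype(int)
print(labels)     # constant on each block, different across blocks
```

This is only the centralized sketch; the algorithm of the chapter reaches the same partition with node-asynchronous updates and no global eigendecomposition.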

Figure 2.7: (a) A graph on 𝑁 = 100 nodes with 2 clusters. The graph has undirected edges with binary weights.

An Application: Randomized Computation of Singular Vectors

  • Rank- 𝑟 Approximation of an Arbitrary Matrix
  • Distributed Implementation with Partial Data Storage

The following theorem shows that Algorithm 2 indeed converges to the left and right dominant singular vectors of the matrix A. These results are presented in Figure 2.9a for various values of 𝑟, numerically verifying the convergence of the algorithm.
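The synchronous counterpart of this randomized scheme is the classical alternating power iteration for the dominant singular pair; a short numpy sketch (with illustrative dimensions, not those of the thesis experiments):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((8, 5))

# Alternating power iteration for the dominant singular pair (u, v):
# u tracks A v, v tracks A^T u, with normalization at each step.
v = rng.standard_normal(5)
for _ in range(200):
    u = A @ v
    u /= np.linalg.norm(u)
    v = A.T @ u
    v /= np.linalg.norm(v)

# Compare against the reference SVD.
U, s, Vt = np.linalg.svd(A)
print(abs(u @ U[:, 0]), abs(v @ Vt[0]))   # both close to 1
```

The randomized algorithm of this section replaces the two synchronous normalization sweeps with node-asynchronous updates of the same fixed point.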

Figure 2.9: (a) Convergence of Algorithm 3 for various different values of 𝑟 . (b) Convergence of Algorithm 3 for the case 𝑟 = 3 when the normalization step in Line 6 is executed with probability 𝛾

Concluding Remarks

It is clear from the figure that even when the shared memory (the variable U) is updated with probability as low as 𝛾 = 10−3 ≈ 1/𝑀, the modified implementation continues to converge as fast as Algorithm 4 itself, indicating the robustness of the algorithm to the use of outdated data. The convergence of the extended algorithm is verified numerically for several different numbers of data partitions.

Appendices

  • A Useful Inequality
  • Proof of Theorem 2.2
  • A Counter-Example

This can also be seen from the fact that the problem in (2.113) is in the form of a dual-norm formulation. To calculate the expectation in (2.124), in the following we first find the conditional expectation (conditioned on T).

IIR FILTERING ON GRAPHS WITH RANDOM NODE-ASYNCHRONOUS UPDATES

Introduction

  • Relations with Asynchronous Fixed Point Iterations
  • Outline and Contributions of This Chapter

In this chapter, the analysis of the algorithm will be based on the convergence properties of randomized asynchronous linear fixed-point iterations (state recursions). Next, in Section 3.4, we prove the convergence of the proposed algorithm to the desired filtered signal in the mean-square sense for both synchronous and asynchronous cases (Theorems 3.2 and 3.3).

Asynchronous State Recursions

  • Random Selection of the Update Sets
  • Convergence in the Mean-Squared Sense
  • On The Schur Diagonal Stability
  • The Case of Uniform Probabilities

1) Update probabilities and convergence: The convergence of the iterations depends on the matrix A as well as the average index selection matrix P. The study in [32] ensures convergence for any index sequence, whereas Corollary 3.1 considers convergence in the mean-square sense.
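The kind of state recursion being analyzed can be sketched as follows, using a toy contractive matrix for which asynchronous convergence is classically guaranteed (ρ(|A|) < 1); the mean-square analysis of this chapter covers more general pairs (A, P).

```python
import numpy as np

rng = np.random.default_rng(3)
N = 5

# Small random A with rho(|A|) < 1, so the fixed point x* of x = A x + b
# is reached under any asynchronous update order.
A = 0.15 * rng.standard_normal((N, N))
b = rng.standard_normal(N)
x_star = np.linalg.solve(np.eye(N) - A, b)

# Random asynchronous recursion: each step refreshes a random subset of
# the state variables; the rest keep their previous values.
x = np.zeros(N)
for _ in range(4000):
    T = rng.random(N) < 0.5          # each index updates w.p. 1/2
    x[T] = (A @ x + b)[T]

print(np.linalg.norm(x - x_star))    # small
```

The interesting regime of the chapter is precisely when this classical sufficient condition fails but mean-square convergence still holds for suitable update probabilities.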

Asynchronous Rational Filters on Graphs

  • Rational Graph Filters
  • Node-Asynchronous Implementation
  • Implementation Details

In the proposed algorithm, we emphasize that a node entering the active stage is independent of the values in its buffer. Since nodes are assumed to store the most recent data from their incoming neighbors in the proposed form of the algorithm, node 𝑖 needs a buffer of size 𝐿 · |Nin(𝑖)|.

Figure 3.1: Visual illustration of the proposed asynchronous implementation of a given graph filter

Convergence of the Proposed Algorithm

  • Selection of the Similarity Transform
  • The Case of Polynomial Filters

The equivalent model of the algorithm in (3.43) is of the form (3.12), so the results presented in Section 3.2 (especially Corollary 3.1) can be used to study the convergence of the algorithm. In the case of no noise (𝚪=0), the right-hand side of (3.52) becomes zero, and the condition (3.51) ensures the convergence of the output signal to the desired filtered signal in the mean-square sense.

Numerical Simulations

  • An Example of a Rational Graph Filter
  • Updates with Failing Broadcasts
  • A Case of Convergence Only with Asynchronous Iterations
  • An Example of a Polynomial Graph Filter

We note that the filter realization in (3.69) guarantees the convergence of the algorithm regardless of the value of 𝜇; Figure 3.5 verifies this convergence for all possible values of 𝜇.

Graph Fourier Coefficients

Conclusions

We note that the convergence behavior of the algorithm with the polynomial filter in (3.73) differs from that of the filter considered in section 3.5.1. The simulation results showed that the presented sufficient condition is not necessary for the convergence of the algorithm.

Figure 3.8: The mean-squared error of Algorithm 5 for the case of the polynomial (FIR) filter described in (3.72) with the input noise variance 𝜎² = 10−16.

Appendices

  • Proof of Theorem 3.1
  • Proof of Lemma 3.1
  • Proof of Theorem 3.2
  • Proof of Theorem 3.3

In addition, the extended noise covariance matrix can be written explicitly from (3.44), as given in (3.96). We also note that the impulse response coefficients of the underlying digital filter are given in [200]. Since both sides of (3.102) are block diagonal matrices, the condition (3.102) holds if and only if every individual block satisfies the inequality.

RANDOM ASYNCHRONOUS LINEAR SYSTEMS

FREQUENCY RESPONSE AND MEAN-SQUARED STABILITY

Introduction

  • Contributions of the Chapter
  • Outline

Section 4.3.1 describes how the second order statistics of the state vector evolve (Theorem 4.2), and Section 4.3.2 provides the necessary and sufficient condition for the mean square stability of the randomized recursions (Corollary 4.2). Such stability is essential to make the “frequency response in the mean” usable in practice.

Asynchronous Linear Systems with Exponential Inputs

  • Random Selection of the Update Sets
  • Frequency Response in the Mean
  • A Running Numerical Example

Equation (4.13) makes it clear that, due to the random updates of the state variables, the state vector x𝑘 is a random vector. In fact, the response of the random asynchronous updates performed on a system described by (A, B, C, D) can be represented, in an average sense, as the response of a synchronous system (Ā, B̄, C, D).
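A Monte Carlo sketch of this averaged equivalence, for a toy system with a constant input (the matrices and the per-state update probabilities pᵢ below are illustrative): averaging many asynchronous runs reproduces the synchronous recursion with the mean system Ā = (I − P) + P A and input matrix P B, where P = diag(p).

```python
import numpy as np

rng = np.random.default_rng(4)
N, K, trials = 3, 30, 4000

A = 0.3 * rng.standard_normal((N, N))
b = rng.standard_normal(N)              # constant input term B u
p = np.array([0.2, 0.5, 0.9])           # per-state update probabilities
P = np.diag(p)

# Monte Carlo average of the random asynchronous recursion:
# x_{k+1, i} = (A x_k + b)_i with probability p_i, else x_{k, i}.
avg = np.zeros(N)
for _ in range(trials):
    x = np.zeros(N)
    for _ in range(K):
        mask = rng.random(N) < p
        x = np.where(mask, A @ x + b, x)
    avg += x
avg /= trials

# The expectation obeys the synchronous "mean system".
A_bar = np.eye(N) - P + P @ A
x_bar = np.zeros(N)
for _ in range(K):
    x_bar = A_bar @ x_bar + P @ b

print(np.linalg.norm(avg - x_bar))      # small (Monte Carlo error only)
```

The identity E[x_{k+1} | x_k] = (I − P) x_k + P (A x_k + b) holds exactly; only the finite number of trials contributes to the residual printed above.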

Second Order Stability of the State Variables

  • Evolution of the Error Correlation Matrix
  • The Necessary and Sufficient Condition
  • Markov Jump Linear System Viewpoint
  • A Sufficient Condition
  • Stability in the Synchronous vs. Asynchronous Case
  • The Set of Probabilities Ensuring Stability
  • Importance of the Similarity Transform

Based on this result, we will consider the necessary and sufficient condition for the boundedness of the error correlation matrix (Corollary 4.2). Thus, the effect of the input noise remains the same throughout the iterations (which is not the case for Qr𝑘).

Figure 4.1: A realization of the output signal with the state-space model in (4.27), the frequency in (4.28), and the probabilities (a) 𝑝 = 0

The Constant Input: Fixed-Point Iterations

  • Comparison with the Classical Results
  • The Case of Zero Input

In particular, Corollary 4.2 shows that the condition (4.41), i.e., stability of the matrix 𝚽, is both necessary and sufficient for the convergence of x𝑘 to x★ in the mean-square sense when no noise is present. It can be shown that the condition (4.70) is more restrictive than the stability of the matrix A.

Further Discussions

  • Real Sinusoidal vs. Complex Exponential Input
  • Signal-to-Randomization Error Ratio (SRR)
  • The Distribution of the Random Variables
  • Effect of the Stochastic Model

In this section we consider the distribution of the random vector x𝑘 (as well as the output y𝑘). In addition to the mean and variance being functions of the iteration index 𝑘 (see Theorem 4.1 and Corollary 4.2), Figure 4.5 shows that the overall 'shape' of the distribution depends on 𝑘 as well.

Figure 4.3: Comparison between inputs (a) 𝑢𝑘 = 𝑒^(𝑗𝜔𝑘), and (b) 𝑢𝑘 = cos(𝜔𝑘) with frequency 𝜔 = 2𝜋/100.

Concluding Remarks

Appendices

  • A Result on The Random Index Selections
  • Proof of Theorem 4.1
  • Proof of Theorem 4.2
  • Proof of Corollary 4.2
  • Proof of Lemma 4.1
  • Proof of Lemma 4.2
  • Proof of Corollary 4.3
  • Proof of Lemma 4.5

We now take the expectation of both sides of (4.92) and apply the identities given by Lemma 4.6 and the independence assumption regarding the choices of index, input noise, and initial condition. Then we first note that ¯A and 𝚽 are triangular matrices, which simply follows from the fact that P and J are diagonal matrices and the Kronecker product of triangular matrices is a triangular matrix.

RANDOMIZED ALGORITHMS AS SWITCHING SYSTEMS

Introduction

  • Scope and Outline

In Section 5.2 we will briefly review discrete-time switching systems by showing their expected behavior (Lemma 5.1), mean-square stability (Lemma 5.2) and alternative characterizations for the stability (Lemma 5.3). In these sections, we will also provide alternative proofs for the convergence of the randomized Kaczmarz (Corollary 5.1) and random Gauss-Seidel (Corollary 5.2) methods.

Discrete-Time Randomly Switching Systems

From Lemma 5.1 it becomes clear that stability of the mean state transition matrix Ā is both necessary and sufficient for the expectation of x𝑘 to converge to x̄. This is because the fixed point x̄ of the average system is not necessarily a fixed point of all the individual systems.

The error correlation matrix converges to R.

  • Non-Random Case
  • Randomized Kaczmarz and Gauss-Seidel Algorithms
    • Randomized Gauss-Seidel Algorithm
    • Randomized Asynchronous Fixed-Point Iterations
  • Concluding Remarks
  • Appendices
    • Proof of Lemma 5.2

In fact, the convergence of both the randomized Kaczmarz and the randomized Gauss-Seidel algorithms can be guaranteed for any set of probabilities.
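The randomized Kaczmarz method referred to here can be sketched in its standard form (rows sampled proportionally to their squared norms; the specific consistent system below is illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 20, 5
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true                           # consistent system A x = b

# Randomized Kaczmarz: each step projects the iterate onto the solution
# hyperplane of one randomly sampled row of A.
row_norms2 = (A ** 2).sum(axis=1)
probs = row_norms2 / row_norms2.sum()
x = np.zeros(n)
for _ in range(2000):
    i = rng.choice(m, p=probs)
    x += (b[i] - A[i] @ x) / row_norms2[i] * A[i]

print(np.linalg.norm(x - x_true))        # small
```

Viewed as a switching system, each row index selects one projection matrix, which is exactly the framing this chapter uses to recover the classical convergence guarantees.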

EXTENDING CLASSICAL MULTIRATE SIGNAL PROCESSING THEORY TO GRAPHS

Introduction

  • Scope and Outline of This Chapter
  • Notation
  • Review of DSP on Graphs

In Section 6.12 we develop graph filter banks on cyclic graphs with 𝑀 blocks (defined and studied in Section 6.5). In Section 6.14 we consider the frequency responses of graph filter banks, inspired by similar ideas from [153, 160].

Building Blocks for Multirate Processing on Graphs

  • Downsampling and Upsampling Operations

  • The Noble Identities

Such a definition provides an extension of the results presented in the following sections and allows us to remove some of the restrictions on the adjacency matrix. Since the decimated signal has length 𝑁/𝑀, we need to define a different shift operator for the decimated signal.

  • Lazy Graph Filter Banks
  • Polyphase Implementation of Decimation and Interpolation Filters
  • Graph Filter Banks and Polynomial Filters

An important theoretical example of a maximally decimated filter bank in classical signal processing is the 𝑀-channel delay-chain filter bank, also known as the lazy filter bank, shown in Figure 6.3a. In this subsection we consider the graph counterpart of the lazy filter bank, shown in Figure 6.3b.
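On the cyclic graph, where the shift operator is the circulant permutation, the perfect-reconstruction property of the lazy filter bank can be checked directly. This sketch uses explicit (inefficient) matrices for clarity and is not the general construction of the chapter:

```python
import numpy as np

N, M = 12, 3                              # signal length, number of channels
A = np.roll(np.eye(N), -1, axis=0)        # cyclic shift: (A x)[n] = x[(n+1) % N]

D = np.eye(N)[::M]                        # downsample: keep every M-th sample
U = D.T                                   # upsample: insert zeros between samples

x = np.random.default_rng(6).standard_normal(N)

# M-channel lazy (delay-chain) filter bank: channel m decimates the shifted
# signal A^m x, and synthesis shifts the zero-padded component back by A^{-m}.
y = sum(np.linalg.matrix_power(A, -m) @ U @ D @ np.linalg.matrix_power(A, m) @ x
        for m in range(M))

print(np.allclose(y, x))                  # True: perfect reconstruction
```

Each channel recovers exactly the samples with index congruent to m modulo M, so the M channels partition the signal and the synthesis side reassembles it without error.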

Figure 6.7: Implementation of the polynomial graph filter 𝐻𝑘(A).

Figures

Figure 1.1: Visualization of some of the eigenvectors of the graph Laplacian on the graph
Figure 1.2: For a given eigenvalue-eigenvector pair, (a) number of zero-crossings, (b) absolute difference based variation of eigenvectors of the graph in Figure 1.1.
Figure 1.3: Number of zero crossings of the eigenvectors of (a) the normalized graph Laplacian, (b) the adjacency matrix as a function of their corresponding eigenvalues of the graph in Figure 1.1.
Figure 1.4: (a) A graph signal x on the graph with 𝑁 = 150 nodes. Output of the ideal low-pass filtering with the cut-off (b) 𝑀 = 40, (c) 𝑀 = 5.
