
3.3 Recursive covariance estimation and FOEP

The practice of using PCA for vibration-based health monitoring is well documented in the literature [84–87]. In order to tailor basic PCA towards online damage detection, algorithms premised on FOEP are adopted, which update the covariance estimate at each time step in a recursive fashion [88]. Towards this, it is imperative to understand the basic concepts of FOEP as a data-driven method for the recursive covariance update of the sample data matrix. It should be noted that the method is purely data driven and does not make any a priori assumptions regarding the nature of the data, thereby finding applications for binary data sets, data evolving from chemical engineering processes [88] and even structural dynamics problems [85]. The recursive estimation of the covariance matrix is presented next in detail.

FOEP is a way of expressing the eigen structure at the $(k+1)$th step in terms of the eigen structure at the $k$th step, as the $(k+1)$th data point streams in. This is accomplished by expressing the EVD of the symmetric positive definite covariance matrix in terms of a rank-one perturbation of the eigenvalue and eigenvector matrices [132–136]. Consider the data matrix $X_0 \in \mathbb{R}^{n_1 \times m}$ required to build an initial PCA model from the first few datasets, which is needed to stabilize the algorithm. Here $n_1$ is the number of samples, expressed as rows, and $m$ is the number of variables, expressed as columns of the data matrix. The mean of each column is given by the vector $\mu_1 = \frac{1}{n_1}(X_0)^T I_{n_1}$, where $I_{n_1} = [1 \; 1 \; \ldots \; 1]^T \in \mathbb{R}^{n_1}$. For structural systems, data generally evolve from zero-mean processes. However, as the nature of the data is not pre-assumed while deriving the recursive update expressions, the data is centered to zero mean and scaled to unit variance according to
\[
X_1 = \left(X_0 - I_{n_1}\mu_1^T\right)\Sigma_1^{-1}
\]
where $\Sigma_1 = \mathrm{diag}(\sigma_{1.1}, \ldots, \sigma_{1.m})$, whose $i$th element is the standard deviation of the $i$th measured variable ($i = 1, \ldots, m$). The basic covariance matrix can be obtained as
\[
\Omega_1 = \frac{1}{n_1 - 1}\,X_1^T X_1.
\]
A new block of data augments the data matrix, and the covariance matrix is then calculated recursively as and when a newer set of data streams in; this forms the basis of the FOEP approach.
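As a quick illustration of this initialization step, a minimal NumPy sketch is given below. The function name, the synthetic data and the array shapes are assumptions made purely for illustration and do not form part of the original formulation:

    import numpy as np

    def initialize_pca_model(X0):
        """Initial PCA model from a starting block X0 of shape (n1, m)."""
        n1 = X0.shape[0]
        mu1 = X0.mean(axis=0)              # mu_1 = (1/n1) X0^T I_{n1}
        sigma1 = X0.std(axis=0, ddof=1)    # diagonal entries of Sigma_1
        X1 = (X0 - mu1) / sigma1           # X1 = (X0 - I_{n1} mu_1^T) Sigma_1^{-1}
        Omega1 = (X1.T @ X1) / (n1 - 1)    # Omega_1 = X1^T X1 / (n1 - 1)
        return mu1, sigma1, Omega1

    # usage with synthetic data (200 samples, 4 measurement channels)
    rng = np.random.default_rng(0)
    mu1, sigma1, Omega1 = initialize_pca_model(rng.standard_normal((200, 4)))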

Considering the $k$th sample, it is assumed that $\mu_k$, $X_k$ and $\Omega_k$ have already been calculated. The next step of the algorithm is to implement the updates for the $(k+1)$th sample point, recursively. Considering
\[
X^0_{k+1} = \begin{bmatrix} X^0_k \\ X^0_{n_{k+1}} \end{bmatrix}
\]
for all the $k+1$ sample points, the mean vector $\mu_{k+1}$ is related to the mean vector at the $k$th sample point by the following expression:
\[
\mu_{k+1} = \frac{N_k}{N_{k+1}}\,\mu_k + \frac{1}{N_{k+1}}\big(X^0_{n_{k+1}}\big)^T I_{n_{k+1}}
\tag{3.1}
\]
where the quantity $N_k = \sum_{i=1}^{k} n_i$.
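A minimal sketch of the mean update of Eqn. 3.1 is shown below; the function name and the assumption that the new raw block arrives as an array of shape (n_new, m) are illustrative only:

    import numpy as np

    def update_mean(mu_k, N_k, X0_new):
        """Recursive mean update following Eqn. (3.1).

        mu_k   : current mean vector (length m), based on N_k samples
        X0_new : new raw data block X0_{n_{k+1}}, shape (n_new, m)
        """
        n_new = X0_new.shape[0]
        N_k1 = N_k + n_new
        # mu_{k+1} = (N_k / N_{k+1}) mu_k + (1 / N_{k+1}) X0_new^T I_{n_new}
        mu_k1 = (N_k / N_k1) * mu_k + X0_new.sum(axis=0) / N_k1
        return mu_k1, N_k1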

The recursive calculation for the data matrix at the $(k+1)$th sample is given by (skipping detailed steps):
\[
\begin{aligned}
X_{k+1} &= \left(X^0_{k+1} - I_{k+1}\mu_{k+1}^T\right)\Sigma_{k+1}^{-1} \\
&= \begin{bmatrix}
\left(X^0_{k} - I_{k}\mu_{k}^T\right)\Sigma_{k}^{-1}\Sigma_{k}\Sigma_{k+1}^{-1} - I_{k}\Delta\mu_{k+1}^T\Sigma_{k+1}^{-1} \\
\left(X^0_{n_{k+1}} - I_{n_{k+1}}\mu_{k+1}^T\right)\Sigma_{k+1}^{-1}
\end{bmatrix} \\
&= \begin{bmatrix}
X^0_{k}\Sigma_{k+1}^{-1} - I_{k}\mu_{k}^T\Sigma_{k+1}^{-1} - I_{k}\Delta\mu_{k+1}^T\Sigma_{k+1}^{-1} \\
X^0_{n_{k+1}}\Sigma_{k+1}^{-1} - I_{n_{k+1}}\mu_{k+1}^T\Sigma_{k+1}^{-1}
\end{bmatrix}
\end{aligned}
\tag{3.2}
\]
where $\Sigma_j = \mathrm{diag}(\sigma_{j.1}, \ldots, \sigma_{j.m})$, $j = k, k+1$, and $\Delta\mu_{k+1} = \mu_{k+1} - \mu_k$. Following similar lines of development, it is easy to recognize that the recursive calculation of the covariance matrix has the following form [88]:
\[
(N_{k+1}-1)\,\Omega_{k+1} = X_{k+1}^T X_{k+1}
= (N_k-1)\,\Sigma_{k+1}^{-1}\Sigma_k\Omega_k\Sigma_k\Sigma_{k+1}^{-1}
+ N_k\,\Sigma_{k+1}^{-1}\Delta\mu_{k+1}\Delta\mu_{k+1}^T\Sigma_{k+1}^{-1}
+ X_{n_{k+1}}^T X_{n_{k+1}}
\tag{3.3}
\]
For data evolving from structural systems, the recursive update of the sample covariance matrix requires only a rank-one modification. In this dissertation, the covariance estimate is updated at each sample point, instead of updating the entire model, thereby reducing the associated computational effort for real-time monitoring of structures.
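A sketch of one pass through Eqns. 3.1 to 3.3 is given below. The per-channel variance recursion used to obtain the updated scaling matrix is standard moment bookkeeping, included here as an assumption for completeness; all variable names are illustrative:

    import numpy as np

    def update_covariance(Omega_k, mu_k, sigma_k, N_k, X0_new):
        """One recursive covariance update following Eqns. (3.1)-(3.3).

        Omega_k : current scaled covariance estimate (m x m)
        mu_k    : current mean vector (length m)
        sigma_k : current per-channel standard deviations (length m)
        N_k     : number of samples absorbed so far
        X0_new  : new raw data block, shape (n_new, m)
        """
        n_new = X0_new.shape[0]
        N_k1 = N_k + n_new

        # Eqn. (3.1): recursive mean update
        mu_k1 = (N_k / N_k1) * mu_k + X0_new.sum(axis=0) / N_k1
        d_mu = mu_k1 - mu_k

        # recursive per-channel variance (assumed standard identity, not from the thesis)
        var_k1 = ((N_k - 1) * sigma_k**2 + N_k * d_mu**2
                  + ((X0_new - mu_k1)**2).sum(axis=0)) / (N_k1 - 1)
        sigma_k1 = np.sqrt(var_k1)

        # Eqn. (3.2): centre and scale the new block with the updated statistics
        Xs_new = (X0_new - mu_k1) / sigma_k1

        # Eqn. (3.3): recursive covariance update
        scale = sigma_k / sigma_k1        # diagonal of Sigma_k Sigma_{k+1}^{-1}
        Omega_k1 = ((N_k - 1) * (scale[:, None] * Omega_k * scale[None, :])
                    + N_k * np.outer(d_mu / sigma_k1, d_mu / sigma_k1)
                    + Xs_new.T @ Xs_new) / (N_k1 - 1)
        return Omega_k1, mu_k1, sigma_k1, N_k1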

As previously mentioned, the FOEP technique transforms the batch-mode operations into a recursive implementation by providing eigenspace updates of the covariance matrix at each time point, which forms the sole basis of the perturbation associated with the rank-one modification. The recursive estimate shown in Eqn. 3.3 can be reduced to:

\[
\Omega_{k+1} = \frac{k}{k+1}\,\Sigma_{k+1}^{-1}\Sigma_k\Omega_k\Sigma_k\Sigma_{k+1}^{-1}
+ \Sigma_{k+1}^{-1}\Delta\mu_{k+1}\Delta\mu_{k+1}^T\Sigma_{k+1}^{-1}
+ \frac{1}{k+1}\,X_{k+1}X_{k+1}^T
\tag{3.4}
\]
For structural systems in general, data evolves from zero mean processes. Scaling the data to a unit variance, Eqn. 3.4 can be rewritten as:
\[
\Omega_{k+1} = \frac{k}{k+1}\,\Omega_k + \frac{1}{k+1}\,X_{k+1}X_{k+1}^T
\tag{3.5}
\]

Considering the EVD of the covariance matrix to be of the form $E_{k+1}\Lambda_{k+1}E_{k+1}^T$, where $E$ and $\Lambda$ denote the orthonormal eigenvector and diagonal eigenvalue matrices respectively, and defining $\alpha_{k+1} = E_k^T X_{k+1}$, the following recursive formula is obtained on substitution in Eqn. 3.5:
\[
E_{k+1}\,(k+1)\Lambda_{k+1}\,E_{k+1}^T = E_k\left[k\Lambda_k + \alpha_{k+1}\alpha_{k+1}^T\right]E_k^T
\tag{3.6}
\]
For a finitely large sample size, the term $\left[k\Lambda_k + \alpha_{k+1}\alpha_{k+1}^T\right]$ is strongly diagonally dominant, which allows the application of Gershgorin's theorem [132], establishing that while its eigenvalues retain a structure close to the diagonal portion $k\Lambda_k$, the corresponding eigenvectors remain close to identity [88]. Interested readers are also referred to Appendix D for a detailed derivation of the theorem and its interpretation from a structural dynamics point of view.
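The sketch below illustrates one FOEP-style rank-one update of the eigenpair for zero-mean, unit-variance data, following Eqns. 3.5 and 3.6. The correction formulas used here are the classical first-order perturbation expressions for a diagonal matrix perturbed by a rank-one term, assuming well-separated eigenvalues; they are meant only as a stand-in for the exact expressions of [88] and Appendix D:

    import numpy as np

    def foep_update(E_k, lam_k, x_new, k):
        """First-order eigen-perturbation update of Eqns. (3.5)-(3.6).

        E_k   : orthonormal eigenvector matrix at step k (m x m)
        lam_k : eigenvalues of Omega_k (length m), assumed well separated
        x_new : new zero-mean, unit-variance sample (length m)
        k     : current sample counter
        """
        m = E_k.shape[0]
        alpha = E_k.T @ x_new              # alpha_{k+1} = E_k^T X_{k+1}
        d = k * lam_k                      # diagonal part k Lambda_k of Eqn. (3.6)

        # first-order eigenvalues of [k Lambda_k + alpha alpha^T], scaled by (k+1)
        lam_new = (d + alpha**2) / (k + 1)

        # first-order eigenvectors: identity plus a small off-diagonal coupling
        T = np.eye(m)
        for j in range(m):
            for i in range(m):
                if i != j:
                    T[i, j] += alpha[i] * alpha[j] / (d[j] - d[i])
        T, _ = np.linalg.qr(T)             # re-orthonormalize the perturbed basis
        return E_k @ T, lam_new            # E_{k+1} and the diagonal of Lambda_{k+1}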

3.3.1 RPCA: Theoretical development using POMs

In continuation of the detailed theoretical derivation of the basic PCA methodology, let the LNMs be represented as $V$, and let the POMs be represented by $W$, expressed as a sum of the LNMs and error terms. Therefore, the expression $W = V + \varepsilon$ holds good. Since the relation $X = VQ$ is valid, it can be inferred that $Q = V^T X$.

Therefore, the proper orthogonal coordinates can be expressed as:
\[
\Psi = W^T X = Q + \Gamma
\tag{3.7}
\]
where $Q$ is the modal ensemble matrix and $\Gamma$ is the matrix containing the error terms. Substituting $X = W\Psi$, the covariance matrix of the physical responses can be obtained as:

\[
R = \frac{1}{N}\,XX^T = \frac{1}{N}\,W\left[Q+\Gamma\right]\left[Q+\Gamma\right]^T W^T
\tag{3.8}
\]
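To make Eqn. 3.8 concrete, the short synthetic sketch below builds responses X = W(Q + Γ) from an assumed orthonormal modal matrix, nearly uncorrelated modal coordinates and small error terms, and verifies that the eigenvectors of R = (1/N) X X^T (the POMs) align with the columns of W; every quantity here is a synthetic assumption used only for illustration:

    import numpy as np

    rng = np.random.default_rng(1)
    N, m = 5000, 3

    W, _ = np.linalg.qr(rng.standard_normal((m, m)))             # assumed orthonormal modes
    Q = np.diag([3.0, 2.0, 1.0]) @ rng.standard_normal((m, N))   # distinct modal energies
    Gamma = 0.01 * rng.standard_normal((m, N))                   # small error terms

    X = W @ (Q + Gamma)                # physical responses, X = W Psi
    R = X @ X.T / N                    # covariance matrix, Eqn. (3.8)

    eigvals, POMs = np.linalg.eigh(R)  # the POMs are the eigenvectors of R
    POMs = POMs[:, ::-1]               # order by decreasing eigenvalue

    print(np.round(np.abs(POMs.T @ W), 3))   # close to the identity matrix, up to sign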

Equation 3.8 expresses the POCs of the covariance matrix of the acceleration response X in terms of the LNMs and errors. The basic RPCA equation [88] can be written as:

\[
R_k = \frac{k-1}{k}\,R_{k-1} + \frac{1}{k}\,X_k X_k^T
\tag{3.9}
\]

where Rk and Xk are the covariance matrix and the matrix of the data points at the kth instant, respectively; and Rk−1 denotes the covariance matrix at the (k−1)th instant. The covariance estimate Rk can be expressed as an eigen decomposition as shown:

\[
R_k = W_k\,\Omega_k\,W_k^T
\tag{3.10}
\]

Thus, for the $(k-1)$th data point, the eigenvalue decomposition of $R_{k-1}$ can be expressed as $R_{k-1} = W_{k-1}\Omega_{k-1}W_{k-1}^T$, and the gain depth parameter $\beta_k$ can be defined as $\beta_k = W_{k-1}^T X_k$.

On substituting the gain depth parameter and the covariance estimate into Eqn. 3.9, the following expression is obtained:

\[
W_k\,(k\Omega_k)\,W_k^T = W_{k-1}\left\{(k-1)\Omega_{k-1} + \beta_k\beta_k^T\right\}W_{k-1}^T
\tag{3.11}
\]

For the RPCA algorithm to be stable and robust, it is important that the term $\left\{(k-1)\Omega_{k-1} + \beta_k\beta_k^T\right\}$ is diagonally dominant, which can be demonstrated by expanding $\Omega_{k-1}$ in terms of the LNMs and error terms as follows:
\[
\Omega_{k-1} = \left[\,Q_{k-1}Q_{k-1}^T + Q_{k-1}\Gamma_{k-1}^T + \Gamma_{k-1}Q_{k-1}^T + \Gamma_{k-1}\Gamma_{k-1}^T\,\right]
\tag{3.12}
\]

From Equation 3.12, it is clear that $\Omega_{k-1}$ can be understood as a sum of $QQ^T$ and first-order error terms [88]. As $N \to \infty$, $QQ^T$ is approximately diagonal for systems having mild to moderate damping under sufficiently broadband excitations. Eqn. 2.49 forms the basis for establishing the diagonal dominance of the $\Omega_{k-1}$ matrix. Gershgorin's theorem can now be applied to the diagonally dominant matrix, which provides recursive eigenspace updates using perturbation techniques at each point in time. For dynamical systems of a different nature (such as chemical process systems) [88], the above equations would not hold true, and diagonal dominance has to be enforced before Gershgorin's theorem can be applied. Hence, for a structural system, the recursive eigenspace update is obtained using a first-order eigen perturbation (FOEP) approach [133, 135], which provides a computationally less intensive algorithm in a recursive framework for the eigenvalue decomposition $T_k\Lambda_k T_k^T$ of the term $(k-1)\Omega_{k-1} + \beta_k\beta_k^T$, yielding the following iterative update equations:

\[
\begin{aligned}
W_k &= W_{k-1}\,T_k \\
\Omega_k &= \frac{\Lambda_k}{k}
\end{aligned}
\tag{3.13}
\]

Eqn. 3.13 provides an iterative relation between the eigenspaces at consecutive time instants. On using the FOEP approach, the recursive eigenvectors obtained at each time instant are not necessarily ordered in the same sequence as at the previous time instant, thus presenting the problem of permutation ambiguity [68]. This can be resolved by arranging the basis vectors according to the decreasing order of the corresponding eigenvalues in $\Omega_k$.
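A compact sketch of one RPCA update, combining Eqns. 3.9 to 3.13 with the eigenvalue-based reordering described above, is given below. For clarity, the EVD of the near-diagonal term is computed exactly with numpy.linalg.eigh, whereas the thesis replaces this step with the cheaper closed-form FOEP expressions; the function name, the identity initialization and the synthetic data stream are assumptions for illustration:

    import numpy as np

    def rpca_step(W_prev, Omega_prev, x_k, k):
        """One recursive PCA update following Eqns. (3.9)-(3.13).

        W_prev     : eigenvector (POM) matrix at instant k-1 (m x m)
        Omega_prev : corresponding eigenvalues (length m)
        x_k        : new zero-mean response sample (length m)
        k          : sample counter (k >= 1)
        """
        beta = W_prev.T @ x_k                                     # gain depth parameter beta_k
        M = (k - 1) * np.diag(Omega_prev) + np.outer(beta, beta)  # {(k-1) Omega_{k-1} + beta beta^T}

        Lam, T = np.linalg.eigh(M)            # exact EVD, T_k Lambda_k T_k^T
        order = np.argsort(Lam)[::-1]         # resolve permutation ambiguity:
        Lam, T = Lam[order], T[:, order]      # sort by decreasing eigenvalue

        W_k = W_prev @ T                      # Eqn. (3.13): W_k = W_{k-1} T_k
        Omega_k = Lam / k                     # Eqn. (3.13): Omega_k = Lambda_k / k
        return W_k, Omega_k

    # usage: stream zero-mean samples through the recursion
    rng = np.random.default_rng(2)
    m = 4
    W, Omega = np.eye(m), np.ones(m)          # crude initialization for the sketch
    for k in range(1, 2001):
        W, Omega = rpca_step(W, Omega, rng.standard_normal(m), k)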