
4.4 Kalman filtering overview

4.4.3 Filter structure

As mentioned in Section 4.4.1, the Kalman filter is characterised by a set of recursive equations that estimate the instantaneous state of a dynamic system by minimising the estimation variance. The discussion of the filter equations below follows Brown and Hwang (1992).

Firstly, consider the following mathematical description of a linear dynamic system to be estimated.

The Kalman filter addresses the challenge of determining the state x_k ∈ R^n of the system. Assume that the stochastic process to be estimated can be modelled discretely as:

x_{k+1} = A_k x_k + w_k    (4.18)

And assume that observations of the process, z_k ∈ R^m, are generated in accordance with:

z_k = H_k x_k + v_k    (4.19)

Table 4.1 provides descriptions of the terms used in Equations 4.18 and 4.19.

Table 4.1. Dynamic system symbol definitions.

Variable/Symbol   Dimension       Description
k                 ∈ Z             Subscript k indicates the value of a variable at time t_k.
A_k               ∈ R^(n×n)       Matrix defining the transition of x_k from time t_k to t_{k+1}, i.e. x_k to x_{k+1}, in the absence of any forcing function.
w_k               ∈ R^(n×1)       Vector of white noises with known covariance matrix Q_k.
H_k               ∈ R^(m×n)       Matrix defining the noiseless relationship between the measurements, z_k, and the states, x_k.
v_k               ∈ R^(m×1)       Vector of white noises defining the observation errors, with known covariance R_k and zero cross-correlation with w_k.

Further characterisation of w_k and v_k is given by the respective covariance matrices below:

E[w_k w_i^T] = Q_k for i = k, and 0 for i ≠ k    (4.20)

E[v_k v_i^T] = R_k for i = k, and 0 for i ≠ k    (4.21)

E[w_k v_i^T] = 0 for all k, i    (4.22)
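To make the system description of Equations 4.18 to 4.22 concrete, the short sketch below simulates such a process and its measurements. The two-state constant-velocity model, the sampling interval and the noise covariances are illustrative assumptions only, not values drawn from this work.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Illustrative (assumed) constant-velocity model:
# state x_k = [position, velocity]^T, measurement z_k = position + noise.
dt = 1.0
A = np.array([[1.0, dt],
              [0.0, 1.0]])      # state transition matrix A_k (Eq. 4.18)
H = np.array([[1.0, 0.0]])      # measurement matrix H_k (Eq. 4.19)
Q = 0.01 * np.eye(2)            # process noise covariance Q_k (Eq. 4.20)
R = np.array([[0.25]])          # measurement noise covariance R_k (Eq. 4.21)

def simulate(n_steps, x0):
    """Generate states per Eq. 4.18 and measurements per Eq. 4.19."""
    x, states, meas = x0, [], []
    for _ in range(n_steps):
        w = rng.multivariate_normal(np.zeros(2), Q)   # w_k, zero mean, covariance Q_k
        v = rng.multivariate_normal(np.zeros(1), R)   # v_k, zero mean, covariance R_k
        states.append(x)
        meas.append(H @ x + v)                        # z_k = H_k x_k + v_k
        x = A @ x + w                                 # x_{k+1} = A_k x_k + w_k
    return np.array(states), np.array(meas)

states, meas = simulate(50, np.array([0.0, 1.0]))
```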

To begin the recursive computation of the Kalman filter, an initial estimate of the process incorporating all knowledge up to the starting point is required. This a priori estimate will be denoted x̂_k^-, where the "^" denotes an estimate and the superscript "-" denotes an a priori estimate; that is, it is the best estimate available without the information provided by the measurement at t_k. Additionally, it is assumed that x̂_k^- has a known error covariance matrix, denoted by:

P_k^- = E[e_k^- (e_k^-)^T] = E[(x_k − x̂_k^-)(x_k − x̂_k^-)^T]    (4.23)

In Equation 4.23, the a priori estimation error, which has zero mean, has been implicitly defined as shown in Equation 4.24.

e_k^- = x_k − x̂_k^-    (4.24)

With the a priori assumptions above, the measurement z_k can now be assimilated into an improved estimate for the state using Equation 4.25.

x̂_k = x̂_k^- + K_k (z_k − H_k x̂_k^-)    (4.25)

In Equation 4.25, a linear blending of the noisy measurement and the a priori estimate via the weighting factor K_k results in an optimal updated estimate, x̂_k. Equation 4.25 is developed in Brown and Hwang (1992), Section 5.6. The optimality of the updated estimate lies solely in the choice of K_k, which should be calculated to minimise the mean square estimation error. To arrive at an expression for K_k, it is necessary to determine the error covariance associated with the updated estimate:

P_k = E[e_k e_k^T] = E[(x_k − x̂_k)(x_k − x̂_k)^T]    (4.26)

Substituting Equation 4.19 into Equation 4.25, and then using the resulting expression for x̂_k in Equation 4.26, gives:

P_k = E{ [(x_k − x̂_k^-) − K_k(H_k(x_k − x̂_k^-) + v_k)] [(x_k − x̂_k^-) − K_k(H_k(x_k − x̂_k^-) + v_k)]^T }    (4.27)

Equation 4.27 can be simplified to Equation 4.28, which represents a general expression for the updated estimate error covariance, by completing the expectation and using the fact that the a priori estimation error, (x_k − x̂_k^-), is uncorrelated with v_k.

P_k = (I − K_k H_k) P_k^- (I − K_k H_k)^T + K_k R_k K_k^T    (4.28)

The optimisation problem can now be described in terms of Equation 4.28. It essentially involves minimising the individual terms along the major diagonal of P_k (these terms are the estimation error variances for the elements of the state vector being estimated). Analytically, this is achieved by differentiating the trace of P_k with respect to K_k and setting the result to zero (Equation 4.29).

∂[tr(P_k)]/∂K_k = 0  ⇒  K_k = P_k^- H_k^T (H_k P_k^- H_k^T + R_k)^{-1}    (4.29)

The trace of P_k is used since it can be argued that minimisation of the error sum implies that the individual errors are also minimised. When Equation 4.29 is used, the blending factor K_k is referred to as the Kalman gain, and the covariance matrix associated with the optimal estimate is calculated recursively via:

P_k = (I − K_k H_k) P_k^-    (4.30)
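A minimal sketch of this measurement-update step (Equations 4.25, 4.29 and 4.30) is given below; the function and variable names are illustrative, not part of any defined interface.

```python
import numpy as np

def measurement_update(x_prior, P_prior, z, H, R):
    """Assimilate measurement z into the a priori estimate (Eqs 4.25, 4.29, 4.30)."""
    S = H @ P_prior @ H.T + R                     # innovation covariance H_k P_k^- H_k^T + R_k
    K = P_prior @ H.T @ np.linalg.inv(S)          # Kalman gain (Eq. 4.29)
    x_post = x_prior + K @ (z - H @ x_prior)      # updated state estimate (Eq. 4.25)
    I = np.eye(P_prior.shape[0])
    P_post = (I - K @ H) @ P_prior                # updated covariance (Eq. 4.30)
    # For the optimal gain this is equivalent to the Joseph form of Eq. 4.28,
    # (I - K H) P (I - K H)^T + K R K^T, which is numerically better conditioned.
    return x_post, P_post, K
```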

A method for including the information embedded in z_k now exists via Equation 4.25 and the Kalman gain. However, for the relevant expressions to be solved, x̂_k^- and P_k^- were needed and, due to the recursive nature of the algorithm, a similar requirement exists in order to assimilate the data provided by the measurement z_{k+1}; i.e. for state estimate prediction at t_{k+1}, x̂_{k+1}^- is needed, as is its associated covariance matrix, P_{k+1}^-. The a priori state estimate prediction is found by projecting the current updated estimate through the state transition matrix in the absence of noise (which is valid because the noises here are considered to be zero mean and additive).

x̂_{k+1}^- = A_k x̂_k    (4.31)

In finding P_{k+1}^-, first form the expression for the a priori error:

e_{k+1}^- = x_{k+1} − x̂_{k+1}^- = (A_k x_k + w_k) − A_k x̂_k = A_k e_k + w_k    (4.32)

From Equation 4.32,

P_{k+1}^- = E[e_{k+1}^- (e_{k+1}^-)^T] = A_k P_k A_k^T + Q_k    (4.33)

With Equations 4.31 and 4.33, the required a priori quantities are available, and z_{k+1} can now be used as per Equation 4.25.
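The corresponding projection (time-update) step of Equations 4.31 and 4.33 can be sketched as follows, again with illustrative names:

```python
import numpy as np

def time_update(x_post, P_post, A, Q):
    """Project the updated estimate and covariance to the next epoch (Eqs 4.31, 4.33)."""
    x_prior_next = A @ x_post                # a priori state prediction (Eq. 4.31)
    P_prior_next = A @ P_post @ A.T + Q      # a priori covariance prediction (Eq. 4.33)
    return x_prior_next, P_prior_next
```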

Grewal and Andrews (2001) describe the relationship between the discrete time system and the Kalman filter using the block diagram in Figure 4.15.

[Figure 4.15 block diagram: the discrete system (state x_k driven by w_k through A_k and a unit delay), the measurement (H_k x_k plus v_k giving z_k), and the discrete Kalman filter, which contains a copy of the discrete system and applies the gain K_k to the measurement residual to form x̂_k.]

Figure 4.15. Conceptual relationship between discrete time system and Kalman filter.

Figure 4.16 further summarises the Kalman filter recursive equations with respect to the computational flow presented in Figure 4.14.

[Figure 4.16 content:

1. Initialisation (t = 0): x̂_k^-(t = 0) and P_k^-.

Filter loop, entered on each new measurement:

2. Compute the blending factor using the predicted estimation covariance and the present measurement noise covariance: K_k = P_k^- H_k^T (H_k P_k^- H_k^T + R_k)^{-1}.

3. Update the estimate as a linear combination of the prediction and the present measurement, using the blending factor for weighting: x̂_k = x̂_k^- + K_k (z_k − H_k x̂_k^-).

4. Compute the covariance associated with the updated estimate: P_k = (I − K_k H_k) P_k^-.

5. Predict the estimate for the next time step and its associated covariance: x̂_{k+1}^- = A_k x̂_k and P_{k+1}^- = A_k P_k A_k^T + Q_k.

Outputs at each epoch: the updated estimate and the updated estimate covariance.]

Figure 4.16. Kalman filter recursive equations (revised computational flow in terms of Figure 4.14).
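Combining the two steps in the order of Figure 4.16 gives the complete recursion. The sketch below assumes the illustrative model, simulated measurements and update functions from the earlier sketches; the initial estimate and covariance are arbitrary illustrative choices.

```python
import numpy as np

# Step 1: initialisation of the a priori estimate and its covariance
# (arbitrary illustrative values).
x_prior = np.array([0.0, 0.0])
P_prior = 10.0 * np.eye(2)

estimates = []
for z in meas:                     # filter loop: one pass per new measurement
    # Steps 2-4: gain, state update and covariance update.
    x_post, P_post, K = measurement_update(x_prior, P_prior, z, H, R)
    estimates.append(x_post)
    # Step 5: predict the estimate and covariance for the next epoch.
    x_prior, P_prior = time_update(x_post, P_post, A, Q)
```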

Focussing on the filter operation between two successive time steps of the infinite loop of Figure 4.16, the transition diagram below (Figure 4.17) can be developed (Grewal and Andrews, 2001).

Here:

1. Transitions between quantities are indicated by arrows.

2. Parentheses adjacent to arrows indicate the variables effecting the transition defined by the arrow.

[Figure 4.17 transition diagram, from time t_k to t_{k+1}: at each epoch the a priori quantities are updated by the measurement, x̂_k^- → x̂_k via (z_k, H_k, K_k) and P_k^- → P_k via (R_k, H_k); the updated quantities are then projected to the next epoch, x̂_k → x̂_{k+1}^- via (A_k) and P_k → P_{k+1}^- via (A_k, Q_k). The same pattern repeats from step k+1 to k+2.]

Figure 4.17. Filter variable transition from step k to k + 1.

It should be apparent that each variable to be estimated assumes two distinct values during each discrete epoch: an a priori value before any measurement information is incorporated, and an updated value after the measurement is assimilated.