RESEARCH ARTICLE

Xilin Fu · Xiaodi Li · Haydar Akca

Exponential state estimation for impulsive neural networks with time delay in the leakage term

Received: 26 September 2010 / Accepted: 17 July 2012 / Published online: 17 October 2012

© The Author(s) 2012. This article is published with open access at Springerlink.com

Abstract In this paper, the exponential state estimation problem for impulsive neural networks with both leakage delay and time-varying delays is investigated. By constructing suitable Lyapunov–Krasovskii functionals and employing available output measurements and the LMI technique, several sufficient conditions, given in terms of linear matrix inequalities (LMIs), are derived to estimate the neuron states such that the dynamics of the estimation error is globally exponentially stable. The obtained results depend on the leakage delay and on the upper bound of the time-varying delays, and are independent of the derivative of the time-varying delays. Moreover, some comparisons with existing results on state estimation of neural networks are made. Finally, two numerical examples and their computer simulations are given to show the effectiveness of the proposed estimator.

Mathematics Subject Classification 34K45 · 34D23 · 92B20

X. Fu · X. Li (B)
School of Mathematical Sciences, Shandong Normal University, Jinan 250014, People's Republic of China
E-mail: [email protected]

X. Fu
E-mail: [email protected]

H. Akca
Department of Mathematics, College of Arts and Science Faculty, Abu Dhabi University, P.O. Box 59911, Abu Dhabi, United Arab Emirates
E-mail: [email protected]

1 Introduction

In the past several years, various kinds of neural networks such as Hopfield neural networks, cellular neural networks, Cohen–Grossberg neural networks, and bidirectional associative memory neural networks have received a great deal of attention due to their extensive applications in the fields of signal processing, classification, fixed-point computation, pattern recognition, and combinatorial optimization, see [10,11,14,16,17]. In particular, delayed neural networks have been deeply investigated since time delay is ubiquitous in biological and artificial neural networks, and sometimes its existence might be the source of instability, oscillations, or other poor performance behavior [4,31]. Up to now, many interesting results on the dynamics of neural networks with various delays have been reported, see, for instance, [1,28] with constant delays, [2,7,18,20,22,37,46] with time-varying delays, and [26,36,39,42] with distributed delays.

Recently, Gopalsamy [12] pointed out that in real nervous systems, time delay in the stabilizing negative feedback terms has a tendency to destabilize a system (this kind of time delay is known as a leakage delay or "forgetting" delay). Moreover, it sometimes has a more significant effect on the dynamics of neural networks than other kinds of delays. Hence, it is of significant importance to consider the effects of leakage delay on the dynamics of neural networks. However, due to some theoretical and technical difficulties, there has been very little existing work on neural networks with leakage delays [13,19,27,35]. In [13], Gopalsamy initially investigated bidirectional associative memory (BAM) neural networks with constant leakage delays and obtained some sufficient conditions guaranteeing the existence and global stability of a unique equilibrium, using Lyapunov–Krasovskii functionals and the M-matrix method. Based on this work, Peng [35] further studied the existence and global stability of periodic solutions for BAM neural networks with continuously distributed leakage delays, using the continuation theorem of coincidence degree theory and Lyapunov functionals.

On the other hand, the state estimation problem for various neural networks has received much attention recently, see [9,15,25,29,32–34,38,40,41]. The motivation for investigating neuron state estimation is that, in many applications, the neuron states are often not fully available in the network outputs; one may then estimate the neuron states through available measurements so that the estimates can be applied effectively to real problems [40]. In particular, Park et al. [32–34] investigated the state estimation of neutral-type neural networks with time-varying delays and/or interval time-varying delays using Lyapunov–Krasovskii functionals and the linear matrix inequality (LMI) approach. Via the same methods, Li and Fei [25] studied the state estimation of neural networks with distributed delays. In [29], Mahmoud studied the state estimation of neural networks with interval time-varying delays through a Luenberger-type linear estimator, which improves and extends the previous results in [33,34,40]. However, most of the results in [9,15,25,29,32–34,40,41] are based on the assumption that the time-varying delays are differentiable, which greatly reduces the range of applicability of those results in practice. More recently, Wang and Song [38] addressed this issue and established some LMI conditions to estimate the neuron states of mixed delayed neural networks in which the time-varying delays are non-differentiable. Unfortunately, none of the results in [9,15,25,29,32–34,38,40,41] can be applied to neural networks with leakage delays. In addition, it is well known that impulsive effects are likely to exist in neural network systems [6,23,24,30,43–45]. For instance, in the implementation of electronic networks, the state of a neuron is subject to instantaneous perturbations and experiences abrupt changes at certain moments, which may be caused by switching phenomena, frequency changes, or other sudden noise; that is, the networks exhibit impulsive effects [3,21]. Hence, it is necessary to consider impulsive effects in the state estimation problem of neural networks with delays in order to reflect more realistic dynamics. However, to the best of our knowledge, there are no results on the state estimation problem of impulsive neural networks with leakage delays.

In this paper, we consider the state estimation problem for a class of impulsive neural networks with leakage delay by constructing appropriate Lyapunov–Krasovskii functionals and employing available output measurements. Several LMI-based conditions are derived to estimate the neuron states such that the estimation error system tends to zero exponentially. Compared with previous results, the main advantages of the obtained results include the following:

• They can be applied to the state estimation problem of neural networks with leakage delays and/or impulsive effects.

• In [9,15,25,29,32–34,38,40,41], the gain matrix K of the state estimator is mostly given in the form K = Q⁻¹Y, where Q denotes a positive definite matrix and Y denotes an available matrix. This restriction is relaxed in our results: we only require the matrix Q to be invertible in order to determine the gain matrix K.

• We essentially drop the requirement of differentiability on the time-varying delays, and we do not require the activation functions to be differentiable, bounded, or monotone nondecreasing; these assumptions are less restrictive than those in [9,15,25,29,32–34,40,41].

• Most of the numerical simulations in [9,15,25,29,32–34,38,40,41] show that the estimation error system tends to zero with the help of the gain matrix. However, from computer simulations one may observe that the estimation error system in those papers still tends to zero even without the help of the gain matrix (i.e., when K = 0), which is an undesirable phenomenon. This problem is solved in this paper.


• The LMI conditions in this paper contain all the information of the neural networks, including the physical parameters, the leakage delay, the time-varying delays, and the impulsive matrices, and they can be checked easily and quickly. Moreover, this paper is the first to consider impulsive effects in the state estimation problem of neural networks.

The rest of this paper is organized as follows. In Sect. 2, the state estimation problem is formulated. In Sect. 3, by constructing suitable Lyapunov functionals, we derive some LMI-based conditions to estimate the neuron states. Two numerical examples and their computer simulations are given in Sect. 4 to show the effectiveness of the proposed estimator. Finally, concluding remarks are made in Sect. 5.

2 Preliminaries

Notations.LetR(R+)denote the set of (positive) real numbers,Z+denote the set of positive integers,Rn andRn×m denote then-dimensional andn×m-dimensional real spaces equipped with the Euclidean norm

|| • ||.A >0 orA < 0 denotes that the matrixA is a symmetric and positive definite or negative definite matrix. The notationAT andA−1mean the transpose ofA and the inverse of a square matrix.λmax(A)or λmin(A)denotes the maximum eigenvalue or the minimum eigenvalue of matrixA.[•]denotes the integer function.I denotes the identity matrix with appropriate dimensions and= {1,2, . . . ,n}. For any interval J ⊆R, setS⊆Rk(1≤kn),C(J,S)= {ϕ: JV is continuous} andPC1(J,S)= {ϕ: JSis con- tinuously differentiable everywhere except at finite number of pointst, at whichϕ(t+), ϕ(t),ϕ(˙ t+)andϕ(˙ t) exist andϕ(t+)= ϕ(t),ϕ(˙ t+) = ˙ϕ(t),whereϕ˙ denotes the derivative ofϕ}. Forϕ(·) = 1, . . . , ϕn)T

PC1([−ρ,0],Rn), the norm is defined byϕρ =max

supρs≤0ϕ(s), supρs≤0 ˙ϕ(s)

.In addi- tion, the notationalways denotes the symmetric block in one symmetric matrix.

Consider the following impulsive neural network model with leakage delay:

\[
\begin{cases}
\dot{x}(t) = -Ax(t-\sigma) + Bf(x(t)) + Wf(x(t-\tau(t))) + J(t), & t > 0,\ t \neq t_k,\\[2pt]
x(t_k) - x(t_k^-) = -D_k\Big[x(t_k^-) - A\displaystyle\int_{t_k-\sigma}^{t_k} x(u)\,du\Big], & k \in \mathbb{Z}^+,
\end{cases}
\tag{1}
\]

where x(t) = (x₁(t), …, xₙ(t))^T ∈ R^n is the neuron state vector of the neural network; A = diag(a₁, …, aₙ) ∈ R^{n×n} is a diagonal matrix with aᵢ > 0, i ∈ Λ; B ∈ R^{n×n} and W ∈ R^{n×n} are the connection weight matrix and the delayed weight matrix, respectively; J(t) is an external input; f(x(·)) = (f₁(x₁(·)), …, fₙ(xₙ(·)))^T denotes the neuron activation function; and D_k ∈ R^{n×n}, k ∈ Z⁺, denotes the impulsive matrix.

In this paper, we make the following assumptions:

(H1) The neuron activation functions f_j, j ∈ Λ, are continuous on R and satisfy

\[
l_j^- \le \frac{f_j(u) - f_j(v)}{u - v} \le l_j^+ \quad \text{for any } u, v \in \mathbb{R},\ u \neq v,\ j \in \Lambda,
\]

where l_j^- and l_j^+ are real constants that may be positive, zero, or negative.

(H2) The leakage delay σ ≥ 0 is a constant.

(H3) The transmission delay τ(t) is time-varying and satisfies 0 ≤ τ(t) ≤ τ, where τ is a positive constant.

(H4) The impulse times t_k satisfy 0 = t₀ < t₁ < ⋯ < t_k → ∞ and inf_{k∈Z⁺}{t_k − t_{k−1}} > 0.

As usual, suppose that the output of System (1) has the form

\[
y(t) = Cx(t) + h(t, x(t)), \tag{2}
\]

where y(t) = (y₁(t), …, y_m(t))^T ∈ R^m denotes the measurement output of System (1), C ∈ R^{m×n} denotes a constant matrix, and h(t, x(t)) ∈ R^m is the nonlinear disturbance satisfying the Lipschitz condition

\[
\|h(t, u) - h(t, v)\| \le \|H(u - v)\| \quad \text{for any } u, v \in \mathbb{R}^n, \tag{3}
\]

where H is a known constant matrix.


Now we introduce the following full-order state estimator to estimate the neuron state of (1):

\[
\begin{cases}
\dot{\hat{x}}(t) = -A\hat{x}(t-\sigma) + Bf(\hat{x}(t)) + Wf(\hat{x}(t-\tau(t))) + J(t) + K\big[y(t) - C\hat{x}(t) - h(t, \hat{x}(t))\big], & t > 0,\ t \neq t_k,\\[2pt]
\hat{x}(t_k) - \hat{x}(t_k^-) = -D_k\Big[\hat{x}(t_k^-) - A\displaystyle\int_{t_k-\sigma}^{t_k} \hat{x}(u)\,du\Big], & k \in \mathbb{Z}^+,
\end{cases}
\tag{4}
\]

where x̂(t) ∈ R^n is the state estimate and K ∈ R^{n×m} is the gain matrix to be designed. Let e(t) = x(t) − x̂(t) be the state estimation error; then by (1), (2) and (4) we get the following error dynamical system:

\[
\begin{cases}
\dot{e}(t) = -Ae(t-\sigma) + Bg(e(t)) + Wg(e(t-\tau(t))) - KCe(t) - KH(t, e(t)), & t > 0,\ t \neq t_k,\\[2pt]
e(t_k) - e(t_k^-) = -D_k\Big[e(t_k^-) - A\displaystyle\int_{t_k-\sigma}^{t_k} e(u)\,du\Big], & k \in \mathbb{Z}^+,
\end{cases}
\tag{5}
\]

where g(e(·)) = f(e(·) + x̂) − f(x̂) and H(t, e(·)) = h(t, e(·) + x̂) − h(t, x̂).

Before giving the main results, we need to introduce the following lemmas.

Lemma 2.1 ([20]) Given any real matrix M = M^T > 0 of appropriate dimension, two scalars a, b with a < b, and a vector function ω(·) : [a, b] → R^n such that the integrations concerned are well defined, then

\[
\left[\int_a^b \omega(s)\,ds\right]^T M \left[\int_a^b \omega(s)\,ds\right] \le (b - a) \int_a^b \omega^T(s)\, M\, \omega(s)\,ds.
\]
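Lemma 2.1 is the matrix Jensen inequality. Its discrete analogue, (Σᵢ ωᵢ)ᵀM(Σᵢ ωᵢ) ≤ N Σᵢ ωᵢᵀMωᵢ for N samples, holds exactly, so a Riemann-sum check illustrates the lemma numerically (the matrix M and the function ω below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary symmetric positive definite M = M^T > 0
X = rng.standard_normal((3, 3))
M = X @ X.T + 3 * np.eye(3)

# Riemann-sum discretisation of omega on [a, b] with N nodes
a, b, N = 0.0, 2.0, 400
ds = (b - a) / N
s = a + ds * np.arange(N)
omega = np.stack([np.sin(s), np.cos(2 * s), s], axis=1)  # N x 3 samples

integral = ds * omega.sum(axis=0)        # ~ int_a^b omega(s) ds
lhs = integral @ M @ integral            # [int omega]^T M [int omega]
rhs = (b - a) * ds * np.einsum('ij,jk,ik->', omega, M, omega)  # (b-a) int omega^T M omega

print(lhs <= rhs)  # True: the (discrete) Jensen inequality holds
```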

Lemma 2.2 ([18]) For a given matrix

\[
S = \begin{bmatrix} S_{11} & S_{12} \\ S_{12}^T & S_{22} \end{bmatrix},
\]

where S_{11}^T = S_{11} and S_{22}^T = S_{22}, the condition S > 0 is equivalent to either of the following conditions:

(1) S_{22} > 0, S_{11} − S_{12} S_{22}^{−1} S_{12}^T > 0;
(2) S_{11} > 0, S_{22} − S_{12}^T S_{11}^{−1} S_{12} > 0.
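Lemma 2.2 is the standard Schur complement equivalence. A small numerical sanity check (a sketch; the block values are arbitrary):

```python
import numpy as np

def is_pd(A):
    """Positive definiteness of a symmetric matrix via its eigenvalues."""
    return bool(np.all(np.linalg.eigvalsh(A) > 0))

# Build an arbitrary symmetric positive definite S and split it into 2x2 blocks
rng = np.random.default_rng(1)
X = rng.standard_normal((4, 4))
S = X @ X.T + 4 * np.eye(4)

S11, S12 = S[:2, :2], S[:2, 2:]
S22 = S[2:, 2:]

# Lemma 2.2: S > 0  <=>  S22 > 0 and S11 - S12 S22^{-1} S12^T > 0
#                   <=>  S11 > 0 and S22 - S12^T S11^{-1} S12 > 0
cond1 = is_pd(S22) and is_pd(S11 - S12 @ np.linalg.inv(S22) @ S12.T)
cond2 = is_pd(S11) and is_pd(S22 - S12.T @ np.linalg.inv(S11) @ S12)
print(is_pd(S), cond1, cond2)  # all three agree
```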

3 Main results

In this section, we establish some LMI conditions for the existence of an exponential state estimator by using Lyapunov–Krasovskii functionals and the linear matrix inequality (LMI) approach.

Theorem 3.1 Assume that assumptions (H1)–(H4) hold. Suppose there exist three constants ε > 0, α > 0, β > 0, three n×n matrices P > 0, Q₁ > 0, Q₂ > 0, an n×n invertible matrix Q₃, an n×m real matrix Y, two n×n diagonal matrices U₁ > 0, U₂ > 0, and a 2n×2n matrix

\[
\begin{bmatrix} T_{11} & T_{12} \\ \star & T_{22} \end{bmatrix} > 0
\]

such that

\[
\begin{bmatrix} P & (I - D_k)^T P \\ \star & P \end{bmatrix} \ge 0, \quad k \in \mathbb{Z}^+, \tag{6}
\]

and

\[
\Omega = \begin{bmatrix}
\Omega_{11} & \Omega_{12} & \alpha T_{12}^T & \Omega_{14} & \Omega_{15} & \Omega_{16} & Q_3 W & -Y\\
\star & \Omega_{22} & 0 & -A Q_3^T & -A P A & 0 & 0 & 0\\
\star & \star & \Omega_{33} & 0 & 0 & 0 & U_2 \Lambda_2 & 0\\
\star & \star & \star & \Omega_{44} & -P A & Q_3 B & Q_3 W & -Y\\
\star & \star & \star & \star & \Omega_{55} & 0 & 0 & 0\\
\star & \star & \star & \star & \star & -U_1 & 0 & 0\\
\star & \star & \star & \star & \star & \star & -U_2 & 0\\
\star & \star & \star & \star & \star & \star & \star & -\beta I
\end{bmatrix} < 0, \tag{7}
\]

where

\[
\begin{aligned}
\Omega_{11} &= \varepsilon P - 2 P A + \sigma^2 Q_1 + Q_2 - Y C - C^T Y^T - U_1 \Lambda_1 + \beta H^T H, \qquad \Omega_{12} = P A - Q_3 A,\\
\Omega_{14} &= P - Q_3 - C^T Y^T, \qquad \Omega_{15} = A P A - \varepsilon P A, \qquad \Omega_{16} = U_1 \Lambda_2 + Q_3 B,\\
\Omega_{22} &= -Q_2 e^{-\varepsilon\sigma} - Q_3 A - A Q_3^T, \qquad \Omega_{33} = \tau T_{11} - \alpha T_{12}^T - \alpha T_{12} - U_2 \Lambda_1,\\
\Omega_{44} &= \alpha^2 (e^{\varepsilon\tau} - 1)\varepsilon^{-1}\, T_{22} - Q_3 - Q_3^T, \qquad \Omega_{55} = \varepsilon A P A - Q_1 e^{-\varepsilon\sigma},\\
\Lambda_1 &= \operatorname{diag}\!\big(l_1^- l_1^+, \ldots, l_n^- l_n^+\big), \qquad \Lambda_2 = \operatorname{diag}\!\big((l_1^- + l_1^+)/2, \ldots, (l_n^- + l_n^+)/2\big).
\end{aligned}
\]

Then the error system (5) is globally exponentially stable with a convergence rate 0.5ε. Moreover, the gain matrix K of the state estimator (4) is given by

\[
K = Q_3^{-1} Y.
\]

Proof. Construct a Lyapunov–Krasovskii functional as follows:

\[
V(t, e(t)) = V_1(t, e(t)) + V_2(t, e(t)) + V_3(t, e(t)) + V_4(t, e(t)) + V_5(t, e(t)), \tag{8}
\]

where

\[
\begin{aligned}
V_1(t, e(t)) &= e^{\varepsilon t}\left[e(t) - A\int_{t-\sigma}^{t} e(u)\,du\right]^T P \left[e(t) - A\int_{t-\sigma}^{t} e(u)\,du\right],\\
V_2(t, e(t)) &= \sigma \int_{t-\sigma}^{t}\int_{s}^{t} e^{\varepsilon u}\, e^T(u) Q_1 e(u)\,du\,ds,\\
V_3(t, e(t)) &= \int_{t-\sigma}^{t} e^{\varepsilon s}\, e^T(s) Q_2 e(s)\,ds,\\
V_4(t, e(t)) &= \int_{0}^{t} e^{\varepsilon u} \int_{u-\tau(u)}^{u}
\begin{bmatrix} e(u-\tau(u)) \\ \alpha \dot{e}(s) \end{bmatrix}^T
\begin{bmatrix} T_{11} & T_{12} \\ \star & T_{22} \end{bmatrix}
\begin{bmatrix} e(u-\tau(u)) \\ \alpha \dot{e}(s) \end{bmatrix} ds\,du,\\
V_5(t, e(t)) &= \alpha^2 \int_{-\tau}^{0}\int_{t+u}^{t} e^{\varepsilon(s-u)}\, \dot{e}^T(s) T_{22} \dot{e}(s)\,ds\,du.
\end{aligned}
\]

Calculating the time derivative of V along the solutions of (5) on each continuity interval [t_{k−1}, t_k), k ∈ Z⁺, one can deduce that

\[
\begin{aligned}
e^{-\varepsilon t} D^+ V_1
&= \varepsilon \left[e(t) - A\int_{t-\sigma}^{t} e(u)\,du\right]^T P \left[e(t) - A\int_{t-\sigma}^{t} e(u)\,du\right]\\
&\quad + 2\left[e(t) - A\int_{t-\sigma}^{t} e(u)\,du\right]^T P \left[\dot{e}(t) - A e(t) + A e(t-\sigma)\right]\\
&= \varepsilon e^T(t) P e(t) - 2\varepsilon e^T(t) P A \int_{t-\sigma}^{t} e(u)\,du + \varepsilon \left[\int_{t-\sigma}^{t} e(u)\,du\right]^T A P A \int_{t-\sigma}^{t} e(u)\,du\\
&\quad + 2 e^T(t) P \dot{e}(t) - 2 e^T(t) P A e(t) + 2 e^T(t) P A e(t-\sigma) - 2\left[\int_{t-\sigma}^{t} e(u)\,du\right]^T A P \dot{e}(t)\\
&\quad + 2\left[\int_{t-\sigma}^{t} e(u)\,du\right]^T A P A e(t) - 2\left[\int_{t-\sigma}^{t} e(u)\,du\right]^T A P A e(t-\sigma). \tag{9}
\end{aligned}
\]

It follows from Lemma 2.1 that

\[
\begin{aligned}
e^{-\varepsilon t} D^+ V_2
&= \sigma^2 e^T(t) Q_1 e(t) - \sigma e^{-\varepsilon t} \int_{t-\sigma}^{t} e^{\varepsilon u}\, e^T(u) Q_1 e(u)\,du\\
&\le \sigma^2 e^T(t) Q_1 e(t) - \sigma e^{-\varepsilon\sigma} \int_{t-\sigma}^{t} e^T(u) Q_1 e(u)\,du\\
&\le \sigma^2 e^T(t) Q_1 e(t) - e^{-\varepsilon\sigma} \left[\int_{t-\sigma}^{t} e(u)\,du\right]^T Q_1 \left[\int_{t-\sigma}^{t} e(u)\,du\right]. \tag{10}
\end{aligned}
\]

By directly computing the time derivatives of V₃, V₄ and V₅, it follows that

\[
e^{-\varepsilon t} D^+ V_3 = e^T(t) Q_2 e(t) - e^{-\varepsilon\sigma}\, e^T(t-\sigma) Q_2 e(t-\sigma), \tag{11}
\]

\[
\begin{aligned}
e^{-\varepsilon t} D^+ V_4
&= \int_{t-\tau(t)}^{t} \begin{bmatrix} e(t-\tau(t)) \\ \alpha \dot{e}(s) \end{bmatrix}^T \begin{bmatrix} T_{11} & T_{12} \\ \star & T_{22} \end{bmatrix} \begin{bmatrix} e(t-\tau(t)) \\ \alpha \dot{e}(s) \end{bmatrix} ds\\
&= \tau(t)\, e^T(t-\tau(t)) T_{11} e(t-\tau(t)) + 2\alpha\, e^T(t) T_{12}^T e(t-\tau(t))\\
&\quad - 2\alpha\, e^T(t-\tau(t)) T_{12}^T e(t-\tau(t)) + \alpha^2 \int_{t-\tau(t)}^{t} \dot{e}^T(s) T_{22} \dot{e}(s)\,ds\\
&\le e^T(t-\tau(t)) \left[\tau T_{11} - 2\alpha T_{12}^T\right] e(t-\tau(t)) + 2\alpha\, e^T(t) T_{12}^T e(t-\tau(t)) + \alpha^2 \int_{t-\tau}^{t} \dot{e}^T(s) T_{22} \dot{e}(s)\,ds, \tag{12}
\end{aligned}
\]

\[
e^{-\varepsilon t} D^+ V_5 = \frac{\alpha^2 (e^{\varepsilon\tau} - 1)}{\varepsilon}\, \dot{e}^T(t) T_{22} \dot{e}(t) - \alpha^2 \int_{t-\tau}^{t} \dot{e}^T(s) T_{22} \dot{e}(s)\,ds. \tag{13}
\]

It is well known that for any n×n diagonal matrices U₁ > 0, U₂ > 0, the following inequalities hold:

\[
\begin{bmatrix} e(t) \\ g(e(t)) \end{bmatrix}^T
\begin{bmatrix} -U_1 \Lambda_1 & U_1 \Lambda_2 \\ \star & -U_1 \end{bmatrix}
\begin{bmatrix} e(t) \\ g(e(t)) \end{bmatrix} \ge 0,
\qquad
\begin{bmatrix} e(t-\tau(t)) \\ g(e(t-\tau(t))) \end{bmatrix}^T
\begin{bmatrix} -U_2 \Lambda_1 & U_2 \Lambda_2 \\ \star & -U_2 \end{bmatrix}
\begin{bmatrix} e(t-\tau(t)) \\ g(e(t-\tau(t))) \end{bmatrix} \ge 0. \tag{14}
\]

Moreover, it follows from (5) that

\[
\begin{aligned}
0 &= 2\dot{e}^T(t) Q_3 \left[-\dot{e}(t) - A e(t-\sigma) + B g(e(t)) + W g(e(t-\tau(t))) - K C e(t) - K H(t, e(t))\right]\\
&= -2\dot{e}^T(t) Q_3 \dot{e}(t) - 2\dot{e}^T(t) Q_3 A e(t-\sigma) + 2\dot{e}^T(t) Q_3 B g(e(t))\\
&\quad + 2\dot{e}^T(t) Q_3 W g(e(t-\tau(t))) - 2\dot{e}^T(t) Q_3 K C e(t) - 2\dot{e}^T(t) Q_3 K H(t, e(t)) \tag{15}
\end{aligned}
\]

and

\[
\begin{aligned}
0 &= 2 e^T(t) Q_3 \left[-\dot{e}(t) - A e(t-\sigma) + B g(e(t)) + W g(e(t-\tau(t))) - K C e(t) - K H(t, e(t))\right]\\
&= -2 e^T(t) Q_3 \dot{e}(t) - 2 e^T(t) Q_3 A e(t-\sigma) + 2 e^T(t) Q_3 B g(e(t))\\
&\quad + 2 e^T(t) Q_3 W g(e(t-\tau(t))) - 2 e^T(t) Q_3 K C e(t) - 2 e^T(t) Q_3 K H(t, e(t)). \tag{16}
\end{aligned}
\]

In addition, from (3) it can be deduced that

\[
H^T(t, e(t)) H(t, e(t)) = \|H(t, e(t))\|^2 = \|h(t, e(t)+\hat{x}) - h(t, \hat{x})\|^2 \le \|H e(t)\|^2 = e^T(t) H^T H e(t). \tag{17}
\]

Now, considering (9)–(17) and employing Lemma 2.2, one obtains that

\[
e^{-\varepsilon t} D^{+} V \le \zeta^{T}(t)\, \bar{\Omega}\, \zeta(t),
\]

where

\[
\zeta(t) = \left[e(t),\; e(t-\sigma),\; e(t-\tau(t)),\; \dot{e}(t),\; \int_{t-\sigma}^{t} e(s)\,ds,\; g(e(t)),\; g(e(t-\tau(t))),\; H(t, e(t))\right]^{T}
\]

and Ω̄ is obtained from the matrix on the left-hand side of (7) by replacing Y with Q₃K throughout. Letting Y = Q₃K, the matrix Ω̄ coincides with the matrix in (7). Hence, under condition (7) of the theorem, it yields

\[
e^{-\varepsilon t} D^{+} V \le 0, \quad t \in [t_{k-1}, t_k),\ k \in \mathbb{Z}^{+}. \tag{18}
\]

On the other hand, from System (5) we know

\[
\begin{aligned}
e(t_k) - A\int_{t_k-\sigma}^{t_k} e(u)\,du
&= e(t_k^-) - D_k\left[e(t_k^-) - A\int_{t_k-\sigma}^{t_k} e(u)\,du\right] - A\int_{t_k-\sigma}^{t_k} e(u)\,du\\
&= (I - D_k)\left[e(t_k^-) - A\int_{t_k-\sigma}^{t_k} e(u)\,du\right]. \tag{19}
\end{aligned}
\]

Moreover, it follows from (6) that

\[
\begin{bmatrix} P & (I-D_k)^T P \\ \star & P \end{bmatrix} \ge 0
\;\Longrightarrow\;
\begin{bmatrix} I & -(I-D_k)^T \\ 0 & I \end{bmatrix}
\begin{bmatrix} P & (I-D_k)^T P \\ \star & P \end{bmatrix}
\begin{bmatrix} I & 0 \\ -(I-D_k) & I \end{bmatrix} \ge 0 \tag{20}
\]

\[
\;\Longrightarrow\;
\begin{bmatrix} P - (I-D_k)^T P (I-D_k) & 0 \\ \star & P \end{bmatrix} \ge 0
\;\Longrightarrow\;
P - (I-D_k)^T P (I-D_k) \ge 0.
\]

Together with (19) and (20), it yields

\[
\begin{aligned}
V_1(t_k)
&= e^{\varepsilon t_k}\left[e(t_k) - A\int_{t_k-\sigma}^{t_k} e(u)\,du\right]^T P \left[e(t_k) - A\int_{t_k-\sigma}^{t_k} e(u)\,du\right]\\
&= e^{\varepsilon t_k}\left[e(t_k^-) - A\int_{t_k-\sigma}^{t_k} e(u)\,du\right]^T (I-D_k)^T P (I-D_k) \left[e(t_k^-) - A\int_{t_k-\sigma}^{t_k} e(u)\,du\right]\\
&\le e^{\varepsilon t_k}\left[e(t_k^-) - A\int_{t_k-\sigma}^{t_k} e(u)\,du\right]^T P \left[e(t_k^-) - A\int_{t_k-\sigma}^{t_k} e(u)\,du\right] = V_1(t_k^-),
\end{aligned}
\]

which implies that

\[
V(t_k) \le V(t_k^-), \quad k \in \mathbb{Z}^+. \tag{21}
\]

By (18) and (21), we get V(t) ≤ V(0) for all t ≥ 0.


Hence, utilizing Lemma 2.1, we know

\[
\begin{aligned}
\left\|\int_{t-\sigma}^{t} e(u)\,du\right\|^2
&\le \frac{1}{\lambda_{\min}(Q_2)} \left[\int_{t-\sigma}^{t} e(u)\,du\right]^T Q_2 \left[\int_{t-\sigma}^{t} e(u)\,du\right]
\le \frac{\sigma}{\lambda_{\min}(Q_2)} \int_{t-\sigma}^{t} e^T(s) Q_2 e(s)\,ds\\
&\le \frac{\sigma}{\lambda_{\min}(Q_2)}\, e^{-\varepsilon(t-\sigma)} \int_{t-\sigma}^{t} e^{\varepsilon s}\, e^T(s) Q_2 e(s)\,ds
\le \frac{\sigma}{\lambda_{\min}(Q_2)}\, e^{-\varepsilon(t-\sigma)}\, V(0),
\end{aligned}
\]

which implies that

\[
\left\|\int_{t-\sigma}^{t} e(u)\,du\right\| \le \sqrt{\frac{\sigma\, V(0)}{\lambda_{\min}(Q_2)}}\; e^{-0.5\varepsilon(t-\sigma)}. \tag{22}
\]

Similarly, considering the definition of V₁, we can obtain

\[
\left\|e(t) - A\int_{t-\sigma}^{t} e(u)\,du\right\| \le \sqrt{\frac{V(0)}{\lambda_{\min}(P)}}\; e^{-0.5\varepsilon(t-\sigma)},
\]

which together with (22) yields

\[
\begin{aligned}
\|e(t)\|
&\le \left\|A\int_{t-\sigma}^{t} e(u)\,du\right\| + \left\|e(t) - A\int_{t-\sigma}^{t} e(u)\,du\right\|\\
&\le \max_i a_i \left\|\int_{t-\sigma}^{t} e(u)\,du\right\| + \sqrt{\frac{V(0)}{\lambda_{\min}(P)}}\; e^{-0.5\varepsilon(t-\sigma)}\\
&\le \left[\max_i a_i \sqrt{\frac{\sigma}{\lambda_{\min}(Q_2)}} + \sqrt{\frac{1}{\lambda_{\min}(P)}}\right] \sqrt{V(0)}\; e^{-0.5\varepsilon(t-\sigma)}. \tag{23}
\end{aligned}
\]

Note that

\[
\begin{aligned}
V(0)
&= \left[e(0) - A\int_{-\sigma}^{0} e(u)\,du\right]^T P \left[e(0) - A\int_{-\sigma}^{0} e(u)\,du\right]
+ \sigma \int_{-\sigma}^{0}\int_{s}^{0} e^{\varepsilon u}\, e^T(u) Q_1 e(u)\,du\,ds\\
&\quad + \int_{-\sigma}^{0} e^{\varepsilon s}\, e^T(s) Q_2 e(s)\,ds
+ \alpha^2 \int_{-\tau}^{0}\int_{u}^{0} e^{\varepsilon(s-u)}\, \dot{e}^T(s) T_{22} \dot{e}(s)\,ds\,du\\
&\le \left\{2\lambda_{\max}(P)\big(1 + \sigma^2 \max_i a_i^2\big)
+ \left[\sigma^2\lambda_{\max}(Q_1) + \sigma\lambda_{\max}(Q_2)\right]\frac{1 - e^{-\sigma\varepsilon}}{\varepsilon}
+ \alpha^2\lambda_{\max}(T_{22})\,\frac{(1 - e^{-\tau\varepsilon})(e^{\tau\varepsilon} - 1)}{\varepsilon^2}\right\}
\|\varphi\|^2_{\max\{\sigma,\tau\}} < \infty.
\end{aligned}
\]

Substituting the above inequality into (23), we get

\[
\|e(t)\| \le M \|\varphi\|_{\max\{\sigma,\tau\}}\, e^{-0.5\varepsilon t}, \quad t > 0,
\]

where

\[
M = e^{0.5\varepsilon\sigma}\left[\max_i a_i \sqrt{\frac{\sigma}{\lambda_{\min}(Q_2)}} + \sqrt{\frac{1}{\lambda_{\min}(P)}}\right]
\left\{2\lambda_{\max}(P)\big(1 + \sigma^2 \max_i a_i^2\big)
+ \left[\sigma^2\lambda_{\max}(Q_1) + \sigma\lambda_{\max}(Q_2)\right]\frac{1 - e^{-\sigma\varepsilon}}{\varepsilon}
+ \alpha^2\lambda_{\max}(T_{22})\,\frac{(1 - e^{-\tau\varepsilon})(e^{\tau\varepsilon} - 1)}{\varepsilon^2}\right\}^{1/2}.
\]

Hence, System (5) is globally exponentially stable with a convergence rate 0.5ε, and the proof is completed.

Remark 3.2 One may observe that when calculating the time derivative of V₁, we do not substitute the right-hand side of System (5) into D⁺V₁ directly, but instead account for it via (15) and (16). This idea plays an important role in designing the gain matrix K. In addition, the constructions of V₄ and V₅ effectively avoid restrictions on the derivative of the time-varying delays. Hence, the results in this paper can be applied to neural networks with non-differentiable time-varying delays. It should be noted, however, that the criterion given in Theorem 3.1 depends on the leakage delay and on the upper bound of the time-varying delays.

In particular, if we take σ = 0, then the network model (1) becomes

\[
\begin{cases}
\dot{x}(t) = -A x(t) + B f(x(t)) + W f(x(t-\tau(t))) + J(t), & t > 0,\ t \neq t_k,\\
x(t_k) - x(t_k^-) = -D_k x(t_k^-), & k \in \mathbb{Z}^+.
\end{cases}
\tag{24}
\]

Suppose that the output of System (24) is of the same form as (2). Similarly, we introduce the following full-order state estimator:

\[
\begin{cases}
\dot{\hat{x}}(t) = -A\hat{x}(t) + B f(\hat{x}(t)) + W f(\hat{x}(t-\tau(t))) + J(t) + K\big[y(t) - C\hat{x}(t) - h(t, \hat{x}(t))\big], & t > 0,\ t \neq t_k,\\
\hat{x}(t_k) - \hat{x}(t_k^-) = -D_k \hat{x}(t_k^-), & k \in \mathbb{Z}^+.
\end{cases}
\tag{25}
\]

Then the error system between (24) and (25) may be expressed as

\[
\begin{cases}
\dot{e}(t) = -A e(t) + B g(e(t)) + W g(e(t-\tau(t))) - K C e(t) - K H(t, e(t)), & t > 0,\ t \neq t_k,\\
e(t_k) - e(t_k^-) = -D_k e(t_k^-), & k \in \mathbb{Z}^+.
\end{cases}
\tag{26}
\]

For System (26), we have the following corollary.

Corollary 3.3 Assume that assumptions (H1)–(H4) hold. Suppose there exist three constants ε > 0, α > 0, β > 0, two n×n matrices P > 0, Q₂ > 0, an n×n invertible matrix Q₃, an n×m real matrix Y, two n×n diagonal matrices U₁ > 0, U₂ > 0, and a 2n×2n matrix

\[
\begin{bmatrix} T_{11} & T_{12} \\ \star & T_{22} \end{bmatrix} > 0
\]

such that

\[
\begin{bmatrix} P & (I - D_k)^T P \\ \star & P \end{bmatrix} \ge 0, \quad k \in \mathbb{Z}^+, \tag{27}
\]

and

\[
\begin{bmatrix}
\Omega_{11} & \Omega_{12} & \alpha T_{12}^T & \Omega_{14} & \Omega_{15} & \Omega_{16} & Q_3 W & -Y\\
\star & \Omega_{22} & 0 & -A Q_3^T & -A P A & 0 & 0 & 0\\
\star & \star & \Omega_{33} & 0 & 0 & 0 & U_2 \Lambda_2 & 0\\
\star & \star & \star & \Omega_{44} & -P A & Q_3 B & Q_3 W & -Y\\
\star & \star & \star & \star & -\varepsilon A P A & 0 & 0 & 0\\
\star & \star & \star & \star & \star & -U_1 & 0 & 0\\
\star & \star & \star & \star & \star & \star & -U_2 & 0\\
\star & \star & \star & \star & \star & \star & \star & -\beta I
\end{bmatrix} < 0,
\]

where

\[
\begin{aligned}
\Omega_{11} &= \varepsilon P - 2 P A + Q_2 - Y C - C^T Y^T - U_1 \Lambda_1 + \beta H^T H, \qquad \Omega_{12} = P A - Q_3 A,\\
\Omega_{14} &= P - Q_3 - C^T Y^T, \qquad \Omega_{15} = A P A - \varepsilon P A, \qquad \Omega_{16} = U_1 \Lambda_2 + Q_3 B,\\
\Omega_{22} &= -Q_2 - Q_3 A - A Q_3^T, \qquad \Omega_{33} = \tau T_{11} - \alpha T_{12}^T - \alpha T_{12} - U_2 \Lambda_1,\\
\Omega_{44} &= \alpha^2 (e^{\varepsilon\tau} - 1)\varepsilon^{-1}\, T_{22} - Q_3 - Q_3^T,\\
\Lambda_1 &= \operatorname{diag}\!\big(l_1^- l_1^+, \ldots, l_n^- l_n^+\big), \qquad \Lambda_2 = \operatorname{diag}\!\big((l_1^- + l_1^+)/2, \ldots, (l_n^- + l_n^+)/2\big).
\end{aligned}
\]

Then the error system (26) is globally exponentially stable with a convergence rate 0.5ε. Moreover, the gain matrix K of the state estimator (25) is given by

\[
K = Q_3^{-1} Y.
\]
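In practice, one searches for matrices satisfying the LMI conditions with a semidefinite-programming solver and then recovers the estimator gain from K = Q₃⁻¹Y, which is a plain linear solve. The matrices below are placeholders standing in for a solver's output, not values from the paper:

```python
import numpy as np

# Placeholder solver output: an invertible Q3 and a rectangular Y (here n = m = 2)
Q3 = np.array([[2.0, 0.5],
               [0.0, 1.0]])
Y = np.array([[1.0, -0.2],
              [0.4,  0.8]])

# Gain matrix K = Q3^{-1} Y, via a linear solve rather than an explicit inverse
K = np.linalg.solve(Q3, Y)

# Sanity check: Q3 K reproduces Y
print(np.allclose(Q3 @ K, Y))  # True
```

Note that, as emphasized in the introduction, Q₃ is only required to be invertible here, not positive definite.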

Remark 3.4 If the impulsive matrices D_k ≡ dI, k ∈ Z⁺, where d is a constant, then the LMIs in (6) and (27) can be removed provided d ∈ [0, 2].
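The range d ∈ [0, 2] in Remark 3.4 is exactly the set of d for which (6) holds automatically: with D_k = dI, condition (6) reduces to (1 − d)² ≤ 1. A numerical spot check of (6) with the illustrative choice P = I:

```python
import numpy as np

def lmi6_holds(P, Dk):
    """Check [P, (I-Dk)^T P; P (I-Dk), P] >= 0 via its smallest eigenvalue."""
    n = P.shape[0]
    I = np.eye(n)
    top = np.hstack([P, (I - Dk).T @ P])
    bot = np.hstack([P @ (I - Dk), P])
    S = np.vstack([top, bot])          # symmetric by construction
    return bool(np.min(np.linalg.eigvalsh(S)) >= -1e-10)

P = np.eye(2)
inside = all(lmi6_holds(P, d * np.eye(2)) for d in (0.0, 0.5, 1.0, 2.0))
outside = lmi6_holds(P, 2.5 * np.eye(2))
print(inside, outside)  # True False
```

With P = I and D_k = dI the block matrix has eigenvalues 1 ± |1 − d|, which are nonnegative precisely when d ∈ [0, 2].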

Remark 3.5 The obtained criterion in this paper can be applied to state estimation problem of neural networks with leakage delay and/or impulsive effects. However, the considered leakage delay is a constant one. How to establish the state estimation criterion for neural networks with leakage time-varying delays may be a troublesome issue. In the future, we will do some research on this problem.

4 Numerical examples

Example 4.1 Consider a simple two-dimensional impulsive neural network model with leakage delay:

\[
\begin{cases}
\dot{x}(t) = -A x(t-\sigma) + B f(x(t)) + W f(x(t-\tau(t))) + J(t), & t > 0,\ t \neq t_k,\\[2pt]
x(t_k) - x(t_k^-) = -D_k\Big[x(t_k^-) - A\displaystyle\int_{t_k-\sigma}^{t_k} x(u)\,du\Big], & k \in \mathbb{Z}^+,
\end{cases}
\tag{28}
\]

and the full-order observer

\[
\begin{cases}
\dot{\hat{x}}(t) = -A\hat{x}(t-\sigma) + B f(\hat{x}(t)) + W f(\hat{x}(t-\tau(t))) + J(t) + K\big[y(t) - C\hat{x}(t) - h(t, \hat{x}(t))\big], & t > 0,\ t \neq t_k,\\[2pt]
\hat{x}(t_k) - \hat{x}(t_k^-) = -D_k\Big[\hat{x}(t_k^-) - A\displaystyle\int_{t_k-\sigma}^{t_k} \hat{x}(u)\,du\Big], & k \in \mathbb{Z}^+,
\end{cases}
\tag{29}
\]

where f₁ = f₂ = tanh(s), τ(t) = 0.09 − 0.01[sin t], σ = 0.1, D_k = 0.2I, and t_k = 2k, k ∈ Z⁺. The disturbance function is h(t, x) = x, and the remaining parameters are given as follows:

\[
A = \begin{bmatrix} 0.3 & 0 \\ 0 & 0.2 \end{bmatrix},\quad
B = \begin{bmatrix} 0.2 & -0.32 \\ 0.28 & 0.4 \end{bmatrix},\quad
W = \begin{bmatrix} 0.5 & 0.24 \\ -0.86 & 0.7 \end{bmatrix},\quad
C = \begin{bmatrix} 2 & -0.1 \\ 0.1 & 2 \end{bmatrix},\quad
J(t) = \begin{bmatrix} 2\sin t \cdot \cos 0.4t \\ 3\cos 1.5t \cdot \sin 0.5t \end{bmatrix}.
\]
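For readers who want to reproduce the qualitative behaviour, the plant (28) can be integrated with a forward-Euler scheme that keeps a history buffer for the delayed terms and applies the impulsive map at t_k = 2k. This is only a sketch: the step size, the horizon, and the constant initial history x(s) = (0.5, −0.3)ᵀ for s ≤ 0 are illustrative choices, not values from the paper; the observer (29) would be propagated the same way once a gain K is available.

```python
import math
import numpy as np

A = np.diag([0.3, 0.2])
B = np.array([[0.2, -0.32], [0.28, 0.4]])
W = np.array([[0.5, 0.24], [-0.86, 0.7]])
sigma, dt, T = 0.1, 1e-3, 10.0
Dk = 0.2 * np.eye(2)

def J(t):
    return np.array([2 * math.sin(t) * math.cos(0.4 * t),
                     3 * math.cos(1.5 * t) * math.sin(0.5 * t)])

def tau(t):
    # tau(t) = 0.09 - 0.01*[sin t], with [.] the integer (floor) function
    return 0.09 - 0.01 * math.floor(math.sin(t))

steps = int(round(T / dt))
x = np.zeros((steps + 1, 2))
x[0] = np.array([0.5, -0.3])   # illustrative constant initial history
buf0 = x[0]

def delayed(k, delay):
    """State at time k*dt - delay, using the constant history for t <= 0."""
    j = k - int(round(delay / dt))
    return x[j] if j >= 0 else buf0

for k in range(steps):
    t = k * dt
    dx = (-A @ delayed(k, sigma) + B @ np.tanh(x[k])
          + W @ np.tanh(delayed(k, tau(t))) + J(t))
    x[k + 1] = x[k] + dt * dx
    # impulses at t_k = 2k:  x -> x - Dk [x - A * int_{t-sigma}^{t} x(u) du]
    tn = (k + 1) * dt
    if abs(tn / 2.0 - round(tn / 2.0)) < dt / 4:
        integ = dt * sum(delayed(k + 1, s * dt)
                         for s in range(int(round(sigma / dt))))
        x[k + 1] = x[k + 1] - Dk @ (x[k + 1] - A @ integ)

print(np.all(np.isfinite(x)))  # True: no blow-up over [0, 10]
```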
