
Discrete Dynamics in Nature and Society, Volume 2009, Article ID 490515, 20 pages, doi:10.1155/2009/490515

Research Article

Stochastic Stability of Neural Networks with Both Markovian Jump Parameters and Continuously Distributed Delays

Quanxin Zhu¹,² and Jinde Cao¹

¹ Department of Mathematics, Southeast University, Nanjing 210096, Jiangsu, China
² Department of Mathematics, Ningbo University, Ningbo 315211, Zhejiang, China

Correspondence should be addressed to Jinde Cao, jdca@seu.edu.cn

Received 4 March 2009; Accepted 29 June 2009

Recommended by Manuel De La Sen

The problem of stochastic stability is investigated for a class of neural networks with both Markovian jump parameters and continuously distributed delays. The jumping parameters are modeled as a continuous-time, finite-state Markov chain. By constructing appropriate Lyapunov-Krasovskii functionals, some novel stability conditions are obtained in terms of linear matrix inequalities (LMIs). The proposed LMI-based criteria are computationally efficient, as they can be easily checked by recently developed algorithms for solving LMIs. A numerical example is provided to show the effectiveness of the theoretical results and to demonstrate that the LMI criteria existing in the earlier literature fail. The results obtained in this paper improve and generalize those given in the previous literature.

Copyright © 2009 Q. Zhu and J. Cao. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1. Introduction

In recent years, neural networks (especially recurrent neural networks, Hopfield neural networks, and cellular neural networks) have been successfully applied in many areas such as signal processing, image processing, pattern recognition, fault diagnosis, associative memory, and combinatorial optimization; see, for example, [1–5]. One of the most important issues in these applications is the stability of the equilibrium point of neural networks, and a major concern is to find stability conditions, that is, conditions guaranteeing the stability of the equilibrium point. To this end, extensive literature has been presented; see, for example, [6–22] and the references therein.

It should be noted that the methods in the literature have seldom considered the case in which the systems have Markovian jump parameters, owing to the mathematical difficulties involved.

However, neural networks in real life often have a phenomenon of information latching.


It is recognized that a way of dealing with this information latching problem is to extract finite-state representations (also called modes or clusters). In fact, such a neural network with information latching may have finite modes, the modes may switch (or jump) from one to another at different times, and the switching (or jumping) between two arbitrarily different modes can be governed by a Markov chain. Hence, neural networks with Markovian jump parameters are of great significance in modeling a class of neural networks with finite modes.

On the other hand, time delay is frequently a major source of instability and poor performance in neural networks (e.g., see [6, 23, 24]), and so the stability analysis of neural networks with time delays is an important research topic. The existing works on neural networks with time delays can be classified into three categories: constant delays, time-varying delays, and distributed delays. It is noticed that most works in the literature have focused on the former two simpler cases, constant delays or time-varying delays (e.g., see [6, 8–10, 12–16, 19–22]). However, as pointed out in [18], neural networks usually have a spatial nature due to the presence of a multitude of parallel pathways with a variety of axon sizes and lengths, and so it is desirable to model them by introducing continuously distributed delays over a certain duration of time such that the distant past has less influence than the recent behavior of the state. Nevertheless, only a few researchers have discussed neural networks with continuously distributed delays [18, 25]. Therefore, there is enough room to develop novel, improved stability conditions.

Motivated by the above discussion, the objective of this paper is to study the stability of a class of neural networks with both Markovian jump parameters and continuously distributed delays. Moreover, to make the model more general and practical, the factor of noise disturbance is considered in this paper, since noise disturbance is also a major source of instability [7]. To the best of the authors' knowledge, the stability analysis problem for a class of stochastic neural networks with both Markovian jump parameters and continuously distributed delays is still an open problem that has not been properly studied. Therefore, this paper is the first attempt to introduce and investigate the problem of stochastic stability for a class of neural networks with both Markovian jump parameters and continuously distributed delays. By utilizing the Lyapunov stability theory and the linear matrix inequality (LMI) technique, some novel delay-dependent conditions are obtained to guarantee the stochastically asymptotic stability of the equilibrium point. The proposed LMI-based criteria are computationally efficient, as they can be easily checked by recently developed standard algorithms such as interior-point methods [24] for solving LMIs. Finally, a numerical example is provided to illustrate the effectiveness of the theoretical results and to demonstrate that the LMI criteria existing in the earlier literature fail. The results obtained in this paper improve and generalize those given in the previous literature.

The remainder of this paper is organized as follows. In Section 2, the model of a class of stochastic neural networks with both Markovian jump parameters and continuously distributed delays is introduced, and some assumptions needed in this paper are presented.

By means of the Lyapunov-Krasovskii functional approach, our main results are established in Section 3. In Section 4, a numerical example is given to show the effectiveness of the obtained results. Finally, in Section 5, the paper is concluded with some general remarks.

Notation. Throughout this paper, the following notations will be used. $\mathbb{R}^n$ and $\mathbb{R}^{n\times n}$ denote the $n$-dimensional Euclidean space and the set of all $n\times n$ real matrices, respectively. The superscript "$T$" denotes the transpose of a matrix or vector. $\mathrm{trace}(\cdot)$ denotes the trace of the corresponding matrix, and $I$ denotes the identity matrix with compatible dimensions. For square matrices $M_1$ and $M_2$, the notation $M_1 > (\ge, <, \le)\, M_2$ means that $M_1 - M_2$ is a positive-definite (positive-semidefinite, negative-definite, negative-semidefinite) matrix. Let $w(t) = (w_1(t), \ldots, w_n(t))^T$ be an $n$-dimensional Brownian motion defined on a complete probability space $(\Omega, \mathcal{F}, P)$ with a natural filtration $\{\mathcal{F}_t\}_{t\ge 0}$. Also, let $\tau > 0$ and let $C([-\tau, 0]; \mathbb{R}^n)$ denote the family of continuous functions $\varphi$ from $[-\tau, 0]$ to $\mathbb{R}^n$ with the uniform norm $\|\varphi\| = \sup_{-\tau\le\theta\le 0}|\varphi(\theta)|$. Denote by $L^2_{\mathcal{F}_t}([-\tau, 0]; \mathbb{R}^n)$ the family of all $\mathcal{F}_t$-measurable, $C([-\tau, 0]; \mathbb{R}^n)$-valued random variables $\xi = \{\xi(\theta) : -\tau \le \theta \le 0\}$ such that $\int_{-\tau}^{0} E|\xi(s)|^2\,ds < \infty$, where $E$ stands for the expectation operator with respect to the given probability measure $P$.

2. Model Description and Problem Formulation

Let $\{r(t), t \ge 0\}$ be a right-continuous Markov chain on a complete probability space $(\Omega, \mathcal{F}, P)$ taking values in a finite state space $S = \{1, 2, \ldots, N\}$ with generator $Q = (q_{ij})_{N\times N}$ given by

$$P\{r(t+\Delta t) = j \mid r(t) = i\} =
\begin{cases}
q_{ij}\,\Delta t + o(\Delta t), & \text{if } i \ne j,\\
1 + q_{ii}\,\Delta t + o(\Delta t), & \text{if } i = j,
\end{cases} \tag{2.1}$$

where $\Delta t > 0$ and $\lim_{\Delta t\to 0} o(\Delta t)/\Delta t = 0$. Here, $q_{ij} \ge 0$ is the transition rate from $i$ to $j$ if $i \ne j$, while $q_{ii} = -\sum_{j \ne i} q_{ij}$.
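For intuition, a path of such a Markov chain can be sampled by drawing an exponential holding time in the current state and then jumping according to the embedded chain. The following is a minimal sketch of this standard construction (the function name and interface are illustrative, not part of the paper):

```python
import numpy as np

def simulate_markov_chain(Q, r0, T, rng=None):
    """Sample one path of a continuous-time Markov chain with generator Q
    on [0, T], starting from state r0.  Returns jump times and visited states."""
    rng = np.random.default_rng() if rng is None else rng
    Q = np.asarray(Q, dtype=float)
    times, states = [0.0], [r0]
    t, i = 0.0, r0
    while t < T:
        rate = -Q[i, i]                       # total rate of leaving state i
        if rate <= 0:                         # absorbing state
            break
        t += rng.exponential(1.0 / rate)      # exponential holding time
        if t >= T:
            break
        probs = Q[i].copy()
        probs[i] = 0.0
        probs /= rate                         # embedded-chain jump probabilities
        i = int(rng.choice(len(probs), p=probs))
        times.append(t)
        states.append(i)
    return np.array(times), np.array(states)

# Example: the two-state generator used later in Section 4 (states labeled 0 and 1 here)
Q = [[-8.0, 8.0], [5.0, -5.0]]
jump_times, visited = simulate_markov_chain(Q, r0=0, T=50.0)
```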

In this paper we consider a class of neural networks with both Markovian jump parameters and continuously distributed delays, described by the following integro-differential equation:

$$\dot{x}(t) = -D(r(t))x(t) + A(r(t))f(x(t)) + B(r(t))g(x(t-\tau)) + C(r(t))\int_{-\infty}^{t} R(t-s)\,h(x(s))\,ds + V, \tag{2.2}$$

where $x(t) = (x_1(t), x_2(t), \ldots, x_n(t))^T$ is the state vector associated with the $n$ neurons, and the diagonal matrix $D(r(t)) = \mathrm{diag}(d_1(r(t)), d_2(r(t)), \ldots, d_n(r(t)))$ has positive entries $d_i(r(t)) > 0$ ($i = 1, 2, \ldots, n$). The matrices $A(r(t)) = (a_{ij}(r(t)))_{n\times n}$, $B(r(t)) = (b_{ij}(r(t)))_{n\times n}$, and $C(r(t)) = (c_{ij}(r(t)))_{n\times n}$ are, respectively, the connection weight matrix, the discretely delayed connection weight matrix, and the distributively delayed connection weight matrix. $f(x(t)) = (f_1(x_1(t)), f_2(x_2(t)), \ldots, f_n(x_n(t)))^T$, $g(x(t)) = (g_1(x_1(t)), \ldots, g_n(x_n(t)))^T$, and $h(x(t)) = (h_1(x_1(t)), \ldots, h_n(x_n(t)))^T$ denote the neuron activation functions, and $V = (V_1, V_2, \ldots, V_n)^T$ denotes a constant external input vector. The constant $\tau > 0$ denotes the time delay, and $R = (R_1, R_2, \ldots, R_n)^T$ denotes the delay kernel vector, where each $R_i$ is a real-valued nonnegative continuous function defined on $[0, \infty)$ such that $\int_0^{\infty} R_i(s)\,ds = 1$ for $i = 1, 2, \ldots, n$.

In this paper we will investigate a more general model in which environmental noise acts on system (2.2), so that the model becomes the following stochastic integro-differential equation:

$$dx(t) = \Big[-D(r(t))x(t) + A(r(t))f(x(t)) + B(r(t))g(x(t-\tau)) + C(r(t))\int_{-\infty}^{t} R(t-s)\,h(x(s))\,ds + V\Big]dt + \sigma(x(t), x(t-\tau), t, r(t))\,dw(t), \tag{2.3}$$

where $\sigma : \mathbb{R}^n \times \mathbb{R}^n \times \mathbb{R} \times S \to \mathbb{R}^{n\times n}$ is the noise perturbation.
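In numerical work, the distributed-delay term in (2.3) has to be approximated by truncating the integral over $(-\infty, t]$ to the stored trajectory. The following is a minimal sketch of evaluating the drift for one fixed mode under such a truncation; it assumes, purely for illustration, the exponential kernel $R(s) = e^{-s}$ used later in Section 4, and all names are placeholders rather than definitions from the paper.

```python
import numpy as np

def drift(x, x_delay, hist_times, hist_states, D, A, B, C, V, f, g, h):
    """Drift of (2.3) for a fixed mode:
    -D x + A f(x) + B g(x(t - tau)) + C * (distributed-delay term) + V.
    The integral over (-inf, t] is truncated to the stored history and
    approximated by a trapezoidal rule, assuming R(s) = exp(-s)."""
    ages = hist_times[-1] - hist_times                 # t - s for each stored sample
    kernel = np.exp(-ages)                             # R(t - s) = e^{-(t - s)}
    integrand = kernel[:, None] * h(hist_states)       # rows: R(t - s) h(x(s))
    dt = np.diff(hist_times)[:, None]
    dist_term = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * dt, axis=0)  # trapezoid rule
    return -D @ x + A @ f(x) + B @ g(x_delay) + C @ dist_term + V
```

Here `hist_times` and `hist_states` hold the sampled past trajectory; in a full simulation they would be extended at every time step.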

Throughout this paper, the following conditions are supposed to hold.

Assumption 2.1. There exist six diagonal matrices $\check{U} = \mathrm{diag}(\check{u}_1, \ldots, \check{u}_n)$, $\hat{U} = \mathrm{diag}(\hat{u}_1, \ldots, \hat{u}_n)$, $\check{V} = \mathrm{diag}(\check{v}_1, \ldots, \check{v}_n)$, $\hat{V} = \mathrm{diag}(\hat{v}_1, \ldots, \hat{v}_n)$, $\check{W} = \mathrm{diag}(\check{w}_1, \ldots, \check{w}_n)$, and $\hat{W} = \mathrm{diag}(\hat{w}_1, \ldots, \hat{w}_n)$ satisfying

$$\check{u}_i \le \frac{f_i(\alpha) - f_i(\beta)}{\alpha - \beta} \le \hat{u}_i, \qquad \check{v}_i \le \frac{g_i(\alpha) - g_i(\beta)}{\alpha - \beta} \le \hat{v}_i, \qquad \check{w}_i \le \frac{h_i(\alpha) - h_i(\beta)}{\alpha - \beta} \le \hat{w}_i \tag{2.4}$$

for all $\alpha, \beta \in \mathbb{R}$, $\alpha \ne \beta$, $i = 1, 2, \ldots, n$.

Assumption 2.2. There exist two positive definite matrices $\Sigma_{1i}$ and $\Sigma_{2i}$ such that

$$\mathrm{trace}\big[\sigma^{T}(x, y, t, r(t))\,\sigma(x, y, t, r(t))\big] \le x^{T}\Sigma_{1i}x + y^{T}\Sigma_{2i}y \tag{2.5}$$

for all $x, y \in \mathbb{R}^n$ and $r(t) = i$, $i \in S$.

Assumption 2.3. $\sigma(0, 0, t, r(t)) \equiv 0$.

Under Assumptions 2.1 and 2.2, it is well known (see, e.g., Mao [16]) that for any initial data $x(\theta) = \xi(\theta)$ on $-\tau \le \theta \le 0$ in $L^2_{\mathcal{F}_t}([-\tau, 0]; \mathbb{R}^n)$, system (2.3) has a unique equilibrium point.

Now, let $x^{*} = (x_1^{*}, x_2^{*}, \ldots, x_n^{*})^T$ be the unique equilibrium point of (2.3), and set $y(t) = x(t) - x^{*}$. Then we can rewrite system (2.3) as

$$dy(t) = \Big[-D(r(t))y(t) + A(r(t))f(y(t)) + B(r(t))g(y(t-\tau)) + C(r(t))\int_{-\infty}^{t} R(t-s)\,h(y(s))\,ds\Big]dt + \sigma(y(t), y(t-\tau), t, r(t))\,dw(t), \tag{2.6}$$

where $f(y) = (f_1(y_1), f_2(y_2), \ldots, f_n(y_n))^T$, $g(y) = (g_1(y_1), \ldots, g_n(y_n))^T$, $h(y) = (h_1(y_1), \ldots, h_n(y_n))^T$, and $f_i(y_i) = f_i(y_i + x_i^{*}) - f_i(x_i^{*})$, $g_i(y_i) = g_i(y_i + x_i^{*}) - g_i(x_i^{*})$, $h_i(y_i) = h_i(y_i + x_i^{*}) - h_i(x_i^{*})$, $i = 1, 2, \ldots, n$.

Noting the facts that $f(0) = g(0) = h(0) = 0$ and $\sigma(0, 0, t, r(t)) \equiv 0$, the trivial solution of system (2.6) exists. Hence, to prove the stability of $x^{*}$ for (2.3), it is sufficient to prove the stability of the trivial solution of system (2.6). On the other hand, by Assumption 2.1 we have

$$\check{u}_i \le \frac{f_i(\alpha) - f_i(\beta)}{\alpha - \beta} \le \hat{u}_i, \tag{2.7}$$
$$\check{v}_i \le \frac{g_i(\alpha) - g_i(\beta)}{\alpha - \beta} \le \hat{v}_i, \tag{2.8}$$
$$\check{w}_i \le \frac{h_i(\alpha) - h_i(\beta)}{\alpha - \beta} \le \hat{w}_i \tag{2.9}$$

for all $\alpha, \beta \in \mathbb{R}$, $\alpha \ne \beta$, $i = 1, 2, \ldots, n$.

Let $y(t; \xi)$ denote the state trajectory from the initial data $\xi(\theta)$ on $-\tau \le \theta \le 0$ in $L^2_{\mathcal{F}_t}([-\tau, 0]; \mathbb{R}^n)$. Clearly, system (2.6) admits a trivial solution $y(t; 0) \equiv 0$ corresponding to the initial data $\xi = 0$. For simplicity, we write $y(t; \xi) = y(t)$. Let $C^{2,1}(\mathbb{R} \times \mathbb{R}^n \times S; \mathbb{R})$ denote the family of all nonnegative functions $V(t, y, i)$ on $\mathbb{R} \times \mathbb{R}^n \times S$ that are continuously twice differentiable in $y$ and once differentiable in $t$. If $V \in C^{2,1}(\mathbb{R} \times \mathbb{R}^n \times S; \mathbb{R})$, then along the trajectory of system (2.6) we define an operator $\mathcal{L}V$ from $\mathbb{R} \times \mathbb{R}^n \times S$ to $\mathbb{R}$ by

$$\begin{aligned}
\mathcal{L}V(t, y(t), i) ={}& V_t(t, y(t), i) + V_y(t, y(t), i)\Big[-D_i y(t) + A_i f(y(t)) + B_i g(y(t-\tau)) + C_i \int_{-\infty}^{t} R(t-s)\,h(y(s))\,ds\Big] \\
&+ \frac{1}{2}\,\mathrm{trace}\big[\sigma^{T}(y(t), y(t-\tau), t, r(t))\,V_{yy}(t, y(t), i)\,\sigma(y(t), y(t-\tau), t, r(t))\big] + \sum_{j=1}^{N} q_{ij} V(t, y(t), j),
\end{aligned} \tag{2.10}$$

where

$$V_t(t, y(t), i) = \frac{\partial V(t, y(t), i)}{\partial t}, \qquad
V_y(t, y(t), i) = \left(\frac{\partial V(t, y(t), i)}{\partial y_1}, \ldots, \frac{\partial V(t, y(t), i)}{\partial y_n}\right), \qquad
V_{yy}(t, y(t), i) = \left(\frac{\partial^2 V(t, y(t), i)}{\partial y_i\,\partial y_j}\right)_{n\times n}. \tag{2.11}$$

Now we give the definition of stochastic asymptotic stability for system (2.6).

Definition 2.4. The equilibrium point of (2.6) (or (2.3), equivalently) is said to be stochastically asymptotically stable in the mean square if, for every $\xi \in L^2_{\mathcal{F}_0}([-\tau, 0]; \mathbb{R}^n)$, the following equality holds:

$$\lim_{t \to \infty} E|y(t; \xi)|^2 = 0. \tag{2.12}$$

In the sequel, for simplicity, when $r(t) = i$, the matrices $C(r(t))$, $A(r(t))$, and $B(r(t))$ will be written as $C_i$, $A_i$, and $B_i$, respectively.
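Definition 2.4 is stated in terms of the limit of $E|y(t;\xi)|^2$; in simulations this quantity can only be estimated empirically from an ensemble of sample paths. A minimal sketch of such an estimate (the helper name and array layout are illustrative assumptions):

```python
import numpy as np

def mean_square_trajectory(paths):
    """Monte Carlo estimate of E|y(t)|^2 from an ensemble of sample paths.
    `paths` has shape (n_paths, n_steps, n); returns an array of length n_steps."""
    return np.mean(np.sum(paths ** 2, axis=2), axis=0)

# An ensemble `paths` would come from repeated stochastic simulations of (2.6);
# stability in the sense of Definition 2.4 corresponds to this estimate
# decaying toward zero as t grows.
```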

3. Main Results and Proofs

In this section, the stochastic asymptotic stability in the mean square of the equilibrium point of system (2.6) is investigated under Assumptions 2.1–2.3.

Theorem 3.1. Under Assumptions 2.1–2.3, the equilibrium point of (2.6) (or (2.3), equivalently) is stochastically asymptotically stable if there exist positive scalars $\lambda_i$, positive definite matrices $G$, $H$, $M$, $P_i$, $K_i$, $L_i$ ($i \in S$), and four positive diagonal matrices $Q_1$, $Q_2$, $Q_3$, and $F = \mathrm{diag}(f_1, f_2, \ldots, f_n)$ such that the following LMIs hold:

$$\begin{bmatrix}
\Gamma_{11} & 0 & P_iA_i & 0 & 0 & 0 & P_iB_i & P_iC_i\\
* & \Gamma_{22} & 0 & 0 & 0 & 0 & 0 & 0\\
* & * & \Gamma_{33} & 0 & 0 & 0 & 0 & 0\\
* & * & * & \Gamma_{44} & 0 & 0 & 0 & 0\\
* & * & * & * & -Q_3 + F & 0 & 0 & 0\\
* & * & * & * & * & -K_i & 0 & 0\\
* & * & * & * & * & * & -L_i & 0\\
* & * & * & * & * & * & * & -F
\end{bmatrix} < 0, \tag{3.1}$$

$$P_i \le \lambda_i I, \tag{3.2}$$
$$H \ge \sum_{j=1}^{N} q_{ij} K_j, \tag{3.3}$$
$$M \ge \sum_{j=1}^{N} q_{ij} L_j, \tag{3.4}$$

where the symbol "$*$" denotes the symmetric term of the matrix, and

$$\begin{aligned}
\Gamma_{11} &= -2P_iD_i + \lambda_i\Sigma_{1i} + UQ_1U + VQ_2V + WQ_3W + \sum_{j=1}^{N} q_{ij}P_j, \\
\Gamma_{22} &= -G + \lambda_i\Sigma_{2i}, \qquad \Gamma_{33} = -Q_1 + K_i + \tau H, \qquad \Gamma_{44} = -Q_2 + L_i + \tau M,\\
U &= \mathrm{diag}(u_1, u_2, \ldots, u_n), \quad V = \mathrm{diag}(v_1, v_2, \ldots, v_n), \quad W = \mathrm{diag}(w_1, w_2, \ldots, w_n),\\
u_i &= \max\{|\check{u}_i|, |\hat{u}_i|\}, \quad v_i = \max\{|\check{v}_i|, |\hat{v}_i|\}, \quad w_i = \max\{|\check{w}_i|, |\hat{w}_i|\} \quad (i = 1, 2, \ldots, n).
\end{aligned} \tag{3.5}$$

Proof. Fixing $\xi \in L^2_{\mathcal{F}_t}([-\tau, 0]; \mathbb{R}^n)$ arbitrarily and writing $y(t; \xi) = y(t)$, consider the following Lyapunov–Krasovskii functional:

$$V(t, y(t), i) = \sum_{k=1}^{7} V_k(t, y(t), i), \tag{3.6}$$

where

$$\begin{aligned}
V_1(t, y(t), i) &= y^T(t)P_iy(t), \\
V_2(t, y(t), i) &= \int_{t-\tau}^{t} f^T(y(s))K_if(y(s))\,ds, \\
V_3(t, y(t), i) &= \int_{-\tau}^{0}\int_{t+\theta}^{t} f^T(y(s))Hf(y(s))\,ds\,d\theta, \\
V_4(t, y(t), i) &= \sum_{j=1}^{n} f_j\int_{0}^{\infty} R_j(\theta)\int_{t-\theta}^{t} h_j^2(y_j(s))\,ds\,d\theta, \\
V_5(t, y(t), i) &= \int_{t-\tau}^{t} y^T(s)Gy(s)\,ds, \\
V_6(t, y(t), i) &= \int_{t-\tau}^{t} g^T(y(s))L_ig(y(s))\,ds, \\
V_7(t, y(t), i) &= \int_{-\tau}^{0}\int_{t+\theta}^{t} g^T(y(s))Mg(y(s))\,ds\,d\theta.
\end{aligned} \tag{3.7}$$

For simplicity, denote $\sigma(y(t), y(t-\tau), t, r(t))$ by $\sigma(t)$. Then it follows from (2.10) and (2.6) that

$$\begin{aligned}
\mathcal{L}V_1(t, y(t), i) ={}& 2y^T(t)P_i\Big[-D_iy(t) + A_if(y(t)) + B_ig(y(t-\tau)) + C_i\int_{-\infty}^{t}R(t-s)h(y(s))\,ds\Big]\\
&+ \sum_{j=1}^{N} q_{ij}y^T(t)P_jy(t) + \mathrm{trace}\big[\sigma^T(t)P_i\sigma(t)\big]\\
={}& y^T(t)(-2P_iD_i)y(t) + 2y^T(t)P_iA_if(y(t)) + 2y^T(t)P_iB_ig(y(t-\tau))\\
&+ 2y^T(t)P_iC_i\int_{-\infty}^{t}R(t-s)h(y(s))\,ds + y^T(t)\Big(\sum_{j=1}^{N}q_{ij}P_j\Big)y(t) + \mathrm{trace}\big[\sigma^T(t)P_i\sigma(t)\big].
\end{aligned} \tag{3.8}$$

On the other hand, by Assumption 2.2 and condition (3.2) we obtain

$$\mathrm{trace}\big[\sigma^T(t)P_i\sigma(t)\big] \le \lambda_i\,\mathrm{trace}\big[\sigma^T(t)\sigma(t)\big] \le \lambda_i y^T(t)\Sigma_{1i}y(t) + \lambda_i y^T(t-\tau)\Sigma_{2i}y(t-\tau), \tag{3.9}$$

which together with (3.8) gives

$$\begin{aligned}
\mathcal{L}V_1(t, y(t), i) \le{}& y^T(t)(-2P_iD_i)y(t) + 2y^T(t)P_iA_if(y(t)) + 2y^T(t)P_iB_ig(y(t-\tau))\\
&+ 2y^T(t)P_iC_i\int_{-\infty}^{t}R(t-s)h(y(s))\,ds + y^T(t)\Big(\sum_{j=1}^{N}q_{ij}P_j\Big)y(t)\\
&+ \lambda_i y^T(t)\Sigma_{1i}y(t) + \lambda_i y^T(t-\tau)\Sigma_{2i}y(t-\tau).
\end{aligned} \tag{3.10}$$

Also, from direct computations, it follows that

$$\begin{aligned}
\mathcal{L}V_2(t, y(t), i) ={}& f^T(y(t))K_if(y(t)) - f^T(y(t-\tau))K_if(y(t-\tau)) + \int_{t-\tau}^{t} f^T(y(s))\Big(\sum_{j=1}^{N}q_{ij}K_j\Big)f(y(s))\,ds,\\
\mathcal{L}V_3(t, y(t), i) ={}& \tau f^T(y(t))Hf(y(t)) - \int_{t-\tau}^{t} f^T(y(s))Hf(y(s))\,ds,\\
\mathcal{L}V_4(t, y(t), i) ={}& \sum_{j=1}^{n} f_j\int_{0}^{\infty} R_j(\theta)\,d\theta\; h_j^2(y_j(t)) - \sum_{j=1}^{n} f_j\int_{0}^{\infty} R_j(\theta)h_j^2(y_j(t-\theta))\,d\theta\\
={}& h^T(y(t))Fh(y(t)) - \sum_{j=1}^{n} f_j\int_{0}^{\infty} R_j(\theta)\,d\theta\int_{0}^{\infty} R_j(\theta)h_j^2(y_j(t-\theta))\,d\theta\\
\le{}& h^T(y(t))Fh(y(t)) - \sum_{j=1}^{n} f_j\Big(\int_{0}^{\infty} R_j(\theta)h_j(y_j(t-\theta))\,d\theta\Big)^2\\
={}& h^T(y(t))Fh(y(t)) - \Big(\int_{-\infty}^{t}R(t-s)h(y(s))\,ds\Big)^T F\Big(\int_{-\infty}^{t}R(t-s)h(y(s))\,ds\Big),\\
\mathcal{L}V_5(t, y(t), i) ={}& y^T(t)Gy(t) - y^T(t-\tau)Gy(t-\tau),\\
\mathcal{L}V_6(t, y(t), i) ={}& g^T(y(t))L_ig(y(t)) - g^T(y(t-\tau))L_ig(y(t-\tau)) + \int_{t-\tau}^{t} g^T(y(s))\Big(\sum_{j=1}^{N}q_{ij}L_j\Big)g(y(s))\,ds,\\
\mathcal{L}V_7(t, y(t), i) ={}& \tau g^T(y(t))Mg(y(t)) - \int_{t-\tau}^{t} g^T(y(s))Mg(y(s))\,ds.
\end{aligned} \tag{3.11}$$

It should be mentioned that the calculation of $\mathcal{L}V_4(t, y(t), i)$ has applied the following (Cauchy–Schwarz) inequality:

$$\Big(\int_{0}^{\infty} l_1(\theta)l_2(\theta)\,d\theta\Big)^2 \le \int_{0}^{\infty} l_1^2(\theta)\,d\theta\int_{0}^{\infty} l_2^2(\theta)\,d\theta \tag{3.12}$$

with $l_1(\theta) := R(\theta)^{1/2}$ and $l_2(\theta) := R(\theta)^{1/2}h(y(t-\theta))$.

Furthermore, it follows from conditions (3.3) and (3.4) that

$$\int_{t-\tau}^{t} f^T(y(s))\Big(\sum_{j=1}^{N}q_{ij}K_j\Big)f(y(s))\,ds - \int_{t-\tau}^{t} f^T(y(s))Hf(y(s))\,ds \le 0,$$
$$\int_{t-\tau}^{t} g^T(y(s))\Big(\sum_{j=1}^{N}q_{ij}L_j\Big)g(y(s))\,ds - \int_{t-\tau}^{t} g^T(y(s))Mg(y(s))\,ds \le 0. \tag{3.13}$$

On the other hand, by Assumption 2.1 we have

$$f^T(y(t))Q_1f(y(t)) \le y^T(t)UQ_1Uy(t), \qquad g^T(y(t))Q_2g(y(t)) \le y^T(t)VQ_2Vy(t), \qquad h^T(y(t))Q_3h(y(t)) \le y^T(t)WQ_3Wy(t). \tag{3.14}$$

Hence, by (3.8)–(3.14), we get

$$\mathcal{L}V(t, y(t), i) \le \zeta^T(t)\Pi_i\zeta(t), \tag{3.15}$$

where

$$\zeta^T(t) = \Big[\,y^T(t),\; y^T(t-\tau),\; f^T(y(t)),\; g^T(y(t)),\; h^T(y(t)),\; f^T(y(t-\tau)),\; g^T(y(t-\tau)),\; \Big(\int_{-\infty}^{t}R(t-s)h(y(s))\,ds\Big)^{T}\,\Big],$$

$$\Pi_i = \begin{bmatrix}
\Gamma_{11} & 0 & P_iA_i & 0 & 0 & 0 & P_iB_i & P_iC_i\\
* & \Gamma_{22} & 0 & 0 & 0 & 0 & 0 & 0\\
* & * & \Gamma_{33} & 0 & 0 & 0 & 0 & 0\\
* & * & * & \Gamma_{44} & 0 & 0 & 0 & 0\\
* & * & * & * & -Q_3 + F & 0 & 0 & 0\\
* & * & * & * & * & -K_i & 0 & 0\\
* & * & * & * & * & * & -L_i & 0\\
* & * & * & * & * & * & * & -F
\end{bmatrix}, \tag{3.16}$$

with

$$\Gamma_{11} = -2P_iD_i + \lambda_i\Sigma_{1i} + UQ_1U + VQ_2V + WQ_3W + \sum_{j=1}^{N}q_{ij}P_j, \quad \Gamma_{22} = -G + \lambda_i\Sigma_{2i}, \quad \Gamma_{33} = -Q_1 + K_i + \tau H, \quad \Gamma_{44} = -Q_2 + L_i + \tau M. \tag{3.17}$$

By condition (3.1), there must exist a scalar $\beta_i > 0$ ($i \in S$) such that $\Pi_i + \beta_i I < 0$. Setting $\beta = \min_{i\in S}\beta_i$, it is clear that $\beta > 0$. Taking the mathematical expectation on both sides of (3.15), we obtain

$$E\mathcal{L}V(t, y(t), i) \le E\big[\zeta^T(t)\Pi_i\zeta(t)\big] \le -\beta_i E|y(t; \xi)|^2 \le -\beta E|y(t; \xi)|^2. \tag{3.18}$$

Applying the Dynkin formula and using (3.18), it follows that

$$EV(t, y(t), i) - EV(0, y(0), r(0)) = \int_{0}^{t} E\mathcal{L}V(s, y(s), r(s))\,ds \le -\beta\int_{0}^{t} E|y(s)|^2\,ds, \tag{3.19}$$

and so

$$\int_{0}^{t} E|y(s)|^2\,ds \le -\frac{1}{\beta}EV(t, y(t), i) + \frac{1}{\beta}EV(0, y(0), r(0)) \le \frac{1}{\beta}EV(0, y(0), r(0)), \tag{3.20}$$

which implies that the equilibrium point of (2.3) (or (2.2), equivalently) is stochastically asymptotically stable in the mean square. This completes the proof.

Remark 3.2. Theorem 3.1 provides a sufficient condition for the stochastic asymptotic stability in the mean square of the equilibrium point of the generalized neural network (2.3). The condition is easy to verify and can be applied in practice, since it can be checked by recently developed algorithms for solving LMIs.
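To make the verification concrete, the following is a minimal NumPy sketch of such a check: it assembles the matrix $\Pi_i$ of (3.1)/(3.16) from candidate matrices (obtained, for example, from an LMI solver) and tests conditions (3.1)–(3.4) through eigenvalues. The function name and calling convention are illustrative assumptions; since the $-2P_iD_i$ block of $\Gamma_{11}$ need not be symmetric, the sketch symmetrizes before the eigenvalue test.

```python
import numpy as np

def is_neg_def(X, tol=1e-9):
    """Negative definiteness test via eigenvalues of the symmetrized matrix."""
    return np.max(np.linalg.eigvalsh((X + X.T) / 2)) < -tol

def check_mode_lmis(i, P, K, L, lam, Sigma1, Sigma2, D, A, B, C,
                    G, H, M, Q1, Q2, Q3, F, U, V, W, q, tau):
    """Check conditions (3.1)-(3.4) for mode i, given candidate matrices.
    P, K, L, lam, Sigma1, Sigma2, D, A, B, C are sequences indexed by mode;
    q is the generator matrix (q_ij).  Returns True if all conditions hold."""
    n, N = P[i].shape[0], len(P)
    Z = np.zeros((n, n))
    sumP = sum(q[i][j] * P[j] for j in range(N))
    G11 = -2 * P[i] @ D[i] + lam[i] * Sigma1[i] + U @ Q1 @ U + V @ Q2 @ V + W @ Q3 @ W + sumP
    G22 = -G + lam[i] * Sigma2[i]
    G33 = -Q1 + K[i] + tau * H
    G44 = -Q2 + L[i] + tau * M
    PA, PB, PC = P[i] @ A[i], P[i] @ B[i], P[i] @ C[i]
    Pi = np.block([
        [G11,  Z,   PA,  Z,   Z,        Z,     PB,    PC],
        [Z,    G22, Z,   Z,   Z,        Z,     Z,     Z],
        [PA.T, Z,   G33, Z,   Z,        Z,     Z,     Z],
        [Z,    Z,   Z,   G44, Z,        Z,     Z,     Z],
        [Z,    Z,   Z,   Z,   -Q3 + F,  Z,     Z,     Z],
        [Z,    Z,   Z,   Z,   Z,        -K[i], Z,     Z],
        [PB.T, Z,   Z,   Z,   Z,        Z,     -L[i], Z],
        [PC.T, Z,   Z,   Z,   Z,        Z,     Z,     -F],
    ])
    cond31 = is_neg_def(Pi)                                                               # (3.1)
    cond32 = np.max(np.linalg.eigvalsh(P[i] - lam[i] * np.eye(n))) <= 0                   # (3.2)
    cond33 = np.min(np.linalg.eigvalsh(H - sum(q[i][j] * K[j] for j in range(N)))) >= 0   # (3.3)
    cond34 = np.min(np.linalg.eigvalsh(M - sum(q[i][j] * L[j] for j in range(N)))) >= 0   # (3.4)
    return cond31 and cond32 and cond33 and cond34
```

Such a routine only certifies a given candidate solution; finding the matrices in the first place still requires an LMI solver (e.g., the Matlab LMI toolbox used in Section 4).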

Remark 3.3. The generalized neural network (2.3) is quite general, since it accounts for many factors, including noise perturbations, Markovian jump parameters, and continuously distributed delays. Furthermore, the constants $\check{u}_i$, $\check{v}_i$, $\check{w}_i$, $\hat{u}_i$, $\hat{v}_i$, $\hat{w}_i$ in Assumption 2.1 are allowed to be positive, negative, or zero. To the best of our knowledge, the generalized neural network (2.3) has never been considered in the previous literature. Hence, the LMI criteria existing in the previous literature fail to apply to it.

Remark 3.4. If we take $A_i = 0$ ($i \in S$), then system (2.3) can be written as

$$dx(t) = \Big[-D(r(t))x(t) + B(r(t))g(x(t-\tau)) + C(r(t))\int_{-\infty}^{t}R(t-s)h(x(s))\,ds + V\Big]dt + \sigma(x(t), x(t-\tau), t, r(t))\,dw(t). \tag{3.21}$$

If we take $B_i = 0$ ($i \in S$), then system (2.3) can be written as

$$dx(t) = \Big[-D(r(t))x(t) + A(r(t))f(x(t)) + C(r(t))\int_{-\infty}^{t}R(t-s)h(x(s))\,ds + V\Big]dt + \sigma(x(t), x(t-\tau), t, r(t))\,dw(t). \tag{3.22}$$

If we do not consider noise perturbations, then system (2.3) can be written as

$$dx(t) = \Big[-D(r(t))x(t) + A(r(t))f(x(t)) + B(r(t))g(x(t-\tau)) + C(r(t))\int_{-\infty}^{t}R(t-s)h(x(s))\,ds + V\Big]dt. \tag{3.23}$$

To the best of our knowledge, even systems (3.21)–(3.23) have not been investigated in the previous literature.

Remark 3.5. We now illustrate that the neural network (2.3) generalizes some neural networks considered in the earlier literature. For example, if we take

$$R_i(s) = \begin{cases} 0, & s \ge \tau,\\ 1, & s < \tau, \end{cases} \qquad \tau > 0, \quad i = 1, 2, \ldots, n, \tag{3.24}$$

then system (2.3) can be written as

$$dx(t) = \Big[-D(r(t))x(t) + A(r(t))f(x(t)) + B(r(t))g(x(t-\tau)) + C(r(t))\int_{t-\tau}^{t}h(x(s))\,ds + V\Big]dt + \sigma(x(t), x(t-\tau), t, r(t))\,dw(t). \tag{3.25}$$

System (3.25) was discussed by Liu et al. [26] and Wang et al. [27], although the delays are time-varying in [27]. Nevertheless, we point out that system (3.25) can be generalized to neural networks with time-varying delays without any difficulty. If we take $C_i = 0$ ($i \in S$) and do not consider noise perturbations, then system (2.3) can be written as

$$dx(t) = \big[-D(r(t))x(t) + A(r(t))f(x(t)) + B(r(t))g(x(t-\tau)) + V\big]dt. \tag{3.26}$$

The stability analysis for system (3.26) was investigated by Wang et al. [20]. In [15], Lou and Cui also considered system (3.26) with $A_i = 0$ ($i \in S$).

The next five corollaries follow directly from Theorem 3.1, and so we omit their proofs.

Corollary 3.6. Under Assumptions 2.1–2.3, the equilibrium point of (3.21) is stochastically asymptotically stable if there exist positive scalars $\lambda_i$, positive definite matrices $G$, $M$, $P_i$, $L_i$ ($i \in S$), and three positive diagonal matrices $Q_2$, $Q_3$, and $F = \mathrm{diag}(f_1, f_2, \ldots, f_n)$ such that the following LMIs hold:

$$\begin{bmatrix}
\Gamma_{11} & 0 & 0 & 0 & P_iB_i & P_iC_i\\
* & \Gamma_{22} & 0 & 0 & 0 & 0\\
* & * & \Gamma_{44} & 0 & 0 & 0\\
* & * & * & -Q_3 + F & 0 & 0\\
* & * & * & * & -L_i & 0\\
* & * & * & * & * & -F
\end{bmatrix} < 0, \qquad P_i \le \lambda_i I, \qquad M \ge \sum_{j=1}^{N}q_{ij}L_j, \tag{3.27}$$

where the symbol "$*$" denotes the symmetric term of the matrix, and

$$\begin{aligned}
\Gamma_{11} &= -2P_iD_i + \lambda_i\Sigma_{1i} + VQ_2V + WQ_3W + \sum_{j=1}^{N}q_{ij}P_j,\\
\Gamma_{22} &= -G + \lambda_i\Sigma_{2i}, \qquad \Gamma_{44} = -Q_2 + L_i + \tau M,\\
V &= \mathrm{diag}(v_1, v_2, \ldots, v_n), \qquad W = \mathrm{diag}(w_1, w_2, \ldots, w_n),\\
v_i &= \max\{|\check{v}_i|, |\hat{v}_i|\}, \qquad w_i = \max\{|\check{w}_i|, |\hat{w}_i|\} \quad (i = 1, 2, \ldots, n).
\end{aligned} \tag{3.28}$$

Corollary 3.7. Under Assumptions 2.1–2.3, the equilibrium point of (3.22) is stochastically asymptotically stable if there exist positive scalars $\lambda_i$, positive definite matrices $G$, $H$, $P_i$, $K_i$ ($i \in S$), and three positive diagonal matrices $Q_1$, $Q_3$, and $F = \mathrm{diag}(f_1, f_2, \ldots, f_n)$ such that the following LMIs hold:

$$\begin{bmatrix}
\Gamma_{11} & 0 & P_iA_i & 0 & 0 & P_iC_i\\
* & \Gamma_{22} & 0 & 0 & 0 & 0\\
* & * & \Gamma_{33} & 0 & 0 & 0\\
* & * & * & -Q_3 + F & 0 & 0\\
* & * & * & * & -K_i & 0\\
* & * & * & * & * & -F
\end{bmatrix} < 0, \qquad P_i \le \lambda_i I, \qquad H \ge \sum_{j=1}^{N}q_{ij}K_j, \tag{3.29}$$

where the symbol "$*$" denotes the symmetric term of the matrix, and

$$\begin{aligned}
\Gamma_{11} &= -2P_iD_i + \lambda_i\Sigma_{1i} + UQ_1U + WQ_3W + \sum_{j=1}^{N}q_{ij}P_j,\\
\Gamma_{22} &= -G + \lambda_i\Sigma_{2i}, \qquad \Gamma_{33} = -Q_1 + K_i + \tau H,\\
U &= \mathrm{diag}(u_1, u_2, \ldots, u_n), \qquad W = \mathrm{diag}(w_1, w_2, \ldots, w_n),\\
u_i &= \max\{|\check{u}_i|, |\hat{u}_i|\}, \qquad w_i = \max\{|\check{w}_i|, |\hat{w}_i|\} \quad (i = 1, 2, \ldots, n).
\end{aligned} \tag{3.30}$$

Corollary 3.8. Under Assumptions 2.1–2.3, the equilibrium point of (3.23) is stochastically asymptotically stable if there exist positive definite matrices $G$, $H$, $M$, $P_i$, $K_i$, $L_i$ ($i \in S$) and four positive diagonal matrices $Q_1$, $Q_2$, $Q_3$, and $F = \mathrm{diag}(f_1, f_2, \ldots, f_n)$ such that the following LMIs hold:

$$\begin{bmatrix}
\Gamma_{11} & 0 & P_iA_i & 0 & 0 & 0 & P_iB_i & P_iC_i\\
* & -G & 0 & 0 & 0 & 0 & 0 & 0\\
* & * & \Gamma_{33} & 0 & 0 & 0 & 0 & 0\\
* & * & * & \Gamma_{44} & 0 & 0 & 0 & 0\\
* & * & * & * & -Q_3 + F & 0 & 0 & 0\\
* & * & * & * & * & -K_i & 0 & 0\\
* & * & * & * & * & * & -L_i & 0\\
* & * & * & * & * & * & * & -F
\end{bmatrix} < 0, \qquad H \ge \sum_{j=1}^{N}q_{ij}K_j, \qquad M \ge \sum_{j=1}^{N}q_{ij}L_j, \tag{3.31}$$

where the symbol "$*$" denotes the symmetric term of the matrix, and

$$\begin{aligned}
\Gamma_{11} &= -2P_iD_i + G + UQ_1U + VQ_2V + WQ_3W + \sum_{j=1}^{N}q_{ij}P_j, \qquad \Gamma_{33} = -Q_1 + K_i + \tau H, \qquad \Gamma_{44} = -Q_2 + L_i + \tau M,\\
U &= \mathrm{diag}(u_1, u_2, \ldots, u_n), \qquad V = \mathrm{diag}(v_1, v_2, \ldots, v_n), \qquad W = \mathrm{diag}(w_1, w_2, \ldots, w_n),\\
u_i &= \max\{|\check{u}_i|, |\hat{u}_i|\}, \qquad v_i = \max\{|\check{v}_i|, |\hat{v}_i|\}, \qquad w_i = \max\{|\check{w}_i|, |\hat{w}_i|\} \quad (i = 1, 2, \ldots, n).
\end{aligned} \tag{3.32}$$

Corollary 3.9. Under Assumptions 2.1–2.3, the equilibrium point of (3.25) is stochastically asymptotically stable if there exist positive scalars $\lambda_i$, positive definite matrices $G$, $H$, $M$, $P_i$, $K_i$, $L_i$ ($i \in S$), and four positive diagonal matrices $Q_1$, $Q_2$, $Q_3$, and $F = \mathrm{diag}(f_1, f_2, \ldots, f_n)$ such that the following LMIs hold:

$$\begin{bmatrix}
\Gamma_{11} & 0 & P_iA_i & 0 & 0 & 0 & P_iB_i & P_iC_i\\
* & \Gamma_{22} & 0 & 0 & 0 & 0 & 0 & 0\\
* & * & \Gamma_{33} & 0 & 0 & 0 & 0 & 0\\
* & * & * & \Gamma_{44} & 0 & 0 & 0 & 0\\
* & * & * & * & -Q_3 + F & 0 & 0 & 0\\
* & * & * & * & * & -K_i & 0 & 0\\
* & * & * & * & * & * & -L_i & 0\\
* & * & * & * & * & * & * & -F
\end{bmatrix} < 0, \qquad P_i \le \lambda_i I, \qquad H \ge \sum_{j=1}^{N}q_{ij}K_j, \qquad M \ge \sum_{j=1}^{N}q_{ij}L_j, \tag{3.33}$$

where the symbol "$*$" denotes the symmetric term of the matrix, and

$$\begin{aligned}
\Gamma_{11} &= -2P_iD_i + \lambda_i\Sigma_{1i} + UQ_1U + VQ_2V + WQ_3W + \sum_{j=1}^{N}q_{ij}P_j,\\
\Gamma_{22} &= -G + \lambda_i\Sigma_{2i}, \qquad \Gamma_{33} = -Q_1 + K_i + \tau H, \qquad \Gamma_{44} = -Q_2 + L_i + \tau M,\\
U &= \mathrm{diag}(u_1, u_2, \ldots, u_n), \qquad V = \mathrm{diag}(v_1, v_2, \ldots, v_n), \qquad W = \mathrm{diag}(w_1, w_2, \ldots, w_n),\\
u_i &= \max\{|\check{u}_i|, |\hat{u}_i|\}, \qquad v_i = \max\{|\check{v}_i|, |\hat{v}_i|\}, \qquad w_i = \max\{|\check{w}_i|, |\hat{w}_i|\} \quad (i = 1, 2, \ldots, n).
\end{aligned} \tag{3.34}$$

Corollary 3.10. Under Assumption 2.1, the equilibrium point of (3.26) is stochastically asymptotically stable if there exist positive definite matrices $G$, $H$, $M$, $P_i$, $K_i$, $L_i$ ($i \in S$) and two positive diagonal matrices $Q_1$ and $Q_2$ such that the following LMIs hold:

$$\begin{bmatrix}
\Gamma_{11} & 0 & P_iA_i & 0 & 0 & P_iB_i\\
* & -G & 0 & 0 & 0 & 0\\
* & * & \Gamma_{33} & 0 & 0 & 0\\
* & * & * & \Gamma_{44} & 0 & 0\\
* & * & * & * & -K_i & 0\\
* & * & * & * & * & -L_i
\end{bmatrix} < 0, \qquad H \ge \sum_{j=1}^{N}q_{ij}K_j, \qquad M \ge \sum_{j=1}^{N}q_{ij}L_j, \tag{3.35}$$

where the symbol "$*$" denotes the symmetric term of the matrix, and

$$\begin{aligned}
\Gamma_{11} &= -2P_iD_i + G + UQ_1U + VQ_2V + \sum_{j=1}^{N}q_{ij}P_j, \qquad \Gamma_{33} = -Q_1 + K_i + \tau H, \qquad \Gamma_{44} = -Q_2 + L_i + \tau M,\\
U &= \mathrm{diag}(u_1, u_2, \ldots, u_n), \qquad V = \mathrm{diag}(v_1, v_2, \ldots, v_n),\\
u_i &= \max\{|\check{u}_i|, |\hat{u}_i|\}, \qquad v_i = \max\{|\check{v}_i|, |\hat{v}_i|\} \quad (i = 1, 2, \ldots, n).
\end{aligned} \tag{3.36}$$

Remark 3.11. As discussed in Remark 3.4, Corollaries 3.6–3.8 are "new," since the corresponding systems have never been considered in the previous literature. Corollary 3.9 was discussed by Liu et al. [26] and Wang et al. [27], although the delays are time-varying in [27]. Nevertheless, we point out that our results can be generalized to neural networks with time-varying delays without any difficulty. Corollary 3.10 has been discussed by Wang et al. [20] and Lou and Cui [15], but our conditions are weaker than those in [15, 20], since the constants $\check{u}_i$, $\check{v}_i$, $\hat{u}_i$, $\hat{v}_i$ in Corollary 3.10 are allowed to be positive, negative, or zero.

4. Illustrative Example

In this section, a numerical example is given to illustrate the effectiveness of the obtained results.

Example 4.1. Consider a two-dimensional stochastic neural network with both Markovian jump parameters and continuously distributed delays:

$$dx(t) = \Big[-D(r(t))x(t) + A(r(t))f(x(t)) + B(r(t))g(x(t-\tau)) + C(r(t))\int_{-\infty}^{t}e^{-(t-s)}h(x(s))\,ds + V\Big]dt + \sigma(x(t), x(t-\tau), t, r(t))\,dw(t), \tag{4.1}$$

where $x(t) = (x_1(t), x_2(t))^T$, $V = (0, 0)^T$, $w(t)$ is a two-dimensional Brownian motion, and $r(t)$ is a right-continuous Markov chain taking values in $S = \{1, 2\}$ with generator

$$Q = \begin{bmatrix} -8 & 8\\ 5 & -5 \end{bmatrix}. \tag{4.2}$$

Let

$$f_i(x_i) = 0.01\tanh(x_i) = 0.01\,\frac{e^{x_i} - e^{-x_i}}{e^{x_i} + e^{-x_i}}, \qquad g_i(x_i) = h_i(x_i) = 0.005\big(|x_i + 1| - |x_i - 1|\big), \quad i = 1, 2; \tag{4.3}$$

then system (4.1) satisfies Assumption 2.1 with $\check{U} = \check{V} = \check{W} = -0.01I$ and $\hat{U} = \hat{V} = \hat{W} = 0.01I$. Take

$$\sigma(x(t), x(t-\tau), t, i) = \begin{pmatrix} 0.2x_i(t) + 0.4x_i(t-\tau)\\ 0.4x_i(t) + 0.2x_i(t-\tau) \end{pmatrix}, \qquad i = 1, 2; \tag{4.4}$$

then system (4.1) satisfies Assumptions 2.2 and 2.3 with $\Sigma_{1i} = \Sigma_{2i} = 0.2I$ ($i = 1, 2$).
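The sector bounds claimed for (4.3) can also be probed numerically. The following is a quick sampling check (not a proof) that the difference quotients of the activations stay within $[-0.01, 0.01]$; all names and tolerances are illustrative.

```python
import numpy as np

# Sample difference quotients of the activations in (4.3) and verify that they
# stay within the sector [-0.01, 0.01] assumed in Assumption 2.1.
f = lambda x: 0.01 * np.tanh(x)
g = lambda x: 0.005 * (np.abs(x + 1.0) - np.abs(x - 1.0))

rng = np.random.default_rng(0)
a, b = rng.uniform(-10, 10, size=(2, 100000))
mask = np.abs(a - b) > 1e-8                        # avoid alpha = beta
for phi in (f, g):
    q = (phi(a[mask]) - phi(b[mask])) / (a[mask] - b[mask])   # difference quotients
    assert q.min() >= -0.01 - 1e-12 and q.max() <= 0.01 + 1e-12
print("sampled difference quotients stay within [-0.01, 0.01]")
```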

Other parameters of the network (4.1) are given as follows:

$$D_1 = \mathrm{diag}(0.3, 0.1), \quad A_1 = \begin{bmatrix} 0.2 & -0.2\\ 0.4 & 0.1 \end{bmatrix}, \quad B_1 = \begin{bmatrix} 0.1 & -0.2\\ 0.3 & 0.2 \end{bmatrix}, \quad C_1 = \begin{bmatrix} 0.1 & 0.2\\ -0.3 & 0.3 \end{bmatrix},$$
$$D_2 = \mathrm{diag}(0.4, 0.3), \quad A_2 = \begin{bmatrix} 0.3 & 0.2\\ -0.1 & 0.2 \end{bmatrix}, \quad B_2 = \begin{bmatrix} 0.2 & 0.1\\ -0.2 & 0.1 \end{bmatrix}, \quad C_2 = \begin{bmatrix} 0.2 & 0.2\\ -0.3 & 0.1 \end{bmatrix}. \tag{4.5}$$

Here we let $\tau = 2$. By using the Matlab LMI toolbox, we obtain the following feasible solution of the LMIs (3.1)–(3.4):

$$\begin{gathered}
G = \begin{bmatrix} 18.71 & 1.41\\ 1.41 & 4.68 \end{bmatrix}, \quad H = \begin{bmatrix} 39.69 & -6.32\\ -6.32 & 44.13 \end{bmatrix}, \quad M = \begin{bmatrix} 29.85 & -1.07\\ -1.07 & 33.32 \end{bmatrix}, \quad E = \mathrm{diag}(55.09, 55.09),\\
Q_1 = \begin{bmatrix} 224.65 & 0\\ 0 & 224.65 \end{bmatrix}, \quad Q_2 = \begin{bmatrix} 163.84 & 0\\ 0 & 163.84 \end{bmatrix}, \quad Q_3 = \begin{bmatrix} 72.69 & 0\\ 0 & 72.69 \end{bmatrix}, \quad F = \mathrm{diag}(31.88, 31.88),\\
P_1 = \begin{bmatrix} 57.77 & 1.91\\ 1.91 & 39.25 \end{bmatrix}, \quad P_2 = \begin{bmatrix} 56.59 & 1.66\\ 1.66 & 37.41 \end{bmatrix}, \quad K_1 = \begin{bmatrix} 46.70 & -4.19\\ -4.19 & 49.63 \end{bmatrix}, \quad K_2 = \begin{bmatrix} 46.40 & -4.17\\ -4.17 & 49.33 \end{bmatrix},\\
L_1 = \begin{bmatrix} 64.95 & 3.06\\ 3.06 & 55.17 \end{bmatrix}, \quad L_2 = \begin{bmatrix} 64.41 & 2.99\\ 2.99 & 54.64 \end{bmatrix}, \quad \lambda_1 = 83.25, \quad \lambda_2 = 79.06.
\end{gathered} \tag{4.6}$$

Therefore, it follows from Theorem 3.1 that the network (4.1) is stochastically asymptotically stable.

By using the Euler–Maruyama numerical scheme, simulation results are obtained with $T = 50$ and step size $\delta t = 0.02$. Figure 1 shows the state response of model 1 (i.e., the network (4.1) when $r(t) = 1$) with the initial condition $(0.5, -0.7)^T$ for $-2 \le t \le 0$, and Figure 2 shows the state response of model 2 (i.e., the network (4.1) when $r(t) = 2$) with the initial condition $(-0.6, 0.4)^T$ for $-2 \le t \le 0$.
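A minimal Euler–Maruyama sketch for reproducing such a simulation is given below. It fixes the mode at $r(t) = 1$ (model 1) rather than switching according to the Markov chain, truncates the exponential-kernel integral to the stored trajectory, and reads the noise term (4.4) as a $2\times 2$ matrix acting on the Brownian increments; these discretization and modeling choices are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

# Euler-Maruyama simulation of (4.1) for the fixed mode r(t) = 1,
# with parameters from (4.3)-(4.5) and a constant initial segment on [-tau, 0].
rng = np.random.default_rng(1)
dt, T, tau = 0.02, 50.0, 2.0
steps, lag = int(T / dt), int(tau / dt)

D = np.diag([0.3, 0.1])
A = np.array([[0.2, -0.2], [0.4, 0.1]])
B = np.array([[0.1, -0.2], [0.3, 0.2]])
C = np.array([[0.1, 0.2], [-0.3, 0.3]])
f = lambda x: 0.01 * np.tanh(x)
g = lambda x: 0.005 * (np.abs(x + 1.0) - np.abs(x - 1.0))
h = g
sigma = lambda x, xd: np.array([0.2 * x + 0.4 * xd,
                                0.4 * x + 0.2 * xd])   # (4.4) read as a 2x2 matrix

x = np.zeros((steps + lag + 1, 2))
x[: lag + 1] = np.array([0.5, -0.7])                   # initial condition on [-tau, 0]
for k in range(lag, steps + lag):
    ages = (k - np.arange(k + 1)) * dt                 # t - s over the stored past
    dist = (np.exp(-ages)[:, None] * h(x[: k + 1])).sum(axis=0) * dt  # Riemann sum
    drift = -D @ x[k] + A @ f(x[k]) + B @ g(x[k - lag]) + C @ dist
    dw = rng.normal(scale=np.sqrt(dt), size=2)
    x[k + 1] = x[k] + drift * dt + sigma(x[k], x[k - lag]) @ dw
```

Plotting the two components of `x` against time would reproduce curves of the same qualitative type as Figures 1 and 2 (decay toward the origin), up to the randomness of the driving noise.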

Remark 4.2. As discussed in Remarks 3.2–3.11, the LMI criteria existing in the previous literature (e.g., Liu et al. [26], Wang et al. [20, 27], Lou and Cui [15], etc.) fail in Example 4.1, since many factors, including noise perturbations, Markovian jump parameters, and continuously distributed delays, are considered simultaneously in Example 4.1.

5. Concluding Remarks

In this paper we have investigated the stochastic stability analysis problem for a class of neural networks with both Markovian jump parameters and continuously distributed delays.

It is worth mentioning that the obtained stability condition is delay-dependent, which is less conservative than delay-independent criteria when the delay is small. Furthermore, the stability criteria obtained in this paper are expressed in terms of LMIs, which can be solved easily by recently developed algorithms.

[Figure 1: The state response of model 1 in Example 4.1.]

[Figure 2: The state response of model 2 in Example 4.1.]

A numerical example is given to show the reduced conservatism and effectiveness of our results. The results obtained in this paper improve and generalize those given in the previous literature. On the other hand, it should be noted that the explicit rate of convergence of the considered system is not given in this paper, since it is difficult to handle continuously distributed delays. Therefore, investigating the explicit rate of convergence for the considered system remains an open issue. Finally, we point out that it is possible to generalize our results to a class of neural networks with uncertainties.

Research on this topic is in progress.
