# QML Estimators in Linear Regression Models with Functional Coefficient Autoregressive Processes


Volume 2010, Article ID 956907, 30 pages. doi:10.1155/2010/956907


### Hongchang Hu

School of Mathematics and Statistics, Hubei Normal University, Huangshi 435002, China

Correspondence should be addressed to Hongchang Hu, retutome@163.com. Received 30 December 2009; Revised 19 March 2010; Accepted 6 April 2010. Academic Editor: Massimo Scalia.

Copyright © 2010 Hongchang Hu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper studies a linear regression model whose errors are functional coefficient autoregressive processes. Firstly, the quasi-maximum likelihood (QML) estimators of some unknown parameters are given. Secondly, under general conditions, the asymptotic properties (existence, consistency, and asymptotic distributions) of the QML estimators are investigated. These results extend those of Maller (2003), White (1959), Brockwell and Davis (1987), and so on. Lastly, the validity and feasibility of the method are illustrated by a simulation example and a real example.

### 1. Introduction

Consider the following linear regression model:

$$y_t = x_t^T\beta + \varepsilon_t,\quad t=1,2,\ldots,n, \tag{1.1}$$

where the $y_t$'s are scalar response variables, the $x_t$'s are explanatory variables, $\beta$ is a $d$-dimensional unknown parameter, and the $\varepsilon_t$'s are functional coefficient autoregressive processes given as

$$\varepsilon_1=\eta_1,\qquad \varepsilon_t=f_t(\theta)\,\varepsilon_{t-1}+\eta_t,\quad t=2,3,\ldots,n, \tag{1.2}$$

where the $\eta_t$'s are independent and identically distributed random errors with zero mean and finite variance $\sigma^2$, $\theta$ is a one-dimensional unknown parameter, and $f_t(\theta)$ is a real-valued function defined on a compact set $\Theta$ which contains the true value $\theta_0$ as an inner point and is a subset of $\mathbb{R}^1$. The values of $\theta_0$ and $\sigma^2$ are unknown.
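As a concrete illustration, the data-generating process (1.1)-(1.2) can be simulated directly. A minimal sketch follows; the Gaussian design `x`, the coefficient function `f`, and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, beta, theta, f, sigma=1.0):
    """Draw (y, x, eps) from model (1.1)-(1.2):
    y_t = x_t' beta + eps_t,  eps_1 = eta_1,
    eps_t = f_t(theta) * eps_{t-1} + eta_t."""
    d = len(beta)
    x = rng.normal(size=(n, d))            # assumed Gaussian design
    eta = rng.normal(scale=sigma, size=n)  # i.i.d. errors, variance sigma^2
    eps = np.empty(n)
    eps[0] = eta[0]
    for t in range(1, n):                  # paper's t is 1-based, starts at 2
        eps[t] = f(t + 1, theta) * eps[t - 1] + eta[t]
    y = x @ beta + eps
    return y, x, eps

# a hypothetical functional coefficient f_t(theta) = theta * sin(t)
y, x, eps = simulate(200, beta=np.array([1.0, -0.5]), theta=0.4,
                     f=lambda t, th: th * np.sin(t))
```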


Model (1.1) includes many special cases, such as an ordinary linear regression model when $f_t(\theta)\equiv 0$ (see [1–11]). In the sequel, we always assume that $f_t(\theta)\neq 0$ for some $\theta\in\Theta$. Model (1.1)-(1.2) becomes a linear regression model with constant coefficient autoregressive errors when $f_t(\theta)=\theta$ (see Maller [12], Pere [13], and Fuller [14]); time-dependent and functional coefficient autoregressive processes when $\beta=0$ (see Kwoun and Yajima [15]); constant coefficient autoregressive processes when $f_t(\theta)=\theta$ and $\beta=0$ (see White [16, 17], Hamilton [18], Brockwell and Davis [19], and Abadir and Lucas [20]); and time-dependent or time-varying autoregressive processes when $f_t(\theta)=a_t$ and $\beta=0$ (see Carsoule and Franses [21], Azrak and Mélard [22], and Dahlhaus [23]); and so forth.

Regression analysis is one of the most mature and widely applied branches of statistics, and linear regression is among its most widely used techniques.

Its applications occur in almost every field, including engineering, economics, the physical sciences, management, the life and biological sciences, and the social sciences. The linear regression model is the most important and popular model in the statistical literature, and many statisticians have worked on estimating the coefficients of the regression model. For the ordinary linear regression model (when the errors are independent and identically distributed random variables), Bai and Guo [1], Chen [2], Anderson and Taylor [3], Drygas [4], González-Rodríguez et al. [5], Hampel et al. [6], He [7], Cui [8], Durbin [9], Hoerl and Kennard [10], Li and Yang [11], and Zhang et al. [24] used various estimation methods (the least squares method, robust estimation, biased estimation, and Bayes estimation) to obtain estimators of the unknown parameters in (1.1) and discussed some large- or small-sample properties of these estimators.

However, the independence assumption for the errors is not always appropriate in applications, especially for sequentially collected economic and physical data, which often exhibit evident dependence in the errors. Recently, linear regression with serially correlated errors has attracted increasing attention from statisticians. One case of considerable interest is that in which the errors are autoregressive processes; the asymptotic theory of the corresponding estimator was developed by Hannan and Kavalieris [25]. Fox and Taqqu [26] established its asymptotic normality in the case of long-memory stationary Gaussian observation errors. Giraitis and Surgailis [27] extended this result to non-Gaussian linear sequences. The asymptotic distribution of the maximum likelihood estimator was studied by Giraitis and Koul [28] and Koul [29] when the errors are nonlinear instantaneous functions of a Gaussian long-memory sequence. Koul and Surgailis [30] established the asymptotic normality of the Whittle estimator in linear regression models with non-Gaussian long-memory moving average errors. When the errors are Gaussian, or a function of Gaussian random variables that are strictly stationary and long-range dependent, Koul and Mukherjee [31] investigated the linear model. Shiohama and Taniguchi [32] estimated the regression parameters in a linear regression model with autoregressive errors.

In addition, the (constant, functional, or random coefficient) autoregressive model itself has gained much attention and has been applied in many fields, such as economics, physics, geography, geology, biology, and agriculture. Fan and Yao [33], Berk [34], Hannan and Kavalieris [35], Goldenshluger and Zeevi [36], Liebscher [37], An et al. [38], Elsebach [39], Carsoule and Franses [21], Baran et al. [40], Distaso [41], and Harvill and Ray [42] used various estimation methods (the least squares method, the Yule-Walker method, the method of stochastic approximation, and robust estimation methods) to obtain estimators and discussed their asymptotic properties, or investigated hypothesis testing.

This paper discusses the model (1.1)-(1.2), including stationary and explosive processes. The organization of the paper is as follows. In Section 2, some estimators of $\beta$, $\theta$,


and $\sigma^2$ are given by the quasi-maximum likelihood method. In Section 3, under general conditions, the existence and consistency of the quasi-maximum likelihood estimators are investigated, as well as their asymptotic normality. Some preliminary lemmas are presented in Section 4. The main proofs are presented in Section 5, with some examples in Section 6.

### 2. Estimation Method

Write the “true” model as

$$y_t=x_t^T\beta_0+e_t,\quad t=1,2,\ldots,n, \tag{2.1}$$

$$e_1=\eta_1,\qquad e_t=f_t(\theta_0)\,e_{t-1}+\eta_t,\quad t=2,3,\ldots,n, \tag{2.2}$$

where $f_t'(\theta_0)=\left.df_t(\theta)/d\theta\right|_{\theta=\theta_0}\neq 0$, and the $\eta_t$'s are i.i.d. errors with zero mean and finite variance $\sigma_0^2$. Define $\prod_{i=0}^{-1}f_{t-i}(\theta_0)=1$; then by (2.2) we have

$$e_t=\sum_{j=0}^{t-1}\left(\prod_{i=0}^{j-1}f_{t-i}(\theta_0)\right)\eta_{t-j}. \tag{2.3}$$

Thus $e_t$ is measurable with respect to the $\sigma$-field $H_t$ generated by $\eta_1,\eta_2,\ldots,\eta_t$, and

$$E e_t=0,\qquad \operatorname{Var}(e_t)=\sigma_0^2\sum_{j=0}^{t-1}\prod_{i=0}^{j-1}f_{t-i}^2(\theta_0). \tag{2.4}$$
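Formula (2.4) is easy to check numerically: generate $e_t$ from the recursion (2.2) many times and compare the sample variance with the closed form. The coefficient function below is a hypothetical choice used only for illustration.

```python
import numpy as np

def var_et(t, f, theta, sigma2):
    # Var(e_t) = sigma0^2 * sum_{j=0}^{t-1} prod_{i=0}^{j-1} f_{t-i}^2(theta0)  -- (2.4)
    total = 0.0
    for j in range(t):
        prod = 1.0
        for i in range(j):                  # empty product (j = 0) equals 1
            prod *= f(t - i, theta) ** 2
        total += prod
    return sigma2 * total

# Monte Carlo check with a hypothetical coefficient f_t(theta) = theta*cos(t)
rng = np.random.default_rng(1)
f = lambda t, th: th * np.cos(t)
t_target, theta0 = 6, 0.8

reps = 200_000
e = rng.normal(size=reps)                   # e_1 = eta_1, sigma0^2 = 1
for t in range(2, t_target + 1):            # recursion (2.2)
    e = f(t, theta0) * e + rng.normal(size=reps)

mc_var, exact = np.var(e), var_et(t_target, f, theta0, 1.0)
```

The two numbers agree up to Monte Carlo error.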

Assume at first that the $\eta_t$'s are i.i.d. $N(0,\sigma^2)$. Using similar arguments to those of Fuller [14] or Maller [12], we get the log-likelihood of $y_2,y_3,\ldots,y_n$ conditional on $y_1$:

$$\Psi_n\left(\beta,\theta,\sigma^2\right)=\log L_n=-\frac{1}{2}(n-1)\log\sigma^2-\frac{1}{2\sigma^2}\sum_{t=2}^{n}\left(\varepsilon_t-f_t(\theta)\varepsilon_{t-1}\right)^2-\frac{1}{2}(n-1)\log 2\pi. \tag{2.5}$$

At this stage we drop the normality assumption, but still maximize (2.5) to obtain QML estimators, denoted by $(\hat\sigma_n^2,\hat\beta_n,\hat\theta_n)$ when they exist:

$$\frac{\partial\Psi_n}{\partial\sigma^2}=-\frac{n-1}{2\sigma^2}+\frac{1}{2\sigma^4}\sum_{t=2}^{n}\left(\varepsilon_t-f_t(\theta)\varepsilon_{t-1}\right)^2, \tag{2.6}$$

$$\frac{\partial\Psi_n}{\partial\theta}=\frac{1}{\sigma^2}\sum_{t=2}^{n}f_t'(\theta)\left(\varepsilon_t-f_t(\theta)\varepsilon_{t-1}\right)\varepsilon_{t-1}, \tag{2.7}$$

$$\frac{\partial\Psi_n}{\partial\beta}=\frac{1}{\sigma^2}\sum_{t=2}^{n}\left(\varepsilon_t-f_t(\theta)\varepsilon_{t-1}\right)\left(x_t-f_t(\theta)x_{t-1}\right). \tag{2.8}$$


Thus $(\hat\sigma_n^2,\hat\beta_n,\hat\theta_n)$ satisfy the following estimating equations:

$$\hat\sigma_n^2=\frac{1}{n-1}\sum_{t=2}^{n}\left(\hat\varepsilon_t-f_t\left(\hat\theta_n\right)\hat\varepsilon_{t-1}\right)^2, \tag{2.9}$$

$$\sum_{t=2}^{n}\left(\hat\varepsilon_t-f_t\left(\hat\theta_n\right)\hat\varepsilon_{t-1}\right)f_t'\left(\hat\theta_n\right)\hat\varepsilon_{t-1}=0, \tag{2.10}$$

$$\sum_{t=2}^{n}\left(\hat\varepsilon_t-f_t\left(\hat\theta_n\right)\hat\varepsilon_{t-1}\right)\left(x_t-f_t\left(\hat\theta_n\right)x_{t-1}\right)=0, \tag{2.11}$$

where

$$\hat\varepsilon_t=y_t-x_t^T\hat\beta_n. \tag{2.12}$$

Remark 2.1. If $f_t(\theta)=\theta$, then the above equations become the same as Maller's [12]. Therefore, we extend the QML estimators of Maller [12].

To calculate the values of the QML estimators, we may use the grid search method, the steepest ascent method, the Newton-Raphson method, or a modified Newton-Raphson method.

For the calculations in Section 6, we introduce the most popular modified Newton-Raphson method, proposed by Davidon, Fletcher, and Powell (see Hamilton [18]).

Let the $(d+2)\times 1$ vector $\vec\theta_m=(\sigma_m^2,\beta_m^T,\theta_m)^T$ denote the estimator of $\vec\theta=(\sigma^2,\beta^T,\theta)^T$ that has been calculated at the $m$th iteration, and let $A_m$ denote an estimate of $[H(\vec\theta_m)]^{-1}$. The new estimator $\vec\theta_{m+1}$ is given by

$$\vec\theta_{m+1}=\vec\theta_m+sA_m\,g\left(\vec\theta_m\right), \tag{2.13}$$

for $s$ the positive scalar that maximizes $\Psi_n\{\vec\theta_m+sA_m\,g(\vec\theta_m)\}$, where the $(d+2)\times 1$ vector

$$g\left(\vec\theta_m\right)=\frac{\partial\Psi_n(\vec\theta)}{\partial\vec\theta}\bigg|_{\vec\theta=\vec\theta_m}=\begin{pmatrix}\partial\Psi_n/\partial\sigma^2\big|_{\sigma^2=\sigma_m^2}\\[1mm]\partial\Psi_n/\partial\beta\big|_{\beta=\beta_m}\\[1mm]\partial\Psi_n/\partial\theta\big|_{\theta=\theta_m}\end{pmatrix} \tag{2.14}$$

and the $(d+2)\times(d+2)$ symmetric matrix

$$H\left(\vec\theta_m\right)=\frac{\partial^2\Psi_n(\vec\theta)}{\partial\vec\theta\,\partial\vec\theta^T}\bigg|_{\vec\theta=\vec\theta_m}=\begin{pmatrix}\dfrac{\partial^2\Psi_n}{\partial(\sigma^2)^2}&\dfrac{\partial^2\Psi_n}{\partial\sigma^2\,\partial\beta^T}&\dfrac{\partial^2\Psi_n}{\partial\sigma^2\,\partial\theta}\\[2mm]*&\dfrac{\partial^2\Psi_n}{\partial\beta\,\partial\beta^T}&\dfrac{\partial^2\Psi_n}{\partial\beta\,\partial\theta}\\[2mm]*&*&\dfrac{\partial^2\Psi_n}{\partial\theta^2}\end{pmatrix}\Bigg|_{\vec\theta=\vec\theta_m}, \tag{2.15}$$


where

$$\frac{\partial^2\Psi_n}{\partial(\sigma^2)^2}=\frac{n-1}{2\sigma^4}-\frac{1}{\sigma^6}\sum_{t=2}^{n}\left(\varepsilon_t-f_t(\theta)\varepsilon_{t-1}\right)^2,$$

$$\frac{\partial^2\Psi_n}{\partial\sigma^2\,\partial\beta}=-\frac{1}{\sigma^4}\sum_{t=2}^{n}\left(\varepsilon_t-f_t(\theta)\varepsilon_{t-1}\right)\left(x_t-f_t(\theta)x_{t-1}\right)^T,$$

$$\frac{\partial^2\Psi_n}{\partial\sigma^2\,\partial\theta}=-\frac{1}{\sigma^4}\sum_{t=2}^{n}\left(\varepsilon_t-f_t(\theta)\varepsilon_{t-1}\right)f_t'(\theta)\varepsilon_{t-1}, \tag{2.16}$$

$$\frac{\partial^2\Psi_n}{\partial\beta\,\partial\beta^T}=-\frac{1}{\sigma^2}\sum_{t=2}^{n}\left(x_t-f_t(\theta)x_{t-1}\right)\left(x_t-f_t(\theta)x_{t-1}\right)^T, \tag{2.17}$$

$$\frac{\partial^2\Psi_n}{\partial\beta\,\partial\theta}=-\frac{1}{\sigma^2}\sum_{t=2}^{n}\left[f_t'(\theta)\varepsilon_{t-1}x_t+f_t'(\theta)\varepsilon_t x_{t-1}-2f_t(\theta)f_t'(\theta)x_{t-1}\varepsilon_{t-1}\right],$$

$$\frac{\partial^2\Psi_n}{\partial\theta^2}=-\frac{1}{\sigma^2}\sum_{t=2}^{n}\left[\left(f_t'^2(\theta)+f_t''(\theta)f_t(\theta)\right)\varepsilon_{t-1}^2-f_t''(\theta)\varepsilon_t\varepsilon_{t-1}\right]. \tag{2.18}$$

Once $\vec\theta_{m+1}$ has been calculated, a new estimate $A_{m+1}$ is found from

$$A_{m+1}=A_m-\frac{A_m\,\Delta g_{m+1}\left(\Delta g_{m+1}\right)^T A_m}{\left(\Delta g_{m+1}\right)^T A_m\,\Delta g_{m+1}}+\frac{\Delta\vec\theta_{m+1}\left(\Delta\vec\theta_{m+1}\right)^T}{\left(\Delta g_{m+1}\right)^T\Delta\vec\theta_{m+1}}, \tag{2.19}$$

where

$$\Delta\vec\theta_{m+1}=\vec\theta_{m+1}-\vec\theta_m,\qquad \Delta g_{m+1}=g\left(\vec\theta_{m+1}\right)-g\left(\vec\theta_m\right). \tag{2.20}$$

It is well known that the least squares estimators in the ordinary linear regression model are very good estimators, so the recursive procedure starts the iteration with the least squares estimators of $\beta$ and $\sigma^2$, respectively, and takes an initial $\theta$ such that $f_t(\theta)=0$. Iterations are stopped when some termination criterion is reached, for example, if

$$\frac{\left\|\vec\theta_{m+1}-\vec\theta_m\right\|}{\left\|\vec\theta_m\right\|}<\delta, \tag{2.21}$$

for some prechosen small number $\delta>0$.
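The iteration above can be sketched in code. This is a minimal sketch, not the paper's implementation: the gradient comes from the closed forms (2.6)-(2.8), the exact maximization over $s$ in (2.13) is replaced by a crude backtracking search, and $A_m$ is reset to the identity whenever the DFP update stops producing an ascent direction. Function names, tolerances, and the simulated example are all illustrative assumptions.

```python
import numpy as np

def qml_dfp(y, x, f, df, max_iter=500, delta=1e-10):
    """Maximize the log-likelihood (2.5) over (sigma^2, beta, theta) by the
    DFP iteration (2.13)-(2.21).  f(t, th) and df(t, th) stand for f_t(theta)
    and f'_t(theta), vectorized over t = 2, ..., n."""
    n, d = x.shape
    tt = np.arange(2, n + 1)

    def grad_ll(p):
        s2, beta, th = p[0], p[1:d + 1], p[d + 1]
        eps = y - x @ beta
        ft = f(tt, th)
        r = eps[1:] - ft * eps[:-1]                       # residuals eta_t
        g = np.empty(d + 2)
        g[0] = -(n - 1) / (2 * s2) + (r @ r) / (2 * s2 ** 2)                    # (2.6)
        g[1:d + 1] = ((x[1:] - ft[:, None] * x[:-1]) * r[:, None]).sum(0) / s2  # (2.8)
        g[d + 1] = (df(tt, th) * r) @ eps[:-1] / s2                             # (2.7)
        ll = -0.5 * (n - 1) * np.log(s2) - (r @ r) / (2 * s2)
        return g, ll

    # start from least squares values and a theta with f_t(theta) = 0
    beta = np.linalg.lstsq(x, y, rcond=None)[0]
    res = y - x @ beta
    p = np.concatenate(([res @ res / n], beta, [0.0]))
    A = np.eye(d + 2)
    g, ll = grad_ll(p)
    for _ in range(max_iter):
        step = A @ g
        if g @ step <= 0:              # A lost positive definiteness: reset
            A, step = np.eye(d + 2), g
        s, ok = 1.0, False
        while s > 1e-12:               # backtracking stand-in for the
            p_new = p + s * step       # exact maximization over s in (2.13)
            if p_new[0] > 0:
                g_new, ll_new = grad_ll(p_new)
                if ll_new > ll:
                    ok = True
                    break
            s *= 0.5
        if not ok:
            break
        dp, dg = p_new - p, g_new - g
        if abs(dg @ dp) > 1e-12 and abs(dg @ A @ dg) > 1e-12:
            A = (A - np.outer(A @ dg, A @ dg) / (dg @ A @ dg)
                 + np.outer(dp, dp) / (dg @ dp))          # DFP update (2.19)
        done = np.linalg.norm(dp) < delta * np.linalg.norm(p)     # (2.21)
        p, g, ll = p_new, g_new, ll_new
        if done:
            break
    return p                           # (sigma^2_hat, beta_hat..., theta_hat)

# usage on simulated data with f_t(theta) = theta (constant coefficient)
rng = np.random.default_rng(3)
n, beta0, th0 = 400, np.array([1.0, -2.0]), 0.6
x = rng.normal(size=(n, 2))
eta = rng.normal(size=n)
e = np.empty(n)
e[0] = eta[0]
for t in range(1, n):
    e[t] = th0 * e[t - 1] + eta[t]
y = x @ beta0 + e
p_hat = qml_dfp(y, x, f=lambda t, th: th + 0.0 * t, df=lambda t, th: 1.0 + 0.0 * t)
```

On this easy example the recovered `p_hat` is close to $(\sigma_0^2,\beta_0,\theta_0)$; the ascent-direction guard makes the sketch degrade gracefully to gradient ascent when the quasi-Newton update misbehaves.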


Up to this point, we have obtained the values of the QML estimators when the function $f_t(\theta)=f(t,\theta)$ is known. However, $f_t(\theta)$ is never known in practice, so we have to estimate it. By (2.12) and (1.2), we obtain

$$\hat f\left(t,\hat\theta_n\right)=\frac{\hat\varepsilon_t}{\hat\varepsilon_{t-1}},\quad t=2,3,\ldots,n. \tag{2.22}$$

Based on the dataset $\{\hat f(t,\hat\theta_n),\ t=2,3,\ldots,n\}$, we may obtain an estimated function of $f(t,\theta)$ by some smoothing method (see Simonoff [43], Fan and Yao [33], Green and Silverman [44], Fan and Gijbels [45], etc.).
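As a sketch of such a smoothing step, one can kernel-smooth the raw ratios (2.22) over $t$. The Gaussian kernel and the bandwidth below are illustrative choices, not the paper's method.

```python
import numpy as np

def smooth_f(eps_hat, h=10.0):
    """Nadaraya-Watson smooth over t of the raw ratios f_hat(t) =
    eps_t / eps_{t-1} from (2.22).  Kernel and bandwidth h are ad hoc."""
    t = np.arange(2, len(eps_hat) + 1)          # paper's 1-based time index
    raw = eps_hat[1:] / eps_hat[:-1]            # (2.22)
    w = np.exp(-0.5 * ((t[:, None] - t[None, :]) / h) ** 2)
    return t, (w @ raw) / w.sum(axis=1)

# usage on a simulated AR error series
rng = np.random.default_rng(5)
n = 300
eps = np.empty(n)
eps[0] = rng.normal()
for i in range(1, n):
    eps[i] = 0.5 * eps[i - 1] + rng.normal()
ts, f_hat = smooth_f(eps)
```

In practice the raw ratios are heavy-tailed (the denominator $\hat\varepsilon_{t-1}$ can be near zero), so robust or local polynomial smoothers as in the references above are preferable; this block is purely illustrative.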

To obtain our results, the following conditions are sufficient.

(A1) $X_n=\sum_{t=2}^{n}x_t x_t^T$ is positive definite for sufficiently large $n$ and

$$\lim_{n\to\infty}\max_{1\le t\le n}x_t^T X_n^{-1}x_t=0, \tag{2.23}$$

$$\limsup_{n\to\infty}|\lambda|_{\max}\left(X_n^{-1/2}Z_n X_n^{-T/2}\right)<1, \tag{2.24}$$

where $Z_n=\frac{1}{2}\sum_{t=2}^{n}\left(x_t x_{t-1}^T+x_{t-1}x_t^T\right)$ and $|\lambda|_{\max}(\cdot)$ denotes the maximum in absolute value of the eigenvalues of a symmetric matrix.

(A2) There is a constant $\alpha>0$ such that

$$\sum_{j=1}^{t}\prod_{i=0}^{j-1}f_{t-i}^2(\theta)\le\alpha \tag{2.25}$$

for any $t\in\{1,2,\ldots,n\}$ and $\theta\in\Theta$.

(A3) The derivatives $f_t'(\theta)=df_t(\theta)/d\theta$ and $f_t''(\theta)=d^2f_t(\theta)/d\theta^2$ exist and are bounded for any $t$ and $\theta\in\Theta$.

Remark 2.2. Maller [12] applied condition (A1), and Kwoun and Yajima [15] used conditions (A2) and (A3); thus our conditions are general. (A1) delineates the class of $x_t$ for which our results hold in the sense required; it is further discussed by Maller in [12]. Kwoun and Yajima [15] call $\{e_t\}$ stable if $\operatorname{Var}(e_t)$ is bounded. Thus (A2) implies that $\{e_t\}$ is stable.

However, $\{e_t\}$ is not stationary. In fact, by (2.3), we obtain

$$\operatorname{Cov}\left(e_t,e_{t+k}\right)=\sigma_0^2\left[\prod_{i=0}^{k-1}f_{t+k-i}(\theta_0)+f_t(\theta_0)\prod_{i=0}^{k}f_{t+k-i}(\theta_0)+\cdots+\prod_{l=0}^{t-2}f_{t-l}(\theta_0)\prod_{i=0}^{t+k-2}f_{t+k-i}(\theta_0)\right], \tag{2.26}$$

which depends on $t$.
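This dependence on $t$ is easy to see numerically: $\operatorname{Cov}(e_t,e_{t+1})=f_{t+1}(\theta_0)\operatorname{Var}(e_t)$, so a short Monte Carlo with a time-varying coefficient gives different lag-1 covariances at different $t$. The coefficient function below is a hypothetical choice for illustration.

```python
import numpy as np

# Monte Carlo illustration that {e_t} is not stationary: the lag-1
# covariance Cov(e_t, e_{t+1}) changes with t when f_t varies in t.
rng = np.random.default_rng(11)
f = lambda t, th: th * np.sin(t)            # hypothetical coefficient
th0, reps = 0.9, 100_000

e_prev = rng.normal(size=reps)              # e_1 = eta_1, sigma0^2 = 1
cov = {}
for t in range(2, 7):
    e_next = f(t, th0) * e_prev + rng.normal(size=reps)
    cov[t - 1] = np.mean(e_prev * e_next)   # estimates Cov(e_{t-1}, e_t)
    e_prev = e_next
```

For instance, `cov[1]` estimates $f_2(\theta_0)\operatorname{Var}(e_1)=0.9\sin 2$, while `cov[5]` is far from it, confirming nonstationarity.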


For ease of exposition, we introduce the following notation, which will be used later in the paper.

Define the $(d+1)$-vector $\phi=(\beta^T,\theta)^T$, and

$$S_n(\phi)=\sigma^2\frac{\partial\Psi_n}{\partial\phi}=\sigma^2\left(\frac{\partial\Psi_n}{\partial\beta^T},\frac{\partial\Psi_n}{\partial\theta}\right)^T,\qquad F_n(\phi)=-\sigma^2\frac{\partial^2\Psi_n}{\partial\phi\,\partial\phi^T}. \tag{2.27}$$

By (2.7) and (2.8), we get

$$F_n(\phi)=\begin{pmatrix}X_n(\theta)&\displaystyle\sum_{t=2}^{n}\left[f_t'(\theta)\varepsilon_{t-1}x_t+f_t'(\theta)\varepsilon_t x_{t-1}-2f_t(\theta)f_t'(\theta)x_{t-1}\varepsilon_{t-1}\right]\\*&\displaystyle\sum_{t=2}^{n}\left[\left(f_t'^2(\theta)+f_t''(\theta)f_t(\theta)\right)\varepsilon_{t-1}^2-f_t''(\theta)\varepsilon_t\varepsilon_{t-1}\right]\end{pmatrix}, \tag{2.28}$$

where $X_n(\theta)=-\sigma^2\,\partial^2\Psi_n/\partial\beta\,\partial\beta^T$ and the $*$ indicates that the element is filled in by symmetry.

Thus,

$$D_n=E\,F_n(\phi_0)=\begin{pmatrix}X_n(\theta_0)&0\\*&\displaystyle\sum_{t=2}^{n}\left[\left(f_t'^2(\theta_0)+f_t''(\theta_0)f_t(\theta_0)\right)E e_{t-1}^2-f_t''(\theta_0)E e_t e_{t-1}\right]\end{pmatrix}=\begin{pmatrix}X_n(\theta_0)&0\\*&\displaystyle\sum_{t=2}^{n}f_t'^2(\theta_0)E e_{t-1}^2\end{pmatrix}=\begin{pmatrix}X_n(\theta_0)&0\\*&\Delta_n(\theta_0,\sigma_0)\end{pmatrix}, \tag{2.29}$$

where the second equality uses $E e_t e_{t-1}=f_t(\theta_0)E e_{t-1}^2$, and

$$\Delta_n(\theta_0,\sigma_0)=\sum_{t=2}^{n}f_t'^2(\theta_0)E e_{t-1}^2=\sigma_0^2\sum_{t=2}^{n}f_t'^2(\theta_0)\sum_{j=0}^{t-2}\prod_{i=0}^{j-1}f_{t-1-i}^2(\theta_0)=O(n). \tag{2.30}$$

### 3. Statement of Main Results

Theorem 3.1. Suppose that conditions (A1)–(A3) hold. Then there is a sequence $A_n\to 0$ such that, for each $A>0$, as $n\to\infty$, the probability

$$P\left\{\text{there are estimators }\left(\hat\phi_n,\hat\sigma_n^2\right)\text{ with }S_n\left(\hat\phi_n\right)=0\text{ and }\left(\hat\phi_n,\hat\sigma_n^2\right)\in\widetilde N_n(A)\right\}\longrightarrow 1. \tag{3.1}$$


Furthermore,

$$\left(\hat\phi_n,\hat\sigma_n^2\right)\stackrel{p}{\longrightarrow}\left(\phi_0,\sigma_0^2\right),\quad n\longrightarrow\infty, \tag{3.2}$$

where, for each $n=1,2,\ldots$, $A>0$, and $A_n\in(0,\sigma_0^2)$, we define the neighborhoods

$$N_n(A)=\left\{\phi\in\mathbb{R}^{d+1}:\left(\phi-\phi_0\right)^T D_n\left(\phi-\phi_0\right)\le A^2\right\},\qquad \widetilde N_n(A)=N_n(A)\cap\left\{\sigma^2:\sigma^2\in\left[\sigma_0^2-A_n,\ \sigma_0^2+A_n\right]\right\}. \tag{3.3}$$

Theorem 3.2. Suppose that conditions (A1)–(A3) hold. Then

$$\frac{1}{\hat\sigma_n}F_n^{T/2}\left(\hat\phi_n\right)\left(\hat\phi_n-\phi_0\right)\stackrel{D}{\longrightarrow}N\left(0,I_{d+1}\right),\quad n\longrightarrow\infty. \tag{3.4}$$

Remark 3.3. For $\theta\in\mathbb{R}^m$, $m\in\mathbb{N}$, our results still hold.

In the following, we investigate some special cases of the model (1.1)-(1.2). Although the following results are obtained directly from Theorems 3.1 and 3.2, we state them in order to compare them with the corresponding known results.

Corollary 3.4. Let $f_t(\theta)=\theta$. If condition (A1) holds, then, for $|\theta|\neq 1$, (3.1), (3.2), and (3.4) hold.

Remark 3.5. These results are the same as the corresponding results of Maller [12].

Corollary 3.6. If $\beta=0$ and $f_t(\theta)=\theta$, then, for $|\theta|\neq 1$,

$$\frac{\sqrt{\sum_{t=2}^{n}\varepsilon_{t-1}^2}}{\hat\sigma_n}\left(\hat\theta_n-\theta_0\right)\stackrel{D}{\longrightarrow}N(0,1),\quad n\longrightarrow\infty, \tag{3.5}$$

where

$$\hat\sigma_n^2=\frac{1}{n-1}\sum_{t=2}^{n}\left(\varepsilon_t-\hat\theta_n\varepsilon_{t-1}\right)^2,\qquad \hat\theta_n=\frac{\sum_{t=2}^{n}\varepsilon_t\varepsilon_{t-1}}{\sum_{t=2}^{n}\varepsilon_{t-1}^2}. \tag{3.6}$$
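In this special case the estimators (3.6) are available in closed form, so they can be computed directly; the simulated series and parameter values below are illustrative.

```python
import numpy as np

def ar1_qml(eps):
    """Closed-form QML / least squares estimators (3.6) for the case
    beta = 0, f_t(theta) = theta."""
    th = (eps[1:] @ eps[:-1]) / (eps[:-1] @ eps[:-1])     # theta_hat_n
    s2 = np.sum((eps[1:] - th * eps[:-1]) ** 2) / (len(eps) - 1)
    return th, s2

# usage: stationary AR(1) with theta_0 = 0.5, sigma_0^2 = 1
rng = np.random.default_rng(7)
n, th0 = 5000, 0.5
eps = np.empty(n)
eps[0] = rng.normal()
for t in range(1, n):
    eps[t] = th0 * eps[t - 1] + rng.normal()
th_hat, s2_hat = ar1_qml(eps)
```

Both estimates come out close to the true values, consistent with the consistency statement (3.2).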

Remark 3.7. These estimators are the same as the least squares estimators (see White [16]). For $|\theta|>1$, $\{\varepsilon_t\}$ are explosive processes; in this case, the corollary is the same as the results of White [17]. When $|\theta|<1$, notice that $\hat\sigma_n^2\stackrel{p}{\to}\sigma_0^2$ and $\frac{1}{n-1}\sum_{t=2}^{n}\varepsilon_{t-1}^2\stackrel{p}{\to}\sigma_0^2/(1-\theta_0^2)$, and by Corollary 3.6 we obtain

$$\sqrt{n}\left(\hat\theta_n-\theta_0\right)\stackrel{D}{\longrightarrow}N\left(0,1-\theta_0^2\right). \tag{3.7}$$

This result was discussed by many authors, such as Fujikoshi and Ochi [46] and Brockwell and Davis [19].


Corollary 3.8. Let $\beta=0$. If conditions (A2) and (A3) hold, then

$$\frac{F_n^{1/2}\left(\hat\theta_n\right)}{\hat\sigma_n}\left(\hat\theta_n-\theta_0\right)\stackrel{D}{\longrightarrow}N(0,1),\quad n\longrightarrow\infty, \tag{3.8}$$

where

$$F_n\left(\hat\theta_n\right)=\sum_{t=2}^{n}\left[\left(f_t'^2\left(\hat\theta_n\right)+f_t''\left(\hat\theta_n\right)f_t\left(\hat\theta_n\right)\right)\varepsilon_{t-1}^2-f_t''\left(\hat\theta_n\right)\varepsilon_t\varepsilon_{t-1}\right],\qquad \hat\sigma_n^2=\frac{1}{n-1}\sum_{t=2}^{n}\left(\varepsilon_t-f_t\left(\hat\theta_n\right)\varepsilon_{t-1}\right)^2. \tag{3.9}$$

Corollary 3.9. Let $f_t(\theta)=a_t$. If condition (A1) holds, then

$$\frac{1}{\hat\sigma_n}\left(\sum_{t=2}^{n}\left(x_t-a_t x_{t-1}\right)\left(x_t-a_t x_{t-1}\right)^T\right)^{T/2}\left(\hat\beta_n-\beta_0\right)\stackrel{D}{\longrightarrow}N\left(0,I_d\right),\quad n\longrightarrow\infty. \tag{3.10}$$

Remark 3.10. Let $a_t\equiv 0$. Noting that $\sum_{t=2}^{n}x_t x_t^T=O(n)$ and $\hat\sigma_n^2\stackrel{p}{\to}\sigma_0^2$, we easily obtain from the corollary the asymptotic normality of the quasi-maximum likelihood (or least squares) estimator in the ordinary linear regression model.

### 4. Some Lemmas

To prove Theorems 3.1 and 3.2, we first introduce the following lemmas.

Lemma 4.1. The matrix $D_n$ is positive definite for large enough $n$, with $E\,S_n(\phi_0)=0$ and $\operatorname{Var}\,S_n(\phi_0)=\sigma_0^2 D_n$.

Proof. It is easy to show that the matrix $D_n$ is positive definite for large enough $n$. By (2.8), we have

$$\sigma_0^2 E\left(\frac{\partial\Psi_n}{\partial\beta}\bigg|_{\beta=\beta_0}\right)=\sum_{t=2}^{n}E\left(e_t-f_t(\theta_0)e_{t-1}\right)\left(x_t-f_t(\theta_0)x_{t-1}\right)=\sum_{t=2}^{n}\left(x_t-f_t(\theta_0)x_{t-1}\right)E\eta_t=0. \tag{4.1}$$

Note that $e_{t-1}$ and $\eta_t$ are independent of each other; thus by (2.7) and $E\eta_t=0$, we have

$$\sigma_0^2 E\left(\frac{\partial\Psi_n}{\partial\theta}\bigg|_{\theta=\theta_0}\right)=\sum_{t=2}^{n}E\left[\left(e_t-f_t(\theta_0)e_{t-1}\right)f_t'(\theta_0)e_{t-1}\right]=\sum_{t=2}^{n}f_t'(\theta_0)E\left(\eta_t e_{t-1}\right)=0. \tag{4.2}$$


Hence, from (4.1) and (4.2),

$$E\,S_n(\phi_0)=\sigma_0^2\,E\left(\frac{\partial\Psi_n}{\partial\beta^T}\bigg|_{\beta=\beta_0},\ \frac{\partial\Psi_n}{\partial\theta}\bigg|_{\theta=\theta_0}\right)^T=0. \tag{4.3}$$

By (2.8) and (2.17), we have

$$\operatorname{Var}\left(\sigma_0^2\frac{\partial\Psi_n}{\partial\beta}\bigg|_{\beta=\beta_0}\right)=\operatorname{Var}\left(\sum_{t=2}^{n}\left(e_t-f_t(\theta_0)e_{t-1}\right)\left(x_t-f_t(\theta_0)x_{t-1}\right)\right)=\operatorname{Var}\left(\sum_{t=2}^{n}\eta_t\left(x_t-f_t(\theta_0)x_{t-1}\right)\right)=\sigma_0^2 X_n(\theta_0). \tag{4.4}$$

Note that $\{f_t'(\theta_0)\eta_t e_{t-1},H_t\}$ is a martingale difference sequence with

$$\operatorname{Var}\left(f_t'(\theta_0)\eta_t e_{t-1}\right)=f_t'^2(\theta_0)\,E\eta_t^2\,E e_{t-1}^2=\sigma_0^2 f_t'^2(\theta_0)E e_{t-1}^2, \tag{4.5}$$

so

$$\operatorname{Var}\left(\sigma_0^2\frac{\partial\Psi_n}{\partial\theta}\bigg|_{\theta=\theta_0}\right)=\operatorname{Var}\left(\sum_{t=2}^{n}\eta_t f_t'(\theta_0)e_{t-1}\right)=\sigma_0^2\sum_{t=2}^{n}f_t'^2(\theta_0)E e_{t-1}^2=\sigma_0^2\,\Delta_n(\theta_0,\sigma_0). \tag{4.6}$$

By (2.7) and (2.8), and noting that $e_{t-1}$ and $\eta_t$ are independent of each other, we have

$$\begin{aligned}
\operatorname{Cov}\left(\sigma_0^2\frac{\partial\Psi_n}{\partial\beta}\bigg|_{\beta=\beta_0},\ \sigma_0^2\frac{\partial\Psi_n}{\partial\theta}\bigg|_{\theta=\theta_0}\right)
&=E\left(\sum_{t=2}^{n}\eta_t^2\left(x_t-f_t(\theta_0)x_{t-1}\right)f_t'(\theta_0)e_{t-1}\right)\\
&\quad+E\left(\sum_{t=3}^{n}\eta_t\left(x_t-f_t(\theta_0)x_{t-1}\right)\sum_{s=2}^{t-1}\eta_s f_s'(\theta_0)e_{s-1}\right)\\
&\quad+E\left(\sum_{s=3}^{n}\eta_s f_s'(\theta_0)e_{s-1}\sum_{t=2}^{s-1}\eta_t\left(x_t-f_t(\theta_0)x_{t-1}\right)\right)=0.
\end{aligned} \tag{4.7}$$

From (4.4)–(4.7), it follows that $\operatorname{Var}\,S_n(\phi_0)=\sigma_0^2 D_n$.


Lemma 4.2. If condition (A1) holds, then, for any $\theta\in\Theta$, the matrix $X_n(\theta)$ is positive definite for large enough $n$, and

$$\lim_{n\to\infty}\max_{1\le t\le n}x_t^T X_n^{-1}(\theta)x_t=0. \tag{4.8}$$

Proof. Let $\lambda_1$ and $\lambda_d$ be the smallest and largest roots of $\left|Z_n-\lambda X_n\right|=0$. Then from Rao [47, Ex. 22.1],

$$\lambda_1\le\frac{u^T Z_n u}{u^T X_n u}\le\lambda_d \tag{4.9}$$

for unit vectors $u$. Thus by (2.24), there are some $\delta\in\left(\max\left\{0,\ 1-\left(1+\min_{2\le t\le n}f_t^2(\theta)\right)\Big/\max_{2\le t\le n}\left|f_t(\theta)\right|\right\},\ 1\right)$ and $n_0(\delta)$ such that $n\ge n_0$ implies

$$u^T Z_n u\le(1-\delta)\,u^T X_n u. \tag{4.10}$$

By (4.10), we have

$$\begin{aligned}
u^T X_n(\theta)u&=\sum_{t=2}^{n}\left(u^T\left(x_t-f_t(\theta)x_{t-1}\right)\right)^2\\
&=\sum_{t=2}^{n}\left[\left(u^T x_t\right)^2+f_t^2(\theta)\left(u^T x_{t-1}\right)^2-f_t(\theta)\,u^T x_{t-1}x_t^T u-f_t(\theta)\,u^T x_t x_{t-1}^T u\right]\\
&\ge\sum_{t=2}^{n}\left(u^T x_t\right)^2+\min_{2\le t\le n}f_t^2(\theta)\sum_{t=2}^{n}\left(u^T x_{t-1}\right)^2-\max_{2\le t\le n}\left|f_t(\theta)\right|\,u^T Z_n u\\
&\ge u^T X_n u+\min_{2\le t\le n}f_t^2(\theta)\,u^T X_n u-\max_{2\le t\le n}\left|f_t(\theta)\right|(1-\delta)\,u^T X_n u\\
&=\left(1+\min_{2\le t\le n}f_t^2(\theta)-\max_{2\le t\le n}\left|f_t(\theta)\right|(1-\delta)\right)u^T X_n u=C(\theta,\delta)\,u^T X_n u.
\end{aligned} \tag{4.11}$$

By Rao [47, page 60] and (2.23), we have, uniformly in unit vectors $u$,

$$\frac{\left(u^T x_t\right)^2}{u^T X_n u}\longrightarrow 0. \tag{4.12}$$


From (4.12) and $C(\theta,\delta)>0$,

$$x_t^T X_n^{-1}(\theta)x_t=\sup_{u}\frac{\left(u^T x_t\right)^2}{u^T X_n(\theta)u}\le\sup_{u}\frac{\left(u^T x_t\right)^2}{C(\theta,\delta)\,u^T X_n u}\longrightarrow 0. \tag{4.13}$$

Lemma 4.3 (see [48]). Let $W_n$ be a symmetric random matrix with eigenvalues $\lambda_{jn}$, $1\le j\le d$. Then

$$W_n\stackrel{p}{\longrightarrow}I\Longleftrightarrow\lambda_{jn}\stackrel{p}{\longrightarrow}1,\quad n\longrightarrow\infty. \tag{4.14}$$

Lemma 4.4. For each $A>0$,

$$\sup_{\phi\in N_n(A)}\left\|D_n^{-1/2}F_n(\phi)D_n^{-T/2}-\Phi_n\right\|\stackrel{p}{\longrightarrow}0,\quad n\longrightarrow\infty, \tag{4.15}$$

and also

$$\Phi_n\stackrel{D}{\longrightarrow}\Phi, \tag{4.16}$$

$$\lim_{c\to 0}\limsup_{A\to\infty}\limsup_{n\to\infty}P\left\{\inf_{\phi\in N_n(A)}\lambda_{\min}\left(D_n^{-1/2}F_n(\phi)D_n^{-T/2}\right)\le c\right\}=0, \tag{4.17}$$

where

$$\Phi_n=\begin{pmatrix}I_d&0\\0&\dfrac{\sum_{t=2}^{n}f_t'^2(\theta_0)e_{t-1}^2}{\Delta_n(\theta_0,\sigma_0)}\end{pmatrix},\qquad \Phi=I_{d+1}. \tag{4.18}$$

Proof. Let $X_n(\theta_0)=X_n^{1/2}(\theta_0)X_n^{T/2}(\theta_0)$ be a square root decomposition of $X_n(\theta_0)$. Then

$$D_n=\begin{pmatrix}X_n^{1/2}(\theta_0)&0\\0&\sqrt{\Delta_n(\theta_0,\sigma_0)}\end{pmatrix}\begin{pmatrix}X_n^{T/2}(\theta_0)&0\\0&\sqrt{\Delta_n(\theta_0,\sigma_0)}\end{pmatrix}=D_n^{1/2}D_n^{T/2}. \tag{4.19}$$

Let $\phi\in N_n(A)$. Then

$$\left(\phi-\phi_0\right)^T D_n\left(\phi-\phi_0\right)=\left(\beta-\beta_0\right)^T X_n(\theta_0)\left(\beta-\beta_0\right)+\left(\theta-\theta_0\right)^2\Delta_n(\theta_0,\sigma_0)\le A^2. \tag{4.20}$$


From (2.28), (2.29), and (4.18),

$$D_n^{-1/2}F_n(\phi)D_n^{-T/2}-\Phi_n=\begin{pmatrix}X_n^{-1/2}(\theta_0)X_n(\theta)X_n^{-T/2}(\theta_0)-I_d&\dfrac{X_n^{-1/2}(\theta_0)\sum_{t=2}^{n}\left[f_t'(\theta)\varepsilon_{t-1}x_t+f_t'(\theta)\varepsilon_t x_{t-1}-2f_t(\theta)f_t'(\theta)\varepsilon_{t-1}x_{t-1}\right]}{\sqrt{\Delta_n(\theta_0,\sigma_0)}}\\[3mm]*&\dfrac{\sum_{t=2}^{n}\left[\left(f_t'^2(\theta)+f_t''(\theta)f_t(\theta)\right)\varepsilon_{t-1}^2-f_t''(\theta)\varepsilon_t\varepsilon_{t-1}\right]-\sum_{t=2}^{n}f_t'^2(\theta_0)e_{t-1}^2}{\Delta_n(\theta_0,\sigma_0)}\end{pmatrix}. \tag{4.21}$$

Let

$$N_n^{\beta}(A)=\left\{\beta:\left(\beta-\beta_0\right)^T X_n(\theta_0)\left(\beta-\beta_0\right)\le A^2\right\}, \tag{4.22}$$

$$N_n^{\theta}(A)=\left\{\theta:\left|\theta-\theta_0\right|\le\frac{A}{\sqrt{\Delta_n(\theta_0,\sigma_0)}}\right\}. \tag{4.23}$$

In the first step, we will show that, for each $A>0$,

$$\sup_{\theta\in N_n^{\theta}(A)}\left\|X_n^{-1/2}(\theta_0)X_n(\theta)X_n^{-T/2}(\theta_0)-I_d\right\|\longrightarrow 0,\quad n\longrightarrow\infty. \tag{4.24}$$

In fact, note that

$$X_n^{-1/2}(\theta_0)X_n(\theta)X_n^{-T/2}(\theta_0)-I_d=X_n^{-1/2}(\theta_0)\left(X_n(\theta)-X_n(\theta_0)\right)X_n^{-T/2}(\theta_0)=X_n^{-1/2}(\theta_0)\left(T_1+T_2+T_3\right)X_n^{-T/2}(\theta_0), \tag{4.25}$$

where

$$\begin{aligned}
T_1&=\sum_{t=2}^{n}\left(f_t(\theta_0)-f_t(\theta)\right)x_{t-1}\left(x_t-f_t(\theta_0)x_{t-1}\right)^T,\\
T_2&=\sum_{t=2}^{n}\left(f_t(\theta_0)-f_t(\theta)\right)\left(x_t-f_t(\theta_0)x_{t-1}\right)x_{t-1}^T,\\
T_3&=\sum_{t=2}^{n}\left(f_t(\theta_0)-f_t(\theta)\right)^2 x_{t-1}x_{t-1}^T.
\end{aligned} \tag{4.26}$$


Let $u,v\in\mathbb{R}^d$ with $|u|=|v|=1$, and let $u_n^T=u^T X_n^{-1/2}(\theta_0)$, $v_n=X_n^{-T/2}(\theta_0)v$. By the Cauchy-Schwarz inequality, Lemma 4.2, condition (A3), and noting that $\theta\in N_n^{\theta}(A)$, we have

$$\begin{aligned}
\left|u_n^T T_1 v_n\right|&=\left|\sum_{t=2}^{n}\left(f_t(\theta_0)-f_t(\theta)\right)u_n^T x_{t-1}\left(x_t-f_t(\theta_0)x_{t-1}\right)^T v_n\right|\\
&\le\max_{2\le t\le n}\left|f_t(\theta_0)-f_t(\theta)\right|\sum_{t=2}^{n}\left|u_n^T x_{t-1}\left(x_t-f_t(\theta_0)x_{t-1}\right)^T v_n\right|\\
&\le\max_{2\le t\le n}\left|f_t(\theta_0)-f_t(\theta)\right|\left(\sum_{t=2}^{n}u_n^T x_{t-1}x_{t-1}^T u_n\right)^{1/2}\left(\sum_{t=2}^{n}v_n^T\left(x_t-f_t(\theta_0)x_{t-1}\right)\left(x_t-f_t(\theta_0)x_{t-1}\right)^T v_n\right)^{1/2}\\
&\le\max_{2\le t\le n}\left|f_t(\theta_0)-f_t(\theta)\right|\left(\sum_{t=2}^{n}u_n^T x_t x_t^T u_n\right)^{1/2}\\
&\le\max_{2\le t\le n}\left|f_t'\left(\tilde\theta\right)\right|\left|\theta_0-\theta\right|\left(n\max_{1\le t\le n}x_t^T X_n^{-1}(\theta_0)x_t\right)^{1/2}\\
&\le CA\sqrt{\frac{n}{\Delta_n(\theta_0,\sigma_0)}}\,o(1)\longrightarrow 0.
\end{aligned} \tag{4.27}$$

Here $\tilde\theta=a\theta+(1-a)\theta_0$ for some $0\le a\le 1$. Similarly to the proof for $T_1$, we easily obtain

$$\left|u_n^T T_2 v_n\right|\longrightarrow 0. \tag{4.28}$$

By the Cauchy-Schwarz inequality, Lemma 4.2, condition (A3), and noting that $\theta\in N_n^{\theta}(A)$, we have

$$\begin{aligned}
\left|u_n^T T_3 v_n\right|&=\left|u_n^T\sum_{t=2}^{n}\left(f_t(\theta_0)-f_t(\theta)\right)^2 x_{t-1}x_{t-1}^T v_n\right|\\
&\le\max_{2\le t\le n}\left(f_t(\theta_0)-f_t(\theta)\right)^2\left(\sum_{t=2}^{n}u_n^T x_t x_t^T u_n\right)^{1/2}\left(\sum_{t=2}^{n}v_n^T x_t x_t^T v_n\right)^{1/2}\\
&\le n\max_{2\le t\le n}\left|f_t'\left(\tilde\theta\right)\right|^2\left|\theta_0-\theta\right|^2\max_{1\le t\le n}x_t^T X_n^{-1}(\theta_0)x_t\\
&\le\frac{nA^2}{\Delta_n(\theta_0,\sigma_0)}\,o(1)\longrightarrow 0.
\end{aligned} \tag{4.29}$$

Hence, (4.24) follows from (4.25)–(4.29).


In the second step, we will show that

$$\frac{X_n^{-1/2}(\theta_0)\sum_{t=2}^{n}\left[f_t'(\theta)\varepsilon_{t-1}x_t+f_t'(\theta)\varepsilon_t x_{t-1}-2f_t(\theta)f_t'(\theta)\varepsilon_{t-1}x_{t-1}\right]}{\sqrt{\Delta_n(\theta_0,\sigma_0)}}\stackrel{p}{\longrightarrow}0. \tag{4.30}$$

Note that

$$\varepsilon_t=y_t-x_t^T\beta=x_t^T\left(\beta_0-\beta\right)+e_t,\qquad \varepsilon_t-f_t(\theta_0)\varepsilon_{t-1}=\left(x_t-f_t(\theta_0)x_{t-1}\right)^T\left(\beta_0-\beta\right)+\eta_t. \tag{4.31}$$

Consider

$$\begin{aligned}
J&=\sum_{t=2}^{n}\left[f_t'(\theta)\varepsilon_{t-1}x_t+f_t'(\theta)\varepsilon_t x_{t-1}-2f_t(\theta)f_t'(\theta)\varepsilon_{t-1}x_{t-1}\right]\\
&=\sum_{t=2}^{n}\left[\varepsilon_{t-1}f_t'(\theta)\left(x_t-f_t(\theta_0)x_{t-1}\right)+f_t'(\theta)\left(\varepsilon_t-f_t(\theta_0)\varepsilon_{t-1}\right)x_{t-1}+2f_t'(\theta)\left(f_t(\theta_0)-f_t(\theta)\right)\varepsilon_{t-1}x_{t-1}\right]\\
&=T_1+T_2+T_3+T_4+2T_5+2T_6,
\end{aligned} \tag{4.32}$$

where

$$\begin{aligned}
T_1&=\sum_{t=2}^{n}f_t'(\theta)\,x_{t-1}^T\left(\beta_0-\beta\right)\left(x_t-f_t(\theta_0)x_{t-1}\right), & T_2&=\sum_{t=2}^{n}f_t'(\theta)\,e_{t-1}\left(x_t-f_t(\theta_0)x_{t-1}\right),\\
T_3&=\sum_{t=2}^{n}f_t'(\theta)\left(x_t-f_t(\theta_0)x_{t-1}\right)^T\left(\beta_0-\beta\right)x_{t-1}, & T_4&=\sum_{t=2}^{n}f_t'(\theta)\,\eta_t x_{t-1},\\
T_5&=\sum_{t=2}^{n}f_t'(\theta)\left(f_t(\theta_0)-f_t(\theta)\right)x_{t-1}^T\left(\beta_0-\beta\right)x_{t-1}, & T_6&=\sum_{t=2}^{n}f_t'(\theta)\left(f_t(\theta_0)-f_t(\theta)\right)e_{t-1}x_{t-1}.
\end{aligned} \tag{4.33}$$

For $\beta\in N_n^{\beta}(A)$ and each $A>0$, we have

$$\begin{aligned}
\left(\left(\beta_0-\beta\right)^T x_t\right)^2&=\left(\beta_0-\beta\right)^T X_n^{1/2}(\theta_0)X_n^{-1/2}(\theta_0)x_t x_t^T X_n^{-T/2}(\theta_0)X_n^{T/2}(\theta_0)\left(\beta_0-\beta\right)\\
&\le\max_{1\le t\le n}x_t^T X_n^{-1}(\theta_0)x_t\left(\beta_0-\beta\right)^T X_n(\theta_0)\left(\beta_0-\beta\right)\le A^2\max_{1\le t\le n}x_t^T X_n^{-1}(\theta_0)x_t.
\end{aligned} \tag{4.34}$$
