
Electronic Journal of Probability

Electron. J. Probab. 19 (2014), no. 1, 1–19.
ISSN: 1083-6489  DOI: 10.1214/EJP.v19-2894

An invariance principle for Brownian motion in random scenery

Yu Gu (Columbia University, USA)

Guillaume Bal (Columbia University, USA)

Abstract

We prove an invariance principle for Brownian motion in Gaussian or Poissonian random scenery by the method of characteristic functions. Annealed asymptotic limits are derived in all dimensions, with a focus on the case of dimension $d=2$, which is the main new contribution of the paper.

Keywords: weak convergence; random media; central limit theorem.

AMS MSC 2010: 60J65; 60F17; 60F05.

Submitted to EJP on June 26, 2013, final version accepted on December 31, 2013.

1 Introduction

In this paper, we study the asymptotic distributions of random processes of the form $\int_0^t V(B_s)\,ds$, with $V$ some stationary random potential and $B_s$, $s\in[0,1]$, a standard Brownian motion independent of $V$.

The corresponding discrete version is the Kesten-Spitzer model of random walk in random scenery [4], of the form $W_n = \sum_{k=1}^n \xi_{S_k}$. Here, $S_k = X_1+\ldots+X_k$ is a random walk on $\mathbb{Z}^d$ with i.i.d. increments and $\xi_n$, $n\in\mathbb{Z}^d$, are i.i.d. and independent of $X_i$. When $X_i$ and $\xi_i$ belong to the domain of attraction of certain stable laws, then after proper scaling $a(n)^{-1}W_{[nt]}$ converges weakly as $n\to\infty$ to a self-similar process with stationary increments. Non-stable limits may appear in that case. Assuming moreover that $\xi_i$ has zero mean and finite variance, it is shown in [2] that $(n\log n)^{-\frac12}W_{[nt]}$ converges weakly to a Brownian motion when $d=2$. When $d\ge3$, the argument contained in [4] essentially proves that $n^{-\frac12}W_{[nt]}$ converges weakly to a Brownian motion.
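The discrete model is easy to experiment with. The following Monte Carlo sketch (ours, not from the paper) simulates random walk in random scenery on $\mathbb{Z}^2$ with i.i.d. standard normal scenery; the printed ratio $\mathrm{Var}(W_n)/(n\log n)$ should roughly stabilize as $n$ grows, consistent with the $(n\log n)^{-\frac12}$ normalization quoted from [2] for $d=2$.

```python
import numpy as np

# Minimal illustration (not from the paper): random walk in random scenery on Z^2
# with i.i.d. N(0,1) scenery, checking that Var(W_n) grows roughly like n*log(n).
rng = np.random.default_rng(0)

def walk_in_scenery(n_steps):
    steps = rng.integers(0, 4, size=n_steps)
    moves = np.array([1, -1, 1j, -1j])[steps]      # encode Z^2 sites as complex numbers
    sites = np.cumsum(moves)                       # S_1, ..., S_n
    uniq, inverse = np.unique(sites, return_inverse=True)
    scenery = rng.standard_normal(len(uniq))       # one i.i.d. scenery value per visited site
    return scenery[inverse].sum()                  # W_n = sum_{k=1}^n xi_{S_k}

for n in [1000, 4000, 16000]:
    samples = np.array([walk_in_scenery(n) for _ in range(400)])
    print(n, samples.var() / (n * np.log(n)))      # ratio should roughly stabilize
```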

The continuous version $X_n(t) = a(n)^{-1}\int_0^{nt}V(B_s)\,ds$ has been analyzed in [9] for piecewise constant potentials given by $V(x) = \xi_{[x+U]}$, where $\xi_i$ are i.i.d. random variables with zero mean and finite variance, and $U$ is uniformly distributed in $[0,1)^d$ and independent of $\xi_i$. The results are similar to those obtained in the discrete setting. In [5], Kipnis and Varadhan proved central limit results in both the discrete and continuous settings for additive functionals of Markov processes. For the special case of Brownian motion in random scenery, and by adapting the point of view of the "medium seen from an observer", their results can be applied to prove an invariance principle for the most general class of $V(x)$ when $d\ge3$, including the ones analyzed in [9]. For more relevant results and background, see [6] and the references therein.

In this paper, we consider two types of simple yet important potentials, namely the Gaussian and Poissonian potentials, and derive the asymptotic distributions of $X_n(t) = a(n)^{-1}\int_0^{nt}V(B_s)\,ds$ in all dimensions by the method of characteristic functions. Since [8] contains the results for $d=1$ while [5] implies the results for $d\ge3$, our main contribution is the case $d=2$. For Gaussian and Poissonian potentials, the method of characteristic functions offers a relatively simple proof, which we present in all dimensions.

There are several physical motivations for studying functionals of the form $\int_0^t V(B_s)\,ds$. We mention two examples here. The first is the parabolic Anderson model $u_t = \frac12\Delta u + Vu$ with random potential $V$ and initial condition $f$. By the Feynman-Kac formula, the solution can be written as $u(t,x) = E_B^x\{f(B_t)\exp(\int_0^t V(B_s)\,ds)\}$, where $E_B^x$ denotes the expectation with respect to the Brownian motion starting from $x$. It is clear that the large time behavior of $u(t,x)$ is affected by the asymptotics of the Brownian functional $\exp(\int_0^t V(B_s)\,ds)$; see e.g. the applications in the context of homogenization [7, 8, 3]. As a second example, if we look at the model of a Brownian particle in a Poissonian obstacle denoted by $V$, then the integral $\int_0^t V(B_s)\,ds$ measures the total trapping energy received by the particle up to time $t$, and $\exp(-\int_0^t V(B_s)\,ds)$ is used to define the Gibbs measure. For Brownian motion in a Poissonian potential, many existing results are of large deviation type; see [10] for a review of such results.

The rest of the paper is organized as follows. We first describe the assumptions on the potentials and state our main theorems in section 2. We then prove the convergence of finite dimensional distributions and tightness results in section 3 for the non-degenerate case and section 4 for the degenerate case (when the power spectrum of the potential vanishes at the origin). We discuss possible applications and extensions of our results in section 5 and present some technical lemmas in an appendix.

Here are the notations used throughout the paper. We write $a\lesssim b$ when there exists a constant $C$ independent of $n$ such that $a\le Cb$. $N(\mu,\sigma^2)$ denotes the normal random variable with mean $\mu$ and variance $\sigma^2$, and $q_t(x)$ is the density function of $N(0,t)$. We use $a\wedge b=\min(a,b)$ and $a\vee b=\max(a,b)$. For multidimensional integrations, $\prod_i dx_i$ is abbreviated as $dx$.

2 Problem setup and main results

The Gaussian and Poissonian potentials are denoted by $V_g(x)$ and $V_p(x)$, respectively, throughout the paper.

For the Gaussian case, we assume $V_g(x)$ is stationary with zero mean and that the covariance function $R_g(x) = E\{V_g(x+y)V_g(y)\}$ is continuous and compactly supported. The power spectrum is
\[
\hat{R}_g(\xi) = \int_{\mathbb{R}^d} R_g(x)\,e^{-i\xi x}\,dx,
\]
and by Bochner's theorem $\hat{R}_g(0) = \int_{\mathbb{R}^d} R_g(x)\,dx \ge 0$.

For the Poissonian case, we assume
\[
V_p(x) = \int_{\mathbb{R}^d}\phi(x-y)\,\omega(dy) - c_p, \qquad (2.1)
\]
where $\omega(dx)$ is a Poissonian field in $\mathbb{R}^d$ with the $d$-dimensional Lebesgue measure as its intensity measure and $\phi$ is a continuous, compactly supported shape function such that $\int_{\mathbb{R}^d}\phi(x)\,dx = c_p$. It is straightforward to check that $V_p(x)$ is stationary and has zero mean, and that its covariance function
\[
R_p(x) = E\{V_p(x+y)V_p(y)\} = \int_{\mathbb{R}^d}\phi(x+z)\phi(z)\,dz \qquad (2.2)
\]
is continuous and compactly supported as well. The power spectrum is
\[
\hat{R}_p(\xi) = \int_{\mathbb{R}^d} R_p(x)\,e^{-i\xi x}\,dx,
\]
and since $\int_{\mathbb{R}^d}\phi(x)\,dx = c_p$, we have $\hat{R}_p(0) = c_p^2 \ge 0$.
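The Poissonian potential is straightforward to simulate. The sketch below (one possible discretization, ours) uses a triangular shape function $\phi$ in $d=1$, samples $V_p$ on a grid from a unit-intensity Poisson point process, and compares the empirical covariance with $R_p$ from (2.2).

```python
import numpy as np

# Sketch: sample V_p(x) = sum_j phi(x - y_j) - c_p on a 1-d grid and compare the
# empirical covariance at a few lags with R_p(r) = int phi(r + z) phi(z) dz.
rng = np.random.default_rng(2)

phi = lambda x: np.maximum(0.0, 1.0 - np.abs(x))   # illustrative bump: support [-1,1], c_p = 1
c_p = 1.0
L, dx = 2000.0, 0.05
grid = np.arange(0.0, L, dx)

def sample_Vp():
    pts = rng.uniform(-2.0, L + 2.0, rng.poisson(L + 4.0))   # Poisson points covering grid + support
    V = np.zeros_like(grid)
    for y in pts:
        V += phi(grid - y)
    return V - c_p

V = sample_Vp()
z = np.arange(-2.0, 2.0, 1e-3)
for r in [0.0, 0.5, 1.0, 1.5]:
    k = int(round(r / dx))
    emp = np.mean(V[:len(V) - k] * V[k:])          # spatial average as proxy for E{V_p(x+r) V_p(x)}
    exact = np.sum(phi(r + z) * phi(z)) * 1e-3     # R_p(r) by quadrature
    print(r, round(emp, 3), round(exact, 3))       # the two columns should roughly agree
```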

In the Poissonian case, the random field $V_p(x)$ is mixing in the following sense. For two Borel sets $A, B\subset\mathbb{R}^d$, let $\mathcal{F}_A$ and $\mathcal{F}_B$ denote the sub-$\sigma$-algebras generated by the field $V_p(x)$ for $x\in A$ and $x\in B$, respectively. Then there exists a positive and decreasing function $\varphi(r)$ such that
\[
|\mathrm{Cor}(\eta,\zeta)| \le \varphi(2d(A,B)) \qquad (2.3)
\]
for all square integrable random variables $\eta$ and $\zeta$ that are $\mathcal{F}_A$- and $\mathcal{F}_B$-measurable, respectively. The multiplicative factor $2$ is only here for convenience. Actually, when $|x|$ is sufficiently large, $V_p(x+y)$ is independent of $V_p(y)$, so the mixing coefficient $\varphi(r)$ can be chosen as a positive, decreasing function with compact support in $[0,\infty)$. We will use this in the estimation of the fourth moment of $V_p(x)$.

The following theorems are our main results.

Theorem 2.1. Let $B_t$, $t\ge0$, be a $d$-dimensional standard Brownian motion independent of the stationary random potential $V(x)$, which is chosen to be either Gaussian or Poissonian, let $R(x) = E\{V(x+y)V(y)\}$ be the covariance function, and let $\hat{R}(0) = \int_{\mathbb{R}^d}R(x)\,dx$. Define $X_n(t) = a(n)^{-1}\int_0^{nt}V(B_s)\,ds$ with the scaling factor
\[
a(n) =
\begin{cases}
n^{\frac34} & d=1,\\
(n\log n)^{\frac12} & d=2,\\
n^{\frac12} & d\ge3.
\end{cases}
\]
Then $X_n(t)$ converges weakly in $\mathcal{C}([0,1])$ to $\sigma_d Z_t$ with the following representations:

When $d=1$, $Z_t = \int_{\mathbb{R}}L_t(x)\,W(dx)$, where $L_t(x)$ is the local time of $B_t$ and $W(dx)$ is a $1$-dimensional white noise independent of $L_t(x)$; $\sigma_d = \sqrt{\hat{R}(0)}$.

When $d=2$, $Z_t$ is a standard Brownian motion; $\sigma_d = \sqrt{\hat{R}(0)/\pi}$.

When $d\ge3$, $Z_t$ is a standard Brownian motion;
\[
\sigma_d = \sqrt{\frac{1}{\pi^{\frac d2}}\,\Gamma\!\Big(\frac d2-1\Big)\int_{\mathbb{R}^d}\frac{R(x)}{|x|^{d-2}}\,dx}.
\]

We note that when $d\ge3$, $\sigma_d = \sqrt{4(2\pi)^{-d}\int_{\mathbb{R}^d}\hat{R}(\xi)|\xi|^{-2}\,d\xi}$. Since $\hat{R}(\xi)\ge0$, $\sigma_d>0$ in both cases and the limit is nontrivial. When $d\le2$, the limit is nontrivial if $\hat{R}(0)\ne0$. In the degenerate case when $\hat{R}(0)=0$, for instance if $c_p=0$ in the Poissonian case in $d=1,2$, the limit obtained in the previous theorem is trivial. The scaling factor $a(n)$ should then be chosen smaller to obtain a nontrivial limit. We prove the following result:

Theorem 2.2. In $d=1,2$, let $B_t$, $t\ge0$, be a $d$-dimensional standard Brownian motion independent of the stationary random potential $V(x)$, which is chosen to be either Gaussian or Poissonian, and let $R(x) = E\{V(x+y)V(y)\}$ with $\hat{R}(\xi)|\xi|^{-2}$ integrable. Define $X_n(t) = n^{-\frac12}\int_0^{nt}V(B_s)\,ds$. Then $X_n(t)$ converges weakly in $\mathcal{C}([0,1])$ to $\sigma W_t$, with $W_t$ a standard Brownian motion and $\sigma = \sqrt{4(2\pi)^{-d}\int_{\mathbb{R}^d}\hat{R}(\xi)|\xi|^{-2}\,d\xi}$.

Remark 2.3. In the degenerate case, the scaling factor $n^{\frac12}$ is the same as in $d\ge3$. In $d=1$, the limiting processes are different for the non-degenerate and degenerate cases. Since $\hat{R}(\xi)\in L^1$, for $\hat{R}(\xi)|\xi|^{-2}$ to be integrable we only need to assume that $\hat{R}(\xi)\lesssim|\xi|^{\alpha}$ at the origin, with $\alpha>1$ when $d=1$ and $\alpha>0$ when $d=2$.

We will refer to Theorems 2.1 and 2.2 as the non-degenerate and degenerate cases, respectively, in the following sections. The proof consists of the convergence of finite dimensional distributions and a tightness result.
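As a quick numerical sanity check (ours) of the two equivalent expressions for $\sigma_d$ when $d\ge3$ — the real-space formula in Theorem 2.1 and the Fourier-space formula in Theorem 2.2 — the quadrature below compares them for $d=3$ with a Gaussian-shaped $R$. The Gaussian is chosen only because its Fourier transform is explicit; it is not compactly supported as assumed in this section.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# Check that, for d = 3 and R(x) = exp(-|x|^2/2) (so R_hat(xi) = (2 pi)^(3/2) exp(-|xi|^2/2)),
#   Gamma(d/2 - 1)/pi^(d/2) * int R(x)|x|^(2-d) dx  ==  4 (2 pi)^(-d) * int R_hat(xi)|xi|^(-2) dxi.
d = 3
surface = 2 * np.pi**(d / 2) / gamma(d / 2)          # area of the unit sphere in R^3

real_space = (gamma(d / 2 - 1) / np.pi**(d / 2)) * surface \
    * quad(lambda r: np.exp(-r**2 / 2) * r, 0, np.inf)[0]          # radial form of int R(x)|x|^{2-d} dx

fourier = 4 * (2 * np.pi)**(-d) * surface * (2 * np.pi)**(d / 2) \
    * quad(lambda s: np.exp(-s**2 / 2), 0, np.inf)[0]              # radial form of int R_hat(xi)|xi|^{-2} dxi

print(real_space, fourier)   # both equal sigma_3^2 for this R; they should agree (= 4.0)
```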

3 Non-degenerate case when d ≥ 1

3.1 Convergence of finite dimensional distributions

We first prove the weak convergence of finite dimensional distributions through the estimation of characteristic functions.

For any $0=t_0<t_1<\ldots<t_N\le1$ and $\alpha_i\in\mathbb{R}$, $i=1,\ldots,N$, by considering $Y_N := \sum_{i=1}^N\alpha_i(X_n(t_i)-X_n(t_{i-1}))$, we have the following explicit expressions.

In the Gaussian case, since $Y_N = \sum_{i=1}^N\alpha_i a(n)^{-1}\int_{nt_{i-1}}^{nt_i}V_g(B_s)\,ds$, we obtain
\[
E\{\exp(i\theta Y_N)\} = E\{E\{\exp(i\theta Y_N)\,|\,B\}\}
= E\Big\{\exp\Big(-\frac12\theta^2\sum_{i,j=1}^N\alpha_i\alpha_j\frac{1}{a(n)^2}\int_{nt_{i-1}}^{nt_i}\int_{nt_{j-1}}^{nt_j}R_g(B_s-B_u)\,ds\,du\Big)\Big\}. \qquad (3.1)
\]
Since $E\{\exp(i\theta Y_N)\,|\,B\}$ is bounded by $1$, to prove convergence of $E\{\exp(i\theta Y_N)\}$ we only need to prove the convergence in probability of $a(n)^{-2}\int_{nt_{i-1}}^{nt_i}\int_{nt_{j-1}}^{nt_j}R_g(B_s-B_u)\,ds\,du$.

In the Poissonian case, we write
\[
Y_N = \int_{\mathbb{R}^d}\Big(\sum_{i=1}^N\alpha_i\frac{1}{a(n)}\int_{nt_{i-1}}^{nt_i}\phi(B_s-y)\,ds\Big)\,\omega(dy) - \frac{c_p}{a(n)}\sum_{i=1}^N\alpha_i(nt_i-nt_{i-1}),
\]
and straightforward calculations lead to
\[
E\{\exp(i\theta Y_N)\} = E\Big\{\exp\Big(\int_{\mathbb{R}^d}\big(e^{i\theta F_n(y)}-1\big)\,dy\Big)\Big\}\exp\Big(-i\theta\frac{c_p}{a(n)}\sum_{i=1}^N\alpha_i(nt_i-nt_{i-1})\Big),
\]
where $F_n(y) := \sum_{i=1}^N\alpha_i a(n)^{-1}\int_{nt_{i-1}}^{nt_i}\phi(B_s-y)\,ds$. Since $\int_{\mathbb{R}^d}\phi(x)\,dx = c_p$, we obtain
\[
E\{\exp(i\theta Y_N)\} = E\{E\{\exp(i\theta Y_N)\,|\,B\}\} = E\Big\{\exp\Big(\int_{\mathbb{R}^d}\sum_{k=2}^{\infty}\frac{1}{k!}(i\theta F_n(y))^k\,dy\Big)\Big\}. \qquad (3.2)
\]
Similarly, $E\{\exp(i\theta Y_N)\,|\,B\}$ is bounded by $1$, so it suffices to show the convergence in probability of $\int_{\mathbb{R}^d}\sum_{k=2}^{\infty}\frac{1}{k!}(i\theta F_n(y))^k\,dy$. When $k=2$,
\[
\int_{\mathbb{R}^d}F_n(y)^2\,dy = \sum_{i,j=1}^N\alpha_i\alpha_j\,a(n)^{-2}\int_{nt_{i-1}}^{nt_i}\int_{nt_{j-1}}^{nt_j}R_p(B_s-B_u)\,ds\,du \qquad (3.3)
\]
is the conditional variance of $Y_N$ given $B_s$. We will see that the proof of the Poissonian case implies the Gaussian case.
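For the reader's convenience, the "straightforward calculations" leading to (3.2) are an instance of the exponential formula for Poisson random measures: conditionally on the Brownian path $B$, $F_n$ is deterministic and
\[
E\Big\{\exp\Big(i\theta\int_{\mathbb{R}^d}F_n(y)\,\omega(dy)\Big)\,\Big|\,B\Big\} = \exp\Big(\int_{\mathbb{R}^d}\big(e^{i\theta F_n(y)}-1\big)\,dy\Big),
\]
so that, since $\int_{\mathbb{R}^d}F_n(y)\,dy = \frac{c_p}{a(n)}\sum_{i=1}^N\alpha_i(nt_i-nt_{i-1})$ cancels the deterministic centering term in $Y_N$,
\[
E\{e^{i\theta Y_N}\,|\,B\} = \exp\Big(\int_{\mathbb{R}^d}\big(e^{i\theta F_n(y)}-1-i\theta F_n(y)\big)\,dy\Big) = \exp\Big(\int_{\mathbb{R}^d}\sum_{k=2}^{\infty}\frac{(i\theta F_n(y))^k}{k!}\,dy\Big),
\]
which is (3.2) after taking expectations over $B$.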

3.1.1 Poissonian case $d=1$

When $d=1$, since $V_p$ is mixing, [8, Theorem 2] implies the result. We give a different proof using characteristic functions.

First of all, by the scaling property of Brownian motion, we do not distinguish between $F_n(y) = \sum_{i=1}^N\alpha_i n^{-\frac34}\int_{nt_{i-1}}^{nt_i}\phi(B_s-y)\,ds$ and $F_n(y) = \sum_{i=1}^N\alpha_i n^{\frac14}\int_{t_{i-1}}^{t_i}\phi(\sqrt n B_s-y)\,ds$. Using the second representation, we have
\[
F_n(y) = \sum_{i=1}^N\alpha_i n^{\frac14}\int_{t_{i-1}}^{t_i}\phi(\sqrt n B_s-y)\,ds = \sum_{i=1}^N\alpha_i n^{\frac14}\int_{\mathbb{R}}\phi(\sqrt nx-y)\,(L_{t_i}(x)-L_{t_{i-1}}(x))\,dx,
\]
where $L_t(x)$ is the local time of $B_s$. So
\[
\int_{\mathbb{R}}F_n(y)^2\,dy = \sum_{i,j=1}^N\alpha_i\alpha_j\sqrt n\int_{\mathbb{R}^2}R_p(\sqrt n(z-x))\,(L_{t_i}(x)-L_{t_{i-1}}(x))(L_{t_j}(z)-L_{t_{j-1}}(z))\,dx\,dz. \qquad (3.4)
\]
We have the following two propositions.

Proposition 3.1. $\int_{\mathbb{R}}F_n(y)^2\,dy \to \hat{R}_p(0)\sum_{i,j=1}^N\alpha_i\alpha_j\int_{\mathbb{R}}(L_{t_i}(x)-L_{t_{i-1}}(x))(L_{t_j}(x)-L_{t_{j-1}}(x))\,dx$ almost surely.

Proposition 3.2. $\int_{\mathbb{R}}\sum_{k=3}^{\infty}\frac{1}{k!}(i\theta F_n(y))^k\,dy \to 0$ almost surely.

Proof of Proposition 3.1. For any $i,j=1,\ldots,N$, we consider
\[
(i) := \sqrt n\int_{\mathbb{R}^2}R_p(\sqrt n(z-x))\,(L_{t_i}(x)-L_{t_{i-1}}(x))(L_{t_j}(z)-L_{t_{j-1}}(z))\,dx\,dz
= \int_{\mathbb{R}^2}R_p(y)\,(L_{t_i}(x)-L_{t_{i-1}}(x))\Big(L_{t_j}\Big(x+\tfrac{y}{\sqrt n}\Big)-L_{t_{j-1}}\Big(x+\tfrac{y}{\sqrt n}\Big)\Big)\,dy\,dx,
\]
and because $L_t(x)$ is continuous with compact support almost surely, we have
\[
(i) \to \hat{R}_p(0)\int_{\mathbb{R}}(L_{t_i}(x)-L_{t_{i-1}}(x))(L_{t_j}(x)-L_{t_{j-1}}(x))\,dx,
\]
which completes the proof.
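The key scaling step above — that $\sqrt n\,R_p(\sqrt n\,\cdot)$ acts, against continuous functions, like $\hat{R}_p(0)$ times a delta function — can be checked numerically. In the sketch below (ours), smooth compactly supported $f, g$ stand in for the local-time increments, and $R_p$ is replaced by a simple integrable even kernel with unit mass; only $\hat{R}_p(0)$ enters the limit.

```python
import numpy as np

# Illustration of:  sqrt(n) int R_p(sqrt(n)(z-x)) f(x) g(z) dx dz  ->  R_hat_p(0) int f(x) g(x) dx.
Rp = lambda u: np.maximum(0.0, 1.0 - np.abs(u))      # even integrable kernel, R_hat_p(0) = 1
f = lambda x: np.exp(-x**2)                          # smooth stand-ins for L_{t_i} - L_{t_{i-1}}
g = lambda x: np.exp(-(x - 0.3)**2)

dx = 0.01
x = np.arange(-6.0, 6.0, dx)
X, Z = np.meshgrid(x, x, indexing="ij")
target = np.sum(f(x) * g(x)) * dx                    # R_hat_p(0) * int f g
for n in [10, 100, 1000, 10000]:
    lhs = np.sqrt(n) * np.sum(Rp(np.sqrt(n) * (Z - X)) * f(X) * g(Z)) * dx * dx
    print(n, round(lhs, 4), round(target, 4))        # lhs approaches target as n grows
```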

Proof of Proposition 3.2. For a fixed realization of $B_s$, we have
\[
|F_n(y)| \lesssim n^{\frac14}\int_{|x|<M}|\phi(\sqrt nx-y)|\,dx,
\]
where $M$ is a constant depending on the realization, and thus
\[
\Big|\int_{\mathbb{R}}F_n(y)^k\,dy\Big|
\lesssim n^{\frac k4}\int_{\mathbb{R}}\int_{[-M,M]^k}\prod_{i=1}^k|\phi(\sqrt nx_i-y)|\,dx\,dy
= n^{\frac k4}\int_{\mathbb{R}}\int_{[-M,M]^k}\prod_{i=1}^k|\phi(\sqrt n(x_i-x_1)+y)|\,dx\,dy
\]
\[
\le \frac{1}{n^{\frac{k-2}{4}}}\int_{\mathbb{R}^k}\int_{[-M,M]}|\phi(y)|\prod_{i=2}^k|\phi(x_i-\sqrt nx_1+y)|\,dx\,dy \lesssim \frac{1}{n^{\frac{k-2}{4}}}.
\]
Since $\sum_{k=3}^{\infty}\frac{1}{k!}\frac{|\theta|^k}{n^{\frac{k-2}{4}}}\to0$ as $n\to\infty$, the proof is complete.

Recalling (3.2), by Propositions 3.1 and 3.2 we have proved the almost sure convergence of the exponents. Therefore, by the Lebesgue dominated convergence theorem we have
\[
E\{\exp(i\theta Y_N)\} = E\Big\{\exp\Big(\int_{\mathbb{R}^d}\sum_{k=2}^{\infty}\frac{1}{k!}(i\theta F_n(y))^k\,dy\Big)\Big\}
\to E\Big\{\exp\Big(-\frac12\theta^2\hat{R}_p(0)\sum_{i,j=1}^N\alpha_i\alpha_j\int_{\mathbb{R}}(L_{t_i}(x)-L_{t_{i-1}}(x))(L_{t_j}(x)-L_{t_{j-1}}(x))\,dx\Big)\Big\}
= E\Big\{\exp\Big(i\theta\sigma_d\sum_{i=1}^N\alpha_i(Z_{t_i}-Z_{t_{i-1}})\Big)\Big\} \qquad (3.5)
\]
where $Z_t = \int_{\mathbb{R}}L_t(x)\,W(dx)$.

3.1.2 Poissonian case $d\ge2$

When $d\ge2$, the local time does not exist, and to prove the convergence of the conditional variance of $X_n(t)$ given $B_s$ we need to calculate fourth moments. First, we define
\[
V_n = E\{X_n(t)^2\,|\,B_s, s\in[0,nt]\} = \frac{1}{a(n)^2}\int_0^{nt}\int_0^{nt}R_p(B_s-B_u)\,ds\,du \qquad (3.6)
\]
so that $E\{X_n(t)^2\} = E\{V_n\}$. The following two lemmas show that the conditional variance converges in probability.

Lemma 3.3. $E\{V_n\}\to\sigma_d^2t$ as $n\to\infty$.

Lemma 3.4. $E\{V_n^2\}\to\sigma_d^4t^2$ as $n\to\infty$.

In the proofs, we deal with $d=2$ and $d\ge3$ in different ways. For the latter, we only use the fact that $\hat{R}_p(\xi)|\xi|^{-2}$ is integrable, so the proof also applies in the degenerate case. Both $R_p(x)$ and $\hat{R}_p(\xi)$ are even functions, a fact that we will use frequently in the proof.

Proof of Lemma 3.3. We first consider the case $d=2$. For fixed $x$, by the change of variables $\lambda = \frac{|x|^2}{2u}$, we have
\[
E\{V_n\} = \frac{2}{a(n)^2}\int_0^{nt}\int_0^s\int_{\mathbb{R}^d}R_p(x)\frac{1}{(2\pi u)^{\frac d2}}e^{-\frac{|x|^2}{2u}}\,dx\,du\,ds
= \frac{n}{a(n)^2}\int_0^t\int_{\mathbb{R}^d}\int_{\frac{|x|^2}{2ns}}^{\infty}R_p(x)\frac{1}{\pi^{\frac d2}}\lambda^{\frac d2-2}e^{-\lambda}\frac{1}{|x|^{d-2}}\,d\lambda\,dx\,ds. \qquad (3.7)
\]
Since $a(n) = (n\log n)^{\frac12}$, by integrations by parts in $\lambda$ we have
\[
E\{V_n\} = \frac{1}{\log n}\int_0^t\int_{\mathbb{R}^d}\frac{1}{\pi}R_p(x)\left(e^{-\frac{|x|^2}{2ns}}\log\frac{2ns}{|x|^2}+\int_{\frac{|x|^2}{2ns}}^{\infty}e^{-\lambda}\log\lambda\,d\lambda\right)dx\,ds \to \frac{t}{\pi}\hat{R}_p(0) \qquad (3.8)
\]
by the Lebesgue dominated convergence theorem.

Consider now the case $d\ge3$. Then $a(n) = n^{\frac12}$ and by Fourier transform we have
\[
E\{V_n\} = \frac{1}{(2\pi)^dn}\int_{[0,nt]^2}\int_{\mathbb{R}^d}\hat{R}_p(\xi)\,e^{-\frac12|\xi|^2|s-u|}\,d\xi\,ds\,du
= \frac{4}{(2\pi)^d}\int_{\mathbb{R}^d}\frac{\hat{R}_p(\xi)}{|\xi|^2}\int_0^t\big(1-e^{-\frac12|\xi|^2ns}\big)\,ds\,d\xi \to \frac{4t}{(2\pi)^d}\int_{\mathbb{R}^d}\frac{\hat{R}_p(\xi)}{|\xi|^2}\,d\xi \qquad (3.9)
\]
as $n\to\infty$.
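The $d=2$ limit (3.8) can also be verified by a one-dimensional quadrature. The sketch below (ours) takes $\phi$ to be the indicator of the unit disc — a convenient, if discontinuous, choice for which $R_p$ and $\hat{R}_p(0)=c_p^2=\pi^2$ are explicit — and evaluates $E\{V_n\}$ by reducing (3.7) with $d=2$ to exponential integrals via the substitution $\lambda=r^2/(2v)$; the output approaches $\frac{t}{\pi}\hat{R}_p(0)=\pi t$ only logarithmically in $n$, as the $(n\log n)^{\frac12}$ scaling suggests.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import expn

# For phi = indicator of the unit disc in R^2:
#   R_p(r) = 2*arccos(r/2) - (r/2)*sqrt(4 - r^2)  for r <= 2,   R_hat_p(0) = pi^2.
# Reducing (3.7) with d = 2 (our own reduction, lambda = r^2/(2v)) gives
#   E{V_n} = (2t/log n) * int_0^2 R_p(r) r [E_1(a) - E_2(a)] dr,   a = r^2/(2nt).
t = 1.0
Rp = lambda r: 2 * np.arccos(r / 2) - (r / 2) * np.sqrt(4 - r**2)
limit = (t / np.pi) * np.pi**2                      # = pi * t

for n in [1e4, 1e8, 1e12, 1e16]:
    integrand = lambda r: Rp(r) * r * (expn(1, r**2 / (2 * n * t)) - expn(2, r**2 / (2 * n * t)))
    EVn = 2 * t / np.log(n) * quad(integrand, 1e-9, 2.0, limit=200)[0]
    print(f"n = {n:.0e}:  E(V_n) ~= {EVn:.4f}   (limit = {limit:.4f})")
# Convergence is only logarithmic in n.
```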

Proof of Lemma 3.4. By symmetry of $R(x)$, we write
\[
V_n^2 = a(n)^{-4}\int_{[0,nt]^4}R_p(B_{s_1}-B_{s_2})R_p(B_{s_3}-B_{s_4})\,ds = 8\big((i)+(ii)+(iii)\big),
\]
where
\[
(i) = \frac{1}{a(n)^4}\int_{0<s_1<s_2<s_3<s_4<nt}R_p(B_{s_1}-B_{s_2})R_p(B_{s_3}-B_{s_4})\,ds, \qquad (3.10)
\]
\[
(ii) = \frac{1}{a(n)^4}\int_{0<s_1<s_3<s_2<s_4<nt}R_p(B_{s_1}-B_{s_2})R_p(B_{s_3}-B_{s_4})\,ds, \qquad (3.11)
\]
\[
(iii) = \frac{1}{a(n)^4}\int_{0<s_1<s_3<s_4<s_2<nt}R_p(B_{s_1}-B_{s_2})R_p(B_{s_3}-B_{s_4})\,ds. \qquad (3.12)
\]
We consider first the case $d=2$.

$(i)$: for fixed $x,y$, by the change of variables $u_1=\frac{s_1}{n}$, $u_3=\frac{s_3-s_2}{n}$, $\lambda_2=\frac{|x|^2}{2(s_2-s_1)}$, $\lambda_4=\frac{|y|^2}{2(s_4-s_3)}$, we have
\[
E\{(i)\} = \frac{n^2}{a(n)^4}\int_{\mathbb{R}_+^4}\int_{\mathbb{R}^{2d}}\frac{R_p(x)}{|x|^{d-2}}\frac{R_p(y)}{|y|^{d-2}}\frac{1}{4\pi^d}\lambda_2^{\frac d2-2}e^{-\lambda_2}\lambda_4^{\frac d2-2}e^{-\lambda_4}\,\mathbf{1}_{0\le u_1+u_3\le t;\,\frac{|x|^2}{2\lambda_2}+\frac{|y|^2}{2\lambda_4}\le n(t-u_1-u_3)}\,dx\,dy\,d\lambda_2\,d\lambda_4\,du_1\,du_3.
\]
We define
\[
f(c) = \frac{1}{(\log n)^2}\int_{\mathbb{R}_+^2}\frac{1}{\lambda_2}e^{-\lambda_2}\frac{1}{\lambda_4}e^{-\lambda_4}\,\mathbf{1}_{\frac{|x|^2}{2\lambda_2}\le cn(t-u_1-u_3);\,\frac{|y|^2}{2\lambda_4}\le cn(t-u_1-u_3)}\,d\lambda_2\,d\lambda_4
\]
for $c>0$. Using integrations by parts, $f(c)\to1$ as $n\to\infty$ as long as $x,y\ne0$, $u_1+u_3<t$. Moreover, $f(c)\lesssim(1+|\log c(t-u_1-u_3)|+|\log|x||)(1+|\log c(t-u_1-u_3)|+|\log|y||)$. On the other hand, we note that
\[
f\Big(\frac12\Big) \le \frac{1}{(\log n)^2}\int_{\mathbb{R}_+^2}\frac{1}{\lambda_2}e^{-\lambda_2}\frac{1}{\lambda_4}e^{-\lambda_4}\,\mathbf{1}_{\frac{|x|^2}{2\lambda_2}+\frac{|y|^2}{2\lambda_4}\le n(t-u_1-u_3)}\,d\lambda_2\,d\lambda_4 \le f(1),
\]
so by the Lebesgue dominated convergence theorem we have $E\{(i)\}\to\frac{t^2}{8\pi^2}\hat{R}_p(0)^2$.

$(ii)$: by a similar change of variables as for $(i)$, we have
\[
E\{(ii)\} = \frac{n^2}{a(n)^4}\int_{\mathbb{R}_+^4}\int_{\mathbb{R}^{3d}}\frac{R_p(x-z)}{|x|^{d-2}}\frac{R_p(y-z)}{|y|^{d-2}}\frac{1}{4\pi^d}\lambda_2^{\frac d2-2}e^{-\lambda_2}\lambda_4^{\frac d2-2}e^{-\lambda_4}\,\mathbf{1}_{0\le u_1+u_3\le t;\,\frac{|x|^2}{2\lambda_2}+\frac{|y|^2}{2\lambda_4}\le n(t-u_1-u_3)}\,q_{nu_3}(z)\,dx\,dy\,dz\,d\lambda_2\,d\lambda_4\,du_1\,du_3.
\]
By a change of variables and integration by parts in $\lambda_2,\lambda_4$, we have
\[
|E\{(ii)\}| \lesssim \Big(\frac{n}{\log n}\Big)^2\int_{[0,t]^2}\int_{\mathbb{R}^6}|R_p(\sqrt n(x-z))R_p(\sqrt n(y-z))|\,q_{u_3}(z)\left(e^{-\frac{|x|^2}{2u}}\log\frac{2u}{|x|^2}+\int_{\frac{|x|^2}{2u}}^{\infty}\log\lambda\,e^{-\lambda}\,d\lambda\right)\left(e^{-\frac{|y|^2}{2u}}\log\frac{2u}{|y|^2}+\int_{\frac{|y|^2}{2u}}^{\infty}\log\lambda\,e^{-\lambda}\,d\lambda\right)du\,du_3\,dx\,dy\,dz.
\]
Note that $e^{-\frac{|x|^2}{2u}}\log\frac{2u}{|x|^2}+\int_{\frac{|x|^2}{2u}}^{\infty}\log\lambda\,e^{-\lambda}\,d\lambda \lesssim 1+|\log u|+|\log|x||$. By Lemma A.1, we have
\[
\frac{n}{\log n}\int_{\mathbb{R}^2}|R_p(\sqrt n(x-z))|\left(e^{-\frac{|x|^2}{2u}}\log\frac{2u}{|x|^2}+\int_{\frac{|x|^2}{2u}}^{\infty}\log\lambda\,e^{-\lambda}\,d\lambda\right)dx \lesssim \frac{1}{\log n}\Big(1+|\log u|+|\log|z||+\log n\,\mathbf{1}_{|z|<\frac{2}{\sqrt n}}\Big).
\]
The integral in $y$ is controlled in the same way and we obtain
\[
|E\{(ii)\}| \lesssim \int_{[0,t]^2}\int_{\mathbb{R}^2}\frac{1}{(\log n)^2}\Big(1+|\log u|+|\log|z||+\log n\,\mathbf{1}_{|z|<\frac{2}{\sqrt n}}\Big)^2 q_{u_3}(z)\,dz\,du\,du_3.
\]
So $|E\{(ii)\}|\to0$ as $n\to\infty$.

$(iii)$: by a similar change of variables as for $(i)$ and by symmetry of $R(x)$, we have
\[
E\{(iii)\} = \frac{n^2}{a(n)^4}\int_{\mathbb{R}_+^4}\int_{\mathbb{R}^{3d}}\frac{R_p(x-y+z)}{|x|^{d-2}}\frac{R_p(y)}{|y|^{d-2}}\frac{1}{4\pi^d}\lambda_2^{\frac d2-2}e^{-\lambda_2}\lambda_4^{\frac d2-2}e^{-\lambda_4}\,\mathbf{1}_{0\le u_1+u_3\le t;\,\frac{|x|^2}{2\lambda_2}+\frac{|y|^2}{2\lambda_4}\le n(t-u_1-u_3)}\,q_{nu_3}(z)\,dx\,dy\,dz\,d\lambda_2\,d\lambda_4\,du_1\,du_3.
\]
After integrations by parts in $\lambda_2,\lambda_4$, we have
\[
|E\{(iii)\}| \lesssim \Big(\frac{n}{\log n}\Big)^2\int_{[0,t]^2}\int_{\mathbb{R}^6}|R_p(\sqrt n(x-y+z))R_p(\sqrt ny)|\,q_{u_3}(z)\left(e^{-\frac{|x|^2}{2u}}\log\frac{2u}{|x|^2}+\int_{\frac{|x|^2}{2u}}^{\infty}\log\lambda\,e^{-\lambda}\,d\lambda\right)\left(e^{-\frac{|y|^2}{2u}}\log\frac{2u}{|y|^2}+\int_{\frac{|y|^2}{2u}}^{\infty}\log\lambda\,e^{-\lambda}\,d\lambda\right)du\,du_3\,dx\,dy\,dz.
\]
Note that $e^{-\frac{|x|^2}{2u}}\log\frac{2u}{|x|^2}+\int_{\frac{|x|^2}{2u}}^{\infty}\log\lambda\,e^{-\lambda}\,d\lambda \lesssim 1+|\log u|+|\log|x||$. By applying Lemma A.1 to the integral in $x$, we have
\[
\frac{n}{\log n}\int_{\mathbb{R}^2}|R_p(\sqrt n(x-(y-z)))|\left(e^{-\frac{|x|^2}{2u}}\log\frac{2u}{|x|^2}+\int_{\frac{|x|^2}{2u}}^{\infty}\log\lambda\,e^{-\lambda}\,d\lambda\right)dx \lesssim \frac{1}{\log n}\Big(1+|\log u|+|\log|y-z||+\log n\,\mathbf{1}_{|y-z|<\frac{2}{\sqrt n}}\Big).
\]
So
\[
|E\{(iii)\}| \lesssim \frac{n}{(\log n)^2}\int_{[0,t]^2}\int_{\mathbb{R}^4}\Big(1+|\log u|+|\log|y-z||+\log n\,\mathbf{1}_{|y-z|<\frac{2}{\sqrt n}}\Big)|R_p(\sqrt ny)|\,q_{u_3}(z)\,(1+|\log u|+|\log|y||)\,dy\,dz\,du_3\,du.
\]
Since $|R_p(\sqrt ny)|\lesssim1\wedge|\sqrt ny|^{-\alpha}$ for some $\alpha>2$, by Lemma A.1 we know $E\{(iii)\}\to0$ as $n\to\infty$.

We now consider the case $d\ge3$.

$(i)$: after Fourier transform and the change of variables $u_i=s_i-s_{i-1}$ for $i=1,2,3,4$ with $s_0=0$, we derive
\[
E\{(i)\} = \frac{1}{(2\pi)^{2d}n^2}\int_{\mathbb{R}_+^4}\int_{\mathbb{R}^{2d}}\mathbf{1}_{\sum_{i=1}^4u_i\le nt}\,\hat{R}_p(\xi_1)\hat{R}_p(\xi_2)\,e^{-\frac12|\xi_1|^2u_2}e^{-\frac12|\xi_2|^2u_4}\,d\xi_1\,d\xi_2\,du
= \frac{1}{(2\pi)^{2d}}\int_{\mathbb{R}_+^2}\int_{\mathbb{R}^{2d}}\hat{R}_p(\xi_1)\hat{R}_p(\xi_2)\,\mathbf{1}_{0\le u_1+u_3\le t}\,F_n\Big(\frac12|\xi_1|^2,\frac12|\xi_2|^2,t-u_1-u_3\Big)\,d\xi_1\,d\xi_2\,du,
\]
where $F_n(a,b,t) := \int_{\mathbb{R}_+^2}\mathbf{1}_{0\le s+u\le nt}\,e^{-as}e^{-bu}\,ds\,du$ for $a\ge0$, $b\ge0$. It is straightforward to check that $abF_n(a,b,t)$ is uniformly bounded and $F_n(a,b,t)\to\frac{1}{ab}$ as $n\to\infty$. Thus,
\[
E\{(i)\} \to \frac{1}{(2\pi)^{2d}}\int_{\mathbb{R}_+^2}\int_{\mathbb{R}^{2d}}\hat{R}_p(\xi_1)\hat{R}_p(\xi_2)\,\mathbf{1}_{0\le u_1+u_3\le t}\,\frac{4}{|\xi_1|^2|\xi_2|^2}\,d\xi_1\,d\xi_2\,du_1\,du_3
= \frac{2t^2}{(2\pi)^{2d}}\left(\int_{\mathbb{R}^d}\frac{\hat{R}_p(\xi)}{|\xi|^2}\,d\xi\right)^2.
\]

$(ii)$: similarly we have
\[
E\{(ii)\} = \frac{1}{(2\pi)^{2d}n^2}\int_{\mathbb{R}_+^4}\int_{\mathbb{R}^{2d}}\mathbf{1}_{\sum_{i=1}^4u_i\le nt}\,\hat{R}_p(\xi_1)\hat{R}_p(\xi_2)\,e^{-\frac12|\xi_1|^2u_2}e^{-\frac12|\xi_1+\xi_2|^2u_3}e^{-\frac12|\xi_2|^2u_4}\,d\xi_1\,d\xi_2\,du
\lesssim t\int_{\mathbb{R}^{2d}}\int_0^t\frac{\hat{R}_p(\xi_1)\hat{R}_p(\xi_2)}{|\xi_1|^2|\xi_2|^2}\,e^{-\frac12|\xi_1+\xi_2|^2nu_3}\,du_3\,d\xi_1\,d\xi_2 \to 0
\]
as $n\to\infty$.

$(iii)$: by the same change of variables, we obtain
\[
E\{(iii)\} = \frac{1}{(2\pi)^{2d}n^2}\int_{\mathbb{R}_+^4}\int_{\mathbb{R}^{2d}}\mathbf{1}_{\sum_{i=1}^4u_i\le nt}\,\hat{R}_p(\xi_1)\hat{R}_p(\xi_2)\,e^{-\frac12|\xi_1|^2u_2}e^{-\frac12|\xi_1+\xi_2|^2u_3}e^{-\frac12|\xi_1|^2u_4}\,d\xi_1\,d\xi_2\,du
\]
\[
\lesssim \frac tn\int_{\mathbb{R}^{2d}}\int_{\mathbb{R}_+^2}\mathbf{1}_{u_3+u_4\le nt}\,\frac{\hat{R}_p(\xi_1)}{|\xi_1|^2}\hat{R}_p(\xi_2)\,e^{-\frac12|\xi_1+\xi_2|^2u_3}e^{-\frac12|\xi_1|^2u_4}\,du_3\,du_4\,d\xi_1\,d\xi_2
\]
\[
\lesssim \frac tn\int_{\mathbb{R}^{2d}}\int_{\mathbb{R}_+^2}\mathbf{1}_{u_3+u_4\le nt}\,\frac{\hat{R}_p(\xi_1)\hat{R}_p(\xi_2)}{|\xi_1|^2|\xi_2|^2}\big(|\xi_1+\xi_2|^2+|\xi_1|^2\big)\,e^{-\frac12|\xi_1+\xi_2|^2u_3}e^{-\frac12|\xi_1|^2u_4}\,du_3\,du_4\,d\xi_1\,d\xi_2
\]
\[
\le t\int_{\mathbb{R}^{2d}}\int_0^t\int_{\mathbb{R}_+}\frac{\hat{R}_p(\xi_1)\hat{R}_p(\xi_2)}{|\xi_1|^2|\xi_2|^2}\,|\xi_1+\xi_2|^2\,e^{-\frac12|\xi_1+\xi_2|^2u_3}e^{-\frac12|\xi_1|^2nu_4}\,du_3\,du_4\,d\xi_1\,d\xi_2
+ t\int_{\mathbb{R}^{2d}}\int_0^t\int_{\mathbb{R}_+}\frac{\hat{R}_p(\xi_1)\hat{R}_p(\xi_2)}{|\xi_1|^2|\xi_2|^2}\,|\xi_1|^2\,e^{-\frac12|\xi_1|^2u_4}e^{-\frac12|\xi_1+\xi_2|^2nu_3}\,du_4\,du_3\,d\xi_1\,d\xi_2 \to 0
\]
as $n\to\infty$.

To summarize, we have shown that $E\{V_n^2\}\to\sigma_d^4t^2$. The proof is complete.

Remark 3.5. The proof of Lemma 3.4 only requires $R(x)$ to be symmetric, bounded, and to satisfy a certain integrability condition. In particular, if $R(x)$ is compactly supported, then the result holds. This will be used in the proof of tightness.

The following lemma proves that the cross terms appearing in the conditional variance vanish in the limit. When $d\ge3$, as in the proofs of Lemmas 3.3 and 3.4, we use the Fourier transform and the integrability of $\hat{R}_p(\xi)|\xi|^{-2}$, so that the proof also applies to the degenerate case.

Lemma 3.6. $\frac{1}{a(n)^2}\int_{nt_{i-1}}^{nt_i}\int_{nt_{j-1}}^{nt_j}R_p(B_s-B_u)\,ds\,du \to 0$ in probability when $i\ne j$.

Proof. Assume $i>j$.

Consider the case $d=2$. For fixed $x,u$, by the change of variables $\lambda = \frac{|x|^2}{2(s-u)}$, we have
\[
\frac{1}{a(n)^2}\int_{nt_{i-1}}^{nt_i}\int_{nt_{j-1}}^{nt_j}E\{|R_p(B_s-B_u)|\}\,ds\,du
= \frac{n}{a(n)^2}\int_{\mathbb{R}^d}\int_{\mathbb{R}_+^2}\mathbf{1}_{\big(\frac{|x|^2}{2n(t_i-u)},\,\frac{|x|^2}{2n(t_{i-1}-u)}\big)}(\lambda)\,\mathbf{1}_{(t_{j-1},t_j)}(u)\,\frac{1}{2\pi^{\frac d2}}\frac{|R_p(x)|}{|x|^{d-2}}\,\lambda^{\frac d2-2}e^{-\lambda}\,d\lambda\,du\,dx.
\]
Recalling that $a(n)=\sqrt{n\log n}$, an integration by parts leads to
\[
\int_{\mathbb{R}}\mathbf{1}_{\big(\frac{|x|^2}{2n(t_i-u)},\,\frac{|x|^2}{2n(t_{i-1}-u)}\big)}(\lambda)\,\lambda^{-1}e^{-\lambda}\,d\lambda \lesssim 1+\log n+|\log(t_i-u)|+|\log(t_{i-1}-u)|+|\log|x||,
\]
and $\frac{1}{\log n}\int_{\mathbb{R}}\mathbf{1}_{\big(\frac{|x|^2}{2n(t_i-u)},\,\frac{|x|^2}{2n(t_{i-1}-u)}\big)}(\lambda)\,\lambda^{-1}e^{-\lambda}\,d\lambda \to 0$ as $n\to\infty$. We apply the dominated convergence theorem to conclude the proof.

Consider now the case $d\ge3$ and
\[
(i) := \frac{1}{n^2}\int_{\mathbb{R}_+^4}\mathbf{1}_{s_1,s_2\in[nt_{i-1},nt_i]}\mathbf{1}_{u_1,u_2\in[nt_{j-1},nt_j]}\,R_p(B_{s_1}-B_{u_1})R_p(B_{s_2}-B_{u_2})\,ds\,du.
\]
We show $E\{(i)\}\to0$, so the cross term goes to zero in probability. Actually, we have $(i)=2\big((I)+(II)\big)$, where
\[
(I) = \frac{1}{n^2}\int_{\mathbb{R}_+^4}\mathbf{1}_{nt_{j-1}\le u_2\le u_1\le nt_j}\mathbf{1}_{nt_{i-1}\le s_2\le s_1\le nt_i}\,R_p(B_{s_1}-B_{u_1})R_p(B_{s_2}-B_{u_2})\,ds\,du,
\]
\[
(II) = \frac{1}{n^2}\int_{\mathbb{R}_+^4}\mathbf{1}_{nt_{j-1}\le u_1\le u_2\le nt_j}\mathbf{1}_{nt_{i-1}\le s_2\le s_1\le nt_i}\,R_p(B_{s_1}-B_{u_1})R_p(B_{s_2}-B_{u_2})\,ds\,du.
\]
For $(I)$ we have
\[
E\{(I)\} = \frac{1}{(2\pi)^{2d}n^2}\int_{\mathbb{R}_+^4}\int_{\mathbb{R}^{2d}}\mathbf{1}_{nt_{j-1}\le u_2\le u_1\le nt_j}\mathbf{1}_{nt_{i-1}\le s_2\le s_1\le nt_i}\,\hat{R}_p(\xi_1)\hat{R}_p(\xi_2)\,e^{-\frac12|\xi_1|^2(s_1-s_2)}e^{-\frac12|\xi_1+\xi_2|^2(s_2-u_1)}e^{-\frac12|\xi_2|^2(u_1-u_2)}\,d\xi_1\,d\xi_2\,ds\,du,
\]
which implies $E\{(I)\} \lesssim (t_j-t_{j-1})\int_{\mathbb{R}^{2d}}\int_{t_{i-1}-t_j}^{t_i-t_{j-1}}\frac{\hat{R}_p(\xi_1)\hat{R}_p(\xi_2)}{|\xi_1|^2|\xi_2|^2}\,e^{-\frac12|\xi_1+\xi_2|^2nu}\,du\,d\xi_1\,d\xi_2 \to 0$ as $n\to\infty$. Similarly, for $(II)$ we have
\[
E\{(II)\} = \frac{1}{(2\pi)^{2d}n^2}\int_{\mathbb{R}_+^4}\int_{\mathbb{R}^{2d}}\mathbf{1}_{nt_{j-1}\le u_1\le u_2\le nt_j}\mathbf{1}_{nt_{i-1}\le s_2\le s_1\le nt_i}\,\hat{R}_p(\xi_1)\hat{R}_p(\xi_2)\,e^{-\frac12|\xi_1|^2(s_1-s_2)}e^{-\frac12|\xi_1+\xi_2|^2(s_2-u_2)}e^{-\frac12|\xi_1|^2(u_2-u_1)}\,d\xi_1\,d\xi_2\,ds\,du,
\]
so
\[
E\{(II)\} \lesssim \frac1n\int_{[nt_{j-1},nt_i]^2}\int_{\mathbb{R}^{2d}}\frac{\hat{R}_p(\xi_1)\hat{R}_p(\xi_2)}{|\xi_1|^2|\xi_2|^2}\big(|\xi_1+\xi_2|^2+|\xi_1|^2\big)\,e^{-\frac12|\xi_1+\xi_2|^2u_1}e^{-\frac12|\xi_1|^2u_2}\,d\xi_1\,d\xi_2\,du_1\,du_2
\]
\[
\lesssim (t_i-t_{j-1})\int_{\mathbb{R}^{2d}}\int_{t_{j-1}}^{t_i}\frac{\hat{R}_p(\xi_1)\hat{R}_p(\xi_2)}{|\xi_1|^2|\xi_2|^2}\,e^{-\frac12|\xi_1+\xi_2|^2nu}\,du\,d\xi_1\,d\xi_2
+ (t_i-t_{j-1})\int_{\mathbb{R}^{2d}}\int_{t_{j-1}}^{t_i}\frac{\hat{R}_p(\xi_1)\hat{R}_p(\xi_2)}{|\xi_1|^2|\xi_2|^2}\,e^{-\frac12|\xi_1|^2nu}\,du\,d\xi_1\,d\xi_2 \to 0
\]
as $n\to\infty$.

The two following propositions hold, so $\int_{\mathbb{R}^d}\sum_{k=2}^{\infty}\frac{1}{k!}(i\theta F_n(y))^k\,dy$ converges in probability.

Proposition 3.7. $\int_{\mathbb{R}^d}F_n(y)^2\,dy \to \sum_{i=1}^N\alpha_i^2\sigma_d^2(t_i-t_{i-1})$ in probability.

Proposition 3.8. $\int_{\mathbb{R}^d}\sum_{k=3}^{\infty}\frac{1}{k!}(i\theta F_n(y))^k\,dy \to 0$ in probability.

Proof of Proposition 3.7. Note that
\[
\int_{\mathbb{R}^d}F_n(y)^2\,dy = \sum_{i,j=1}^N\alpha_i\alpha_j\frac{1}{a(n)^2}\int_{nt_{i-1}}^{nt_i}\int_{nt_{j-1}}^{nt_j}R_p(B_s-B_u)\,ds\,du.
\]
When $i=j$, Lemmas 3.3 and 3.4 lead to
\[
\frac{1}{a(n)^2}\int_{nt_{i-1}}^{nt_i}\int_{nt_{i-1}}^{nt_i}R_p(B_s-B_u)\,ds\,du \to \sigma_d^2(t_i-t_{i-1})
\]
in probability as $n\to\infty$. When $i\ne j$, by Lemma 3.6 we have
\[
\frac{1}{a(n)^2}\int_{nt_{i-1}}^{nt_i}\int_{nt_{j-1}}^{nt_j}R_p(B_s-B_u)\,ds\,du \to 0
\]
in probability as $n\to\infty$. The proof is complete.

Proof of Proposition 3.8. We will use $C$ for possibly different constants in the following estimation. Recall that $F_n(y) = \sum_{i=1}^N\alpha_i\frac{1}{a(n)}\int_{nt_{i-1}}^{nt_i}\phi(B_s-y)\,ds$, so we have $|F_n(y)| \le C\frac{1}{a(n)}\int_0^n|\phi(B_s-y)|\,ds$, and thus
\[
\int_{\mathbb{R}^d}E\{|F_n(y)|^k\}\,dy \le C^k\int_{\mathbb{R}^d}\frac{1}{a(n)^k}\int_{[0,n]^k}E\Big\{\prod_{i=1}^k|\phi(B_{s_i}-y)|\Big\}\,ds\,dy. \qquad (3.13)
\]
From now on, we use $\mathrm{RHS}$ to denote the RHS of (3.13). By the change of variables $u_i=s_i-s_{i-1}$ for $i=1,\ldots,k$ with $s_0=0$, and $\lambda_i=\frac{|x_i|^2}{2u_i}$ for $i=2,\ldots,k$ when $x_i$ is fixed, we have
\[
\mathrm{RHS} = \frac{C^kk!}{a(n)^k}\int_{\mathbb{R}^{(k+1)d}}\int_{\mathbb{R}_+^k}\mathbf{1}_{\sum_{i=1}^ku_i\le n}\,|\phi|(y)\,|\phi|(x_2+y)\cdots|\phi|\Big(\sum_{i=2}^kx_i+y\Big)\prod_{i=1}^kq_{u_i}(x_i)\,du\,dx\,dy
\]
\[
= \frac{C^kk!}{a(n)^k}\int_{\mathbb{R}^{kd}}\int_{\mathbb{R}_+^k}\mathbf{1}_{u_1+\sum_{i=2}^k\frac{|x_i|^2}{2\lambda_i}\le n}\,|\phi|(y)\,\frac{|\phi|(x_2+y)}{|x_2|^{d-2}}\cdots\frac{|\phi|(\sum_{i=2}^kx_i+y)}{|x_k|^{d-2}}\prod_{i=2}^k\frac{1}{2\pi^{\frac d2}}\lambda_i^{\frac d2-2}e^{-\lambda_i}\,du_1\,d\lambda\,dx\,dy.
\]
When $d\ge3$, note that $\int_{\mathbb{R}^d}|\phi|(y+x)\,|y|^{2-d}\,dy$ is uniformly bounded in $x$, so after integration in $x_k,\ldots,x_2,y$ and $\lambda_2,\ldots,\lambda_k$, we have $\mathrm{RHS} \le C^kk!\,n^{-\frac k2+1}$, where the factor $n$ comes from the integration in $u_1$. This leads to
\[
E\Big\{\Big|\int_{\mathbb{R}^d}\sum_{k=3}^{\infty}\frac{1}{k!}(i\theta F_n(y))^k\,dy\Big|\Big\} \le \sum_{k=3}^{\infty}|C\theta|^k\frac{1}{n^{\frac k2-1}} \to 0
\]
as $n\to\infty$.

When $d=2$, we have
\[
\mathrm{RHS} \le \frac{C^k\,n\,k!}{a(n)^k}\int_{\mathbb{R}^{kd}}\int_{\mathbb{R}_+^{k-1}}|\phi|(y)\,|\phi|(x_2+y)\cdots|\phi|\Big(\sum_{i=2}^kx_i+y\Big)\prod_{i=2}^k\frac{1}{2\pi}\,\mathbf{1}_{\lambda_i\ge\frac{|x_i|^2}{2n}}\,\frac{1}{\lambda_i}e^{-\lambda_i}\,d\lambda\,dx\,dy. \qquad (3.14)
\]
By integration by parts, we have $\frac{1}{(\log n)^{k-1}}\int_{\mathbb{R}_+^{k-1}}\prod_{i=2}^k\mathbf{1}_{\lambda_i\ge\frac{|x_i|^2}{2n}}\frac{1}{\lambda_i}e^{-\lambda_i}\,d\lambda \lesssim \prod_{i=2}^k(1+|\log|x_i||)$. Since $\phi$ is compactly supported, we know that $x_i$, $i=2,\ldots,k$, are uniformly bounded. After integration in $x_k,\ldots,x_2,y$, we have $\mathrm{RHS} \le C^kk!\big(\frac{\log n}{n}\big)^{\frac k2-1}$. So
\[
E\Big\{\Big|\int_{\mathbb{R}^d}\sum_{k=3}^{\infty}\frac{1}{k!}(i\theta F_n(y))^k\,dy\Big|\Big\} \le \sum_{k=3}^{\infty}|C\theta|^k\Big(\frac{\log n}{n}\Big)^{\frac k2-1} \to 0
\]
as $n\to\infty$. The proof is complete.

Remark 3.9. In (3.14), if we choose $a(n)=n^{\frac12}$ instead of $a(n)=(n\log n)^{\frac12}$, by the same calculation we still have
\[
E\Big\{\Big|\int_{\mathbb{R}^d}\sum_{k=3}^{\infty}\frac{1}{k!}(i\theta F_n(y))^k\,dy\Big|\Big\} \le \sum_{k=3}^{\infty}|C\theta|^k\frac{(\log n)^{k-1}}{n^{\frac k2-1}} \to 0, \qquad (3.15)
\]
and this can be used in the proof for the degenerate Poissonian case when $d=2$.
