
Electronic Journal of Probability

Electron. J. Probab. 17 (2012), no. 85, 1–22.

ISSN: 1083-6489    DOI: 10.1214/EJP.v17-1826

Random walks with unbounded jumps among random conductances I: Uniform quenched CLT

Christophe Gallesco

Serguei Popov

Abstract

We study a one-dimensional random walk among random conductances, with unbounded jumps. Assuming the ergodicity of the collection of conductances and a few other technical conditions (uniform ellipticity and polynomial bounds on the tails of the jumps), we prove a quenched uniform invariance principle for the random walk.

This means that the rescaled trajectory of length $n$ is (in a certain sense) close enough to the Brownian motion, uniformly with respect to the choice of the starting location in an interval of length $O(\sqrt{n})$ around the origin.

Keywords: ergodic environment; unbounded jumps; hitting probabilities; exit distribution.

AMS MSC 2010: 60J10, 60K37.

Submitted to EJP on February 21, 2012, final version accepted on September 29, 2012.

1 Introduction and results

Suppose that for each pair of integers we are given a nonnegative number. One may think of the sites of $\mathbb{Z}$ as nodes of an electrical network where any site can be connected to any other site, and those numbers are thought of as the conductances of the corresponding links. The conductances are initially chosen at random, and we call the set of the conductances the random environment. We assume that the random environment is stationary and ergodic. Given the conductances, one then defines a (reversible) discrete-time random walk in the usual way: the transition probability from $x$ to $y$ is proportional to the conductance between $x$ and $y$.

Here and in the companion paper [15] we study the one-dimensional random walks among random conductances informally described above, with unbounded jumps (we impose a condition that implies that the conductances can decay polynomially in the distance between the sites, but with a sufficiently large power). The main result of [15] concerns the (quenched) limiting law of the trajectory of the random walk $(X_n,\ n = 0,1,2,\ldots)$ starting from the origin up to time $n$, under the condition that it remains positive at the moments $1,\ldots,n$. In [15] we prove that, after suitable rescaling, for a.e. environment it converges to the Brownian meander process, which is, roughly speaking, a Brownian motion conditioned on staying positive up to some finite time.

University of Campinas – UNICAMP, Brazil. E-mail: gallesco,popov@ime.unicamp.br


It turns out that one of the main ingredients for the proof of the conditional CLT is the uniform quenched CLT, which is the main result (Theorem 1.2) of the present paper.

Our main motivation for considering one-dimensional random walks with unbounded jumps among random conductances and with minimal assumptions on the environment comes from Knudsen billiards in random tubes, see [9, 10, 11, 12]. This model can be regarded as a discrete-time Markov chain with continuous state space; the positions of the walker correspond to the places where a billiard ball with a random law of reflection of a certain form hits the boundary. This model has some nice reversibility properties that make it, in some sense, a continuous-space analogue of the random walk among random conductances. In the (rather long and technical) Section 3 of [12] the following problem was treated: given that the particle, injected at the left boundary of the tube, crosses the tube of length $H$ without returning to the starting boundary, the crossing time exceeds $\varepsilon H^2$ with high probability, as $\varepsilon\to 0$, $H\to\infty$. Of course, such a fact would be an easy consequence of a conditional limit theorem similar to the one described above (since the probability that the Brownian meander behaves that way is high). We decided to study the discrete-space model because it presents fewer technical difficulties than random billiards in random tubes, and therefore allows one to obtain finer results (such as the conditional CLT).

(Unconditional) quenched Central Limit Theorems for this and related models (even in the many-dimensional case) have received much attention in recent years, see e.g. [2, 1, 5, 3, 6, 20, 19]. Mainly, the modern approach consists in constructing a so-called corrector function which turns the random walk into a martingale, and then using the CLT for martingales. To construct the corrector, one can use the method of orthogonal projections [6, 20, 19]. While the corrector method is powerful enough to yield quenched CLTs, its construction by itself is not very explicit, and, in particular, it does not say a lot about the speed of convergence. Besides that, it is, in principle, not very clear how the speed of convergence depends on the starting point. For instance, one may imagine that there are “distant” regions where the environment is “atypical”, and so is the behavior of the random walk starting at a point from such a region until the time when it comes to “normal” parts of the environment. In Theorem 1.2 we prove that, for rather general ergodic environments that admit unbounded jumps, if the rescaled trajectory by time $n$ is “close” enough to the Brownian motion, then so are the trajectories starting from points of an interval of length $O(\sqrt{n})$ centered at the origin. In our opinion, this result is interesting in its own right, but for us the main motivation for investigating this question was that it provides an important tool for proving the conditional CLT. Indeed, the strategy of the proof of the conditional CLT is to force the walk a bit away (around $\varepsilon\sqrt{n}$) from the origin in a “controlled” way and then use the usual (unconditional) CLT; but then it is clear that it is quite convenient to have the CLT for all starting positions in an interval of length $O(\sqrt{n})$ at once.

It is important to note that in most papers about random walks with random conductances one assumes that the jumps are uniformly bounded, usually nearest-neighbor (one can mention e.g. [1, 7], which consider the case of unbounded jumps). When there is no uniform bound on the size of the jumps, this of course brings some complications to the proofs, as one can see in the proof of Theorem 1.2 below. Still, in our opinion, it is important to be able to obtain “usual” results for the case of long-range jumps as well; for example, in some related models, such as the above-mentioned reversible random billiards in random tubes [11, 12], the jumps are naturally unbounded.

In the case when (in dimension 1) the jumps are uniformly bounded, the proofs become much simpler, mainly because one does not need to bother about the exit distributions, as in Section 2.3. The case of nearest-neighbor jumps is, of course, even simpler, since many quantities of interest have explicit expressions. We will not discuss this case separately, since it is (in some sense) “too easy” and does not provide a lot of clues about how the walk with unbounded jumps should be treated. Let us make the observation that a random walk with nearest-neighbor jumps becomes a very interesting and complex object to study if one samples at random not the conductances, but the transition probabilities themselves (i.e., the transition probabilities from $n$ to $n+1$ are chosen independently before the process starts). The resulting random walk, while still reversible, behaves quite differently (in particular, diffusive limits are unusual for that model). We only mention that the conditional (on being at the origin at time $2t$) behavior of this random walk in the transient case was studied in [16], and a similar result for the recurrent case can be obtained from Corollary 2.1 of [8].

Of course, a natural question is whether a result analogous to Theorem 1.2 also holds for the many-dimensional nearest-neighbor random walk among random conductances. We postpone the discussion of that question to the end of this section.

Now, we define the model formally. For $x, y\in\mathbb{Z}$, let us denote by $\omega_{x,y} = \omega_{y,x}$ the conductance between $x$ and $y$. Define $\theta_z\omega_{x,y} := \omega_{x+z,y+z}$, for all $z\in\mathbb{Z}$. Note that, by Condition K below, the vectors $\omega_{x,\cdot}$ are elements of the Polish space $\ell^2(\mathbb{Z})$. We assume that $(\omega_{x,\cdot})_{x\in\mathbb{Z}}$ is a stationary ergodic (with respect to the family of shifts $\theta$) sequence of random vectors; $\mathbb{P}$ stands for the law of this sequence and $\langle\cdot\rangle_{\mathbb{P}}$ is the expectation with respect to $\mathbb{P}$. The collection of all conductances $\omega = (\omega_{x,y},\ x,y\in\mathbb{Z})$ is called the environment. For all $x\in\mathbb{Z}$, define $C_x := \sum_y\omega_{x,y}$. Given that $C_x < \infty$ for all $x\in\mathbb{Z}$ (which is always so by Condition K below), the random walk $X$ in random environment $\omega$ is defined through its transition probabilities
$$p_\omega(x,y) = \frac{\omega_{x,y}}{C_x};$$
that is, if $P_x^\omega$ is the quenched law of the random walk starting from $x$, we have
$$P_x^\omega[X_0 = x] = 1, \qquad P_x^\omega[X_{k+1} = z\mid X_k = y] = p_\omega(y,z).$$
Clearly, this random walk is reversible with reversible measure $(C_x,\ x\in\mathbb{Z})$. Also, we denote by $E_x^\omega$ the quenched expectation for the process starting from $x$. When the random walk starts from $0$, we use the shortened notations $P^\omega$, $E^\omega$.
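To make the transition mechanism concrete, here is a minimal illustrative sketch (not from the paper) of the kernel $p_\omega(x,y) = \omega_{x,y}/C_x$ and of a trajectory of the walk. The function name `omega`, the truncation range `R` and the use of Python are assumptions made only for this example; truncating the jumps is an approximation, justified by the polynomial decay of the conductances imposed below (Condition K).

```python
import random

def step(omega, x, R=100):
    """One step of the quenched walk from site x.

    omega(x, y) returns the conductance between sites x and y; for this
    sketch, jumps are truncated to |y - x| <= R (an approximation).
    """
    ys = [x + d for d in range(-R, R + 1) if d != 0]
    weights = [omega(x, y) for y in ys]            # omega_{x,y}
    # p_omega(x, y) = omega_{x,y} / C_x, with C_x = sum_y omega_{x,y}
    return random.choices(ys, weights=weights, k=1)[0]

def run_walk(omega, x0, n, R=100):
    """Trajectory (X_0, ..., X_n) of the walk started at x0."""
    path = [x0]
    for _ in range(n):
        path.append(step(omega, path[-1], R))
    return path
```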

In order to prove our results, we need to make two technical assumptions on the environment:

Condition E. There exists $\kappa > 0$ such that, $\mathbb{P}$-a.s., $\omega_{0,1}\geq\kappa$.

Condition K. There exist constants $K, \beta > 0$ such that, $\mathbb{P}$-a.s., $\omega_{0,y}\leq\frac{K}{1+y^{3+\beta}}$ for all $y\geq 0$.

Note, for future reference, that the stationarity of $\mathbb{P}$ and Conditions E and K together imply that there exists $\hat\kappa > 0$ such that, $\mathbb{P}$-a.s.,
$$\hat\kappa \leq \sum_{y\in\mathbb{Z}}\omega_{0,y} \leq \hat\kappa^{-1}. \qquad (1.1)$$

We decided to formulate Condition E this way because, since this work was motivated by random billiards, the main challenge was to deal with the long-range jumps. It is plausible that Condition E could be relaxed to some extent; however, for the sake of a cleaner presentation of the argument, we prefer not to try to deal with both long-range jumps and the lack of nearest-neighbor ellipticity.
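For illustration, one possible environment satisfying Conditions E and K (and hence (1.1)) can be sampled as follows. This is only a sketch with hypothetical constants `KAPPA`, `K_CONST`, `BETA` and i.i.d. uniform edge marks, not a construction used in the paper.

```python
import random

KAPPA, K_CONST, BETA = 0.5, 2.0, 1.0   # hypothetical constants for the example

def make_environment(seed=0):
    """Returns a symmetric conductance function omega(x, y).

    Each undirected edge {x, y} carries an i.i.d. Uniform(0,1) mark; the
    conductance is scaled so that omega_{x,y} <= K/(1 + |y-x|^(3+beta))
    (Condition K) while omega_{x,x+1} >= KAPPA (Condition E).
    """
    rng = random.Random(seed)
    marks = {}

    def omega(x, y):
        if x == y:
            return 0.0
        edge = (min(x, y), max(x, y))
        if edge not in marks:
            marks[edge] = rng.random()
        d = abs(y - x)
        bound = K_CONST / (1.0 + d ** (3 + BETA))         # Condition K bound
        if d == 1:
            return KAPPA + (bound - KAPPA) * marks[edge]  # stays in [KAPPA, bound]
        return bound * marks[edge]

    return omega
```

With this helper, `run_walk(make_environment(), 0, 1000)` from the sketch above produces a trajectory $(X_0,\ldots,X_{1000})$ in a fixed realization of the environment.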

Next, for all $n\geq 1$, we define the continuous map $Z^n = (Z^n(t),\ t\in\mathbb{R}_+)$ as the natural polygonal interpolation of the map $k/n\mapsto\sigma^{-1}n^{-1/2}X_k$ (with $\sigma$ from Theorem 1.1 below). In other words,
$$\sigma\sqrt{n}\,Z^n_t = X_{\lfloor nt\rfloor} + (nt-\lfloor nt\rfloor)\big(X_{\lfloor nt\rfloor+1}-X_{\lfloor nt\rfloor}\big)$$
with $\lfloor\cdot\rfloor$ the integer part. Also, we denote by $W$ the standard Brownian motion.
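A minimal sketch of the rescaling (assuming a trajectory as produced by the `run_walk` helper above and, for simplicity, $\sigma = 1$):

```python
import math

def Z_n(path, n, t, sigma=1.0):
    """Value Z^n(t) of the polygonal interpolation.

    path must contain X_0, ..., X_{floor(n*t) + 1}; between the grid points
    k/n the interpolation is linear and Z^n(k/n) = X_k / (sigma * sqrt(n)).
    """
    k = int(math.floor(n * t))
    frac = n * t - k
    value = path[k] + frac * (path[k + 1] - path[k])
    return value / (sigma * math.sqrt(n))
```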

First, we state the following result, which is the usual quenched invariance principle:


Theorem 1.1. Assume Conditions E and K. Then, there exists a finite (nonrandom) constant $\sigma > 0$ such that for $\mathbb{P}$-almost all $\omega$, $Z^n$ converges in law, under $P^\omega$, to the Brownian motion $W$ as $n\to\infty$.

Of course, with the current state of the art in this field, obtaining the proof of Theorem 1.1 is a mere exercise (one can follow e.g. the argument of [6]); for this reason, we do not write the proof of Theorem 1.1 in this paper. The key observation, though, is that Condition K implies that
$$\Big\langle\sum_{y\in\mathbb{Z}}y^2\omega_{0,y}\Big\rangle_{\mathbb{P}} < \infty.$$

Let $C(\mathbb{R}_+)$ be the space of continuous functions from $\mathbb{R}_+$ into $\mathbb{R}$. Let us denote by $C_b(C(\mathbb{R}_+),\mathbb{R})$ (respectively, $C_{ub}(C(\mathbb{R}_+),\mathbb{R})$) the space of bounded continuous (respectively, bounded uniformly continuous) functionals from $C(\mathbb{R}_+)$ into $\mathbb{R}$ and by $\mathcal{B}$ the Borel $\sigma$-field on $C(\mathbb{R}_+)$. We have the following result, which is referred to as the quenched Uniform Central Limit Theorem (UCLT):

Theorem 1.2. Under Conditions E and K, the following statements hold:

(i) we have $\mathbb{P}$-a.s., for all $H > 0$ and any $F\in C_b(C(\mathbb{R}_+),\mathbb{R})$,
$$\lim_{n\to\infty}\ \sup_{x\in[-H\sqrt{n},\,H\sqrt{n}]}\big|E_{\theta_x\omega}[F(Z^n)] - E[F(W)]\big| = 0;$$

(ii) we have $\mathbb{P}$-a.s., for all $H > 0$ and any $F\in C_{ub}(C(\mathbb{R}_+),\mathbb{R})$,
$$\lim_{n\to\infty}\ \sup_{x\in[-H\sqrt{n},\,H\sqrt{n}]}\big|E_{\theta_x\omega}[F(Z^n)] - E[F(W)]\big| = 0;$$

(iii) we have $\mathbb{P}$-a.s., for all $H > 0$ and any closed set $B\in\mathcal{B}$,
$$\limsup_{n\to\infty}\ \sup_{x\in[-H\sqrt{n},\,H\sqrt{n}]}P_{\theta_x\omega}[Z^n\in B] \leq P[W\in B];$$

(iv) we have $\mathbb{P}$-a.s., for all $H > 0$ and any open set $G\in\mathcal{B}$,
$$\liminf_{n\to\infty}\ \inf_{x\in[-H\sqrt{n},\,H\sqrt{n}]}P_{\theta_x\omega}[Z^n\in G] \geq P[W\in G];$$

(v) we have $\mathbb{P}$-a.s., for all $H > 0$ and any $A\in\mathcal{B}$ such that $P[W\in\partial A] = 0$,
$$\lim_{n\to\infty}\ \sup_{x\in[-H\sqrt{n},\,H\sqrt{n}]}\big|P_{\theta_x\omega}[Z^n\in A] - P[W\in A]\big| = 0.$$
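Statement (i) can be explored empirically. The sketch below is only a crude Monte Carlo sanity check, not part of the paper: it assumes the hypothetical helpers `make_environment`, `run_walk` and `Z_n` introduced above, takes $\sigma = 1$, and uses the single bounded continuous functional $F(w) = \min(1, |w(1)|)$, whose expectation under the Wiener measure is also estimated by simulation.

```python
import math
import random

def F(z_at_1):
    # bounded continuous functional of the path, evaluated through Z^n(1)
    return min(1.0, abs(z_at_1))

def empirical_sup(omega, n, H=1.0, trials=200, ref_samples=10000, seed=1):
    """Crude estimate of sup_x |E_{theta_x omega}[F(Z^n)] - E[F(W)]|."""
    rng = random.Random(seed)
    # Monte Carlo value of E[F(W)] = E[min(1, |W(1)|)], where W(1) ~ N(0, 1)
    ref = sum(F(rng.gauss(0.0, 1.0)) for _ in range(ref_samples)) / ref_samples
    worst = 0.0
    for x in range(-int(H * math.sqrt(n)), int(H * math.sqrt(n)) + 1):
        est = 0.0
        for _ in range(trials):
            path = run_walk(omega, x, n)
            # the walk under theta_x omega started at 0 is the walk under
            # omega started at x, re-centered at its starting point
            est += F((path[n] - x) / math.sqrt(n))
        worst = max(worst, abs(est / trials - ref))
    return worst
```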

Even though it may be possible to find a concise formulation of our main result with only one “final” statement rather than a list of equivalent ones (the authors did not succeed in finding it), we content ourselves with writing it in this form because, in our opinion, the possible situations where it can be useful are covered by the list. Of course, item (ii) is redundant (it follows trivially from (i)), and (iii) and (iv) are equivalent by complementation.

For the model of this paper, it is not possible to generalize Theorem 1.2 by considering a wider interval $[-Hn^\alpha, Hn^\alpha]$ for some $\alpha > 1/2$ from which the starting point is taken; this is because we only assume the ergodicity of the environment of conductances. Indeed, consider e.g. a nearest-neighbor random walk, and suppose that the conductances can assume only two possible values, say, $1$ and $2$. To construct the stationary ergodic random environment, we first construct its cycle-stationary version in the following way. Fix $\varepsilon > 0$ such that $\frac{\alpha}{1+\varepsilon} > \frac12$, and let us divide the edges of $\mathbb{Z}$ into blocks of random i.i.d. sizes $(V_i,\ i\in\mathbb{Z})$, with $P[V_1 > s] = O(s^{-(1+\varepsilon)})$. Inside each block, we toss a fair coin and, depending on the result, either place all $1$s, or an alternating sequence of $2$s and $1$s. Since the expected size of a block is finite, it is clear that this environment can be made stationary (and, of course, ergodic) by a standard random shift procedure, see e.g. Chapter 8 of [22]. Then, one readily obtains that in the interval $[-Hn^\alpha, Hn^\alpha]$, with large probability, one finds both $1\ldots1$-blocks and $212\ldots12$-blocks of length at least $\sqrt{n}$. So, the UCLT cannot be valid: just consider starting points in the middles of two blocks of different types. It is, in our opinion, an interesting problem to obtain a stronger form of Theorem 1.2 in the case when the environment has mixing or independence properties. It seems plausible that one can make the above interval at least polynomially (with any power) wide, but we prefer not to discuss further questions of this type in this paper: in any case, for the results of [15], Theorem 1.2 is already enough.
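The block construction used in this counterexample is easy to visualize in code. The sketch below is illustrative only: the Pareto-type block-size law and the fact that it produces the cycle-stationary (not yet randomly shifted) version are assumptions made for the example.

```python
import random

def block_environment(num_blocks, eps=0.5, seed=0):
    """Nearest-neighbor conductances built from i.i.d. heavy-tailed blocks.

    Block sizes V_i satisfy P[V_1 > s] = O(s^{-(1+eps)}); each block is
    filled with all 1s or with the alternating pattern 2,1,2,... according
    to a fair coin, as in the counterexample above.
    """
    rng = random.Random(seed)
    conductances = []
    for _ in range(num_blocks):
        size = max(1, int(rng.paretovariate(1 + eps)))   # heavy-tailed V_i
        if rng.random() < 0.5:
            block = [1] * size                            # a 1...1-block
        else:
            block = [2 if k % 2 == 0 else 1 for k in range(size)]  # a 212...12-block
        conductances.extend(block)
    return conductances   # conductances[i] is the conductance of edge {i, i+1}
```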

Let us also comment on possible many-dimensional variants of Theorem 1.2. For the case of nearest-neighbor random walks in $\mathbb{Z}^d$ with random conductances bounded from both sides by two positive constants, an analogous result was obtained in [14] (Theorem 1.1). The proof of Theorem 1.1 of [14] relies on the uniform heat-kernel bounds of [13]; one uses these bounds to obtain that, regardless of the starting point, with probability close to $1$ the walk will enter the set of “good” sites (i.e., the sites from which the convergence is good enough). Naturally, this poses the question of what to do with unbounded conductances (and/or unbounded jumps), to which we have no answer for now (although one can expect, as usual, that the case $d = 2$ should be more accessible, since in this case each site is “surrounded” by “good” sites, cf. e.g. the proof of Theorem 4.7 in [5]).

The paper is organized in the following way: in the next section, we obtain some auxiliary facts which are necessary for the proof of Theorem 1.2 (recurrence, estimates on the probability of confinement in an interval, estimates on the exit measure from an interval). Then, in Section 3, we give the proof of Theorem 1.2.

We will denote by $K_1, K_2, \ldots$ the “global” constants, that is, those that are used all along the paper, and by $\gamma_1, \gamma_2, \ldots$ the “local” constants, that is, those that are used only in the subsection in which they appear for the first time. For the local constants, we restart the numeration at the beginning of each subsection. Depending on the context, expressions like $x\in[-H\sqrt{n}, H\sqrt{n}]$ should be understood as $x\in[-H\sqrt{n}, H\sqrt{n}]\cap\mathbb{Z}$.

2 Auxiliary results

In this section, we will prove some technical results that will be needed later to prove Theorem 1.2. Let us introduce the following notations. If $A\subset\mathbb{Z}$,
$$\tau_A := \inf\{n\geq 0 : X_n\in A\} \quad\text{and}\quad \tau_A^+ := \inf\{n\geq 1 : X_n\in A\}.$$

2.1 Recurrence of the random walk

Lemma 2.1. Under Conditions E and K the random walk $X$ is $\mathbb{P}$-a.s. recurrent.

Proof. To show the recurrence of the random walk, we will show that the probability of escape to infinity is zero. First, let us consider the finite interval $I_L = [-L, L]$ for some $L > 0$. Consider the time $\tau_{I_L^c}$ of exit from the interval $I_L$. By the Dirichlet variational principle for reversible Markov chains (see, for example, Theorem 6.1 of Chap. II of [18]) we have that
$$2C_0\,P^\omega[\tau_{I_L^c} < \tau_0^+] = \min_{f\in\mathcal{H}}\Phi(f) \qquad (2.1)$$
where $\Phi$ is the Dirichlet form defined by
$$\Phi(f) := \sum_{x,y\in\mathbb{Z}}\omega_{x,y}[f(x)-f(y)]^2$$
and $\mathcal{H}$ is the following set of functions:
$$\mathcal{H} := \{f:\mathbb{Z}\to[0,1] : f(0) = 0 \text{ and } f(x) = 1 \text{ for } x\notin I_L\}.$$

In order to estimate $P^\omega[\tau_{I_L^c} < \tau_0^+]$ from above, let us consider the function $h$ in $\mathcal{H}$ defined by
$$h(x) = \begin{cases} L^{-1}|x|, & \text{if } |x|\leq L,\\ 1, & \text{if } |x| > L.\end{cases}$$

Now, let us estimate $\Phi(h)$. We start by writing
$$\Phi(h) = \sum_{x,y\in\mathbb{Z}}\omega_{x,y}[h(x)-h(y)]^2 = \sum_{-L<x,y<L}\omega_{x,y}[h(x)-h(y)]^2 + 2\sum_{x\in(-\infty,-L]\cup[L,\infty)}\ \sum_{y\in(-L,L)}\omega_{x,y}[h(x)-h(y)]^2. \qquad (2.2)$$

We are going to show that both terms in the decomposition (2.2) of $\Phi(h)$ are of order smaller than or equal to $L^{-1}$. Indeed, it is not difficult to see that each of the two terms in the decomposition (2.2) is smaller than
$$\frac{2}{L^2}\sum_{-L<x<L}\ \sum_{y\in\mathbb{Z}}\omega_{x,y}(y-x)^2.$$

By Condition K, we obtain that there exists a constant $\gamma_1$ such that $\mathbb{P}$-a.s., $\sum_{y\in\mathbb{Z}}\omega_{x,y}(y-x)^2\leq\gamma_1$ for all $x$. Then, we deduce
$$\Phi(h) \leq \frac{8\gamma_1}{L}.$$
Using (2.1), we obtain that
$$2C_0\,P^\omega[\tau_{I_L^c} < \tau_0^+] \leq \frac{8\gamma_1}{L}.$$
By (1.1), we have $C_0\geq\hat\kappa$, so that
$$P^\omega[\tau_{I_L^c} < \tau_0^+] \leq \frac{4\gamma_1}{\hat\kappa L}. \qquad (2.3)$$
Now, let $p_{\mathrm{esc}}$ be the probability that the walk started at $0$ escapes to infinity. We have $p_{\mathrm{esc}} = \lim_{L\to\infty}P^\omega[\tau_{I_L^c} < \tau_0^+]$, and so, by (2.3), $p_{\mathrm{esc}} = 0$. Hence, the random walk $X$ is $\mathbb{P}$-a.s. recurrent.
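The bound $\Phi(h)\leq 8\gamma_1/L$ can be checked numerically for a sampled environment. The sketch below assumes the hypothetical `make_environment` helper from Section 1 and truncates the sums over jumps to a finite range, which is an additional approximation.

```python
def dirichlet_form_of_h(omega, L, R=100):
    """Phi(h) for the test function h(x) = min(|x|/L, 1), jumps truncated to R."""
    def h(x):
        return min(abs(x) / L, 1.0)
    total = 0.0
    # pairs with both endpoints outside [-L, L] contribute zero (h = 1 there),
    # so it is enough to scan x in a window of width R around the interval
    for x in range(-L - R, L + R + 1):
        for y in range(x + 1, x + R + 1):   # each unordered pair once
            total += 2.0 * omega(x, y) * (h(x) - h(y)) ** 2
    return total
```

Comparing the returned value with $8\gamma_1/L$ (where $\gamma_1$ bounds $\sum_y\omega_{x,y}(y-x)^2$) illustrates how the escape probability bound (2.3) decays in $L$.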

2.2 Probability of confinement

Let $I = [a,b]\subset\mathbb{Z}$ be a finite interval containing at least 3 points and let $B = (-\infty, a]$ and $E = [b,\infty)$. In this subsection we shall prove the following

Proposition 2.2. There exist constants $K_1 > 0$ and $K_2 > 0$ such that we have $\mathbb{P}$-a.s.,
$$\max_{x\in(a,b)}P_x^\omega[\tau_{B\cup E} > n] \leq \exp\Big\{-\frac{n}{K_1(b-a)^2}\Big\}$$
for all $n > K_2(b-a)^2$.


Proof. Let $\omega$ be a realization of the random environment. Consider the new environment obtained from $\omega$ by deleting all the conductances $\omega_{x,y}$ such that $x$ and $y$ both belong to $B\cup E$. The reversible measure $C'$ on this new environment is given by
$$C'_x = \begin{cases} C_x, & \text{if } x\in(a,b),\\ \sum_{y\notin B\cup E}\omega_{x,y}, & \text{otherwise.}\end{cases}$$
Next, we define $C'_B := \sum_{y\in B}C'_y$ and $\pi_B(x) := C'_x/C'_B$ for all $x\in B$. Observe that, by Conditions E and K, $C'_B$ is positive and finite $\mathbb{P}$-a.s. Hence, $\pi_B$ is $\mathbb{P}$-a.s. a probability measure on $B$. In the same way we define the probability measure $\pi_E$ on $E$. Now, we introduce a new Markov chain $X'$ on the finite state space $S' := (a,b)\cup\{\Delta_B,\Delta_E\}$ ($\Delta_B$ and $\Delta_E$ are the states corresponding to $B$ and $E$). On $S'$, we define the following transition probabilities: if $x\notin\{\Delta_B,\Delta_E\}$,
$$P_{x,\Delta_E} = \sum_{y\in E}\frac{\omega_{x,y}}{C'_x}, \qquad P_{x,\Delta_B} = \sum_{y\in B}\frac{\omega_{x,y}}{C'_x}$$
and
$$P_{\Delta_E,x} = \sum_{y\in E}\pi_E(y)\frac{\omega_{x,y}}{C'_y}, \qquad P_{\Delta_B,x} = \sum_{y\in B}\pi_B(y)\frac{\omega_{y,x}}{C'_y}.$$
Then, set $P_{\Delta_E,\Delta_B} = P_{\Delta_B,\Delta_E} = P_{\Delta_B,\Delta_B} = P_{\Delta_E,\Delta_E} = 0$. For $x\notin\{\Delta_B,\Delta_E\}$ and $y\notin\{\Delta_B,\Delta_E\}$ we just set $P_{x,y} = \omega_{x,y}/C'_x$. Defining $C'_{\Delta_B} := C'_B$ and $C'_{\Delta_E} := C'_E$, we can easily check that the detailed balance equations are satisfied, that is, on $S'$ we have a new set of conductances $\omega'$ specified by $\omega'_{x,y} := C'_x P_{x,y} = C'_y P_{y,x}$. Observe also that, by Condition K, there exists a constant $\gamma_1 > 0$ such that $\mathbb{P}$-a.s., $C'_x\leq\gamma_1$ for all $x\in S'$. By the commute time identity (see for example Proposition 10.6 of [17]) we have that
$$E_x^\omega[\tau_{B\cup E}] \leq E_x^{\omega'}[\tau_{\Delta_B}] + E_{\Delta_B}^{\omega'}[\tau_x] = \Big(\sum_{y\in S'}C'_y\Big)R_{\mathrm{eff}}(\Delta_B, x)$$
where $R_{\mathrm{eff}}(\Delta_B, x)$ is the effective resistance between $\Delta_B$ and $x$. We have
$$\sum_{y\in S'}C'_y \leq \gamma_1(b-a+1) \quad\text{and}\quad R_{\mathrm{eff}}(\Delta_B, x) \leq \sum_{y=\Delta_B}^{x-1}\omega_{y,y+1}^{-1} \leq \kappa^{-1}(b-a+1).$$

Thus,
$$E_x^\omega[\tau_{B\cup E}] \leq \gamma_1\kappa^{-1}(b-a+1)^2 \leq \gamma_2(b-a)^2$$
for some positive constant $\gamma_2$. By the Chebyshev inequality, we can choose a large enough constant $\gamma_3 > 0$ in such a way that
$$P_x^\omega\big[\tau_{B\cup E} > \lfloor\gamma_3(b-a)^2\rfloor\big] \leq \frac{E_x^\omega[\tau_{B\cup E}]}{\lfloor\gamma_3(b-a)^2\rfloor} \leq \frac{\gamma_2(b-a)^2}{\lfloor\gamma_3(b-a)^2\rfloor} < 1. \qquad (2.4)$$
Let us denote $s := \lfloor\gamma_3(b-a)^2\rfloor$ and $p := 1 - \gamma_2(b-a)^2\lfloor\gamma_3(b-a)^2\rfloor^{-1} > 0$. For $n\geq s$ divide the time interval $[0,n]$ into $N := \lfloor n/s\rfloor$ subintervals of length $s$. Using (2.4) and the Markov property we obtain
$$P_x^\omega[\tau_{B\cup E} > n] \leq P_x^\omega\big[X'(sj)\notin\{\Delta_B,\Delta_E\},\ j = 1,\ldots,N\big] \leq (1-p)^N \leq \exp\Big\{-\frac{n}{\gamma_4(b-a)^2}\Big\}$$
for some positive constant $\gamma_4$. This concludes the proof of Proposition 2.2.
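The commute-time bound used in the proof is easy to evaluate in the simplest setting. The sketch below is an illustration (not the paper's construction): it assumes nearest-neighbor conductances only, so that the collapsed chain is a path from $\Delta_B$ through $(a,b)$ to $\Delta_E$ and the effective resistance is a plain series sum.

```python
def confinement_bound(edge_conductance, a, b, x):
    """Upper bound (sum_y C'_y) * R_eff(Delta_B, x) on E_x[tau_{B u E}].

    edge_conductance(i) is the conductance of the nearest-neighbor edge
    {i, i+1}; B = (-inf, a] and E = [b, inf) are collapsed into Delta_B
    and Delta_E, so only the edges {a, a+1}, ..., {b-1, b} remain.
    """
    # total reversible measure of the collapsed chain: every remaining edge
    # is counted at both of its endpoints
    total_measure = 2.0 * sum(edge_conductance(i) for i in range(a, b))
    # series resistance between Delta_B and x
    r_eff = sum(1.0 / edge_conductance(i) for i in range(a, x))
    return total_measure * r_eff
```

For constant conductances $\omega_{i,i+1}\equiv\kappa$ this gives $2\kappa(b-a)\cdot(x-a)/\kappa\leq 2(b-a)^2$, of the same order as the bound $\gamma_2(b-a)^2$ above.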


2.3 Estimates on the exit distribution

Let $I = [a,b]\subset\mathbb{Z}$ be a finite interval and $E = (-\infty,a]\cup[b,+\infty)$. We prove the following

Proposition 2.3. For all $\eta > 0$ there exists $M > 0$ such that $\mathbb{P}$-a.s., for each interval $[a,b]\subset\mathbb{Z}$ containing at least three points we have
$$\min_{x\in(a,b)}P_x^\omega[X_{\tau_E}\in I_M] \geq 1-\eta$$
with $I_M := [a-M, a]\cup[b, b+M]$.

Proof. Fix an arbitrary $\eta\in(0,1)$. For intervals $[a,b]$ of length 2, there exists only one point $x$ in $(a,b)$. By the Markov property we have that
$$P_x^\omega[X_{\tau_E}\in I_M] = P_x^\omega[X_1\in I_M\mid X_1\neq x].$$
This implies that
$$P_x^\omega[X_{\tau_E}\in I_M] = 1 - \frac{P_x^\omega[X_1\in(-\infty,a-M)\cup(b+M,\infty)]}{P_x^\omega[X_1\neq x]}.$$
Then, Condition K and (1.1) guarantee the existence of a constant $M > 0$ such that $\mathbb{P}$-a.s.,
$$P_x^\omega[X_{\tau_E}\in I_M] \geq 1-\eta.$$

For intervals $[a,b]$ of length greater than or equal to 3, let us do the following. Fix some $x\in(a,b)$. Let $\zeta_0 = 0$ and, for $i\geq 1$, $\zeta_i := \inf\{n > \zeta_{i-1} : X_n = x\}$ with the convention $\inf\{\emptyset\} = +\infty$. Since by Lemma 2.1 our random walk is $\mathbb{P}$-a.s. recurrent, the sequence $(\zeta_i)_{i\geq 1}$ is $\mathbb{P}$-a.s. strictly increasing and we have, by the Markov property,
$$P_x^\omega[X_{\tau_E}\in I_M] = \sum_{i=0}^{\infty}P_x^\omega\big[X_{\tau_E}\in I_M\mid\tau_E\in[\zeta_i,\zeta_{i+1})\big]\,P_x^\omega\big[\tau_E\in[\zeta_i,\zeta_{i+1})\big] = P_x^\omega\big[X_{\tau_E}\in I_M\mid\tau_E < \tau_x^+\big]. \qquad (2.5)$$
Let us define $A_E := \{\tau_E < \tau_x^+\}$. First, we write
$$P_x^\omega[X_{\tau_E}\in I_M\mid A_E] = 1 - P_x^\omega[X_{\tau_E}\notin I_M\mid A_E] = 1 - \sum_{y\in(-\infty,a-M)\cup(b+M,\infty)}P_x^\omega[X_{\tau_E} = y\mid A_E]. \qquad (2.6)$$
Then, consider the new environment $\omega'$ obtained from $\omega$ by deleting all the conductances $\omega_{y,z}$ when both $y$ and $z$ belong to $E$. The reversible measure on this new environment $\omega'$ is given by
$$C'_y := \begin{cases} C_y, & \text{if } y\in(a,b),\\ \sum_{z\notin E}\omega_{y,z}, & \text{otherwise.}\end{cases}$$
We define $C'_E := \sum_{y\in E}C'_y$ and, for all $y\in E$, $\pi_E(y) := C'_y/C'_E$. Observe that by Condition K, $C'_E\in(0,\infty)$, $\mathbb{P}$-a.s. Hence, $\pi_E$ is a probability measure on $E$. For the sake of simplicity we write $P_E^{\omega'}$ instead of $P_{\pi_E}^{\omega'}$ for the random walk on $\omega'$ starting with probability $\pi_E$. We can couple the random walks in the environments $\omega$ and $\omega'$ so that $P_x^{\omega'}[X_{\tau_E} = y\mid A_E] = P_x^{\omega}[X_{\tau_E} = y\mid A_E]$.

Now, let us find an upper bound for the term $P_x^{\omega'}[X_{\tau_E} = y\mid A_E]$ with $y\in(-\infty,a-M)\cup(b+M,\infty)$. By definition of $A_E$ we have
$$P_x^{\omega'}[X_{\tau_E} = y\mid A_E] = \frac{P_x^{\omega'}[X_{\tau_E} = y,\ \tau_E < \tau_x^+]}{P_x^{\omega'}[\tau_E < \tau_x^+]}. \qquad (2.7)$$
Let us denote by $\Gamma_{z',z''}$ the set of finite paths $(z', z_1,\ldots,z_k, z'')$ such that $z_i\notin E\cup\{z',z''\}$ for all $i = 1,\ldots,k$. Let $\gamma = (z', z_1,\ldots,z_k, z'')\in\Gamma_{z',z''}$ and define
$$P_{z'}^{\omega'}[\gamma] := P_{z'}^{\omega'}[X_1 = z_1,\ldots,X_k = z_k,\ X_{k+1} = z''].$$
By reversibility we obtain
$$P_x^{\omega'}[X_{\tau_E} = y,\ \tau_E < \tau_x^+] = \sum_{\gamma\in\Gamma_{x,y}}P_x^{\omega'}[\gamma] = \frac{1}{C'_x}\sum_{\gamma\in\Gamma_{y,x}}C'_y P_y^{\omega'}[\gamma] = \frac{C'_y}{C'_x}P_y^{\omega'}[\tau_x < \tau_E^+]$$
and
$$P_x^{\omega'}[\tau_E < \tau_x^+] = \sum_{z\in E}\sum_{\gamma\in\Gamma_{x,z}}P_x^{\omega'}[\gamma] = \sum_{z\in E}\sum_{\gamma\in\Gamma_{z,x}}\frac{C'_z}{C'_x}P_z^{\omega'}[\gamma] = \frac{C'_E}{C'_x}\sum_{z\in E}\pi_E(z)\sum_{\gamma\in\Gamma_{z,x}}P_z^{\omega'}[\gamma] = \frac{C'_E}{C'_x}P_E^{\omega'}[\tau_x < \tau_E^+]. \qquad (2.8)$$

Thus, by (2.7) we have
$$P_x^{\omega'}[X_{\tau_E} = y\mid A_E] = \frac{C'_y P_y^{\omega'}[\tau_x < \tau_E^+]}{C'_E P_E^{\omega'}[\tau_x < \tau_E^+]}. \qquad (2.9)$$
To bound from below the term $P_E^{\omega'}[\tau_x < \tau_E^+]$ we use an electric networks argument. To this end, we will define a Markov chain on a new state space for which it will be easy to compute the effective conductance. First, we introduce a point $\Delta_E$ and the state space $S := (\mathbb{Z}\setminus E)\cup\{\Delta_E\}$. For $z\notin E$, we define the transition probabilities
$$P_{z,\Delta_E} = \sum_{u\in E}\frac{\omega'_{z,u}}{C'_z}, \qquad P_{\Delta_E,z} = \sum_{u\in E}\pi_E(u)\frac{\omega'_{z,u}}{C'_u}.$$
For $z\notin E$ and $u\notin E$ we set $P_{z,u} = \omega'_{z,u}/C'_z$, and we put $P_{\Delta_E,\Delta_E} = 0$. By defining $C'_{\Delta_E} := C'_E$, we can easily check that the detailed balance equations are satisfied, i.e., for all $z\in S$ we have $C'_z P_{z,u} = C'_u P_{u,z}$. We have that
$$P_E^{\omega'}[\tau_x < \tau_E^+] = P_{\Delta_E}^{\omega'}[\tau_x < \tau_{\Delta_E}^+] = \frac{C_{\mathrm{eff}}(\Delta_E, x)}{C'_{\Delta_E}} = \frac{C_{\mathrm{eff}}(\Delta_E, x)}{C'_E} \qquad (2.10)$$
where $C_{\mathrm{eff}}(\Delta_E, x)$ is the effective conductance between $\Delta_E$ and $x$. Observe that
$$C_{\mathrm{eff}}(\Delta_E, x) \geq \Big(\sum_{i=a}^{x-1}\omega_{i,i+1}^{-1}\Big)^{-1} + \Big(\sum_{i=x}^{b-1}\omega_{i,i+1}^{-1}\Big)^{-1}.$$
Using Condition E, we obtain
$$C'_E P_E^{\omega'}[\tau_x < \tau_E^+] \geq \kappa\Big(\frac{1}{x-a} + \frac{1}{b-x}\Big). \qquad (2.11)$$
Then, we have to treat the term $C'_y P_y^{\omega'}[\tau_x < \tau_E^+]$. By construction of $\omega'$,
$$C'_y P_y^{\omega'}[\tau_x < \tau_E^+] = C'_y\sum_{z\in(a,b)}p_{\omega'}(y,z)P_z^{\omega'}[\tau_x < \tau_E] = \sum_{z\in(a,b)}\omega'_{y,z}P_z^{\omega'}[\tau_x < \tau_E] = \sum_{z\in(a,b)}\omega_{y,z}P_z^{\omega}[\tau_x < \tau_E].$$
Finally, we have to estimate $P_z^\omega[\tau_x < \tau_E]$ for $z\in(a,b)\setminus\{x\}$. To this end, we define the following sequence of stopping times. Let $\Upsilon_0 = 0$ and, for $i\geq 1$, $\Upsilon_i := \inf\{n > \Upsilon_{i-1} : X_n = z\}$ with the convention $\inf\{\emptyset\} = +\infty$. The sequence $(\Upsilon_i)_{i\geq 1}$ is a.s. strictly increasing and we have
$$P_z^\omega[\tau_x < \tau_E] = P_z^\omega\big[\tau_x < \tau_E\mid\tau_{E\cup\{x\}}\in[0,\Upsilon_1)\big]. \qquad (2.12)$$
Then, we have
$$P_z^\omega\big[\tau_x < \tau_E\mid\tau_{E\cup\{x\}}\in[0,\Upsilon_1)\big] = \frac{P_z^\omega\big[\tau_x < \tau_E,\ \tau_{E\cup\{x\}}\in[0,\Upsilon_1)\big]}{P_z^\omega\big[\tau_{E\cup\{x\}}\in[0,\Upsilon_1)\big]} \leq \frac{P_z^\omega[\tau_x < \tau_z^+]}{P_z^\omega[\tau_E < \tau_z^+]}\wedge 1. \qquad (2.13)$$
We estimate $P_z^\omega[\tau_E < \tau_z^+]$ in the following way:
$$P_z^\omega[\tau_E < \tau_z^+] = \frac{C_{\mathrm{eff}}(z,E)}{C_z} \geq \frac{1}{C_z}\bigg[\Big(\sum_{i=z}^{b-1}\omega_{i,i+1}^{-1}\Big)^{-1} + \Big(\sum_{i=a}^{z-1}\omega_{i,i+1}^{-1}\Big)^{-1}\bigg] \geq \hat\kappa\kappa\Big(\frac{1}{b-z} + \frac{1}{z-a}\Big). \qquad (2.14)$$

Now, using the Dirichlet variational principle, we obtain an upper bound for $P_z^\omega[\tau_x < \tau_z^+]$. Suppose that the interval $(x,b)\neq\emptyset$ and that $z\in(x,b)$, and consider the function $h$ given by
$$h(u) = \begin{cases} 1, & \text{if } u < x \text{ or } u > 2z-x,\\[2pt] \dfrac{|z-u|}{z-x}, & \text{if } x\leq u\leq 2z-x.\end{cases}$$
Hence, we have $2C_z P_z^\omega[\tau_x < \tau_z^+]\leq\Phi(h)$. By the same reasoning as we used in order to obtain (2.3) in the proof of Lemma 2.1, we deduce that there exists a constant $\gamma_2 > 0$ such that
$$P_z^\omega[\tau_x < \tau_z^+] \leq \frac{\gamma_2}{z-x} \qquad (2.15)$$
for $z\in(x,b)$. Similarly, if we suppose that $(a,x)\neq\emptyset$ and $z\in(a,x)$, we obtain a bound similar to (2.15) for $P_z^\omega[\tau_x < \tau_z^+]$. Then, we obtain that, for $z\in(a,b)\setminus\{x\}$,
$$P_z^\omega[\tau_x < \tau_z^+] \leq \frac{\gamma_2}{|z-x|}. \qquad (2.16)$$

Note that we can choose $\gamma_2$ in such a way that it does not depend on the size of the interval $[a,b]$. By (2.12), (2.13), (2.14) and (2.16) we obtain
$$C'_y P_y^{\omega'}[\tau_x < \tau_E^+] \leq \sum_{z\in(a,b)}\omega_{y,z}\Big(\frac{\gamma_2(z-a)(b-z)}{\hat\kappa\kappa(b-a)|z-x|}\wedge 1\Big).$$
Thus, by (2.9) and (2.11) we obtain
$$P_x^{\omega'}[X_{\tau_E} = y\mid A_E] \leq \frac{1}{\kappa}\,\frac{(x-a)(b-x)}{b-a}\sum_{z\in(a,b)}\omega_{y,z}\Big(\frac{\gamma_2(z-a)(b-z)}{\hat\kappa\kappa(b-a)|z-x|}\wedge 1\Big),$$
$$P_x^{\omega}[X_{\tau_E}\in I_M\mid A_E] \geq 1 - \frac{1}{\kappa}\sum_{y\in I'_M}\frac{(x-a)(b-x)}{b-a}\sum_{z\in(a,b)}\omega_{y,z}\Big(\frac{\gamma_2(z-a)(b-z)}{\hat\kappa\kappa(b-a)|z-x|}\wedge 1\Big) \qquad (2.17)$$
with $I'_M = (-\infty,a-M)\cup(b+M,\infty)$. Let us divide the set $I'_M$ into the subintervals $J_1(M) = (b+M,\infty)$ and $J_2(M) = (-\infty,a-M)$. Denote
$$H_1(M) = \sum_{y\in J_1}\frac{(x-a)(b-x)}{b-a}\sum_{z\in(a,b)}\omega_{y,z}\Big(\frac{\gamma_2(z-a)(b-z)}{\hat\kappa\kappa(b-a)|z-x|}\wedge 1\Big),$$
$$H_2(M) = \sum_{y\in J_2}\frac{(x-a)(b-x)}{b-a}\sum_{z\in(a,b)}\omega_{y,z}\Big(\frac{\gamma_2(z-a)(b-z)}{\hat\kappa\kappa(b-a)|z-x|}\wedge 1\Big).$$

We have
$$H_1(M) \leq \sum_{y\in J_1}\frac{(x-a)(b-x)}{b-a}\bigg\{\sum_{z\in(a,\frac{x+a}{2}]}\frac{\gamma_2(z-a)(b-z)}{\hat\kappa\kappa(b-a)|z-x|}\,\omega_{y,z} + \sum_{z\in(\frac{x+a}{2},\frac{b+x}{2})}\omega_{y,z} + \sum_{z\in[\frac{b+x}{2},b)}\frac{\gamma_2(z-a)(b-z)}{\hat\kappa\kappa(b-a)|z-x|}\,\omega_{y,z}\bigg\}.$$
Now, observe that
$$\frac{(x-a)(b-x)}{b-a}\sum_{z\in(a,\frac{x+a}{2}]}\frac{(z-a)(b-z)}{(b-a)|z-x|}\,\omega_{y,z} \leq \sum_{z\in(a,\frac{x+a}{2}]}(b-z)\,\omega_{y,z} \leq \sum_{z<b}(b-z)\,\omega_{y,z}, \qquad (2.18)$$
$$\frac{(x-a)(b-x)}{b-a}\sum_{z\in(\frac{x+a}{2},\frac{b+x}{2})}\omega_{y,z} \leq (b-x)\sum_{z\in(\frac{x+a}{2},\frac{b+x}{2})}\omega_{y,z} \leq 2\sum_{z\in(\frac{x+a}{2},\frac{b+x}{2})}(b-z)\,\omega_{y,z} \leq 2\sum_{z<b}(b-z)\,\omega_{y,z} \qquad (2.19)$$
and
$$\frac{(x-a)(b-x)}{b-a}\sum_{z\in[\frac{x+b}{2},b)}\frac{(z-a)(b-z)}{(b-a)|z-x|}\,\omega_{y,z} \leq 2\sum_{z\in[\frac{x+b}{2},b)}(b-z)\,\omega_{y,z} \leq 2\sum_{z<b}(b-z)\,\omega_{y,z}. \qquad (2.20)$$
Putting (2.18), (2.19) and (2.20) together leads to
$$H_1(M) \leq 2\Big(\frac{2\gamma_2}{\hat\kappa\kappa}+1\Big)\sum_{y\in J_1}\sum_{z<b}(b-z)\,\omega_{y,z}. \qquad (2.21)$$

Observe that this last upper bound on $H_1(M)$ does not depend on $x$ anymore. Now, by Condition K, for any $\eta > 0$, we can take $M_1 > 0$ sufficiently large such that $\mathbb{P}$-a.s. for all $u\in\mathbb{Z}$ we have
$$\sum_{v>u+M_1}\ \sum_{w<u}\omega_{v,w}(u-w) < \frac{\kappa\eta}{4}\Big(\frac{2\gamma_2}{\hat\kappa\kappa}+1\Big)^{-1}.$$
For this $M_1$, we have $H_1(M_1) < \kappa\eta/2$. By symmetry, we also have that $H_2(M_1) < \kappa\eta/2$. Combining these two last results with (2.5) and (2.17) and the case of intervals of length 2 treated at the beginning of the proof, we obtain that for every $\eta > 0$ there exists $M$ such that $\mathbb{P}$-a.s., for any interval $[a,b]$,
$$\min_{x\in(a,b)}P_x^\omega[X_{\tau_E}\in I_M] \geq 1-\eta.$$
This concludes the proof of Proposition 2.3.
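Proposition 2.3 can also be illustrated numerically: for a sampled environment one can estimate the exit distribution by simulation and check how little mass escapes farther than $M$ from the interval. The sketch below assumes the hypothetical `make_environment` and `step` helpers from Section 1 and is only a rough empirical check, not a proof.

```python
import random

def exit_mass_within_M(omega, a, b, M, trials=2000, seed=2):
    """Monte Carlo estimate of min_{x in (a,b)} P_x[X_{tau_E} in I_M],
    where E = (-inf, a] u [b, inf) and I_M = [a-M, a] u [b, b+M]."""
    random.seed(seed)
    worst = 1.0
    for x0 in range(a + 1, b):
        hits = 0
        for _ in range(trials):
            x = x0
            while a < x < b:              # run until the walk exits (a, b)
                x = step(omega, x)
            if a - M <= x <= a or b <= x <= b + M:
                hits += 1
        worst = min(worst, hits / trials)
    return worst
```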


3 Proof of Theorem 1.2

In this section we prove the UCLT. Let $C_{ub}(C(\mathbb{R}_+),\mathbb{R})$ be the space of bounded uniformly continuous functionals from $C(\mathbb{R}_+)$ into $\mathbb{R}$. First, let us prove the apparently weaker statement:

Proposition 3.1. For all $F\in C_{ub}(C(\mathbb{R}_+),\mathbb{R})$, we have $\mathbb{P}$-a.s., for every $H > 0$,
$$\lim_{n\to\infty}\ \sup_{x\in[-H\sqrt{n},\,H\sqrt{n}]}\big|E_{\theta_x\omega}[F(Z^n)] - E[F(W)]\big| = 0.$$

The difficult part of the proof of Theorem 1.2 is to show Proposition 3.1. To prove this proposition, we will introduce the notion of a “good site” in $\mathbb{Z}$. The set of good sites is by definition the set of sites in $\mathbb{Z}$ from which we can guarantee that the random walk converges uniformly to Brownian motion. Due to the ergodicity of the random environment, we will then prove that a random walk started from any site in $[-H\sqrt{n}, H\sqrt{n}]$ will, with high probability, meet a nearby good site quickly enough to derive a uniform CLT. This part will be done in two steps, introducing the intermediate concept of a “nice site”. More precisely, the sequence of steps we will follow in this section to prove Proposition 3.1 is the following:

• In Definition 3.2, we formally define the notion of “good sites”.

• In Definition 3.3, we introduce the notion of “nice sites”. Heuristically, $x$ is a nice site if, for some $\delta > 0$ and $h > 0$, the range of the random walk starting from $x$ until time $hn$ is greater than $\delta h^{1/2}\sqrt{n}$ with high probability, so that the random walk cannot stay “too close” to its starting location (it holds that good sites are nice).

• Right after Definition 3.3, we show that any interval $I\subset[-2H\sqrt{n}, 2H\sqrt{n}]$ of length $n^\nu$ with $\nu\in(1/(2+\beta), 1/2)$ (here $\beta$ is from Condition K) must contain at least one “nice site”.

• In Lemma 3.4 we show that, starting from a site $x\in[-H\sqrt{n}, H\sqrt{n}]$, with high probability the random walk meets a nice site at a distance at most $n^\mu$ in time at most $n$, with $\mu\in(\nu, 1/2)$.

• In Lemma 3.5 we show that, starting from a nice site $x\in[-(3/2)H\sqrt{n}, (3/2)H\sqrt{n}]$, the random walk with high probability meets a good site at a distance less than $h\sqrt{n}$ before time $hn$.

• We combine Lemmas 3.4 and 3.5 to obtain that, starting from any $x\in[-H\sqrt{n}, H\sqrt{n}]$, the random walk meets a good site at a distance less than $h\sqrt{n}$ before time $hn$. This is the statement of Lemma 3.6.

• Proposition 3.1 then follows from Lemma 3.6, since we know essentially that the random walk will quickly reach a nearby good site, and from this good site the convergence properties are “good” by definition.

From Proposition 3.1, we obtain Theorem 1.2 in the following way. In Proposition 3.7, we first show that Proposition 3.1 implies a corresponding statement in which we replace the uniformly continuous functionals $F$ by open sets of $C(\mathbb{R}_+)$. Then, in Proposition 3.8, we use the separability of the space $C(\mathbb{R}_+)$ to show that we can interchange the terms “for any open set $G$” and “$\mathbb{P}$-a.s.” in (ii) of Proposition 3.7. Then, we use standard arguments as in the proof of the Portmanteau theorem of [4] to conclude the proof of Theorem 1.2.

Now, fix $F\in C_{ub}(C(\mathbb{R}_+),\mathbb{R})$. Our first goal is to prove Proposition 3.1, that is, $\mathbb{P}$-a.s., for every $\tilde\varepsilon, H > 0$,
$$\sup_{x\in[-H\sqrt{n},\,H\sqrt{n}]}\big|E_{\theta_x\omega}[F(Z^n)] - E[F(W)]\big| \leq \tilde\varepsilon \qquad (3.1)$$
for all large enough $n$. To start, we need to write some definitions and prove some intermediate results. From now on, we suppose that $\sigma = 1$ (otherwise replace $X$ by $\sigma^{-1}X$).

Denote
$$R_n^+(m) = \max_{s\leq m}(X_{n+s}-X_n), \quad R_n^-(m) = \min_{s\leq m}(X_{n+s}-X_n), \quad R_n(m) = R_n^+(m) - R_n^-(m),$$
and
$$R_t^+(u) = \max_{s\leq u}(W_{t+s}-W_t), \quad R_t^-(u) = \min_{s\leq u}(W_{t+s}-W_t), \quad R_t(u) = R_t^+(u) - R_t^-(u).$$
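A small helper for the ranges just defined (a sketch operating on a list of walk positions such as the one returned by `run_walk`):

```python
def ranges(path, n, m):
    """(R_n^+(m), R_n^-(m), R_n(m)) for a discrete trajectory `path`."""
    window = [path[n + s] - path[n] for s in range(m + 1)]
    r_plus, r_minus = max(window), min(window)
    return r_plus, r_minus, r_plus - r_minus
```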

Let $d$ be the distance on the space $C(\mathbb{R}_+)$ defined by
$$d(x,y) = \sum_{n=1}^{\infty}2^{-n+1}\min\Big\{1,\ \sup_{s\in[0,n]}|x(s)-y(s)|\Big\}.$$
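For paths known on a finite horizon, the metric can be approximated by truncating the series and evaluating the suprema on a grid; the following sketch (with illustrative truncation parameters) does exactly that for two paths given as functions of $t$.

```python
def path_distance(x, y, horizon=10, grid=100):
    """Truncated approximation of d(x, y) = sum_n 2^{-n+1} min(1, sup_{[0,n]} |x - y|)."""
    total = 0.0
    for n in range(1, horizon + 1):
        sup_diff = max(abs(x(n * k / grid) - y(n * k / grid)) for k in range(grid + 1))
        total += 2.0 ** (-n + 1) * min(1.0, sup_diff)
    return total
```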

Now, for any given $\varepsilon > 0$, we define
$$\delta_\varepsilon := \max\Big\{\delta_1\in(0,1] : P[R_0(1/2)<\delta_1] + P[R_{1/2}(1/2)<\delta_1] + P[R_1(1/2)<\delta_1] + P[R_0^+(1)<\delta_1] + P[|R_0^-(1)|<\delta_1] \leq \frac{\varepsilon}{2}\Big\} \qquad (3.2)$$
and
$$h_\varepsilon := \max\Big\{h_1\in(0,1] : P\Big[\sup_{s\leq h_1}|W(s)| > \varepsilon\Big] + P\Big[\sup_{s\leq h_1}d(\theta_s W, W) > \varepsilon\Big] \leq \frac{\varepsilon}{2}\Big\}. \qquad (3.3)$$
Observe that $\delta_\varepsilon$ and $h_\varepsilon$ are positive for all $\varepsilon > 0$ and decrease to $0$ as $\varepsilon\to 0$. For (3.3), the positivity of $h_\varepsilon$ for $\varepsilon > 0$ follows from the properties of the modulus of continuity of Brownian motion (see e.g. Theorem 1.12 of [21]).
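The constants $\delta_\varepsilon$ and $h_\varepsilon$ are defined through probabilities of Brownian functionals, which can be approximated by simulation. The sketch below (an illustration using a simple Gaussian-increment discretization; the grid sizes are arbitrary choices) estimates the sum of the five probabilities appearing in (3.2) for a candidate value $\delta_1$; $\delta_\varepsilon$ is then, approximately, the largest $\delta_1$ keeping this sum below $\varepsilon/2$.

```python
import math
import random

def brownian_path(T, steps, rng=random):
    """Discretized standard Brownian motion on [0, T] (Gaussian increments)."""
    dt = T / steps
    w = [0.0]
    for _ in range(steps):
        w.append(w[-1] + rng.gauss(0.0, math.sqrt(dt)))
    return w

def prob_sum_in_3_2(delta1, samples=2000, steps_per_half=500):
    """Monte Carlo estimate of the sum of the five probabilities in (3.2)."""
    h = steps_per_half                  # grid points per time-interval of length 1/2
    counts = [0] * 5
    for _ in range(samples):
        w = brownian_path(1.5, 3 * h)   # Brownian path on [0, 3/2]

        def w_range(i, j):              # range of W over the window [i, j]
            window = w[i:j + 1]
            return max(window) - min(window)

        events = [
            w_range(0, h) < delta1,             # R_0(1/2)
            w_range(h, 2 * h) < delta1,         # R_{1/2}(1/2)
            w_range(2 * h, 3 * h) < delta1,     # R_1(1/2)
            max(w[:2 * h + 1]) < delta1,        # R_0^+(1)  (W starts at 0)
            -min(w[:2 * h + 1]) < delta1,       # |R_0^-(1)|
        ]
        for k, happened in enumerate(events):
            counts[k] += int(happened)
    return sum(c / samples for c in counts)
```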

Definition 3.2. For a given realization $\omega$ of the environment and $N\in\mathbb{N}$, we say that $x\in\mathbb{Z}$ is $(\varepsilon, N)$-good, if

(i) $\min\big\{n\geq 1 : \big|E_{\theta_x\omega}[F(Z^m)] - E[F(W)]\big| \leq \varepsilon \text{ for all } m\geq n\big\} \leq N$;

(ii) $P_x^\omega\big[R_k(h_\varepsilon m)\geq\delta_\varepsilon h_\varepsilon^{1/2}\sqrt{m} \text{ for all } k\leq h_\varepsilon m,\ |R_0^{\pm}(h_\varepsilon m)|\geq\delta_\varepsilon h_\varepsilon^{1/2}\sqrt{m}\big] \geq 1-\varepsilon$, for all $m\geq N$;

(iii) $P_{\theta_x\omega}\big[\sup_{s\leq h_\varepsilon}|Z^m(s)|\leq\varepsilon,\ \sup_{s\leq h_\varepsilon}d(\theta_s Z^m, Z^m)\leq\varepsilon\big] \geq 1-\varepsilon$, for all $m\geq N$.

For any given $\varepsilon > 0$, it follows from Theorem 1.1, (3.2) and (3.3) that for any $\varepsilon' > 0$ there exists $N$ such that
$$\mathbb{P}[0 \text{ is } (\varepsilon,N)\text{-good}] > 1-\varepsilon'.$$
Then, by the Ergodic Theorem, $\mathbb{P}$-a.s., for all $n$ large enough, it holds that
$$\big|\{x\in[-2H\sqrt{n}, 2H\sqrt{n}] : x \text{ is not } (\varepsilon,N)\text{-good}\}\big| < 5\varepsilon' H\sqrt{n}. \qquad (3.4)$$

Next, we need the following


Definition 3.3. We say that a site $x$ is $(\varepsilon, n)$-nice, if
$$P_x^\omega\big[R_0(h_\varepsilon n)\geq\delta_\varepsilon h_\varepsilon^{1/2}\sqrt{n}\big] \geq 1-3\varepsilon.$$
In particular, note that, if for some $N\leq n$ a site $x$ is $(\varepsilon, N)$-good, then it is $(\varepsilon, n)$-nice.

Now, fix some $\nu\in\big(\frac{1}{2+\beta}, \frac{1}{2}\big)$ (so that $\nu(2+\beta)-1 > 0$), where $\beta$ is from Condition K. Observe that by Condition K there exists $\gamma_1 > 0$ such that, for any starting point $x\in\mathbb{Z}$,
$$P_x^\omega\big[|X_{k+1}-X_k| < n^\nu \text{ for all } k\leq h_\varepsilon n\big] \geq 1-\gamma_1 n^{-(\nu(2+\beta)-1)} \qquad (3.5)$$
(indeed, by Condition K and (1.1) a single step jumps farther than $n^\nu$ with probability at most of order $n^{-\nu(2+\beta)}$, and a union bound over the at most $h_\varepsilon n\leq n$ steps gives (3.5)). We argue by contradiction that, $\mathbb{P}$-a.s., there exists $n_1 = n_1(\omega,\varepsilon)$ such that any interval of length at least $n^\nu$ contains an $(\varepsilon, n)$-nice site for $n > n_1$. For this, choose $\varepsilon' > 0$ such that $5\varepsilon' H < \delta_\varepsilon h_\varepsilon^{1/2}$ and let $n$ be large enough that $n^\nu < 5\varepsilon' H\sqrt{n}$ and (3.4) holds. Let $I\subset[-2H\sqrt{n}, 2H\sqrt{n}]$ be an interval of length $n^\nu$ that does not contain any $(\varepsilon, n)$-nice site. Observe that, by (3.4), there exists an $(\varepsilon, N)$-good site $x_0$ such that $|x_0-y| < \delta_\varepsilon h_\varepsilon^{1/2}\sqrt{n}$ for all $y\in I$. Note that, by (3.5) and (ii) of Definition 3.2,
$$P_{x_0}^\omega\big[\text{there exists } k\leq h_\varepsilon n \text{ such that } X_k\in I\big] \geq 1-\varepsilon-\gamma_1 n^{-(\nu(2+\beta)-1)}$$
(the particle crosses $I$ without jumping over it entirely), hence
$$P_{x_0}^\omega\big[\text{there exists } k\leq h_\varepsilon n \text{ such that } R_k(h_\varepsilon n) < \delta_\varepsilon h_\varepsilon^{1/2}\sqrt{n}\big] \geq 3\varepsilon\big(1-\varepsilon-\gamma_1 n^{-(\nu(2+\beta)-1)}\big) \geq 2\varepsilon$$
if $n$ is large enough. But this contradicts the fact that $x_0$ is $(\varepsilon, N)$-good. So, we see that, $\mathbb{P}$-a.s., any interval $I\subset[-2H\sqrt{n}, 2H\sqrt{n}]$ of length $n^\nu$ should contain at least one $(\varepsilon, n)$-nice site for $n$ large enough.

Let $\mu\in(\nu, \frac{1}{2})$. In the next lemma, we show that, starting from a site $x\in[-H\sqrt{n}, H\sqrt{n}]$, with high probability the random walk will meet an $(\varepsilon, n)$-nice site at a distance at most $n^\mu$ in time at most $n$.

For $x\in\mathbb{Z}$ and $n\in\mathbb{N}$, let us denote by $\mathcal{P}_n^l(x)$ the largest $y\leq x$ such that $y$ is an $(\varepsilon, n)$-nice site and by $\mathcal{P}_n^r(x)$ the smallest $y\geq x$ such that $y$ is an $(\varepsilon, n)$-nice site. Furthermore, we denote by $\mathcal{N}_{\varepsilon,n}$ the set of $(\varepsilon, n)$-nice sites in $\mathbb{Z}$.

Lemma 3.4. For any $\varepsilon_1 > 0$ and $\varepsilon > 0$, we have $\mathbb{P}$-a.s., for all sufficiently large $n$, for all $x\in[-H\sqrt{n}, H\sqrt{n}]$,
$$P_x^\omega\Big[\tau_{\mathcal{N}_{\varepsilon,n}}\leq n,\ \max_{j\leq\tau_{\mathcal{N}_{\varepsilon,n}}}|X_j-X_0| \leq n^\mu\Big] \geq 1-\varepsilon_1. \qquad (3.6)$$

Proof. First, suppose that $x$ is not $(\varepsilon, n)$-nice, otherwise the proof of (3.6) is trivial. For some integer $M_1 > 0$, define the intervals
$$I_{M_1}(x) := [\mathcal{P}_n^l(x)-M_1,\ \mathcal{P}_n^r(x)+M_1].$$
Let us also define the following increasing sequence of stopping times: $\xi_0 := 0$ and, for $i\geq 1$,
$$\xi_i := \inf\big\{k > \xi_{i-1} : X_k\notin(\mathcal{P}_n^l(X_{\xi_{i-1}}),\ \mathcal{P}_n^r(X_{\xi_{i-1}}))\big\} + M_1.$$
Then, we define the events
$$A_i := \big\{\text{there exists } k\in(\xi_i,\xi_{i+1}] \text{ such that } X_k\in\{\mathcal{P}_n^l(X_{\xi_i}),\ \mathcal{P}_n^r(X_{\xi_i})\}\big\},$$
$$B_i := \Big\{X_{\xi_{i+1}-M_1}\in I_{M_1}(X_{\xi_i}),\ \max_{(k,l)\in[\xi_{i+1}-M_1,\,\xi_{i+1}]^2}|X_k-X_l| \leq M_1 n^\nu\Big\}$$

for $i\geq 0$. Now, fix temporarily $\tilde\varepsilon\in(0,\frac{1}{2})$. By Proposition 2.3, we can choose $M_1$ large enough in such a way that
$$\min_{y\in(\mathcal{P}_n^l(x),\,\mathcal{P}_n^r(x))}P_y^\omega\big[X_{\xi_1-M_1}\in I_{M_1}(x)\big] \geq 1-\tilde\varepsilon. \qquad (3.7)$$
Note that by Condition E, Proposition 2.3 and (3.7) we have $P_x^\omega[A_0]\geq\frac{1}{2}\kappa^{M_1}$. By the Markov property, we obtain, for some integer $L > 0$,
$$P_x^\omega\Big[\bigcap_{i=0}^{L-1}A_i^c\Big] = P_x^\omega[A_0^c]\cdots P_x^\omega[A_{L-1}^c\mid A_0^c\ldots A_{L-2}^c] \leq \Big(1-\frac{\kappa^{M_1}}{2}\Big)^L. \qquad (3.8)$$

Since the event $\{\max_{(k,l)\in[\xi_1-M_1,\,\xi_1]^2}|X_k-X_l| > M_1 n^\nu\}$ implies that there is at least one jump of size at least $n^\nu$ during the time interval $[\xi_1-M_1,\,\xi_1]$, using Condition K, (1.1), (3.7) and the Markov property, we have that $P_x^\omega[B_0^c]\leq\tilde\varepsilon+\gamma_1 n^{-\nu(2+\beta)}$ for some positive constant $\gamma_1$. We obtain, by the Markov property,
$$P_x^\omega\Big[\bigcup_{i=0}^{L-1}B_i^c\Big] \leq L\big(\tilde\varepsilon+\gamma_1 n^{-\nu(2+\beta)}\big). \qquad (3.9)$$
Observe that each event $\{\max_{j\in[\xi_i-M_1,\,\xi_i]}|X_j-X_{\xi_i-M_1-1}| > (M_1+1)n^\nu+M_1\}$, $i\geq 1$, implies either $\{|X_{\xi_i-M_1}-X_{\xi_i-M_1-1}| > n^\nu+M_1\}$ (which implies that the first jump after time $\xi_i-M_1-1$ is out of the interval $I_{M_1}(X_{\xi_i-M_1-1})$) or $\{\max_{(k,l)\in[\xi_i-M_1,\,\xi_i]^2}|X_k-X_l| > M_1 n^\nu\}$. Then, combining (3.8) and (3.9), we have
$$P_x^\omega\Big[\tau_{\mathcal{N}_{\varepsilon,n}}\in[0,\xi_L],\ \max_{j\leq\tau_{\mathcal{N}_{\varepsilon,n}}}|X_j-X_0| \leq L\big((M_1+1)n^\nu+M_1\big)\Big] \geq 1-\Big(1-\frac{\kappa^{M_1}}{2}\Big)^L - L\tilde\varepsilon - L\gamma_1 n^{-\nu(2+\beta)}. \qquad (3.10)$$
Now, let $\mu' > 0$ and denote $G_i := \{\xi_i-\xi_{i-1}\leq n^{2\nu+\mu'}+M_1\}$ for $1\leq i\leq L$. We have
$$P_x^\omega\big[\xi_L\leq L(n^{2\nu+\mu'}+M_1)\big] \geq P_x^\omega[G_1, G_2,\ldots,G_L] = P_x^\omega[G_1]\,P_x^\omega[G_2\mid G_1]\cdots P_x^\omega[G_L\mid G_1\ldots G_{L-1}]. \qquad (3.11)$$
By Proposition 2.2 and the fact that any interval of length $n^\nu$ in $[-2H\sqrt{n}, 2H\sqrt{n}]$ should contain at least one $(\varepsilon, n)$-nice site, we obtain
$$P_x^\omega[G_1] \geq 1-\exp\Big(-\frac{n^{\mu'}}{K_1}\Big)$$
for sufficiently large $n$. By (3.11) and the Markov property we have
$$P_x^\omega\big[\xi_L\leq L(n^{2\nu+\mu'}+M_1)\big] \geq \Big[1-\exp\Big(-\frac{n^{\mu'}}{K_1}\Big)\Big]^L \geq 1-L\exp\Big(-\frac{n^{\mu'}}{K_1}\Big). \qquad (3.12)$$
Now, choose $L$ sufficiently large so that

$$\Big(1-\frac{\kappa^{M_1}}{2}\Big)^L \leq \frac{\varepsilon_1}{3} \qquad (3.13)$$
and $\tilde\varepsilon$ sufficiently small such that
$$L\tilde\varepsilon \leq \frac{\varepsilon_1}{3}. \qquad (3.14)$$
Then, combining (3.10), (3.12), (3.13) and (3.14), we obtain
$$P_x^\omega\Big[\tau_{\mathcal{N}_{\varepsilon,n}}\leq L\big(n^{2\nu+\mu'}+M_1\big),\ \max_{j\leq\tau_{\mathcal{N}_{\varepsilon,n}}}|X_j-X_0| \leq L\big((M_1+1)n^\nu+M_1\big)\Big] \geq 1-\frac{2\varepsilon_1}{3} - L\gamma_1 n^{-\nu(2+\beta)} - L\exp\Big(-\frac{n^{\mu'}}{K_1}\Big).$$
