
Random walks on disordered media and their scaling limits

Takashi Kumagai, Department of Mathematics, Kyoto University, Kyoto 606-8502, Japan

Email: kumagai@math.kyoto-u.ac.jp (June 27, 2010)

(Version for St. Flour Lectures)

Abstract

The main theme of these lectures is to analyze heat conduction on disordered media such as fractals and percolation clusters using both probabilistic and analytic methods, and to study the scaling limits of Markov chains on the media.

The problem of random walk on a percolation cluster, 'the ant in the labyrinth', has received much attention in both the physics and the mathematics literature. In 1986, H. Kesten showed anomalous behavior of a random walk on a percolation cluster at the critical probability for trees and for $\mathbb{Z}^2$. (To be precise, the critical percolation cluster is finite, so the random walk is considered on an incipient infinite cluster (IIC), namely a critical percolation cluster conditioned to be infinite.) Partly motivated by this work, analysis and diffusion processes on fractals have been developed since the late eighties. As a result, various new methods have been produced to estimate heat kernels on disordered media, and these turn out to be useful for establishing quenched estimates on random media. Recently, it has been proved that random walks on IICs are sub-diffusive on $\mathbb{Z}^d$ when $d$ is high enough, on trees, and on the spread-out oriented percolation for $d>6$.

Throughout the lectures, I will survey the above mentioned developments in a compact way.

In the first part of the lectures, I will summarize some classical and non-classical estimates for heat kernels, and discuss stability of the estimates under perturbations of operators and spaces.

Here Nash inequalities and equivalent inequalities will play a central role. In the latter part of the lectures, I will give various examples of disordered media and obtain heat kernel estimates for Markov chains on them. In some models, I will also discuss scaling limits of the Markov chains.

Examples of disordered media include fractals, percolation clusters, random conductance models and random graphs.

Research partially supported by the Grant-in-Aid for Scientific Research (B) 22340017.

(2)

Contents

0 Plan of the lectures and remark

1 Weighted graphs and the associated Markov chains
1.1 Weighted graphs
1.2 Harmonic functions and effective resistances
1.3 Trace of weighted graphs

2 Heat kernel upper bounds (The Nash inequality)
2.1 The Nash inequality
2.2 The Faber-Krahn, Sobolev and isoperimetric inequalities

3 Heat kernel estimates using effective resistance
3.1 Green density killed on a finite set
3.2 Green density on a general set
3.3 General heat kernel estimates
3.4 Strongly recurrent case
3.5 Applications to fractal graphs

4 Heat kernel estimates for random weighted graphs
4.1 Random walk on a random graph
4.2 The IIC and the Alexander-Orbach conjecture

5 The Alexander-Orbach conjecture holds when two-point functions behave nicely
5.1 The model and main theorems
5.2 Proof of Proposition 5.4
5.3 Proof of Proposition 5.3 i)
5.4 Proof of Proposition 5.3 ii)

6 Further results for random walk on IIC
6.1 Random walk on IIC for critical percolation on trees
6.2 Sketch of the Proof of Theorem 6.2
6.3 Random walk on IIC for the critical oriented percolation cluster in $\mathbb{Z}^d$ ($d>6$)
6.4 Below critical dimension
6.5 Random walk on random walk traces and on the Erdős-Rényi random graphs

7 Random conductance model
7.1 Overview
7.2 Percolation estimates
7.3 Proof of some heat kernel estimates
7.4 Corrector and quenched invariance principle
7.5 Construction of the corrector
7.6 Proof of Theorem 7.15
7.7 Proof of Proposition 7.16
7.7.1 Case 1
7.7.2 Case 2
7.8 End of the proof of quenched invariance principle

0 Plan of the lectures and remark

A rough plan of lectures at St. Flour is as follows.

Lecture 1–3: In the first lecture, I will discuss general potential theory for symmetric Markov chains on weighted graphs (Section 1). Then I will show various equivalent conditions for heat kernel upper bounds (the Nash inequality, Section 2). In the third lecture, I will use effective resistance to estimate Green functions, exit times from balls, etc. On-diagonal heat kernel bounds are also obtained using the effective resistance (Section 3).

Lecture 4–6½: I will discuss random walk on an incipient infinite cluster (IIC) for critical percolation. I will give a sufficient condition for sharp on-diagonal heat kernel bounds for random walk on random graphs (Section 4). I then prove the Alexander-Orbach conjecture for IICs when two-point functions behave nicely, especially for IICs of high dimensional critical bond percolation on $\mathbb{Z}^d$ (Section 5). I also discuss heat kernel bounds and scaling limits for related random models (Section 6).

Lecture 6½–8: The last two and a half lectures will be devoted to the quenched invariance principle for the random conductance model on $\mathbb{Z}^d$ (Section 7). Put i.i.d. conductances $\mu_e$ on each bond in $\mathbb{Z}^d$ and consider the Markov chain associated with the (random) weighted graph. I consider two cases, namely $0\le\mu_e\le c$ $\mathbb{P}$-a.e. and $c\le\mu_e<\infty$ $\mathbb{P}$-a.e. Although the behavior of the heat kernels is quite different, in both cases the scaling limit is in general Brownian motion. I will discuss some technical details about correctors of the Markov chains, which play a key role in proving the invariance principle.

This is the version for the St. Flour Lectures. Several ingredients (which I was planning to include) are missing from this version. In particular, I could not explain enough about techniques for heat kernel estimates; for example, off-diagonal heat kernel estimates, isoperimetric profiles, relations to Harnack inequalities etc. are either missing or mentioned only briefly. (Because of that, I could not give proofs of most of the heat kernel estimates in Section 7.) This was because I was much busier than I had expected while preparing the lectures. However, even if I had included them, most likely there would not have been enough time to discuss them during the 8 lectures. Anyway, my plan is to revise these notes and include the missing ingredients in the version for publication by Springer.

I referred to many papers and books during the preparation of these lecture notes. In particular, I owe a lot to the lecture notes by Barlow [10] and by Coulhon [40] for Sections 1–2. Sections 3–4 (and part of Section 6) are mainly from [18, 87]. Section 5 is mainly from the beautiful paper by Kozma and Nachmias [86] (with some simplifications from [101]). In Section 7, I follow the arguments of the papers by Barlow, Biskup, Deuschel, Mathieu and their co-authors [16, 30, 31, 32, 90, 91].


1 Weighted graphs and the associated Markov chains

In this section, we discuss general potential theory for symmetric (reversible) Markov chains on weighted graphs. Note that there are many nice books and lecture notes that treat potential theory and/or Markov chains on graphs, for example [6, 10, 51, 59, 88, 107, 109]. In writing this section, we were largely influenced by the lecture notes of Barlow [10].

1.1 Weighted graphs

Let $X$ be a finite or countably infinite set, and let $E$ be a subset of $\{\{x,y\} : x,y\in X,\ x\ne y\}$. A graph is a pair $(X,E)$. For $x,y\in X$, we write $x\sim y$ if $\{x,y\}\in E$. A sequence $x_0,x_1,\cdots,x_n$ is called a path with length $n$ if $x_i\in X$ for $i=0,1,\cdots,n$ and $x_j\sim x_{j+1}$ for $j=0,1,\cdots,n-1$. For $x\ne y$, define $d(x,y)$ to be the length of the shortest path from $x$ to $y$. If there is no such path, we set $d(x,y)=\infty$, and we set $d(x,x)=0$. $d(\cdot,\cdot)$ is a metric on $X$, called the graph distance. $(X,E)$ is connected if $d(x,y)<\infty$ for all $x,y\in X$, and it is locally finite if $\#\{y : \{x,y\}\in E\}<\infty$ for all $x\in X$. Throughout the lectures, we will consider connected locally finite graphs (except when we consider their traces in Subsection 1.3).

Assume that the graph $(X,E)$ is endowed with a weight (conductance) $\mu_{xy}$, which is a symmetric nonnegative function on $X\times X$ such that $\mu_{xy}>0$ if and only if $x\sim y$. We call the pair $(X,\mu)$ a weighted graph.

Let $\mu_x = \mu(x) = \sum_{y\in X}\mu_{xy}$ and define a measure $\mu$ on $X$ by setting $\mu(A) = \sum_{x\in A}\mu_x$ for $A\subset X$. Also, we define $B(x,r) = \{y\in X : d(x,y)<r\}$ for each $x\in X$ and $r\ge1$.

Definition 1.1 We say that $(X,\mu)$ has controlled weights (or that $(X,\mu)$ satisfies the $p_0$-condition) if there exists $p_0>0$ such that
$$\frac{\mu_{xy}}{\mu_x} \ge p_0 \qquad \forall x\sim y.$$
If $(X,\mu)$ has controlled weights, then clearly $\#\{y\in X : x\sim y\}\le p_0^{-1}$.

Once the weighted graph (X, µ) is given, we can define the corresponding quadratic form, Markov chain and the discrete Laplace operator.

Quadratic form We define a quadratic form on $(X,\mu)$ as follows:
$$H^2(X,\mu) = H^2 = \Big\{f : X\to\mathbb{R} : \mathcal{E}(f,f) = \frac12\sum_{x,y\in X}(f(x)-f(y))^2\mu_{xy}<\infty\Big\},$$
$$\mathcal{E}(f,g) = \frac12\sum_{x,y\in X}(f(x)-f(y))(g(x)-g(y))\mu_{xy} \qquad \forall f,g\in H^2.$$
Physically, $\mathcal{E}(f,f)$ is the energy of the electrical network for an (electric) potential $f$.

Since the graph is connected, one can easily see that $\mathcal{E}(f,f)=0$ if and only if $f$ is a constant function. We fix a base point $0\in X$ and define
$$\|f\|_{H^2}^2 = \mathcal{E}(f,f) + f(0)^2 \qquad \forall f\in H^2.$$


Note that
$$\mathcal{E}(f,f) = \frac12\sum_{x\sim y}(f(x)-f(y))^2\mu_{xy} \le \sum_x\sum_y\big(f(x)^2+f(y)^2\big)\mu_{xy} = 2\|f\|_2^2 \qquad \forall f\in L^2, \tag{1.1}$$
where $\|f\|_2$ is the $L^2$-norm of $f$. So $L^2\subset H^2$. We give basic facts in the next lemma.

Lemma 1.2 (i) Convergence in $H^2$ implies pointwise convergence.

(ii) $H^2$ is a Hilbert space.

Proof. (i) Suppose $f_n\to f$ in $H^2$ and let $g_n = f_n-f$. Then $\mathcal{E}(g_n,g_n)+g_n(0)^2\to0$, so $g_n(0)\to0$. For any $x\in X$, there is a sequence $\{x_i\}_{i=0}^l\subset X$ such that $x_0=0$, $x_l=x$ and $x_i\sim x_{i+1}$ for $i=0,1,\cdots,l-1$. Then
$$|g_n(x)-g_n(0)|^2 \le l\sum_{i=0}^{l-1}|g_n(x_i)-g_n(x_{i+1})|^2 \le 2l\Big(\min_{0\le i\le l-1}\mu_{x_ix_{i+1}}\Big)^{-1}\mathcal{E}(g_n,g_n)\to0 \tag{1.2}$$
as $n\to\infty$, so we have $g_n(x)\to0$.

(ii) Assume that $\{f_n\}_n\subset H^2$ is a Cauchy sequence in $H^2$. Then $f_n(0)$ is a Cauchy sequence in $\mathbb{R}$, so it converges. Thus, similarly to (1.2), $f_n$ converges pointwise to $f$, say. Now using Fatou's lemma, we have $\|f_n-f\|_{H^2}^2 \le \liminf_m\|f_n-f_m\|_{H^2}^2$, so that $\|f_n-f\|_{H^2}\to0$.

Markov chain Let $Y=\{Y_n\}$ be a Markov chain on $X$ whose transition probabilities are given by
$$P(Y_{n+1}=y \mid Y_n=x) = \frac{\mu_{xy}}{\mu_x} =: P(x,y) \qquad \forall x,y\in X.$$
We write $P^x$ when the initial distribution of $Y$ is concentrated on $x$ (i.e. $Y_0=x$, $P$-a.s.). $(P(x,y))_{x,y\in X}$ is the transition matrix for $Y$. $Y$ is called a simple random walk when $\mu_{xy}=1$ whenever $x\sim y$. $Y$ is $\mu$-symmetric since for each $x,y\in X$,
$$\mu_xP(x,y) = \mu_{xy} = \mu_{yx} = \mu_yP(y,x).$$
We define the heat kernel of $Y$ by
$$p_n(x,y) = P^x(Y_n=y)/\mu_y \qquad \forall x,y\in X.$$
Using the Markov property, we can easily show the Chapman-Kolmogorov equation:
$$p_{n+m}(x,y) = \sum_z p_n(x,z)p_m(z,y)\mu_z \qquad \forall x,y\in X. \tag{1.3}$$
Using this and the fact $p_1(x,y) = \mu_{xy}/(\mu_x\mu_y) = p_1(y,x)$, one can verify the following inductively:
$$p_n(x,y) = p_n(y,x) \qquad \forall x,y\in X.$$
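As a quick sanity check, the identities above can be verified numerically on a small weighted graph; the conductance matrix below is a hypothetical example, and exact rational arithmetic makes the Chapman-Kolmogorov identity hold with equality rather than up to floating-point error.

```python
from fractions import Fraction as F

# A small weighted graph on 4 vertices (hypothetical symmetric conductances).
mu = [[0, 2, 1, 0],
      [2, 0, 1, 1],
      [1, 1, 0, 3],
      [0, 1, 3, 0]]
n_v = len(mu)
mu_x = [sum(row) for row in mu]                        # mu_x = sum_y mu_xy
P = [[F(mu[x][y], mu_x[x]) for y in range(n_v)] for x in range(n_v)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n_v)) for j in range(n_v)]
            for i in range(n_v)]

def heat_kernel(n):
    # p_n(x,y) = P^x(Y_n = y) / mu_y, from the n-step transition matrix
    Pn = [[F(int(i == j)) for j in range(n_v)] for i in range(n_v)]
    for _ in range(n):
        Pn = mat_mul(Pn, P)
    return [[Pn[x][y] / mu_x[y] for y in range(n_v)] for x in range(n_v)]

p3, p4, p7 = heat_kernel(3), heat_kernel(4), heat_kernel(7)
# mu-symmetry: p_n(x,y) = p_n(y,x)
assert all(p7[x][y] == p7[y][x] for x in range(n_v) for y in range(n_v))
# Chapman-Kolmogorov (1.3): p_{n+m}(x,y) = sum_z p_n(x,z) p_m(z,y) mu_z
assert all(p7[x][y] == sum(p3[x][z] * p4[z][y] * mu_x[z] for z in range(n_v))
           for x in range(n_v) for y in range(n_v))
```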


For $n\ge1$, let
$$P_nf(x) = \sum_y p_n(x,y)f(y)\mu_y = \sum_y P^x(Y_n=y)f(y) = E^x[f(Y_n)] \qquad \forall f : X\to\mathbb{R}.$$

We sometimes consider a continuous time Markov chain $\{Y_t\}_{t\ge0}$ w.r.t. $\mu$, defined as follows: each particle stays at a point, say $x$, for an (independent) exponential time with parameter 1, and then jumps to another point, say $y$, with probability $P(x,y)$. The heat kernel for the continuous time Markov chain can be expressed as follows:
$$p_t(x,y) = P^x(Y_t=y)/\mu_y = \sum_{n=0}^\infty\frac{e^{-t}t^n}{n!}p_n(x,y) \qquad \forall x,y\in X.$$

Discrete Laplace operator For $f : X\to\mathbb{R}$, the discrete Laplace operator is defined by
$$\mathcal{L}f(x) = \sum_y P(x,y)f(y) - f(x) = \frac{1}{\mu_x}\sum_y(f(y)-f(x))\mu_{xy} = E^x[f(Y_1)] - f(x) = (P_1-I)f(x). \tag{1.4}$$
Note that according to Ohm's law '$I = V/R$', $\sum_y(f(y)-f(x))\mu_{xy}$ is the total flux flowing into $x$, given the potential $f$.
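The sketch below (with hypothetical conductances) checks that the two expressions for $\mathcal{L}f$ in (1.4) agree, together with the summation-by-parts identity $(-\mathcal{L}f,g)=\mathcal{E}(f,g)$, which appears below as the discrete Gauss-Green formula (1.5):

```python
from fractions import Fraction as F

# Hypothetical conductances on a 3-vertex graph.
mu = [[0, 2, 1], [2, 0, 3], [1, 3, 0]]
n = 3
mu_x = [sum(row) for row in mu]
f = [F(1), F(-2), F(4)]
g = [F(0), F(5), F(-1)]

def lap(h):
    # Lh(x) = (1/mu_x) sum_y (h(y) - h(x)) mu_xy
    return [sum((h[y] - h[x]) * mu[x][y] for y in range(n)) / mu_x[x]
            for x in range(n)]

def lap_via_P(h):
    # Lh(x) = sum_y P(x,y) h(y) - h(x), with P(x,y) = mu_xy / mu_x
    return [sum(F(mu[x][y], mu_x[x]) * h[y] for y in range(n)) - h[x]
            for x in range(n)]

assert lap(f) == lap_via_P(f)
# Summation by parts (the discrete Gauss-Green formula (1.5)):
inner = sum(-lap(f)[x] * g[x] * mu_x[x] for x in range(n))    # (-Lf, g)
energy = F(1, 2) * sum((f[x] - f[y]) * (g[x] - g[y]) * mu[x][y]
                       for x in range(n) for y in range(n))   # E(f, g)
assert inner == energy
```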

Definition 1.3 Let $A\subset X$. A function $f : X\to\mathbb{R}$ is harmonic on $A$ if $\mathcal{L}f(x)=0$ for all $x\in A$. $f$ is sub-harmonic (resp. super-harmonic) on $A$ if $\mathcal{L}f(x)\ge0$ (resp. $\mathcal{L}f(x)\le0$) for $x\in A$.

$\mathcal{L}f(x)=0$ means that the total flux flowing into $x$ is 0 for a given potential $f$. This behavior of the currents in a network is known as Kirchhoff's (first) law.

For $A\subset X$, we define the (exterior) boundary of $A$ by
$$\partial A = \{x\in A^c : \exists z\in A \text{ such that } z\sim x\}.$$

Proposition 1.4 (Maximum principle) Let $A$ be a connected subset of $X$ and let $h : A\cup\partial A\to\mathbb{R}$ be sub-harmonic on $A$. If the maximum of $h$ over $A\cup\partial A$ is attained in $A$, then $h$ is constant on $A\cup\partial A$.

Proof. Let $x_0\in A$ be a point where $h$ attains the maximum and let $H = \{z\in A\cup\partial A : h(z)=h(x_0)\}$. If $y\in H\cap A$, then since $h(y)\ge h(x)$ for all $x\in A\cup\partial A$, we have
$$0 \le \mu_y\mathcal{L}h(y) = \sum_{x\sim y}(h(x)-h(y))\mu_{xy} \le 0.$$
Thus $h(x)=h(y)$ (i.e. $x\in H$) for all $x\sim y$. Since $A$ is connected, this implies $H = A\cup\partial A$.

We can prove the minimum principle for a super-harmonic function $h$ by applying the maximum principle to $-h$.

For $f,g\in L^2$, denote their $L^2$-inner product as $(f,g)$, namely $(f,g) = \sum_x f(x)g(x)\mu_x$.


Lemma 1.5 (i) $\mathcal{L} : H^2\to L^2$ and $\|\mathcal{L}f\|_2^2 \le 2\|f\|_{H^2}^2$.
(ii) For $f\in H^2$ and $g\in L^2$, we have $(-\mathcal{L}f,g) = \mathcal{E}(f,g)$.
(iii) $\mathcal{L}$ is a self-adjoint operator on $L^2(X,\mu)$ and the following holds:
$$(-\mathcal{L}f,g) = (f,-\mathcal{L}g) = \mathcal{E}(f,g) \qquad \forall f,g\in L^2. \tag{1.5}$$

Proof. (i) Using Schwarz's inequality, we have
$$\|\mathcal{L}f\|_2^2 = \sum_x\frac{1}{\mu_x}\Big(\sum_y(f(y)-f(x))\mu_{xy}\Big)^2 \le \sum_x\frac{1}{\mu_x}\Big(\sum_y(f(y)-f(x))^2\mu_{xy}\Big)\Big(\sum_y\mu_{xy}\Big) = 2\mathcal{E}(f,f) \le 2\|f\|_{H^2}^2.$$
(ii) Using (i), both sides of the equality are well-defined. Further, using Schwarz's inequality,
$$\sum_{x,y}|\mu_{xy}(f(y)-f(x))g(x)| \le \Big(\sum_{x,y}\mu_{xy}(f(y)-f(x))^2\Big)^{1/2}\Big(\sum_{x,y}\mu_{xy}g(x)^2\Big)^{1/2} = (2\mathcal{E}(f,f))^{1/2}\|g\|_2 < \infty.$$
So we can use Fubini's theorem, and we have
$$(-\mathcal{L}f,g) = -\sum_x\Big(\sum_y\mu_{xy}(f(y)-f(x))\Big)g(x) = \frac12\sum_x\sum_y\mu_{xy}(f(y)-f(x))(g(y)-g(x)) = \mathcal{E}(f,g).$$
(iii) We can prove $(f,-\mathcal{L}g) = \mathcal{E}(f,g)$ similarly and obtain (1.5).

(1.5) is the discrete Gauss-Green formula.

Lemma 1.6 Set $p_n^x(\cdot) = p_n(x,\cdot)$. Then the following hold for all $x,y\in X$:
$$p_{n+m}(x,y) = (p_n^x,p_m^y), \qquad P_1p_n^x(y) = p_{n+1}(x,y), \tag{1.6}$$
$$\mathcal{L}p_n^x(y) = p_{n+1}(x,y)-p_n(x,y), \qquad \mathcal{E}(p_n^x,p_m^y) = p_{n+m}(x,y)-p_{n+m+1}(x,y), \tag{1.7}$$
$$p_{2n}(x,y) \le \sqrt{p_{2n}(x,x)\,p_{2n}(y,y)}. \tag{1.8}$$

Proof. The two equations in (1.6) are due to the Chapman-Kolmogorov equation (1.3). The first equation in (1.7) is then clear since $\mathcal{L} = P_1-I$. The last equation can be obtained from these equations and (1.5). Using (1.6) and the Schwarz inequality, we have
$$p_{2n}(x,y)^2 = (p_n^x,p_n^y)^2 \le (p_n^x,p_n^x)(p_n^y,p_n^y) = p_{2n}(x,x)\,p_{2n}(y,y),$$
which gives (1.8).

It can be easily shown that $(\mathcal{E},L^2)$ is a regular Dirichlet form on $L^2(X,\mu)$ (c.f. [55]). Then the corresponding Hunt process is the continuous time Markov chain $\{Y_t\}_{t\ge0}$ w.r.t. $\mu$, and the corresponding self-adjoint operator on $L^2$ is $\mathcal{L}$ in (1.4).


Remark 1.7 Note that $\{Y_t\}_{t\ge0}$ has the transition probability $P(x,y) = \mu_{xy}/\mu_x$ and it waits at $x$ for an exponential time with mean 1 for each $x\in X$. Since the 'speed' of $\{Y_t\}_{t\ge0}$ is independent of the location, it is sometimes called the constant speed random walk (CSRW for short). We can also consider a continuous time Markov chain with the same transition probability $P(x,y)$ that waits at $x$ for an exponential time with mean $\mu_x^{-1}$ for each $x\in X$. This Markov chain is called the variable speed random walk (VSRW for short). We will discuss the VSRW in Section 7. The corresponding discrete Laplace operator is
$$\mathcal{L}_Vf(x) = \sum_y(f(y)-f(x))\mu_{xy}. \tag{1.9}$$
For each $f,g$ with finite support, we have
$$\mathcal{E}(f,g) = (-\mathcal{L}_Vf,g)_\nu = (-\mathcal{L}f,g)_\mu,$$
where $\nu$ is the measure on $X$ such that $\nu(A) = |A|$ for all $A\subset X$. So the VSRW is the Markov process associated with the Dirichlet form $(\mathcal{E},\mathcal{F})$ on $L^2(X,\nu)$, and the CSRW is the Markov process associated with the Dirichlet form $(\mathcal{E},\mathcal{F})$ on $L^2(X,\mu)$. The VSRW is a time changed process of the CSRW and vice versa.

We now introduce the notion of rough isometry.

Definition 1.8 Let $(X_1,\mu_1), (X_2,\mu_2)$ be weighted graphs that have controlled weights.

(i) A map $T : X_1\to X_2$ is called a rough isometry if there exist constants $c_1,c_2,c_3>0$ such that
$$c_1^{-1}d_1(x,y) - c_2 \le d_2(T(x),T(y)) \le c_1d_1(x,y) + c_2 \qquad \forall x,y\in X_1, \tag{1.10}$$
$$d_2(T(X_1),y') \le c_2 \qquad \forall y'\in X_2, \tag{1.11}$$
$$c_3^{-1}\mu_1(x) \le \mu_2(T(x)) \le c_3\mu_1(x) \qquad \forall x\in X_1, \tag{1.12}$$
where $d_i(\cdot,\cdot)$ is the graph distance of $(X_i,\mu_i)$, for $i=1,2$.

(ii) $(X_1,\mu_1), (X_2,\mu_2)$ are said to be rough isometric if there is a rough isometry between them.

It is easy to see that rough isometry is an equivalence relation. One can easily prove that $\mathbb{Z}^2$, the triangular lattice and the hexagonal lattice are all roughly isometric. It can be proved that $\mathbb{Z}$ and $\mathbb{Z}^2$ are not roughly isometric.

The notion of rough isometry was first introduced by M. Kanai ([73, 74]). As that work was mainly concerned with Riemannian manifolds, the definition of rough isometry included only (1.10), (1.11). The definition equivalent to Definition 1.8 is given in [42] (see also [65]). Note that rough isometry corresponds to quasi-isometry in the field of geometric group theory.

While discussing various properties of Markov chains/Laplace operators, it is important to think about their ‘stability’. In the following, we introduce two types of stability.

Definition 1.9 (i) We say a property is stable under bounded perturbation if whenever $(X,\mu)$ satisfies the property and $(X,\mu')$ satisfies $c^{-1}\mu_{xy}\le\mu'_{xy}\le c\mu_{xy}$ for all $x,y\in X$ (for some $c\ge1$), then $(X,\mu')$ satisfies the property.

(ii) We say a property is stable under rough isometry if whenever $(X,\mu)$ satisfies the property and $(X',\mu')$ is rough isometric to $(X,\mu)$, then $(X',\mu')$ satisfies the property.

If a property $P$ is stable under rough isometry, then it is clearly stable under bounded perturbation.

It is known that the following properties of weighted graphs are stable under rough isometry.

(i) Transience and recurrence
(ii) The Nash inequality, i.e. $p_n(x,y)\le c_1n^{-\alpha/2}$ for all $n\ge1$, $x,y\in X$ (for some $\alpha>0$)
(iii) The parabolic Harnack inequality

We will see (i) later in this section and (ii) in Section 2. One of the important open problems is to determine whether or not the elliptic Harnack inequality is stable under these perturbations.

Definition 1.10 $(X,\mu)$ has the Liouville property if there are no bounded non-constant harmonic functions. $(X,\mu)$ has the strong Liouville property if there are no positive non-constant harmonic functions.

It is known that both the Liouville and the strong Liouville properties are not stable under bounded perturbation (see [89]).

1.2 Harmonic functions and effective resistances

For $A\subset X$, define
$$\sigma_A = \inf\{n\ge0 : Y_n\in A\}, \qquad \sigma_A^+ = \inf\{n>0 : Y_n\in A\}, \qquad \tau_A = \inf\{n\ge0 : Y_n\notin A\}.$$
For $A\subset X$ and $f : A\to\mathbb{R}$, consider the following Dirichlet problem:
$$\begin{cases}\mathcal{L}v(x) = 0 & \forall x\in A^c,\\ v|_A = f.\end{cases} \tag{1.13}$$

Proposition 1.11 Assume that $f : A\to\mathbb{R}$ is bounded and set $\varphi(x) = E^x[f(Y_{\sigma_A}) : \sigma_A<\infty]$.

(i) $\varphi$ is a solution of (1.13).

(ii) If $P^x(\sigma_A<\infty) = 1$ for all $x\in X$, then $\varphi$ is the unique solution of (1.13).

Proof. (i) $\varphi|_A = f$ is clear. For $x\in A^c$, using the Markov property of $Y$, we have
$$\varphi(x) = \sum_y P(x,y)\varphi(y),$$
so $\mathcal{L}\varphi(x) = 0$.

(ii) Let $\varphi'$ be another solution and let $H_n = \varphi(Y_n)-\varphi'(Y_n)$. Then $H_n$ is a bounded martingale up to $\sigma_A$, so using the optional stopping theorem, we have
$$\varphi(x)-\varphi'(x) = E^xH_0 = E^xH_{\sigma_A} = E^x[\varphi(Y_{\sigma_A})-\varphi'(Y_{\sigma_A})] = 0,$$
since $\sigma_A<\infty$ a.s. and $\varphi(x) = \varphi'(x)$ for $x\in A$.

Remark 1.12 (i) In particular, we see that $\varphi$ is the unique solution of (1.13) when $A^c$ is finite. In this case, here is another proof of the uniqueness of the solution of (1.13): let $u(x) = \varphi(x)-\varphi'(x)$; then $u|_A = 0$ and $\mathcal{L}u(x) = 0$ for $x\in A^c$. So, noting $u\in L^2$ and using Lemma 1.5, $\mathcal{E}(u,u) = (-\mathcal{L}u,u) = 0$, which implies that $u$ is constant on $X$ (so it is 0 since $u|_A = 0$).

(ii) If $h_A(x) := P^x(\sigma_A = \infty)>0$ for some $x\in X$, then the function $\varphi+\lambda h_A$ is also a solution of (1.13) for all $\lambda\in\mathbb{R}$, so the uniqueness of the Dirichlet problem fails.

For $A,B\subset X$ such that $A\cap B = \emptyset$, define
$$R_{\mathrm{eff}}(A,B)^{-1} = \inf\{\mathcal{E}(f,f) : f\in H^2,\ f|_A = 1,\ f|_B = 0\}. \tag{1.14}$$
(We define $R_{\mathrm{eff}}(A,B) = \infty$ when the right hand side is 0.) We call $R_{\mathrm{eff}}(A,B)$ the effective resistance between $A$ and $B$. It is easy to see that $R_{\mathrm{eff}}(A,B) = R_{\mathrm{eff}}(B,A)$. If $A\subset A'$ and $B\subset B'$ with $A'\cap B' = \emptyset$, then $R_{\mathrm{eff}}(A',B')\le R_{\mathrm{eff}}(A,B)$.

Take a bond $e = \{x,y\}$, $x\sim y$, in a weighted graph $(X,\mu)$. We say we cut the bond $e$ when we take the conductance $\mu_{xy}$ to be 0, and we say we short the bond $e$ when we identify $x = y$, i.e. take the conductance $\mu_{xy}$ to be $\infty$. Clearly, shorting decreases the effective resistance (shorting law), and cutting increases the effective resistance (cutting law).
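Definition (1.14) can be turned directly into a computation. The sketch below uses a hypothetical 4-vertex network (two parallel paths with conductances 2, 2 and 1, 1), solves the Dirichlet problem for the minimizing potential, and recovers the familiar series and parallel laws:

```python
from fractions import Fraction as F

# Hypothetical network: two parallel paths from A = {0} to B = {3},
# 0-1-3 with conductances 2, 2 and 0-2-3 with conductances 1, 1.
cond = {(0, 1): F(2), (1, 3): F(2), (0, 2): F(1), (2, 3): F(1)}
edges = {**cond, **{(y, x): c for (x, y), c in cond.items()}}   # both directions

def nbrs(x):
    return [(y, c) for (z, y), c in edges.items() if z == x]

# Solve the Dirichlet problem: v = 1 on A, v = 0 on B, harmonic elsewhere.
# Gauss-Seidel sweeps; here each interior vertex only neighbors the boundary,
# so the iteration is exact after one sweep.
v = {0: F(1), 3: F(0), 1: F(0), 2: F(0)}
for _ in range(5):
    for x in (1, 2):
        v[x] = sum(c * v[y] for y, c in nbrs(x)) / sum(c for _, c in nbrs(x))

# R_eff(A,B)^{-1} = E(v,v), by (1.14)
energy = F(1, 2) * sum(c * (v[x] - v[y]) ** 2 for (x, y), c in edges.items())
R_eff = 1 / energy
# Series/parallel laws: paths have resistance 1/2 + 1/2 = 1 and 1 + 1 = 2,
# in parallel 1/R = 1/1 + 1/2, so R = 2/3.
assert R_eff == F(2, 3)
```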

The following proposition shows that, among feasible potentials whose voltage is 1 on $A$ and 0 on $B$, it is the function harmonic on $(A\cup B)^c$ that minimizes the energy.

Proposition 1.13 (i) The right hand side of (1.14) is attained by a unique minimizer $\varphi$.

(ii) $\varphi$ in (i) is a solution of the following Dirichlet problem:
$$\begin{cases}\mathcal{L}\varphi(x) = 0 & \forall x\in X\setminus(A\cup B),\\ \varphi|_A = 1,\ \varphi|_B = 0.\end{cases} \tag{1.15}$$

Proof. (i) We fix a base point $x_0\in B$ and recall that $H^2$ is a Hilbert space with $\|f\|_{H^2}^2 = \mathcal{E}(f,f)+f(x_0)^2$ (Lemma 1.2 (ii)). Since $V := \{f\in H^2 : f|_A = 1, f|_B = 0\}$ is a closed convex subset of $H^2$, a general theorem shows that $V$ has a unique minimizer for $\|\cdot\|_{H^2}$ (which is equal to $\mathcal{E}(\cdot,\cdot)$ on $V$).

(ii) Let $g$ be a function on $X$ whose support is finite and contained in $X\setminus(A\cup B)$. Then, for any $\lambda\in\mathbb{R}$, $\varphi+\lambda g\in V$, so $\mathcal{E}(\varphi+\lambda g, \varphi+\lambda g)\ge\mathcal{E}(\varphi,\varphi)$. Thus $\mathcal{E}(\varphi,g) = 0$. Applying Lemma 1.5 (ii), we have $(\mathcal{L}\varphi,g) = 0$. For each $x\in X\setminus(A\cup B)$, by choosing $g(z) = 1_{\{x\}}(z)$, we obtain $\mathcal{L}\varphi(x)\mu_x = 0$.


As we mentioned in Remark 1.12 (ii), we do not have uniqueness of the Dirichlet problem in general. So in the remainder of this section, we will assume that $A^c$ is finite in order to guarantee uniqueness of the Dirichlet problem.

The next theorem gives a probabilistic interpretation of the effective resistance.

Theorem 1.14 If $A^c$ is finite, then for each $x_0\in A^c$,
$$R_{\mathrm{eff}}(x_0,A)^{-1} = \mu_{x_0}P^{x_0}(\sigma_A<\sigma_{x_0}^+). \tag{1.16}$$

Proof. Let $v(x) = P^x(\sigma_A<\sigma_{x_0})$. Then, by Proposition 1.11, $v$ is the unique solution of the Dirichlet problem with $v(x_0) = 0$, $v|_A = 1$. By Proposition 1.13 and Lemma 1.5 (noting that $1-v\in L^2$),
$$R_{\mathrm{eff}}(x_0,A)^{-1} = \mathcal{E}(v,v) = \mathcal{E}(-v,1-v) = (\mathcal{L}v,1-v) = \mathcal{L}v(x_0)\mu_{x_0} = E^{x_0}[v(Y_1)]\mu_{x_0}.$$
By the definition of $v$, one can see $E^{x_0}[v(Y_1)] = P^{x_0}(\sigma_A<\sigma_{x_0}^+)$, so the result follows.
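Identity (1.16) can be checked numerically, where $\sigma_A$ denotes the first hitting time of $A$. In the sketch below (a hypothetical toy network), $v(x) = P^x(\sigma_A<\sigma_{x_0})$ is computed by first-step analysis, and the energy $\mathcal{E}(v,v)$ is compared with $\mu_{x_0}P^{x_0}(\sigma_A<\sigma_{x_0}^+)$:

```python
from fractions import Fraction as F

# Hypothetical network: vertices 0..3, A = {3}, x0 = 0,
# conductances mu_01 = 2, mu_13 = 2, mu_02 = 1, mu_23 = 1.
cond = {(0, 1): F(2), (1, 3): F(2), (0, 2): F(1), (2, 3): F(1)}
edges = {**cond, **{(y, x): c for (x, y), c in cond.items()}}
mu_x = {x: sum(c for (z, _), c in edges.items() if z == x) for x in range(4)}

# v(x) = P^x(sigma_A < sigma_{x0}): v = 1 on A, v = 0 at x0, harmonic elsewhere.
v = {0: F(0), 3: F(1), 1: F(0), 2: F(0)}
for _ in range(5):   # exact here: interior vertices 1, 2 only touch the boundary
    for x in (1, 2):
        v[x] = sum(c * v[y] for (z, y), c in edges.items() if z == x) / mu_x[x]

# First-step analysis: P^0(sigma_A < sigma_0^+) = sum_y P(0,y) v(y)
escape = sum(edges[(0, y)] * v[y] for y in (1, 2)) / mu_x[0]
energy = F(1, 2) * sum(c * (v[x] - v[y]) ** 2 for (x, y), c in edges.items())
# Theorem 1.14: R_eff(x0,A)^{-1} = E(v,v) = mu_{x0} P^{x0}(sigma_A < sigma_{x0}^+)
assert energy == mu_x[0] * escape == F(3, 2)
```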

Similarly, if $A^c$ is finite one can prove
$$R_{\mathrm{eff}}(B,A)^{-1} = \sum_{x\in B}\mu_xP^x(\sigma_A<\sigma_B^+).$$

Note that by Ohm's law, the right hand side of (1.16) is the current flowing from $x_0$ to $A$. The following lemma is useful and will be used later in Proposition 3.18.

Lemma 1.15 Let $A,B\subset X$ and assume that both $A^c$ and $B^c$ are finite. Then the following holds:
$$R_{\mathrm{eff}}(x,A\cup B)\Big(R_{\mathrm{eff}}(x,A)^{-1} - R_{\mathrm{eff}}(x,B)^{-1}\Big) \le P^x(\sigma_A<\sigma_B) \le \frac{R_{\mathrm{eff}}(x,A\cup B)}{R_{\mathrm{eff}}(x,A)} \qquad \forall x\notin A\cup B.$$

Proof. Using the strong Markov property, we have
$$P^x(\sigma_A<\sigma_B) = P^x(\sigma_A<\sigma_B,\ \sigma_{A\cup B}<\sigma_x^+) + P^x(\sigma_A<\sigma_B,\ \sigma_{A\cup B}>\sigma_x^+)$$
$$= P^x(\sigma_A<\sigma_B,\ \sigma_{A\cup B}<\sigma_x^+) + P^x(\sigma_{A\cup B}>\sigma_x^+)\,P^x(\sigma_A<\sigma_B).$$
So
$$P^x(\sigma_A<\sigma_B) = \frac{P^x(\sigma_A<\sigma_B,\ \sigma_{A\cup B}<\sigma_x^+)}{P^x(\sigma_{A\cup B}<\sigma_x^+)} \le \frac{P^x(\sigma_A<\sigma_x^+)}{P^x(\sigma_{A\cup B}<\sigma_x^+)}.$$
Using (1.16), the upper bound is obtained. For the lower bound,
$$P^x(\sigma_A<\sigma_B,\ \sigma_{A\cup B}<\sigma_x^+) \ge P^x(\sigma_A<\sigma_x^+<\sigma_B) \ge P^x(\sigma_A<\sigma_x^+) - P^x(\sigma_B<\sigma_x^+),$$
so using (1.16) again, the proof is complete.

As we see from the proof, we only need to assume that $A^c$ is finite for the upper bound.


Now let $(X,\mu)$ be an infinite weighted graph. Let $\{A_n\}_{n=1}^\infty$ be a family of finite sets such that $A_n\subset A_{n+1}$ for $n\in\mathbb{N}$ and $\cup_{n\ge1}A_n = X$. Let $x_0\in A_1$. By the shorting law, $R_{\mathrm{eff}}(x_0,A_n^c)\le R_{\mathrm{eff}}(x_0,A_{n+1}^c)$, so the following limit exists:
$$R_{\mathrm{eff}}(x_0) := \lim_{n\to\infty}R_{\mathrm{eff}}(x_0,A_n^c). \tag{1.17}$$
Further, the limit $R_{\mathrm{eff}}(x_0)$ is independent of the choice of the sequence $\{A_n\}$ mentioned above. (Indeed, if $\{B_n\}$ is another such family, then for each $n$ there exists $N_n$ such that $A_n\subset B_{N_n}$, so $\lim_{n\to\infty}R_{\mathrm{eff}}(x_0,A_n^c)\le\lim_{n\to\infty}R_{\mathrm{eff}}(x_0,B_n^c)$. By exchanging the roles of $A_n$ and $B_n$, we have the opposite inequality.)

Theorem 1.16 Let $(X,\mu)$ be an infinite weighted graph. For each $x\in X$, the following holds:
$$P^x(\sigma_x^+ = \infty) = (\mu_xR_{\mathrm{eff}}(x))^{-1}.$$

Proof. By Theorem 1.14, we have
$$P^x(\sigma_{A_n^c}<\sigma_x^+) = (\mu_xR_{\mathrm{eff}}(x,A_n^c))^{-1}.$$
Letting $n\to\infty$ and using (1.17), we obtain the desired equality.

Definition 1.17 We say a Markov chain is recurrent at $x\in X$ if $P^x(\sigma_x^+ = \infty) = 0$. We say a Markov chain is transient at $x\in X$ if $P^x(\sigma_x^+ = \infty)>0$.

The following is well-known for irreducible Markov chains (so in particular it holds for Markov chains corresponding to weighted graphs). See, for example [97].

Proposition 1.18 (1) $\{Y_n\}_n$ is recurrent at $x\in X$ if and only if $m := \sum_{n=0}^\infty P^x(Y_n = x) = \infty$. Further, $m^{-1} = P^x(\sigma_x^+ = \infty)$.

(2) If $\{Y_n\}_n$ is recurrent (resp. transient) at some $x\in X$, then it is recurrent (resp. transient) at all $x\in X$.

(3) $\{Y_n\}_n$ is recurrent if and only if $P^x(\{Y \text{ hits } y \text{ infinitely often}\}) = 1$ for all $x,y\in X$. $\{Y_n\}_n$ is transient if and only if $P^x(\{Y \text{ hits } y \text{ finitely often}\}) = 1$ for all $x,y\in X$.

From Theorem 1.16 and Proposition 1.18, we have the following:
$$\{Y_n\}\text{ is transient (resp. recurrent)} \iff R_{\mathrm{eff}}(x)<\infty\ (\text{resp. }R_{\mathrm{eff}}(x)=\infty)\text{ for some }x\in X \tag{1.18}$$
$$\iff R_{\mathrm{eff}}(x)<\infty\ (\text{resp. }R_{\mathrm{eff}}(x)=\infty)\text{ for all }x\in X.$$

Example 1.19 Consider $\mathbb{Z}^2$ with weight 1 on each nearest neighbor bond. Let $\partial B_n = \{(x,y)\in\mathbb{Z}^2 : \max(|x|,|y|) = n\}$. By shorting $\partial B_n$ for all $n\in\mathbb{N}$, one can obtain
$$R_{\mathrm{eff}}(0) \ge \sum_{n=0}^\infty\frac{1}{4(2n+1)} = \infty.$$
So the simple random walk on $\mathbb{Z}^2$ is recurrent.
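The count $4(2n+1)$ of bonds joining consecutive shorted layers can be confirmed by direct enumeration (a small sketch):

```python
# Count the bonds of Z^2 joining consecutive box boundaries dB_n and dB_{n+1}.
def boundary(n):
    if n == 0:
        return {(0, 0)}
    return {(x, y) for x in range(-n, n + 1) for y in range(-n, n + 1)
            if max(abs(x), abs(y)) == n}

def crossing_edges(n):
    outer = boundary(n + 1)
    return sum(1 for (x, y) in boundary(n)
               for (dx, dy) in ((1, 0), (-1, 0), (0, 1), (0, -1))
               if (x + dx, y + dy) in outer)

# Shorting each dB_n leaves 4(2n+1) parallel unit conductances between
# consecutive layers, whence R_eff(0) >= sum_n 1/(4(2n+1)) = infinity.
assert all(crossing_edges(n) == 4 * (2 * n + 1) for n in range(20))
```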


Let us recall the following fact.

Theorem 1.20 (Pólya 1921) Simple random walk on $\mathbb{Z}^d$ is recurrent if $d = 1,2$ and transient if $d\ge3$.

The combinatorial proof of this theorem is well-known. For example, for $d = 1$, by counting the total number of paths of length $2n$ that move right $n$ times and left $n$ times,
$$P^0(Y_{2n} = 0) = 2^{-2n}\binom{2n}{n} = \frac{(2n)!}{2^{2n}\,n!\,n!} \sim (\pi n)^{-1/2},$$
where Stirling's formula is used in the last step. Thus
$$m = \sum_{n=0}^\infty P^0(Y_n = 0) \sim \sum_{n=1}^\infty(\pi n)^{-1/2} + 1 = \infty,$$
so $\{Y_n\}$ is recurrent.
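A short computation confirms both the Stirling asymptotics and the divergence of the partial sums (the cutoffs 10000 and 2000 below are arbitrary choices for illustration):

```python
import math

# P^0(Y_{2n} = 0) = 2^{-2n} C(2n, n) for simple random walk on Z.
def p_return(n):
    return math.comb(2 * n, n) / 4 ** n

# Stirling: p_return(n) ~ (pi*n)^{-1/2}
ratio = p_return(10_000) / (math.pi * 10_000) ** -0.5
assert abs(ratio - 1) < 1e-4

# Partial sums of p_return grow like 2*sqrt(n/pi), so m = sum_n P^0(Y_n = 0)
# diverges and the walk is recurrent.
partial = sum(p_return(k) for k in range(1, 2001))
assert partial > 10
```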

This argument is not robust. For example, if one changes the weights on $\mathbb{Z}^d$ so that $c_1\le\mu_{xy}\le c_2$ for $x\sim y$, one cannot apply the argument at all. The advantage of the characterization of transience/recurrence using the effective resistance is that it allows robust arguments. Indeed, by (1.18) we can see that transience/recurrence is stable under bounded perturbation. This is because, if $c_1\mu'_{xy}\le\mu_{xy}\le c_2\mu'_{xy}$ for all $x,y\in X$, then $c_1R_{\mathrm{eff}}(x)\le R'_{\mathrm{eff}}(x)\le c_2R_{\mathrm{eff}}(x)$. We can further prove that transience/recurrence is stable under rough isometry.

Finally in this subsection, we give further equivalent conditions for transience and discuss a decomposition of $H^2$. Let $H_0^2$ be the closure of $C_0(X)$ in $H^2$, where $C_0(X)$ is the space of compactly supported functions on $X$. For a finite set $B\subset X$, define the capacity of $B$ by
$$\mathrm{Cap}(B) = \inf\{\mathcal{E}(f,f) : f\in H_0^2,\ f|_B = 1\}.$$
We first give a lemma.

Lemma 1.21 If a sequence of non-negative functions $v_n\in H^2$, $n\in\mathbb{N}$, satisfies $\lim_{n\to\infty}v_n(x) = \infty$ for all $x\in X$ and $\lim_{n\to\infty}\mathcal{E}(v_n,v_n) = 0$, then
$$\lim_{n\to\infty}\|u-(u\wedge v_n)\|_{H^2} = 0 \qquad \forall u\in H^2,\ u\ge0.$$

Proof. Let $u_n = u\wedge v_n$ and define $U_n = \{x\in X : u(x)>v_n(x)\}$. By the assumption, for each $N\in\mathbb{N}$ there exists $N_0 = N_0(N)$ such that $U_n\subset B(0,N)^c$ for all $n\ge N_0$. For $A\subset X$, denote $\mathcal{E}_A(u) = \frac12\sum_{x,y\in A}(u(x)-u(y))^2\mu_{xy}$. Since $\mathcal{E}_{U_n^c}(u-u_n) = 0$, we have
$$\mathcal{E}(u-u_n,u-u_n) \le 2\cdot\frac12\sum_{x\in U_n}\sum_{y:y\sim x}\Big(u(x)-u_n(x)-(u(y)-u_n(y))\Big)^2\mu_{xy} \le 2\,\mathcal{E}_{B(0,N-1)^c}(u-u_n) \le 4\Big(\mathcal{E}_{B(0,N-1)^c}(u)+\mathcal{E}_{B(0,N-1)^c}(u_n)\Big) \tag{1.19}$$
for all $n\ge N_0$. As $u_n = (u+v_n-|u-v_n|)/2$, we have
$$\mathcal{E}_{B(0,N-1)^c}(u_n) \le c_1\Big(\mathcal{E}_{B(0,N-1)^c}(u)+\mathcal{E}_{B(0,N-1)^c}(v_n)+\mathcal{E}_{B(0,N-1)^c}(|u-v_n|)\Big) \le c_2\Big(\mathcal{E}_{B(0,N-1)^c}(u)+\mathcal{E}_{B(0,N-1)^c}(v_n)\Big).$$
Thus, together with (1.19), we have
$$\mathcal{E}(u-u_n,u-u_n) \le c_3\Big(\mathcal{E}_{B(0,N-1)^c}(u)+\mathcal{E}_{B(0,N-1)^c}(v_n)\Big) \le c_3\Big(\mathcal{E}_{B(0,N-1)^c}(u)+\mathcal{E}(v_n,v_n)\Big).$$
Since $u\in H^2$, $\mathcal{E}_{B(0,N-1)^c}(u)\to0$ as $N\to\infty$, and by the assumption $\mathcal{E}(v_n,v_n)\to0$ as $n\to\infty$. So we obtain $\mathcal{E}(u-u_n,u-u_n)\to0$ as $n\to\infty$. By the assumption, it is clear that $u-u_n\to0$ pointwise, so we obtain $\|u-u_n\|_{H^2}\to0$.

We say that a quadratic form $(\mathcal{E},\mathcal{F})$ is Markovian if $u\in\mathcal{F}$ implies that $v = (0\vee u)\wedge1$ satisfies $v\in\mathcal{F}$ and $\mathcal{E}(v,v)\le\mathcal{E}(u,u)$. It is easy to see that quadratic forms determined by weighted graphs are Markovian.

Proposition 1.22 The following are equivalent.

(i) The Markov chain corresponding to $(X,\mu)$ is transient.

(ii) $1\notin H_0^2$.

(iii) $\mathrm{Cap}(\{x\})>0$ for some $x\in X$.

(iii)' $\mathrm{Cap}(\{x\})>0$ for all $x\in X$.

(iv) $H_0^2\ne H^2$.

(v) There exists a non-negative super-harmonic function which is not a constant function.

(vi) For each $x\in X$, there exists $c_1(x)>0$ such that
$$|f(x)|^2 \le c_1(x)\,\mathcal{E}(f,f) \qquad \forall f\in C_0(X). \tag{1.20}$$

Proof. For fixed $x\in X$, define $\varphi(z) = P^z(\sigma_x<\infty)$. We first show the following: $\varphi\in H_0^2$ and
$$\mathcal{E}(\varphi,\varphi) = (-\mathcal{L}\varphi,1_{\{x\}}) = R_{\mathrm{eff}}(x)^{-1} = \mathrm{Cap}(\{x\}). \tag{1.21}$$
Indeed, let $\{A_n\}_{n=1}^\infty$ be a family of finite sets such that $A_n\subset A_{n+1}$ for $n\in\mathbb{N}$, $x\in A_1$, and $\cup_{n\ge1}A_n = X$. Then $R_{\mathrm{eff}}(x,A_n^c)^{-1}\downarrow R_{\mathrm{eff}}(x)^{-1}$. Let $\varphi_n(z) = P^z(\sigma_x<\tau_{A_n})$. Using Lemma 1.5 (ii), and noting $\varphi_n\in C_0(X)$, we have, for $m\le n$,
$$\mathcal{E}(\varphi_m,\varphi_n) = (\varphi_m,-\mathcal{L}\varphi_n) = (1_{\{x\}},-\mathcal{L}\varphi_n) = \mathcal{E}(\varphi_n,\varphi_n) = R_{\mathrm{eff}}(x,A_n^c)^{-1}. \tag{1.22}$$
This implies
$$\mathcal{E}(\varphi_m-\varphi_n,\varphi_m-\varphi_n) = R_{\mathrm{eff}}(x,A_m^c)^{-1} - R_{\mathrm{eff}}(x,A_n^c)^{-1}.$$
Hence $\{\varphi_m\}$ is an $\mathcal{E}$-Cauchy sequence. Noting that $\varphi_n\to\varphi$ pointwise, we see that $\varphi_n\to\varphi$ in $H^2$ as well, and $\varphi\in H_0^2$. Taking $m = n$ and letting $n\to\infty$ in (1.22), we obtain (1.21) except for the last equality. To prove the last equality of (1.21), take any $f\in H_0^2$ with $f(x) = 1$. Then $g := f-\varphi\in H_0^2$ and $g(x) = 0$. Let $g_n\in C_0(X)$ with $g_n\to g$ in $H_0^2$. Then, by Lemma 1.5 (ii), $\mathcal{E}(\varphi,g_n) = (-\mathcal{L}\varphi,g_n)$. Noting that $\varphi$ is harmonic except at $x$, we see that $\mathcal{L}\varphi\in C_0(X)$. So, letting $n\to\infty$, we have
$$\mathcal{E}(\varphi,g) = (-\mathcal{L}\varphi,g) = -\mathcal{L}\varphi(x)g(x)\mu_x = 0.$$
Thus,
$$\mathcal{E}(f,f) = \mathcal{E}(\varphi+g,\varphi+g) = \mathcal{E}(\varphi,\varphi)+\mathcal{E}(g,g) \ge \mathcal{E}(\varphi,\varphi),$$
which means that $\varphi$ is the unique minimizer in the definition of $\mathrm{Cap}(\{x\})$. So the last equality of (1.21) is obtained.

Given (1.21), we now prove the equivalence.

(i) $\Longrightarrow$ (iii)': This is a direct consequence of (1.18) and (1.21).

(iii) $\Longleftrightarrow$ (ii) $\Longleftrightarrow$ (iii)': This is easy. Indeed, $\mathrm{Cap}(\{x\}) = 0$ if and only if there is $f\in H_0^2$ with $f(x) = 1$ and $\mathcal{E}(f,f) = 0$, that is, $f$ is identically 1.

(iii)' $\Longrightarrow$ (vi): Let $f\in C_0(X)\subset H_0^2$ with $f(x)\ne0$, and define $g = f/f(x)$. Then
$$\mathrm{Cap}(\{x\}) \le \mathcal{E}(g,g) = \mathcal{E}(f,f)/f(x)^2.$$
So, letting $c_1(x) = 1/\mathrm{Cap}(\{x\})>0$, we obtain (vi).

(vi) $\Longrightarrow$ (i): As before, let $\varphi_n(z) = P^z(\sigma_x<\tau_{A_n})$. Then by (1.20), $\mathcal{E}(\varphi_n,\varphi_n)\ge c_1(x)^{-1}$. So, using the fact $\varphi_n\to\varphi$ in $H^2$ and (1.21), $R_{\mathrm{eff}}(x)^{-1} = \mathcal{E}(\varphi,\varphi) = \lim_n\mathcal{E}(\varphi_n,\varphi_n)\ge c_1(x)^{-1}$. This implies transience by (1.18).

(ii) $\Longleftrightarrow$ (iv): (ii) $\Longrightarrow$ (iv) is clear since $1\in H^2$, so we will prove the opposite direction. Suppose $1\in H_0^2$. Then there exists $\{f_n\}_n\subset C_0(X)$ such that $\|1-f_n\|_{H^2}<n^{-2}$. Since $\mathcal{E}$ is Markovian, we have $\|1-f_n\|_{H^2}\ge\|1-(f_n\vee0)\wedge1\|_{H^2}$, so without loss of generality we may assume $0\le f_n\le1$. Let $v_n = nf_n\ge0$. Then $\lim_nv_n(x) = \infty$ for all $x\in X$ and $\mathcal{E}(v_n,v_n) = n^2\mathcal{E}(f_n,f_n)\le n^{-2}\to0$, so by Lemma 1.21, $\|u-(u\wedge v_n)\|_{H^2}\to0$ for all $u\in H^2$ with $u\ge0$. Since $u\wedge v_n\in C_0(X)$, this implies $u\in H_0^2$. For general $u\in H^2$, we can decompose it as $u = u^+-u^-$ where $u^+,u^-\ge0$ are in $H^2$. So applying the above, we have $u^+,u^-\in H_0^2$ and conclude $u\in H_0^2$.

(i) $\Longrightarrow$ (v): If the corresponding Markov chain is transient, then $\psi(z) = P^z(\sigma_x^+<\infty)$ is a non-constant non-negative super-harmonic function.

(i) $\Longleftarrow$ (v): Suppose the corresponding Markov chain $\{Y_n\}_n$ is recurrent. For a super-harmonic function $\psi\ge0$, $M_n = \psi(Y_n)\ge0$ is a supermartingale, so it converges $P^x$-a.s. Let $M_\infty$ be the limiting random variable. Since the set $\{n\in\mathbb{N} : Y_n = y\}$ is unbounded $P^x$-a.s. for all $y\in X$ (due to recurrence), we have $P^x(\psi(y) = M_\infty) = 1$ for all $y\in X$, so $\psi$ is constant.

Remark 1.23 (v) $\Longrightarrow$ (i) implies that if the Markov chain corresponding to $(X,\mu)$ is recurrent, then it has the strong Liouville property.

For subspaces $A,B$ of $H^2$, we write $A\oplus B = \{f+g : f\in A,\ g\in B\}$ if $\mathcal{E}(f,g) = 0$ for all $f\in A$ and $g\in B$.

As we saw above, the Markov chain corresponding to $(X,\mu)$ is recurrent if and only if $H^2 = H_0^2$. When the Markov chain is transient, we have the following decomposition of $H^2$, which is called the Royden decomposition (see [107, Theorem 3.69]).


Proposition 1.24 Suppose that the Markov chain corresponding to $(X,\mu)$ is transient. Then
$$H^2 = \mathcal{H}\oplus H_0^2,$$
where $\mathcal{H} := \{h\in H^2 : h\text{ is a harmonic function on }X\}$. Further, the decomposition is unique.

Proof. For each $f\in H^2$, let $a_f = \inf_{h\in H_0^2}\mathcal{E}(f-h,f-h)$. Then, similarly to the proof of Proposition 1.13, we can show that there is a unique minimizer $v_f\in H_0^2$ such that $a_f = \mathcal{E}(f-v_f,f-v_f)$ and $\mathcal{E}(f-v_f,g) = 0$ for all $g\in H_0^2$; in particular $f-v_f$ is harmonic on $X$. For the uniqueness of the decomposition, suppose $f = u+v = u'+v'$ where $u,u'\in\mathcal{H}$ and $v,v'\in H_0^2$. Then $w := u-u' = v'-v\in\mathcal{H}\cap H_0^2$, so $\mathcal{E}(w,w) = 0$, which implies that $w$ is constant. Since $w\in H_0^2$ and the Markov chain is transient, by Proposition 1.22 we have $w\equiv0$.

1.3 Trace of weighted graphs

Finally in this section, we briefly discuss the trace of weighted graphs, which will be used in Section 3 and Section 7. Note that there is a general theory of traces for Dirichlet forms (see [55]). Also note that the trace to an infinite subset of $X$ may fail to be locally finite, but one can consider quadratic forms on such subsets similarly.

Proposition 1.25 (Trace of the weighted graph) Let V ⊂ X be a non-void set such that P^x(σ_V < ∞) = 1 for all x ∈ X, and let f be a function on V. Then there exists a unique u ∈ H² which attains the following infimum:

inf{E(v, v) : v ∈ H², v|_V = f}. (1.23)

Moreover, the map f ↦ u =: H_V f is linear, and there exist weights {µ̂_xy}_{x,y∈V} such that the corresponding quadratic form Ê_V(·, ·) satisfies

Ê_V(f, f) = E(H_V f, H_V f)  for all f : V → ℝ.

Proof. The first part can be proved similarly to Propositions 1.11 and 1.13. It is clear that H_V(cf) = cH_V f, so we show H_V(f₁ + f₂) = H_V(f₁) + H_V(f₂). Let φ = H_V(f₁ + f₂) and φᵢ = H_V(fᵢ) for i = 1, 2, and for f : V → ℝ define Ef : X → ℝ by (Ef)|_V = f and Ef(x) = 0 for x ∈ V^c. As in the proof of Proposition 1.13 (ii), we see that E(H_V f, g) = 0 whenever Supp g ⊂ V^c. So

E(φ₁ + φ₂, φ₁ + φ₂) = E(φ₁ + φ₂, Ef₁ + Ef₂) = E(φ₁ + φ₂, φ) = E(E(f₁ + f₂), φ) = E(φ, φ).

By the uniqueness in (1.23), we obtain φ₁ + φ₂ = φ. This establishes the linearity of H_V. Set Ê(f, f) = E(H_V f, H_V f). Clearly, Ê is a non-negative definite symmetric bilinear form, and Ê(f, f) = 0 if and only if f is a constant function. So there exists {a_xy}_{x,y∈V} with a_xy = a_yx such that Ê(f, f) = (1/2) Σ_{x,y∈V} a_xy (f(x) − f(y))².


Next, we show that Ê is Markovian. Writing ū = (0 ∨ u) ∧ 1 for a function u, since ((H_V u)‾)|_V = ū, the minimality defining H_V ū gives

Ê(ū, ū) = E(H_V ū, H_V ū) ≤ E((H_V u)‾, (H_V u)‾) ≤ E(H_V u, H_V u) = Ê(u, u)  for all u : V → ℝ,

where the fact that E is Markovian is used in the second inequality. Now take p, q ∈ V with p ≠ q arbitrary, and consider a function h such that h(p) = 1, h(q) = −α < 0 and h(z) = 0 for z ∈ V ∖ {p, q}. Then there exist c₁, c₂ such that

Ê(h, h) = a_pq(h(p) − h(q))² + c₁h(p)² + c₂h(q)² = a_pq(1 + α)² + c₁ + c₂α²,
Ê(h̄, h̄) = a_pq(h̄(p) − h̄(q))² + c₁h̄(p)² + c₂h̄(q)² = a_pq + c₁.

So Ê(h, h) − Ê(h̄, h̄) = (a_pq + c₂)α² + 2a_pqα ≥ 0. Since this holds for all α > 0, we have a_pq ≥ 0. Putting µ̂_pq = a_pq for each p, q ∈ V with p ≠ q, we have Ê_V = Ê; that is, Ê is associated with the weighted graph (V, µ̂).

We call the induced weights {µ̂_xy}_{x,y∈V} the trace of {µ_xy}_{x,y∈X} to V. From this proposition, we see that R_eff(x, y) = R_eff^V(x, y) for x, y ∈ V, where R_eff^V(·, ·) is the effective resistance for (V, µ̂).
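On a finite weighted graph the trace can be computed concretely by linear algebra: writing E(f, f) = fᵀLf with L the graph Laplacian and splitting the vertices into V and W = V^c, the harmonic extension is H_V f = (f, −L_WW⁻¹L_WV f) and the matrix of Ê_V is the Schur complement L_VV − L_VW L_WW⁻¹ L_WV, whose negated off-diagonal entries are the trace weights µ̂_xy. (This reformulation is standard, but it is not stated in the text above.) A minimal sketch for the path 0–1–2 with unit conductances, where the interior block is scalar:

```python
# Conductances on the path 0 - 1 - 2; take the trace to V = {0, 2}.
c01, c12 = 1.0, 1.0
# Laplacian blocks for V = {0, 2}, W = {1}:
#   L_VV = diag(c01, c12), L_VW = (-c01, -c12)^T, L_WW = c01 + c12.
# Schur complement S = L_VV - L_VW L_WW^{-1} L_WV; its off-diagonal entry:
lww = c01 + c12
s02 = 0.0 - (-c01) * (-c12) / lww   # S[0, 2]
mu_hat = -s02                        # induced trace weight mu-hat_{02}
print(mu_hat)                        # 0.5
```

The resulting µ̂₀₂ = c01·c12/(c01 + c12) is the series-conductance formula, consistent with R_eff(0, 2) = R_eff^V(0, 2) = 2.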

2 Heat kernel upper bounds (The Nash inequality)

In this section, we consider various inequalities equivalent to the Nash-type heat kernel upper bound, i.e. p_t(x, y) ≤ c₁t^{−θ/2} for some θ > 0. We would prefer to discuss them in a general framework including weighted graphs; however, some arguments below are rather sketchy in that generality. (The whole argument is rigorous for weighted graphs, so readers may restrict attention to them.) This section is strongly motivated by Coulhon's paper [40].

Let X be a locally compact separable metric space and µ a Radon measure on X such that µ(B) > 0 for any non-void ball B. (E, F) is called a Dirichlet form on L²(X, µ) if it is a symmetric Markovian closed bilinear form on L². It is well known that, given a Dirichlet form, there is a corresponding symmetric strongly continuous Markovian semigroup {P_t}_{t≥0} on L²(X, µ) (see [55, Sections 1.3, 1.4]). Here Markovian means that if u ∈ L² satisfies 0 ≤ u ≤ 1 µ-a.e., then 0 ≤ P_t u ≤ 1 µ-a.e. for all t ≥ 0. We denote the corresponding non-negative definite L²-generator by L.

We denote the inner product of L² by (·, ·), and for p ≥ 1 we write ‖f‖_p for the L^p-norm of f ∈ L^p(X, µ). For each α > 0, define

E_α(·, ·) = E(·, ·) + α(·, ·).

(E₁, F) is then a Hilbert space.

2.1 The Nash inequality

We first give a preliminary lemma.

Lemma 2.1 (i) ‖P_t f‖₁ ≤ ‖f‖₁ for all f ∈ L¹ ∩ L².

(ii) For f ∈ L², define u(t) = (P_t f, P_t f). Then u′(t) = −2E(P_t f, P_t f).

(iii) For f ∈ F and t ≥ 0, exp(−E(f, f)t/‖f‖₂²) ≤ ‖P_t f‖₂/‖f‖₂.


Proof. (i) We first show that if 0 ≤ f ∈ L², then 0 ≤ P_t f. Indeed, if we let f_n = f · 1_{f⁻¹([0,n])}, then f_n → f in L². Since 0 ≤ f_n ≤ n, the Markovian property of {P_t} implies 0 ≤ P_t f_n ≤ n. Taking n → ∞, we obtain 0 ≤ P_t f. Hence |P_t f| ≤ P_t|f|, since −|f| ≤ f ≤ |f|. Using this and the Markovian property, we have for all f ∈ L² ∩ L¹ and all Borel sets A ⊂ X,

(|P_t f|, 1_A) ≤ (P_t|f|, 1_A) = (|f|, P_t 1_A) ≤ ‖f‖₁.

Hence P_t f ∈ L¹ and ‖P_t f‖₁ ≤ ‖f‖₁.

(ii) Since P_t f ∈ Dom(L), we have

(u(t + h) − u(t))/h = (1/h)(P_{t+h}f + P_t f, P_{t+h}f − P_t f) = (P_{t+h}f + P_t f, h⁻¹(P_h − I)P_t f) → −2(P_t f, L P_t f) = −2E(P_t f, P_t f) as h ↓ 0.

Hence u′(t) = −2E(P_t f, P_t f).

(iii) We will prove the inequality for f ∈ Dom(L); the result for f ∈ F then follows by approximation. Let L = ∫₀^∞ λ dE_λ be the spectral decomposition of L. Then P_t = e^{−Lt} = ∫₀^∞ e^{−λt} dE_λ and ‖f‖₂² = ∫₀^∞ (dE_λ f, f). Since λ ↦ e^{−2λt} is convex, by Jensen's inequality,

exp(−2 ∫₀^∞ λt (dE_λ f, f)/‖f‖₂²) ≤ ∫₀^∞ e^{−2λt} (dE_λ f, f)/‖f‖₂² = (P_{2t}f, f)/‖f‖₂² = ‖P_t f‖₂²/‖f‖₂².

Since ∫₀^∞ λ(dE_λ f, f) = E(f, f), taking the square root of both sides gives the desired inequality.
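Both bounds of the lemma can be checked numerically in the simplest finite setting: the two-state symmetric chain with P = [[0, 1], [1, 0]] and L = I − P, where P_t = e^{−tL} has a closed form (constants are fixed and the mean-zero component decays at rate 2). This is only an illustration; the chain and test values below are ours, not from the text:

```python
import math

# Two-state symmetric chain: P_t f fixes the mean m of f = (a, b) and
# multiplies (f - m) by e^{-2t}; E(f, f) = (Lf, f) = (a - b)^2.
def Pt(f, t):
    m = (f[0] + f[1]) / 2.0
    return (m + math.exp(-2 * t) * (f[0] - m),
            m + math.exp(-2 * t) * (f[1] - m))

f, t = (1.0, -2.0), 0.5
g = Pt(f, t)
# Lemma 2.1 (i): L^1-contraction
assert abs(g[0]) + abs(g[1]) <= abs(f[0]) + abs(f[1])
# Lemma 2.1 (iii): exp(-E(f,f) t / ||f||_2^2) <= ||P_t f||_2 / ||f||_2
E = (f[0] - f[1]) ** 2
lhs = math.exp(-E * t / (f[0] ** 2 + f[1] ** 2))
rhs = math.hypot(*g) / math.hypot(*f)
assert lhs <= rhs
print(round(lhs, 3), round(rhs, 3))   # -> 0.407 0.471
```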

Remark 2.2 An alternative proof of (iii) uses the logarithmic convexity of ‖P_t f‖₂². Indeed,

‖P_{(t+s)/2}f‖₂² = (P_{t+s}f, f) = (P_t f, P_s f) ≤ ‖P_t f‖₂ ‖P_s f‖₂  for all s, t > 0,

so ‖P_t f‖₂² is logarithmically convex. Thus,

t ↦ (d/dt) log ‖P_t f‖₂² = (d/dt)(‖P_t f‖₂²)/‖P_t f‖₂² = −2E(P_t f, P_t f)/‖P_t f‖₂²  (2.1)

is non-decreasing. (The last equality is due to Lemma 2.1 (ii).) The right-hand side of (2.1) equals −2E(f, f)/‖f‖₂² when t = 0, so integrating (2.1) over [0, t] gives

log(‖P_t f‖₂²/‖f‖₂²) = ∫₀^t (d/ds) log ‖P_s f‖₂² ds ≥ −2tE(f, f)/‖f‖₂².

The following is easy to see. (Note that we only need the first assertion.)

Lemma 2.3 Let (E, F) be a symmetric closed bilinear form on L²(X, µ), and let {P_t}_{t≥0} and L be the corresponding semigroup and self-adjoint operator, respectively. Then, for each β > 0, (E_β, F) is also a symmetric closed bilinear form, and the corresponding semigroup and self-adjoint operator are {e^{−βt}P_t}_{t≥0} and βI + L, respectively. Further, if (E, F) is a regular Dirichlet form on L²(X, µ) and the corresponding Hunt process is {Y_t}_{t≥0}, then (E_β, F) is also a regular Dirichlet form and the corresponding Hunt process is {Y_{t∧ζ}}_{t≥0}, where ζ is an independent exponential random variable with parameter β. (ζ is the killing time; i.e. the process goes to the cemetery point at ζ.)


The next theorem was proved by Carlen-Kusuoka-Stroock [38]; the original idea of the proof of (i) ⇒ (ii) is due to Nash [96].

Theorem 2.4 (The Nash inequality, [38]) The following are equivalent for any β ≥ 0.

(i) There exist c₁, θ > 0 such that for all f ∈ F ∩ L¹,

‖f‖₂^{2+4/θ} ≤ c₁(E(f, f) + β‖f‖₂²)‖f‖₁^{4/θ}. (2.2)

(ii) For all t > 0, P_t(L¹) ⊂ L^∞ and P_t : L¹ → L^∞ is a bounded operator. Moreover, there exist c₂, θ > 0 such that

‖P_t‖_{1→∞} ≤ c₂e^{βt}t^{−θ/2}, ∀t > 0. (2.3)

Here ‖P_t‖_{1→∞} is the operator norm of P_t : L¹ → L^∞.

When β = 0, we refer to (2.2) as (N) and to (2.3) as (UC).

Proof. First note that, by Lemma 2.3, it is enough to prove the theorem when β = 0.

(i) ⇒ (ii): Let f ∈ L² ∩ L¹ with ‖f‖₁ = 1, and set u(t) := (P_t f, P_t f). Then, by Lemma 2.1 (ii), u′(t) = −2E(P_t f, P_t f). Now by (i) and Lemma 2.1 (i),

2u(t)^{1+2/θ} ≤ c₁(−u′(t))‖P_t f‖₁^{4/θ} ≤ −c₁u′(t),

so −u′(t) ≥ c₂u(t)^{1+2/θ}. Set v(t) = u(t)^{−2/θ}; then v′(t) ≥ 2c₂/θ. Since lim_{t↓0} v(t) = u(0)^{−2/θ} = ‖f‖₂^{−4/θ} > 0, it follows that v(t) ≥ 2c₂t/θ. This means u(t) ≤ c₃t^{−θ/2}, whence ‖P_t f‖₂ ≤ c₃t^{−θ/4}‖f‖₁ for all f ∈ L² ∩ L¹, which implies ‖P_t‖_{1→2} ≤ c₃t^{−θ/4}. Since P_t = P_{t/2} ∘ P_{t/2} and ‖P_{t/2}‖_{1→2} = ‖P_{t/2}‖_{2→∞}, we obtain (ii).
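The decay produced by this differential inequality can be seen numerically. For continuous-time simple random walk on a long cycle (a finite proxy for ℤ, where θ = 1), the spectral formula gives p_t(0, 0) = (1/n) Σ_k exp(−(1 − cos(2πk/n))t), and u(t) = ‖P_t δ₀‖₂² = p_{2t}(0, 0) should decay like t^{−1/2} in the range 1 ≪ t ≪ n². A minimal sketch (the cycle length and test times are our choices):

```python
import math

# Return probability of continuous-time SRW on the n-cycle, via the
# eigenvalues 1 - cos(2 pi k / n) of L = I - P.
def p00(t, n=4000):
    return sum(math.exp(-(1.0 - math.cos(2.0 * math.pi * k / n)) * t)
               for k in range(n)) / n

u = lambda t: p00(2.0 * t)      # u(t) = ||P_t delta_0||_2^2 = p_{2t}(0,0)
ratio = u(40.0) / u(10.0)       # theta = 1 predicts (10/40)^{1/2} = 0.5
print(round(ratio, 3))          # close to 0.5
```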

(ii) ⇒ (i): Let f ∈ L² ∩ L¹ with ‖f‖₁ = 1. Using (ii) and Lemma 2.1 (iii), we have

exp(−2E(f, f)t/‖f‖₂²) ≤ c₄t^{−θ/2}/‖f‖₂².

Rewriting, we have E(f, f)/‖f‖₂² ≥ (2t)⁻¹ log(t^{θ/2}‖f‖₂²) − (2t)⁻¹A, where A = log c₄ and we may take A > 0. Set Λ(x) = sup_{t>0}{(x/(2t)) log(xt^{θ/2}) − Ax/(2t)}. By elementary computations, we have Λ(x) ≥ c₅x^{1+2/θ}. So

E(f, f) ≥ Λ(‖f‖₂²) ≥ c₅(‖f‖₂²)^{1+2/θ} = c₅‖f‖₂^{2+4/θ}.

Since this holds for all f ∈ L² ∩ L¹ with ‖f‖₁ = 1, we obtain (i).
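The "elementary computations" step can be checked directly. For θ = 2 the supremum defining Λ can be done in closed form: the optimal t is e^{1+A}/x, giving Λ(x) = x²/(2e^{1+A}), so Λ(x)/x^{1+2/θ} = Λ(x)/x² is a positive constant. A small sketch comparing a brute-force supremum over a t-grid with this closed form (θ, A and the grid are our choices):

```python
import math

theta, A = 2.0, 1.0
# Log-spaced grid for t covering the optimizer t* = e^{1+A}/x.
ts = [10.0 ** (-4 + 8 * i / 20000) for i in range(20001)]

def Lam(x):
    return max((x / (2.0 * t)) * (math.log(x * t ** (theta / 2.0)) - A)
               for t in ts)

exact = lambda x: x * x / (2.0 * math.exp(1.0 + A))
for x in (0.5, 1.0, 10.0):
    assert abs(Lam(x) - exact(x)) < 1e-3 * exact(x)
print(round(Lam(1.0), 4))   # close to 1/(2 e^2), about 0.0677
```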

Remark 2.5 (1) When one is only concerned with t ≥ 1 (for example, on graphs), we have the following equivalence under the assumption that ‖P_t‖_{1→∞} ≤ C for all t ≥ 0 (where C is independent of t).

(i) (N) with β = 0 holds with E(f, f) replaced by E₁(f, f) = E(f, f) + ‖f‖₂². (ii) (UC) with β = 0 holds for t ≥ 1.
