Electronic Journal of Probability

Vol. 16 (2011), Paper no. 81, pages 2219–2245.

Journal URL
http://www.math.washington.edu/~ejpecp/

Products of independent non-Hermitian random matrices

Sean O’Rourke Alexander Soshnikov

Abstract

We consider the product of a finite number of non-Hermitian random matrices with i.i.d. centered entries of growing size. We assume that the entries have a finite moment of order bigger than two. We show that the empirical spectral distribution of the properly normalized product converges, almost surely, to a non-random, rotationally invariant distribution with compact support in the complex plane. The limiting distribution is a power of the circular law.

Key words: Random matrices, Circular law.

AMS 2010 Subject Classification: Primary 60B20.

Submitted to EJP on January 28, 2011, final version accepted November 3, 2011.

Department of Mathematics, University of California, Davis, One Shields Avenue, Davis, CA 95616-8633. Email: sdorourk@math.ucdavis.edu. Research was supported in part by NSF grants VIGRE DMS-0636297 and DMS-1007558.

Department of Mathematics, University of California, Davis, One Shields Avenue, Davis, CA 95616-8633. Email: soshniko@math.ucdavis.edu. Research was supported in part by NSF grant DMS-1007558.


1 Introduction and Formulation of Results

Many important results in random matrix theory pertain to Hermitian random matrices. Two powerful tools used in this area are the moment method and the Stieltjes transform. Unfortunately, these two techniques are not suitable for dealing with non-Hermitian random matrices [6].

1.1 The Circular Law

One of the fundamental results in the study of non-Hermitian random matrices is the circular law.

We begin by defining the empirical spectral distribution (ESD).

Definition 1. Let X be a matrix of order n and let λ_1, …, λ_n be the eigenvalues of X. Then the empirical spectral distribution (ESD) µ_X of X is defined as

$$\mu_X(z,\bar z) = \frac{1}{n}\,\#\left\{k \le n : \operatorname{Re}(\lambda_k) \le \operatorname{Re}(z);\ \operatorname{Im}(\lambda_k) \le \operatorname{Im}(z)\right\}.$$

Let ξ be a complex random variable with finite non-zero variance σ² and let N_n be a random matrix of order n with entries being i.i.d. copies of ξ. We say that the circular law holds for ξ if, with probability 1, the ESD $\mu_{\frac{1}{\sigma\sqrt n}N_n}$ of $\frac{1}{\sigma\sqrt n}N_n$ converges (uniformly) to the uniform distribution over the unit disk as n tends to infinity.

The circular law was conjectured in the 1950's as a non-Hermitian counterpart to Wigner's semicircle law. The circular law was first shown by Mehta in 1967 [22] when ξ is complex Gaussian. Mehta relied upon the joint density of the eigenvalues, which was discovered by Ginibre [10] two years earlier.

Building on the work of Girko [11], Bai proved the circular law under the conditions that ξ has finite sixth moment and that the joint distribution of the real and imaginary parts of ξ has bounded density [3]. In [6], the sixth moment assumption was weakened to E|ξ|^{2+η} < ∞ for any specified η > 0, but the bounded density assumption still remained. Götze and Tikhomirov ([15]) proved the circular law in the case of i.i.d. sub-Gaussian matrix entries. Pan and Zhou proved the circular law for any distribution ξ with finite fourth moment [25] by building on [15] and utilizing the work of Rudelson and Vershynin in [27]. In an important development, Götze and Tikhomirov showed in [14] that the expected spectral distribution Eµ_{N_n} converges to the uniform distribution over the unit disk as n tends to infinity assuming that sup_{j,k} E|(N_n)_{jk}|² φ((N_n)_{jk}) < ∞, where φ(x) = (ln(1+|x|))^{19+η}, η > 0. In [28], Tao and Vu proved the circular law assuming a bounded (2+η)-th moment, for any fixed η > 0. Finally, Tao and Vu have been able to remove the extra η in the moment condition. Namely, they proved the circular law in [29] assuming only that the second moment is bounded.

1.2 Main Results

In this paper, we study the ESD of the product

$$X^{(n)} = X_1^{(n)} X_2^{(n)} \cdots X_m^{(n)}$$

of m independent n × n non-Hermitian random matrices as n tends to infinity. Burda, Janik, and Waclaw [8] studied the mathematical expectation of the limiting ESD, $\lim_{n\to\infty}\mathbb E\,\mu_X^{(n)}$, in the case that the entries of the matrices are Gaussian. Here we extend their results by proving the almost sure convergence of the ESD, $\mu_X^{(n)}$, for a class of non-Gaussian random matrices. Namely, we require that the entries of X_i^{(n)}, i = 1, …, m, are i.i.d. random variables with a finite moment of order 2+η, η > 0.

Theorem 2. Fix m > 1 and let ξ be a complex random variable with variance 1 such that Re(ξ) and Im(ξ) are independent, each with mean zero, and E|ξ|^{2+η} < ∞ for some η > 0. Let X_1^{(n)}, …, X_m^{(n)} be independent random matrices of order n where the entries of X_j^{(n)} are i.i.d. copies of $\frac{\sigma_j\,\xi}{\sqrt n}$ for some collection of positive constants σ_1, …, σ_m. Then the ESD $\mu_X^{(n)}$ of X^{(n)} = X_1^{(n)} X_2^{(n)} ⋯ X_m^{(n)} converges, with probability 1, as n → ∞ to the distribution whose density is given by

$$\rho(z,\bar z) = \begin{cases} \dfrac{1}{m\pi\sigma^{2/m}}\,|z|^{\frac{2}{m}-2} & \text{for } |z| \le \sigma,\\[4pt] 0 & \text{for } |z| > \sigma, \end{cases} \qquad (1)$$

where σ = σ_1 ⋯ σ_m.

Remark 3. The almost sure convergence of $\mu_X^{(n)}$ implies the convergence of $\mathbb E\,\mu_X^{(n)}$ as well.

Remark 4. We refer the reader to [4] for bounds on powers of a square random matrix with i.i.d. entries. See also [1], [2], [9], [5], [7], and [24] for some other results on the spectral properties of products of random matrices.
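As a numerical illustration of Theorem 2 (an editorial sanity check with NumPy, not part of the paper; the Gaussian entries and the sizes n, m are arbitrary choices), one can sample the eigenvalues of a normalized product and compare moduli with the limiting law: integrating ρ in (1) with σ = 1 gives P(|λ| ≤ r) = r^{2/m}, so the median modulus should approach 2^{−m/2}.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 400, 2  # matrix size and number of factors (illustrative choices)

# Entries are i.i.d. copies of xi/sqrt(n) with Var(xi) = 1 (here real Gaussian),
# so sigma_1 = ... = sigma_m = 1 and the limiting support is the unit disk.
X = [rng.standard_normal((n, n)) / np.sqrt(n) for _ in range(m)]
P = X[0]
for A in X[1:]:
    P = P @ A

lam = np.linalg.eigvals(P)

# Radial CDF of the limit is r^{2/m}; the theoretical median modulus for m = 2
# is 2**(-m/2) = 0.5, so the sample median should be close to that value.
median_r = np.median(np.abs(lam))
print(median_r)
```

For m = 1 the same check recovers the median modulus 1/√2 of the circular law itself.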

2 Notation and Setup

The proof of Theorem 2 is divided into two parts and presented in Sections 3 and 4.

We note that without loss of generality, we may assume σ_1 = σ_2 = ⋯ = σ_m = 1. Indeed, the spectrum for arbitrary σ_1, …, σ_m can be obtained by a trivial rescaling. Following Burda, Janik, and Waclaw in [8], we let Y^{(n)} be a (mn) × (mn) matrix defined as

$$Y^{(n)} = \begin{pmatrix} 0 & X_1^{(n)} & 0 & \cdots & 0\\ 0 & 0 & X_2^{(n)} & \ddots & \vdots\\ \vdots & & \ddots & \ddots & 0\\ 0 & 0 & \cdots & 0 & X_{m-1}^{(n)}\\ X_m^{(n)} & 0 & \cdots & 0 & 0 \end{pmatrix}. \qquad (2)$$

Section 4 will be devoted to proving that the ESD of Y^{(n)} obeys the circular law as n tends to infinity. This statement is presented in the following lemma.

Lemma 5 (Y^{(n)} obeys the circular law). The ESD $\mu_{Y^{(n)}}$ of Y^{(n)} converges, with probability 1, to the uniform distribution over the unit disk as n → ∞.
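The block structure used in the proof of Theorem 2 can be verified directly (an illustrative NumPy sketch, not from the paper; the small sizes n, m are arbitrary): the m-th power of Y^{(n)} is block diagonal, so its eigenvalues are those of X_1 ⋯ X_m, each repeated m times.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 5, 3  # small illustrative sizes

X = [rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)) for _ in range(m)]

# Build Y as in (2): block (k, k+1) holds X_k, block (m, 1) holds X_m.
Y = np.zeros((m * n, m * n), dtype=complex)
for k in range(m - 1):
    Y[k*n:(k+1)*n, (k+1)*n:(k+2)*n] = X[k]
Y[(m-1)*n:m*n, 0:n] = X[m - 1]

prod = X[0]
for A in X[1:]:
    prod = prod @ A

# Eigenvalues of Y^m should be those of X_1...X_m, each with multiplicity m.
ev_Ym = np.sort_complex(np.linalg.eigvals(np.linalg.matrix_power(Y, m)))
ev_X = np.sort_complex(np.tile(np.linalg.eigvals(prod), m))
print(np.max(np.abs(ev_Ym - ev_X)))  # tiny, up to floating-point error
```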

3 Proof of Theorem 2

With Lemma 5 above, we are ready to prove Theorem 2.


Proof of Theorem 2. Using the definition of Y^{(n)} in (2), we can compute

$$\left(Y^{(n)}\right)^m = \begin{pmatrix} Y_1 & & & 0\\ & Y_2 & & \\ & & \ddots & \\ 0 & & & Y_m \end{pmatrix},$$

where $Y_k = X_k^{(n)} X_{k+1}^{(n)} \cdots X_m^{(n)} X_1^{(n)} \cdots X_{k-1}^{(n)}$ for 1 ≤ k ≤ m. Notice that each Y_k has the same eigenvalues as X^{(n)}. Let λ_1, …, λ_n denote the eigenvalues of X^{(n)} and let η_1, …, η_{mn} denote the eigenvalues of Y^{(n)}. Then it follows that each λ_k is an eigenvalue of $\left(Y^{(n)}\right)^m$ with multiplicity m.

Let f : ℂ → ℂ be a continuous, bounded function. Then we have

$$\int_{\mathbb C} f(z)\, d\mu_{X^{(n)}}(z,\bar z) = \frac{1}{n}\sum_{k=1}^{n} f(\lambda_k) = \frac{1}{mn}\sum_{k=1}^{mn} f(\eta_k^m) = \int_{\mathbb C} f(z^m)\, d\mu_{Y^{(n)}}(z,\bar z).$$

By Lemma 5,

$$\int_{\mathbb C} f(z^m)\, d\mu_{Y^{(n)}}(z,\bar z) \longrightarrow \frac{1}{\pi}\int_D f(z^m)\, dz\, d\bar z \quad \text{a.s.}$$

as n → ∞, where D denotes the unit disk in the complex plane. Thus, by the change of variables z ↦ z^m and z̄ ↦ z̄^m, we can write

$$\frac{1}{\pi}\int_D f(z^m)\, dz\, d\bar z = \frac{m}{\pi}\int_D f(z)\,\frac{1}{m^2}|z|^{\frac{2}{m}-2}\, dz\, d\bar z = \frac{1}{m\pi}\int_D f(z)\,|z|^{\frac{2}{m}-2}\, dz\, d\bar z,$$

where the factor of m out front of the integral corresponds to the fact that the transformation maps the complex plane m times onto itself.

Therefore, we have shown that for all continuous, bounded functions f,

$$\int_{\mathbb C} f(z)\, d\mu_{X^{(n)}}(z,\bar z) \longrightarrow \frac{1}{m\pi}\int_D f(z)\,|z|^{\frac{2}{m}-2}\, dz\, d\bar z \quad \text{a.s.}$$

as n → ∞, and the proof is complete.
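The change of variables in the proof can be sanity-checked by Monte Carlo (an editorial NumPy sketch; the test function f and the sample size are arbitrary): both sides of the identity are averages of f against explicit radial laws, namely |z| = √U for the uniform disk and |w| = U^{m/2} for the density |w|^{2/m−2}/(mπ), with U uniform on [0, 1].

```python
import numpy as np

rng = np.random.default_rng(2)
m, N = 3, 200_000
f = lambda z: np.exp(-np.abs(z) ** 2)  # any bounded continuous test function

# Left side: (1/pi) * integral over D of f(z^m) = E[f(z^m)], z uniform on the disk;
# the radial CDF of the uniform disk is r^2, so |z| = sqrt(U).
r = np.sqrt(rng.uniform(size=N))
z = r * np.exp(2j * np.pi * rng.uniform(size=N))
lhs = f(z ** m).mean()

# Right side: (1/(m*pi)) * integral of f(w)|w|^{2/m-2} = E[f(w)], where the
# radial CDF is r^{2/m}, so |w| = U^{m/2} with a uniform phase.
w = rng.uniform(size=N) ** (m / 2) * np.exp(2j * np.pi * rng.uniform(size=N))
rhs = f(w).mean()

print(lhs, rhs)  # the two averages agree up to Monte Carlo error
```

Indeed |z^m| = U^{m/2} exactly when |z| = √U, so w has the same law as z^m and the two averages must coincide in the limit.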

4 Proof of Lemma 5

In order to prove that the ESD of Y^{(n)} obeys the circular law, we follow the work of Bai in [3] and Bai and Silverstein in [6], and use the results developed by Tao and Vu in [28]. To do so, we introduce the following notation. Let µ_n denote the ESD of Y^{(n)}. That is,

$$\mu_n(x,y) = \frac{1}{mn}\,\#\left\{k \le mn : \operatorname{Re}(\lambda_k) \le x;\ \operatorname{Im}(\lambda_k) \le y\right\},$$

where λ_1, …, λ_{mn} are the eigenvalues of Y^{(n)}.

An important idea in the proof is to analyze the Stieltjes transform s_n : ℂ → ℂ of µ_n defined by

$$s_n(z) = \frac{1}{mn}\sum_{k=1}^{mn}\frac{1}{\lambda_k - z} = \int_{\mathbb C}\frac{1}{x+iy-z}\, d\mu_n(x,y).$$


Since s_n(z) is analytic everywhere except at the poles, the real part determines the eigenvalues. Let z = s + it. Then we can write

$$\operatorname{Re}(s_n(z)) = \frac{1}{mn}\sum_{k=1}^{mn}\frac{\operatorname{Re}(\lambda_k)-s}{|\lambda_k - z|^2} = -\frac{1}{2mn}\sum_{k=1}^{mn}\frac{\partial}{\partial s}\ln|\lambda_k - z|^2 = -\frac{1}{2}\,\frac{\partial}{\partial s}\int_0^{\infty}\ln x\,\nu_n(dx,z),$$

where ν_n(·, z) is the ESD of the Hermitian matrix $H_n = (Y^{(n)}-zI)^*(Y^{(n)}-zI)$. This reduces the task to controlling the distributions ν_n.
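The log-potential identity above is exact for any fixed matrix and can be checked numerically (an editorial NumPy sketch; the matrix size, the point z, and the finite-difference step are arbitrary): ∫ln x ν_n(dx, z) equals the mean of ln of the squared singular values of Y − zI, and its s-derivative recovers −2 Re s_n(z).

```python
import numpy as np

rng = np.random.default_rng(3)
N = 60
Y = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)
lam = np.linalg.eigvals(Y)

def log_potential(s, t):
    # (1/N) * sum_k ln|lambda_k - z|^2 = integral of ln x against nu_N(dx, z)
    z = s + 1j * t
    sv = np.linalg.svd(Y - z * np.eye(N), compute_uv=False)
    return np.mean(np.log(sv ** 2))

s, t = 1.5, 0.3                       # a point outside the spectrum
z = s + 1j * t
re_sn = np.mean((lam.real - s) / np.abs(lam - z) ** 2)

h = 1e-6                              # central finite difference in s
dds = (log_potential(s + h, t) - log_potential(s - h, t)) / (2 * h)
print(re_sn, -0.5 * dds)              # agree to finite-difference accuracy
```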

The main difficulties arise from the two poles of the log function, at ∞ and at 0. We will need to use the bounds developed in [3] and [28] to control the largest singular value and the least singular value of Y^{(n)} − zI.

A version of the following lemma was first presented by Girko [11]. We present a slightly refined version due to Bai and Silverstein [6].

Lemma 6. For any uv ≠ 0, we have

$$c_n(u,v) = \int\!\!\int e^{iux+ivy}\,\mu_n(dx,dy) = \frac{u^2+v^2}{4iu\pi}\int\!\!\int \frac{\partial}{\partial s}\left[\int_0^{\infty}\ln x\,\nu_n(dx,z)\right] e^{ius+ivt}\, dt\, ds, \qquad (3)$$

where z = s + it.

We note that the singular values of Y^{(n)} are the union of the singular values of X_k^{(n)} for 1 ≤ k ≤ m. Thus, under the assumptions of Theorem 2, the ESD of $Y^{(n)}{Y^{(n)}}^*$ converges to the Marchenko–Pastur law (see [20] and [6, Theorem 3.7]). Thus by Lemma 8 it follows that, with probability 1, the family of distributions µ_n is tight. To prove the circular law we will show that the right-hand side of (3) converges to c(u,v), its counterpart generated by the circular law, for all uv ≠ 0. Several steps of the proof will follow closely the work of Bai in [3] and Bai and Silverstein in [6]. We present an outline of the proof as follows.

1. We reduce the range of integration to a finite rectangle in Section 4.2. We will show that the proof reduces to showing that, for every large A > 0 and small ε > 0,

$$\int\!\!\int_T \left[\frac{\partial}{\partial s}\int_0^{\infty}\ln x\,\nu_n(dx,z)\right] e^{ius+ivt}\, ds\, dt \longrightarrow \int\!\!\int_T \left[\frac{\partial}{\partial s}\int_0^{\infty}\ln x\,\nu(dx,z)\right] e^{ius+ivt}\, ds\, dt,$$

where $T = \{(s,t) : |s| \le A,\ |t| \le A^3,\ |\sqrt{s^2+t^2}-1| \ge \varepsilon\}$ and ν(x, z) is the limiting spectral distribution of the sequence of matrices $H_n = (Y^{(n)}-zI)^*(Y^{(n)}-zI)$.


2. We characterize the limiting spectrum ν(·, z) of ν_n(·, z).

3. We establish a convergence rate of ν_n(·, z) to ν(·, z) uniformly in every bounded region of z.

4. Finally, we show that for a suitably defined sequence ε_n, with probability 1,

$$\limsup_{n\to\infty}\left|\int\!\!\int_T \int_{\varepsilon_n}^{\infty}\ln x\,\bigl(\nu_n(dx,z)-\nu(dx,z)\bigr)\right| = 0$$

and

$$\lim_{n\to\infty}\int_0^{\varepsilon_n}\ln x\,\nu_n(dx,z) = 0.$$

4.1 Notation

In this section, we introduce some notation that we will use throughout the paper.

First, we will drop the superscript (n) from the matrices Y^{(n)}, X^{(n)}, X_1^{(n)}, …, X_m^{(n)} and simply write Y, X, X_1, …, X_m.

We write R = Y − zI, where I is the identity matrix and z = s + it ∈ ℂ. We will continue to let $H_n = (Y-zI)^*(Y-zI) = R^*R$ and have ν_n(x, z) denote the empirical spectral distribution of H_n for each fixed z.

For a (mn) × (mn) matrix A, there are m² blocks, each consisting of an n × n matrix. We let A_{ab} denote the n × n matrix in position a, b, where 1 ≤ a, b ≤ m. A_{a,b;i,j} then refers to the element (A_{ab})_{ij}, where 1 ≤ i, j ≤ n.

Finally, C will be used as some positive constant that may change from line to line.

4.2 Integral Range Reduction

To establish Lemma 5, we need to find the limiting counterpart to

$$g_n(s,t) = \frac{\partial}{\partial s}\int_0^{\infty}\ln x\,\nu_n(dx,z).$$

We begin by presenting the following lemmas.

Lemma 7 (Bai–Silverstein [6]). For all uv ≠ 0, we have

$$c(u,v) = \frac{1}{\pi}\int\!\!\int_{x^2+y^2\le 1} e^{iux+ivy}\, dx\, dy = \frac{u^2+v^2}{4iu\pi}\int\left[\int g(s,t)\, e^{ius+ivt}\, dt\right] ds,$$

where

$$g(s,t) = \begin{cases} \dfrac{2s}{s^2+t^2}, & \text{if } s^2+t^2 > 1,\\[4pt] 2s, & \text{otherwise.} \end{cases}$$

Lemma 8 (Horn–Johnson [17]). Let λ_j and η_j denote the eigenvalues and singular values of an n × n matrix A, respectively, each arranged in descending order of absolute value. Then for any k ≤ n,

$$\sum_{j=1}^{k}|\lambda_j|^2 \le \sum_{j=1}^{k}\eta_j^2.$$
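The Weyl-type majorization in Lemma 8 is easy to test on a random matrix (an editorial NumPy sketch; the size and seed are arbitrary): sort the eigenvalue moduli and singular values in descending order and compare partial sums of squares.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 8
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

lam = np.sort(np.abs(np.linalg.eigvals(A)))[::-1]   # |eigenvalues|, descending
eta = np.linalg.svd(A, compute_uv=False)            # singular values, descending

partial_lam = np.cumsum(lam ** 2)
partial_eta = np.cumsum(eta ** 2)
# Every partial sum of |lambda_j|^2 is dominated by the corresponding
# partial sum of eta_j^2 (with equality for k = n up to rounding? no --
# equality for k = n holds only for normal matrices; in general it is <=).
print(bool(np.all(partial_lam <= partial_eta + 1e-9)))  # True
```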

Lemma 9 (Bai–Silverstein [6]). For any uv ≠ 0 and A > 2, we have

$$\left|\int_{|s|\ge A}\int_{-\infty}^{\infty} g_n(s,t)\, e^{ius+ivt}\, dt\, ds\right| \le \frac{4\pi}{|v|}\,e^{-\frac{1}{2}|v|A} + \frac{2\pi}{n|v|}\sum_{k=1}^{mn} I\!\left(|\lambda_k| \ge \frac{A}{2}\right)$$

and

$$\left|\int_{|s|\le A}\int_{|t|\ge A^3} g_n(s,t)\, e^{ius+ivt}\, dt\, ds\right| \le \frac{8A}{A^2-1} + \frac{4\pi A}{n}\sum_{k=1}^{mn} I(|\lambda_k| > A),$$

where λ_1, …, λ_{mn} are the eigenvalues of Y. Furthermore, if the function g_n(s,t) is replaced by g(s,t), the two inequalities above hold without the second terms.

Now we note that, under the assumptions of Theorem 2, by Lemma 8 and the law of large numbers we have

$$\frac{1}{n}\sum_{k=1}^{mn} I(|\lambda_k| > A) \le \frac{1}{nA^2}\operatorname{Tr}(YY^*) \longrightarrow \frac{m}{A^2} \quad \text{a.s.}$$

Therefore, the right-hand sides of the inequalities in Lemma 9 can be made arbitrarily small by making A large enough. The same is true when g_n(s,t) is replaced by g(s,t). Our task is then reduced to showing

$$\int_{|s|\le A}\int_{|t|\le A^3}\left[g_n(s,t)-g(s,t)\right] e^{ius+ivt}\, ds\, dt \longrightarrow 0.$$

We define the sets

$$T = \left\{(s,t) : |s| \le A,\ |t| \le A^3 \text{ and } \bigl||z|-1\bigr| \ge \varepsilon\right\}$$

and

$$T_1 = \left\{(s,t) : \bigl||z|-1\bigr| < \varepsilon\right\},$$

where z = s + it.

Lemma 10 (Bai–Silverstein [6]). For all fixed A and 0 < ε < 1,

$$\int\!\!\int_{T_1} |g_n(s,t)|\, ds\, dt \le 32\sqrt{\varepsilon}. \qquad (4)$$

Furthermore, if the function g_n(s,t) is replaced by g(s,t), the inequality above holds.

Since the right-hand side of (4) can be made arbitrarily small by choosing ε small, our task is reduced to showing

$$\int\!\!\int_T \left[g_n(s,t)-g(s,t)\right] e^{ius+ivt}\, ds\, dt \longrightarrow 0 \quad \text{a.s.} \qquad (5)$$


4.3 Characterization of the Circular Law

In this section, we study the convergence of the distributions ν_n(x, z) to a limiting distribution ν(x, z), as well as discuss properties of the limiting distribution ν(x, z). We begin with a standard truncation argument, which can be found, for example, in [6].

4.3.1 Truncation

Let Ŷ and Ỹ be the (mn) × (mn) matrices with entries

$$\hat Y_{a,b;i,j} = Y_{a,b;i,j}\, I\bigl(\sqrt n\,|Y_{a,b;i,j}| \le n^{\delta}\bigr) - \mathbb E\, Y_{a,b;i,j}\, I\bigl(\sqrt n\,|Y_{a,b;i,j}| \le n^{\delta}\bigr)$$

and

$$\tilde Y_{a,b;i,j} = \frac{\hat Y_{a,b;i,j}}{\sqrt{n\,\mathbb E\,|\hat Y_{a,b;i,j}|^2}},$$

where δ > 0. We denote the ESD of $\hat H_n = (\hat Y-zI)^*(\hat Y-zI)$ by ν̂_n(·, z) and the ESD of $\tilde H_n = (\tilde Y-zI)^*(\tilde Y-zI)$ by ν̃_n(·, z).

We will let L(F_1, F_2) be the Levy distance between two distribution functions F_1 and F_2, defined by

$$L(F_1,F_2) = \inf\left\{\varepsilon : F_1(x-\varepsilon)-\varepsilon \le F_2(x) \le F_1(x+\varepsilon)+\varepsilon \text{ for all } x \in \mathbb R\right\}.$$

We then have the following lemma.

Lemma 11. We have that

$$L(\nu_n(\cdot,z),\tilde\nu_n(\cdot,z)) = o(n^{-\eta\delta/4}) \quad \text{a.s.},$$

where the bound is uniform for |z| ≤ M.

Proof. By [6, Corollary A.42] we have that

$$L^4(\nu_n(\cdot,z),\hat\nu_n(\cdot,z)) \le \frac{2}{n^2}\operatorname{Tr}(H_n+\hat H_n)\operatorname{Tr}\left[(Y-\hat Y)(Y-\hat Y)^*\right].$$

By the law of large numbers it follows that, with probability 1,

$$\frac{1}{n}\operatorname{Tr} H_n = \frac{1}{n}\sum_{a=1}^{m}\sum_{1\le i,j\le n}|Y_{a,a+1;i,j}|^2 + m|z|^2 \longrightarrow m(1+|z|^2).$$

Similarly, $\frac{1}{n}\operatorname{Tr}(\hat H_n) \to m(1+|z|^2)$ a.s.


For any L > 0 and n large enough that n^δ ≥ L, we have

$$\frac{n^{\delta\eta}}{n}\operatorname{Tr}\left[(Y-\hat Y)(Y-\hat Y)^*\right] = \frac{n^{\delta\eta}}{n}\sum_{a=1}^{m}\sum_{1\le i,j\le n}\bigl|(Y-\hat Y)_{a,a+1;i,j}\bigr|^2$$
$$= \frac{n^{\delta\eta}}{n^2}\sum_{a=1}^{m}\sum_{1\le i,j\le n}\Bigl|\sqrt n\,Y_{a,a+1;i,j}\, I\bigl(\sqrt n\,|Y_{a,a+1;i,j}| > n^{\delta}\bigr) - \mathbb E\,\sqrt n\,Y_{a,a+1;i,j}\, I\bigl(\sqrt n\,|Y_{a,a+1;i,j}| > n^{\delta}\bigr)\Bigr|^2$$
$$\le 2n^{\delta\eta}\left(\frac{1}{n^2}\sum_{a=1}^{m}\sum_{1\le i,j\le n}\bigl|\sqrt n\,Y_{a,a+1;i,j}\bigr|^2\, I\bigl(\sqrt n\,|Y_{a,a+1;i,j}| > n^{\delta}\bigr) + \mathbb E\,|\xi|^2 I(|\xi| > n^{\delta})\right)$$
$$\le 2\left(\frac{1}{n^2}\sum_{a=1}^{m}\sum_{1\le i,j\le n}\bigl|\sqrt n\,Y_{a,a+1;i,j}\bigr|^{2+\eta}\, I\bigl(\sqrt n\,|Y_{a,a+1;i,j}| > L\bigr) + \mathbb E\,|\xi|^{2+\eta} I(|\xi| > L)\right),$$

and hence

$$\limsup_{n\to\infty}\frac{n^{\delta\eta}}{n}\operatorname{Tr}\left[(Y-\hat Y)(Y-\hat Y)^*\right] \le 4m\,\mathbb E\,|\xi|^{2+\eta} I(|\xi| > L) \quad \text{a.s.},$$

which can be made arbitrarily small by making L large. Thus we have that

$$L(\nu_n(\cdot,z),\hat\nu_n(\cdot,z)) = o(n^{-\eta\delta/4}) \quad \text{a.s.},$$

where the bound is uniform for |z| ≤ M.

By [6, Corollary A.42] we also have that

$$L^4(\hat\nu_n(\cdot,z),\tilde\nu_n(\cdot,z)) \le \frac{2}{n^2}\operatorname{Tr}(\hat H_n+\tilde H_n)\operatorname{Tr}(\hat Y\hat Y^*)\left(1 - \frac{1}{\sqrt{\mathbb E\,|\sqrt n\,\hat Y_{1,2;1,1}|^2}}\right)^2.$$

A similar argument shows that $1 - \sqrt{\mathbb E\,|\sqrt n\,\hat Y_{1,2;1,1}|^2} = o(n^{-\eta\delta})$ and the proof is complete.

Remark 12. For the remainder of the subsection, we will assume the conditions of Theorem 2 hold. Also, by Lemma 11 we may additionally assume that |Y_{a,a+1;i,j}| ≤ n^δ.

4.3.2 Useful tools and lemmas

We begin by denoting the Stieltjes transform of ν_n(·, z) by

$$\Delta_n(\alpha,z) = \int \frac{\nu_n(dx,z)}{x-\alpha},$$

where α = x + iy with y > 0. We also note that $\Delta_n(\alpha,z) = \frac{1}{mn}\operatorname{Tr}(G)$, where $G = (H_n-\alpha I)^{-1}$ is the resolvent matrix. For brevity, the variable z will be suppressed when there is no confusion, and we will simply write Δ_n(α).

We first present a number of lemmas that we will need to study Δ_n(α). We remind the reader that R = Y − zI and α = x + iy.


Lemma 13. If y > 0 and x ∈ K for some compact set K, then we have the following bounds:

$$\|Y\|^2 \le \max_{1\le k\le m}\|X_k\|^2 \le \sum_{k=1}^{m}\|X_k\|^2, \qquad (6)$$

$$\|G\| \le \frac{1}{y}, \qquad (7)$$

$$\|RG\| \le C\sqrt{\frac{1}{y^2}+\frac{1}{y}}, \qquad (8)$$

$$\|GR^*\| \le C\sqrt{\frac{1}{y^2}+\frac{1}{y}}, \qquad (9)$$

for some constant C > 0 which depends on K. Moreover, there exists a constant C which depends only on K such that

$$\sup\left\{\|RG\| : x\in K,\ y\ge y_n,\ z\in\mathbb C\right\} \le C\sqrt{\frac{1}{y_n^2}+\frac{1}{y_n}}, \qquad (10)$$

$$\sup\left\{\|GR^*\| : x\in K,\ y\ge y_n,\ z\in\mathbb C\right\} \le C\sqrt{\frac{1}{y_n^2}+\frac{1}{y_n}}, \qquad (11)$$

for any sequence y_n > 0.

Proof. The first inequality in (6) follows from the definition of the norm, and the second inequality is trivial. The resolvent bound in (7) follows immediately because H_n is a Hermitian matrix.

To prove (8), we use the polar decomposition to write R = U|R|, where U is a partial isometry and $|R| = \sqrt{R^*R}$. Then

$$\|RG\| = \left\|U|R|(R^*R-\alpha)^{-1}\right\| \le \left\||R|(R^*R-\alpha)^{-1}\right\| \le \sup_{t\in\operatorname{Sp}(R^*R)}\left|\sqrt t\,(t-\alpha)^{-1}\right| \le \sup_{t\ge 0}\left|\sqrt t\,(t-\alpha)^{-1}\right| \le C\sqrt{\frac{1}{y^2}+\frac{1}{y}}.$$

A similar argument verifies (9), and (10) and (11) follow from (8) and (9) by using that y ≥ y_n.

Lemma 14. We have that

$$\mathbb E\left[\frac{1}{n}\operatorname{Tr} G_{a,a}\right] = \mathbb E\left[\frac{1}{mn}\operatorname{Tr} G\right]$$

for any 1 ≤ a ≤ m.

Proof. Fix 1 ≤ a ≤ m and 1 ≤ i ≤ n. We will show that $\mathbb E\, G_{a,a;i,i} = \mathbb E\, G_{a+1,a+1;i,i}$.

Using the adjoint formula for the inverse of a matrix, we can write that for any 1 ≤ b ≤ m

$$G_{b,b;i,i} = \frac{\det(R^*R-\alpha I)^{(b,i)}}{\det(R^*R-\alpha I)},$$

where $(R^*R-\alpha I)^{(b,i)}$ is the matrix R^*R − αI with the entries in the row and column that contain the element (R^*R − αI)_{b,b;i,i} replaced by zeroes, except for the diagonal element, which is replaced by a 1.

We will write $Q_b = X_b^*X_b + |z|^2 I - \alpha I$ and then note that R^*R − αI has the form

$$\begin{pmatrix} Q_m & -\bar z X_1 & 0 & \cdots & 0 & -z X_m^*\\ -z X_1^* & Q_1 & -\bar z X_2 & 0 & \cdots & 0\\ 0 & -z X_2^* & Q_2 & \ddots & & \vdots\\ \vdots & 0 & \ddots & \ddots & -\bar z X_{m-2} & 0\\ 0 & \cdots & 0 & -z X_{m-2}^* & Q_{m-2} & -\bar z X_{m-1}\\ -\bar z X_m & 0 & \cdots & 0 & -z X_{m-1}^* & Q_{m-1} \end{pmatrix}, \qquad (12)$$

where Q_m, Q_1, …, Q_{m−1} appear along the diagonal.

Let σ = (1 2 3 … m) ∈ S_m. We now construct two bijective maps. Let T_σ be the map that takes matrices of the form (12) into the matrix where each occurrence of X_b is replaced by X_{σ(b)} and each occurrence of Q_b is replaced by Q_{σ(b)}. Also, let

$$\Omega = \underbrace{\mathbb C^{n^2}\times\mathbb C^{n^2}\times\cdots\times\mathbb C^{n^2}}_{m\ \text{times}}$$

denote the probability space. Then we write ω ∈ Ω as ω = (X_1, X_2, …, X_m). We now define T'_σ : Ω → Ω by T'_σ(X_1, …, X_m) = (X_2, X_3, …, X_m, X_1). Since X_1, …, X_m are independent and identically distributed random matrices, T'_σ is a measure-preserving map.

We claim that det(R^*R − αI) = det(T_σ(R^*R − αI)). Indeed, if λ is an eigenvalue of (R^*R − αI) with eigenvector v = (v_m, v_1, …, v_{m−1})^T, where each v_b is an n-vector, then a simple computation reveals that w = (v_{σ(m)}, v_{σ(1)}, …, v_{σ(m−1)})^T = (v_1, v_2, …, v_m)^T is an eigenvector of T_σ(R^*R − αI) with eigenvalue λ. Similarly, $\det(R^*R-\alpha I)^{(b,i)} = \det\bigl(T_\sigma\bigl((R^*R-\alpha I)^{(b,i)}\bigr)\bigr)$. Define f_{a,i}(ω) to be $\det(R^*R-\alpha I)^{(a,i)}(\omega)$ for each realization ω ∈ Ω. Then we have that

$$f_{a+1,i}(\omega) = \det\bigl((R^*R-\alpha I)^{(a+1,i)}\bigr)(\omega) = \det\Bigl(T_\sigma\bigl((R^*R-\alpha I)^{(a+1,i)}\bigr)\Bigr)(\omega) = \det\bigl((R^*R-\alpha I)^{(a,i)}\bigr)(T'_\sigma(\omega)) = f_{a,i}(T'_\sigma(\omega))$$

and

$$\det(R^*R-\alpha I)(\omega) = \det(R^*R-\alpha I)(T'_\sigma(\omega)).$$

Thus $G_{a,a;i,i}(T'_\sigma(\omega)) = G_{a+1,a+1;i,i}(\omega)$ for each ω ∈ Ω. Since T'_σ is measure preserving, the proof is complete.
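The cyclic identity at the heart of this proof is deterministic and can be checked numerically (an editorial NumPy sketch; the sizes and the values of z, α are arbitrary): the block traces of the resolvent for the shifted tuple (X_2, …, X_m, X_1) are the block traces for (X_1, …, X_m) rotated by one position.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 4, 3
z, alpha = 0.7 + 0.2j, 0.5 + 1.0j   # any alpha with Im(alpha) > 0

def block_traces(Xs):
    # Traces Tr G_{a,a} of the resolvent G = (R*R - alpha I)^{-1},
    # with R = Y - zI and Y built as in (2).
    mn = m * n
    Y = np.zeros((mn, mn), dtype=complex)
    for k in range(m - 1):
        Y[k*n:(k+1)*n, (k+1)*n:(k+2)*n] = Xs[k]
    Y[(m-1)*n:mn, 0:n] = Xs[m - 1]
    R = Y - z * np.eye(mn)
    G = np.linalg.inv(R.conj().T @ R - alpha * np.eye(mn))
    return [np.trace(G[a*n:(a+1)*n, a*n:(a+1)*n]) for a in range(m)]

X = [rng.standard_normal((n, n)) for _ in range(m)]
t1 = block_traces(X)                   # Tr G_{a,a} for (X1, X2, X3)
t2 = block_traces(X[1:] + X[:1])       # Tr G_{a,a} for (X2, X3, X1)

# Tr G_{a+1,a+1}(X1,...,Xm) equals Tr G_{a,a}(X2,...,Xm,X1):
diff = np.max(np.abs(np.array(t1[1:] + t1[:1]) - np.array(t2)))
print(diff)  # numerically zero
```

Since the cyclic shift of independent i.i.d. factors preserves their joint law, taking expectations of this identity yields exactly the statement of Lemma 14.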

Next, we present the decoupling formula, which can be found, for example, in [18]. If ξ is a real-valued random variable such that $\mathbb E|\xi|^{p+2} < \infty$ and if f(t) is a complex-valued function of a real variable such that its first p+1 derivatives are continuous and bounded, then

$$\mathbb E[\xi f(\xi)] = \sum_{a=0}^{p}\frac{\kappa_{a+1}}{a!}\,\mathbb E\bigl[f^{(a)}(\xi)\bigr] + \varepsilon, \qquad (13)$$

where κ_a are the cumulants of ξ and $|\varepsilon| \le C\,\sup_t|f^{(p+1)}(t)|\,\mathbb E|\xi|^{p+2}$, where C depends only on p.

If ξ is a Gaussian random variable with mean zero, then all the cumulants vanish except for κ_2, and the decoupling formula reduces to the exact equation

$$\mathbb E[\xi f(\xi)] = \mathbb E[\xi^2]\,\mathbb E[f'(\xi)].$$
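The exact Gaussian case (Stein's identity) is easy to confirm by Monte Carlo (an editorial NumPy sketch; the choice f = sin and the sample size are arbitrary): for standard Gaussian ξ, E[ξ sin ξ] should equal E[ξ²] E[cos ξ] = e^{−1/2}.

```python
import numpy as np

rng = np.random.default_rng(6)
xi = rng.standard_normal(2_000_000)

# For centered Gaussian xi: E[xi * f(xi)] = E[xi^2] * E[f'(xi)]; take f = sin.
lhs = np.mean(xi * np.sin(xi))
rhs = np.mean(xi ** 2) * np.mean(np.cos(xi))
print(lhs, rhs)  # both approximate exp(-1/2) ~ 0.6065
```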

Finally, to use (13), we need to compute the derivatives of the resolvent matrix G with respect to the various entries of Y. This can be done by utilizing the resolvent identity, and we find

$$\frac{\partial G_{a,b;k,l}}{\partial\operatorname{Re}(Y_{c,c+1;q,p})} = -(GR^*)_{a,c;k,q}\,G_{c+1,b;p,l} - G_{a,c+1;k,p}\,(RG)_{c,b;q,l},$$

$$\frac{\partial G_{a,b;k,l}}{\partial\operatorname{Im}(Y_{c,c+1;q,p})} = -i\,(GR^*)_{a,c;k,q}\,G_{c+1,b;p,l} + i\,G_{a,c+1;k,p}\,(RG)_{c,b;q,l}.$$

4.3.3 Main Theorem

For the results below, we will consider α = x + iy, where y ≥ y_n with y_n = n^{−ηδ}. Our goal is to establish the following result.

Theorem 15. Under the conditions of Theorem 2 and the additional assumption that |Y_{a,a+1;i,j}| ≤ n^δ, we have

$$\Delta_n^3(\alpha,z) + 2\Delta_n^2(\alpha,z) + \frac{\alpha+1-|z|^2}{\alpha}\,\Delta_n(\alpha,z) + \frac{1}{\alpha} = r_n(\alpha,z),$$

where, if δ is chosen such that δη ≤ 1/32 and δ ≤ 1/32, then the remainder term r_n satisfies

$$\sup\left\{|r_n(\alpha,z)| : |z|\le M,\ |x|\le N,\ y\ge y_n\right\} = O(\delta_n) \quad \text{a.s.}$$

with $\delta_n = n^{-1/4}\,y_n^{-5}\,n^{\delta}$.

Remark 16. We note that the bounds presented here and in the rest of this section are not optimal and can be improved. The bounds given, however, are sufficient for our purposes.

In order to prove Theorem 15, we will need the following lemmas. The first lemma is McDiarmid's concentration inequality [21].

Lemma 17 (McDiarmid's Concentration Inequality). Let X = (X_1, X_2, …, X_n) be a family of independent random variables with X_k taking values in a set A_k for each k. Suppose that the real-valued function f defined on $\prod A_k$ satisfies

$$|f(x) - f(x')| \le c_k$$

whenever the vectors x and x' differ only in the k-th coordinate. Let µ be the expected value of the random variable f(X). Then for any t ≥ 0,

$$\mathbb P\left(|f(X)-\mu| \ge t\right) \le 2e^{-2t^2/\sum_k c_k^2}.$$

Remark 18. McDiarmid's concentration inequality also applies to complex-valued functions by applying Lemma 17 to the real part and imaginary part separately.
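Lemma 17 can be illustrated on the simplest bounded-differences function (an editorial NumPy sketch; the parameters n, t and the Bernoulli example are arbitrary): the mean of n independent Bernoulli(1/2) variables changes by at most c_k = 1/n per coordinate, so Σc_k² = 1/n and the tail bound is 2 exp(−2t²n).

```python
import numpy as np

rng = np.random.default_rng(7)
n, trials, t = 100, 20_000, 0.1

# f(X) = mean of n Bernoulli(1/2) variables; changing one coordinate moves f
# by at most c_k = 1/n, so McDiarmid gives P(|f - 1/2| >= t) <= 2*exp(-2*t^2*n).
X = rng.integers(0, 2, size=(trials, n))
deviations = np.abs(X.mean(axis=1) - 0.5)
empirical = np.mean(deviations >= t)
bound = 2 * np.exp(-2 * t ** 2 / (n * (1 / n) ** 2))
print(empirical, bound)  # the empirical frequency sits below the bound
```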


Lemma 19. For y ≥ y_n and |x| ≤ N (where α = x + iy),

$$\mathbb P\left(|\Delta_n(\alpha,z)-\mathbb E\,\Delta_n(\alpha,z)| > t\right) \le 4e^{-c\,t^2 n\,y_n^4} \qquad (14)$$

for some absolute constant c > 0. Moreover,

$$\sup\left\{|\Delta_n(\alpha,z)-\mathbb E\,\Delta_n(\alpha,z)| : |z|\le M,\ |x|\le N,\ y\ge y_n\right\} = O\!\left(n^{-1/4}y_n^{-2}\right) \quad \text{a.s.}$$

Proof. Let R_k denote the matrix R with the k-th column replaced by zeroes. Then R^*R and R_k^*R_k differ by a matrix with rank at most two. So by the resolvent identity

$$\left|\frac{1}{mn}\operatorname{Tr}\bigl(R^*R-\alpha\bigr)^{-1} - \frac{1}{mn}\operatorname{Tr}\bigl(R_k^*R_k-\alpha\bigr)^{-1}\right| \le \frac{2}{mn}\left\|\bigl(R^*R-\alpha\bigr)^{-1}\bigl(R_k^*R_k-R^*R\bigr)\bigl(R_k^*R_k-\alpha\bigr)^{-1}\right\| \qquad (15)$$
$$\le \frac{C}{n\,y_n}\,\sup_{t\ge 0}\left|(t-\alpha)^{-1}\sqrt t\right| = C'\,\frac{1}{n\,y_n^2},$$

where the constant C' depends only on N. The mn columns of Y form an independent family of random variables. We now apply Lemma 17 to the complex-valued function $\frac{1}{mn}\operatorname{Tr}(R^*R-\alpha)^{-1}$ with the bound $c_k = O(n^{-1}y_n^{-2})$ obtained in (15). This proves the bound (14). Thus, for any fixed point (α, z) in the region

$$\{(\alpha = x+iy,\ z = s+it) : |x|\le N,\ y\ge y_n,\ |z|\le M\} \qquad (16)$$

one has

$$\mathbb P\left(|\Delta_n(\alpha,z)-\mathbb E\,\Delta_n(\alpha,z)| > n^{-1/4}y_n^{-2}\right) \le 4e^{-c\,n^{1/2}}, \qquad (17)$$

where we recall that y_n = n^{−ηδ} and δ > 0 could be chosen to be arbitrarily small.

If y = Im α > n^{1/4}y_n², then

$$|\Delta_n(\alpha,z)| \le \frac{1}{\operatorname{Im}\alpha} < n^{-1/4}y_n^{-2}, \qquad |\mathbb E\,\Delta_n(\alpha,z)| < n^{-1/4}y_n^{-2}. \qquad (18)$$

Therefore, it is enough to bound the supremum of |Δ_n(α) − EΔ_n(α)| over the region

$$D = \{(\alpha = x+iy,\ z = s+it) : |x|\le N,\ y_n \le y \le n^{1/4}y_n^2,\ |z|\le M\}. \qquad (19)$$

To this end, we consider a finite n^{−C}-net of D, where C is some sufficiently large positive constant to be chosen later. Clearly, one can construct such a net that contains at most $[4Mn^{4C}n^{1/4}y_n^2]$ points if n is sufficiently large, where [k] denotes the integer part of k. Let us denote these points by (α_i, z_i), $1 \le i \le [4Mn^{4C}n^{1/4}y_n^2]$. It follows from (17) that one has

$$\mathbb P\left(\sup_i\,|\Delta_n(\alpha_i,z_i)-\mathbb E\,\Delta_n(\alpha_i,z_i)| > n^{-1/4}y_n^{-2}\right) \le 16M\,y_n^2\,n^{4C+1/4}\,e^{-c\,n^{1/2}}, \qquad (20)$$

where the supremum is taken over the points of the net. Applying the Borel–Cantelli lemma, we obtain that

$$\sup_i\,|\Delta_n(\alpha_i,z_i)-\mathbb E\,\Delta_n(\alpha_i,z_i)| = O\!\left(n^{-1/4}y_n^{-2}\right) \quad \text{a.s.}, \qquad (21)$$


where the supremum is again taken over the points of the n^{−C}-net of D. To extend the estimate (21) to the supremum over the whole region D, we note that for (α, z) ∈ D,

$$\left|\frac{\partial\Delta_n(\alpha,z)}{\partial\operatorname{Re}\alpha}\right| \le \frac{1}{y_n^2}, \qquad (22)$$

$$\left|\frac{\partial\Delta_n(\alpha,z)}{\partial\operatorname{Im}\alpha}\right| \le \frac{1}{y_n^2}, \qquad (23)$$

$$\left|\frac{\partial\Delta_n(\alpha,z)}{\partial\operatorname{Re} z}\right| \le \mathrm{const}_m\,\frac{2(n^{1+\delta}+M)}{y_n^2}, \qquad (24)$$

$$\left|\frac{\partial\Delta_n(\alpha,z)}{\partial\operatorname{Im} z}\right| \le \mathrm{const}_m\,\frac{2(n^{1+\delta}+M)}{y_n^2}, \qquad (25)$$

where const_m is a constant that depends only on m.

The bounds (22)–(23) are simple properties of the Stieltjes transform. Indeed, the l.h.s. of (22) and (23) are bounded from above by $\frac{1}{|\operatorname{Im}\alpha|^2}$. The proof of (24)–(25) follows from the resolvent identity

$$(H_n(z_2)-\alpha I)^{-1} - (H_n(z_1)-\alpha I)^{-1} = (H_n(z_1)-\alpha I)^{-1}\bigl(H_n(z_1)-H_n(z_2)\bigr)(H_n(z_2)-\alpha I)^{-1},$$

the formula $H_n(z) = (Y^{(n)}-zI)^*(Y^{(n)}-zI)$, the bound |z| ≤ M, and the bound

$$\|Y^{(n)}\| \le n^{1+\delta}. \qquad (26)$$

We note that (26) follows from the fact that the matrix entries of Y^{(n)} are bounded by n^δ.

Now, choosing C in the construction of the net sufficiently large, one extends the bound (21) to the whole region D by (22)–(25). This finishes the proof of the lemma.

Lemma 20. For any 1 ≤ a ≤ m,

$$\sup\left\{\operatorname{Var}\left(\frac{1}{n}\operatorname{Tr} G_{a,a}\right) : |x|\le N,\ y\ge y_n,\ z\in\mathbb C\right\} = O\!\left(n^{-1}y_n^{-2}\right),$$

where α = x + iy.

Proof. Let R_k denote the matrix R with the k-th column replaced by zeroes, and let P_a be the orthogonal projection such that $\operatorname{Tr}G_{a,a} = \operatorname{Tr}(P_a G P_a)$. Following the same procedure as in the proof of Lemma 19, we have that

$$\left|\operatorname{Tr}\bigl(R^*R-\alpha\bigr)^{-1}_{a,a} - \operatorname{Tr}\bigl(R_k^*R_k-\alpha\bigr)^{-1}_{a,a}\right| = \left|\operatorname{Tr}\Bigl[P_a\bigl(R_k^*R_k-\alpha\bigr)^{-1}\bigl(R_k^*R_k-R^*R\bigr)\bigl(R^*R-\alpha\bigr)^{-1}P_a\Bigr]\right| \le \frac{C}{y_n}, \qquad (27)$$

since the matrix inside the trace has rank at most two and norm at most 2/y_n; here the constant C depends only on N.

We can write

$$\frac{1}{n}\operatorname{Tr}G_{a,a} - \mathbb E\,\frac{1}{n}\operatorname{Tr}G_{a,a} = \frac{1}{n}\sum_{k=1}^{mn}\gamma_k,$$

where γ_k is the martingale difference sequence

$$\gamma_k = \mathbb E_k\bigl[\operatorname{Tr}G_{a,a}\bigr] - \mathbb E_{k-1}\bigl[\operatorname{Tr}G_{a,a}\bigr]$$

and E_k denotes the conditional expectation with respect to the elements in the first k columns of Y. Then by the bound in (27) and [6, Lemma 2.12], we have

$$\mathbb E\left|\frac{1}{n}\operatorname{Tr}G_{a,a} - \mathbb E\,\frac{1}{n}\operatorname{Tr}G_{a,a}\right|^2 \le \frac{C}{n^2}\sum_{k=1}^{mn}\mathbb E|\gamma_k|^2 \le \frac{C}{n\,y_n^2},$$

where the constant C depends only on N. Since the bound holds for any |x| ≤ N, y ≥ y_n, and z ∈ ℂ, the proof is complete.

Remark 21. By Lemmas 14, 19, and 20, for every 1 ≤ a, b, c ≤ m,

$$\mathbb E\left[\frac{1}{n}\operatorname{Tr}G_{a,a}\right] = \mathbb E\left[\frac{1}{mn}\operatorname{Tr}G\right] = \frac{1}{mn}\operatorname{Tr}G + O\!\left(n^{-1/4}y_n^{-5}\right) \quad \text{a.s.},$$

$$\mathbb E\left[\frac{1}{n}\operatorname{Tr}G_{a,a}\cdot\frac{1}{n}\operatorname{Tr}G_{b,b}\right] = \mathbb E\left[\left(\frac{1}{mn}\operatorname{Tr}G\right)^{2}\right] + O\!\left(n^{-1/4}y_n^{-5}\right) = \left(\frac{1}{mn}\operatorname{Tr}G\right)^{2} + O\!\left(n^{-1/4}y_n^{-5}\right) \quad \text{a.s.},$$

and

$$\mathbb E\left[\frac{1}{n}\operatorname{Tr}G_{a,a}\cdot\frac{1}{n}\operatorname{Tr}G_{b,b}\cdot\frac{1}{n}\operatorname{Tr}G_{c,c}\right] = \mathbb E\left[\left(\frac{1}{mn}\operatorname{Tr}G\right)^{3}\right] + O\!\left(n^{-1/4}y_n^{-5}\right) = \left(\frac{1}{mn}\operatorname{Tr}G\right)^{3} + O\!\left(n^{-1/4}y_n^{-5}\right) \quad \text{a.s.},$$

where the bounds hold uniformly in the region |x| ≤ N, y ≥ y_n, and |z| ≤ M. We are now ready to prove Theorem 15.

Proof of Theorem 15. Fix α = x + iy with |x| ≤ N, y ≥ y_n, and z ∈ ℂ with |z| ≤ M. We will show that the remainder term r_n(α, z) = O(δ_n) a.s., where the constants in the term O(δ_n) depend only on N and M. In particular, the remainder term will be estimated using Lemmas 13 and 19 and Remark 21, where the bounds all hold uniformly in the region. In the proof presented below, we will use the notation O_{N,M}(·) to represent a term which is bounded uniformly in the region |x| ≤ N, y ≥ y_n, and |z| ≤ M.

By applying the resolvent identity to G and replacing R and R^* with Y − zI and Y^* − z̄I, respectively, we obtain

$$\frac{1}{n}\operatorname{Tr}G_{a,a} = -\frac{1}{\alpha} + \frac{1}{\alpha n}\operatorname{Tr}\bigl[GY^*Y\bigr]_{a,a} - \frac{z}{\alpha n}\operatorname{Tr}\bigl[GY^*\bigr]_{a,a} - \frac{\bar z}{\alpha n}\operatorname{Tr}\bigl[GY\bigr]_{a,a} + \frac{|z|^2}{\alpha n}\operatorname{Tr}G_{a,a}.$$


We will let Y^{(r)} be the (mn) × (mn) matrix containing the real parts of the entries of Y and Y^{(i)} be the (mn) × (mn) matrix containing the imaginary parts, so that Y = Y^{(r)} + iY^{(i)}. By assumption, Y^{(r)} and Y^{(i)} are independent random matrices. Thus,

$$\left(1-\frac{|z|^2}{\alpha}\right)\mathbb E\,\frac{1}{n}\operatorname{Tr}G_{a,a} + \frac{1}{\alpha} = \frac{1}{\alpha n}\,\mathbb E\operatorname{Tr}\bigl(GY^*Y^{(r)}\bigr)_{a,a} + \frac{i}{\alpha n}\,\mathbb E\operatorname{Tr}\bigl(GY^*Y^{(i)}\bigr)_{a,a}$$
$$\qquad - \frac{z}{\alpha n}\,\mathbb E\operatorname{Tr}\bigl(GY^{(r)*}\bigr)_{a,a} + \frac{iz}{\alpha n}\,\mathbb E\operatorname{Tr}\bigl(GY^{(i)*}\bigr)_{a,a} \qquad (28)$$
$$\qquad - \frac{\bar z}{\alpha n}\,\mathbb E\operatorname{Tr}\bigl(GY^{(r)}\bigr)_{a,a} - \frac{i\bar z}{\alpha n}\,\mathbb E\operatorname{Tr}\bigl(GY^{(i)}\bigr)_{a,a}.$$

Let θ = Var(Re(ξ)); then Var(Im(ξ)) = 1 − θ. (We write θ here rather than reusing δ, which already denotes the truncation exponent.) To compute the expectation, we fix all matrix entries except one and integrate with respect to that entry. Thus, by applying the decoupling formula (13) with p = 1 and using the fact that Y_{a,b;i,j} = 0 whenever b ≠ a+1, we obtain the following expansion for the first term on the right-hand side of (28):

$$\frac{1}{\alpha n}\,\mathbb E\operatorname{Tr}\bigl(GY^*Y^{(r)}\bigr)_{a,a} = \frac{1}{\alpha n}\,\mathbb E\sum_{1\le j,k,l\le n} G_{a,a;j,k}\,\bar Y_{a-1,a;l,k}\operatorname{Re}\bigl(Y_{a-1,a;l,j}\bigr)$$
$$= \frac{\theta}{\alpha n}\,\mathbb E\operatorname{Tr}G_{a,a} - \frac{\theta}{\alpha n^2}\,\mathbb E\sum_{1\le j,k,l\le n}\bar Y_{a-1,a;l,k}\Bigl((GR^*)_{a,a-1;j,l}\,G_{a,a;j,k}\Bigr)$$
$$\quad - \frac{\theta}{\alpha n^2}\,\mathbb E\sum_{1\le j,k,l\le n}\bar Y_{a-1,a;l,k}\Bigl(G_{a,a;j,j}\,(RG)_{a-1,a;l,k}\Bigr) + O_{N,M}\!\left(\frac{n^{\delta}}{n^{1/2}y_n^{4}}\right)$$
$$= \frac{\theta}{\alpha n}\,\mathbb E\operatorname{Tr}G_{a,a} - \frac{\theta}{\alpha n^2}\,\mathbb E\Bigl[\operatorname{Tr}G_{a,a}\operatorname{Tr}\bigl(RGY^*\bigr)_{a-1,a-1}\Bigr] + O_{N,M}\!\left(\frac{n^{\delta}}{n^{1/2}y_n^{4}}\right).$$

Here we use that the ε error term in (13) contains the second derivative

$$\frac{\partial^2\bigl(GY^*\bigr)_{a,a-1;j,l}}{\partial\operatorname{Re}\bigl(Y_{a-1,a;l,j}\bigr)^2} = O_{N,M}\!\left(\frac{1}{y_n^{3}}\right),$$

which consists of several terms, each bounded by Lemma 13. After summing over 1 ≤ j, l ≤ n and utilizing the fact that the third moment of Re(Y_{a−1,a;l,j}) is of order n^{δ−3/2}, we obtain an error
