Electronic Journal of Probability

Vol. 11 (2006), Paper no. 7, pages 162–198.

Journal URL
http://www.math.washington.edu/~ejpecp/

Brownian local minima, random dense countable sets and random equivalence classes

Boris Tsirelson
School of Mathematics, Tel Aviv University
Tel Aviv 69978, Israel
tsirel@post.tau.ac.il
http://www.tau.ac.il/~tsirel/

Abstract

A random dense countable set is characterized (in distribution) by independence and stationarity. Two examples are Brownian local minima and unordered infinite sample. They are identically distributed. A framework for such concepts, proposed here, includes a wide class of random equivalence classes.

Key words: Brownian motion, local minimum, point process, equivalence relation.

AMS 2000 Subject Classification: Primary 60J65; Secondary 60B99, 60D05, 60G55.

Submitted to EJP on January 29, 2006. Final version accepted on February 27, 2006.¹

¹This research was supported by the Israel Science Foundation (grant No. 683/05).


Introduction

Random dense countable sets arise naturally from the Brownian motion (local extrema, see [5, 2.9.12]), percolation (double, or four-arm points, see [2]), oriented percolation (points of type (2,1), see [3, Th. 5.15]) etc. They are scarcely investigated, because they fail to fit into the usual framework. They cannot be treated as random elements of ‘good’ (Polish, standard) spaces. The framework of adapted Poisson processes, used by Aldous and Barlow [1], does not apply to the Brownian motion, since the latter cannot be correlated with a Poisson process adapted to the same filtration. The ‘hit-and-miss’ framework used by Kingman [10] and Kendall [9] fails to discern the clear-cut distinction between Brownian local minima and, say, a randomly shifted set of rational numbers. A new approach introduced here catches this distinction, does not use adapted processes, and shows that Brownian local minima are distributed like an infinite sample in the following sense (see Theorem 6.11).

Theorem. There exists a probability measure P on the product space C[0,1] × (0,1)^∞ such that

(a) the first marginal of P (that is, the projection to the first factor) is the Wiener measure on the space C[0,1] of continuous paths w : [0,1] → R;

(b) the second marginal of P is the Lebesgue measure on the cube (0,1)^∞ of infinite (countable) dimension;

(c) P-almost all pairs (w, u), w ∈ C[0,1], u = (u1, u2, . . .) ∈ (0,1)^∞, are such that the numbers u1, u2, . . . are an enumeration of the set of all local minimizers of the Brownian path w.

Thus, the conditional distribution of u1, u2, . . . given w provides a (randomized) enumeration of Brownian minimizers by independent uniform random variables.

The same result holds for every random dense countable set that satisfies conditions of independence and stationarity, see Definitions 4.2, 6.8 and Theorem 6.9. Two-dimensional generalizations, covering the percolation-related models, are possible.

On a more abstract level the new approach is formalized in Sections 7, 8 in the form of ‘borelogy’ that combines some ideas of descriptive set theory [6] and diffeology [4]. Random elements of various quotient spaces fit into the new framework. Readers who like abstract concepts may start with these sections.

[Figure 1: three panels (a), (b), (c), each showing the first few points of the same countable subset of the strip, labelled x1, . . . , x4 in (a), y1, . . . , y4 in (b), and z1, . . . , z4 in (c).]

Figure 1: Different enumerations turn the same set into an infinite sample from: (a) the uniform distribution, (b) the triangular distribution, (c) their mixture (uniform z1, z3, . . . but triangular z2, z4, . . .).

1 Main lemma

Before the main lemma we consider an instructive special case.

1.1 Example. Let µ be the uniform distribution on the interval (0,1) and ν the triangle distribution on the same interval, that is,

µ(B) = ∫_B dx ,   ν(B) = ∫_B 2x dx

for all Borel sets B ⊂ (0,1). On the space (0,1)^∞ of sequences, the product measure µ^∞ is the joint distribution of uniform i.i.d. random variables, while ν^∞ is the joint distribution of triangular i.i.d. random variables. I claim existence of a probability measure P on (0,1)^∞ × (0,1)^∞ such that

(a) the first marginal of P is equal to µ^∞ (that is, P(B × (0,1)^∞) = µ^∞(B) for all Borel sets B ⊂ (0,1)^∞);

(b) the second marginal of P is equal to ν^∞ (that is, P((0,1)^∞ × B) = ν^∞(B) for all Borel sets B ⊂ (0,1)^∞);

(c) P-almost all pairs (x, y), x = (x1, x2, . . .) ∈ (0,1)^∞, y = (y1, y2, . . .) ∈ (0,1)^∞, are such that

{x1, x2, . . .} = {y1, y2, . . .};

in other words, the sequence y is a permutation of the sequence x. (A random permutation, of course.)

A paradox: the numbers yk are biased toward 1, the numbers xk are not; nevertheless they are just the same numbers!

An explanation (and a sketchy proof) is shown on Fig. 1(a,b). A countable subset of the strip (0,1)×(0,∞) is a realization of a Poisson point process. (The mean number of points in any domain is equal to its area.) The first 10 points of the same countable set are shown on both figures, but on Fig. 1(a) the points are ordered according to the vertical coordinate, while on Fig. 1(b) they are ordered according to the ratio of the two coordinates. We observe that {y1, . . . , y10} is indeed biased toward 1, while {x1, . . . , x10} is not. On the other hand, y is a permutation of x. (This time, y1 = x1, y2 = x3, y3 = x2, . . .)

A bit more complicated ordering, shown on Fig. 1(c), serves the measure µ×ν×µ×ν×· · ·, the joint distribution of independent, differently distributed random variables.

In every case we use an increasing sequence of (random) functions hn : (0,1) → [0,∞) such that for each n the graph of hn contains a Poisson point, while the region between the graphs of hn−1 and hn does not. The differences hn − hn−1 are constant functions on Fig. 1(a), triangular (that is, x ↦ const·x) on Fig. 1(b), while on Fig. 1(c) they are constant for odd n and triangular for even n.

Moreover, the same idea works for dependent random variables. In this case hn−hn−1 is proportional to the conditional density, given the previous points. We only need existence of conditional densities and divergence of their sum (in order to exhaust the strip).
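The mechanism is easy to test numerically. Here is a minimal sketch (my own illustration, not from the paper; Python with numpy, the strip truncated at a finite height M, which does not affect the first points of either enumeration):

    import numpy as np

    rng = np.random.default_rng(0)

    # Poisson point process on the truncated strip (0,1) x [0, M):
    # Poisson(M) points, uniform on the rectangle, so the mean number
    # of points in any region equals its area.
    M = 500.0
    n = rng.poisson(M)
    x = rng.uniform(0.0, 1.0, n)
    y = rng.uniform(0.0, M, n)

    # Fig. 1(a): order by the vertical coordinate y.
    # By Claim 1.4 below (f == 1), the x's in this order are i.i.d. uniform.
    order_a = np.argsort(y)

    # Fig. 1(b): order by the ratio y / (2x).
    # By Claim 1.4 (f(x) = 2x), the x's in this order have density 2x.
    order_b = np.argsort(y / (2.0 * x))

    k = 100  # first k points; k << M/2, so the truncation is invisible
    print("order (a): mean %.3f (uniform mean 0.5)" % x[order_a[:k]].mean())
    print("order (b): mean %.3f (triangle mean 2/3)" % x[order_b[:k]].mean())
    print("first indices, order (a):", order_a[:5])
    print("first indices, order (b):", order_b[:5])

Both orderings run through the same points; only the enumeration differs, and the printed means come out close to 0.5 and 2/3 respectively.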

Here is the main lemma.

1.2 Lemma. Let µ be a probability measure on (0,1)^∞ such that

(a) for every n the marginal distribution of the first n coordinates is absolutely continuous;

(b) for almost all x ∈ (0,1) and µ-almost all (x1, x2, . . .) ∈ (0,1)^∞,

∑_{n=1}^∞ fn+1(x1, . . . , xn, x) / fn(x1, . . . , xn) = ∞ ;

here fn is the density of the first n coordinates.

Let ν be another probability measure on (0,1)^∞ satisfying the same conditions (a), (b). Then there exists a probability measure P on (0,1)^∞ × (0,1)^∞ such that

(c) the first marginal of P is equal to µ;

(d) the second marginal of P is equal to ν;

(e) P-almost all pairs (x, y), x = (x1, x2, . . .) ∈ (0,1)^∞, y = (y1, y2, . . .) ∈ (0,1)^∞, are such that

{x1, x2, . . .} = {y1, y2, . . .}.

In other words, the sequence y is a permutation of the sequence x (since the xk are pairwise different due to absolute continuity, and so are the yk).

The rest of the section is occupied by the proof of the main lemma.


Throughout the proof, Poisson point processes on the strip (0,1)×[0,∞) (or its measurable part) are such that the mean number of points in any measurable subset is equal to its two-dimensional Lebesgue measure. Random variables (and random functions) are treated here as measurable functions of the original Poisson point process on the strip.

We start with three rather general claims.

1.3 Claim. (a) A Poisson point process on the strip (0,1)×[0,∞) may be treated as the set Π of random points (Un, T1 + · · · + Tn) for n = 1,2, . . ., where U1, T1, U2, T2, . . . are independent random variables, each Uk is distributed uniformly on (0,1), and each Tk is distributed Exp(1) (that is, P(Tk > t) = e^{−t} for t ≥ 0);

(b) conditionally on (U1, T1), the set Π1 = {(Un, T1 + · · · + Tn) : n ≥ 2} is (distributed as) a Poisson point process on (0,1)×[T1,∞).

The proof is left to the reader.

1.4 Claim. Let f : (0,1) → [0,∞) be a measurable function satisfying ∫_0^1 f(x) dx = 1, and Π be a Poisson point process on (0,1)×[0,∞). Then

(a) the minimum

t1 = min_{(x,y)∈Π} y / f(x)   (where y/0 = ∞)

is reached at a single point (x1, y1) ∈ Π;

(b) x1 and t1 = y1/f(x1) are independent, t1 is distributed Exp(1), and the distribution of x1 has the density f;

(c) conditionally on (x1, y1), the set Π1 = Π \ {(x1, y1)} is (distributed as) a Poisson point process on {(x, y) : 0 < x < 1, t1 f(x) < y < ∞}.

Proof. A special case, f(x) = 1 for all x, follows from 1.3.

A more general case, f(x) > 0 for all x, results from the special case by the transformation (x, y) ↦ (F(x), y/f(x)) where F(x) = ∫_0^x f(x′) dx′. The transformation preserves Lebesgue measure on (0,1)×[0,∞), therefore it preserves also the Poisson point process.

In the general case the same transformation sends A×[0,∞) to (0,1)×[0,∞), where A = {x : f(x) > 0}. Conditionally on (x1, t1) we get a Poisson point process on {(x, y) : x ∈ A, t1 f(x) < y < ∞} independent of the Poisson point process on {(x, y) : x ∉ A, 0 < y < ∞}.
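(A short verification of the measure preservation, spelled out here for convenience in the case f > 0: the preimage of a rectangle [u1, u2] × [v1, v2] under (x, y) ↦ (F(x), y/f(x)) is the set {(x, y) : F(x) ∈ [u1, u2], v1 f(x) ≤ y ≤ v2 f(x)}, whose measure is

∫ 1_{[u1,u2]}(F(x)) (v2 − v1) f(x) dx = (u2 − u1)(v2 − v1)

by the substitution u = F(x), du = f(x) dx; rectangles keep their areas, hence Lebesgue measure is preserved.)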

1.5 Claim. Let f : (0,1) → [0,∞) and g : (0,1)×(0,1) → [0,∞) be measurable functions satisfying ∫_0^1 f(x) dx = 1 and ∫_0^1 g(x1, x2) dx2 = 1 for almost all x1. Let Π be a Poisson point process on (0,1)×[0,∞), while (x1, y1), t1 and Π1 be as in 1.4. Then

(a) the minimum

t2 = min_{(x,y)∈Π1} (y − t1 f(x)) / g(x1, x)

is reached at a single point (x2, y2) ∈ Π1;

(b) conditionally on (x1, y1) we have: x2 and t2 = (y2 − t1 f(x2))/g(x1, x2) are independent, t2 is distributed Exp(1), and the distribution of x2 has the density g(x1, ·) (conditional distributions are meant);

(c) conditionally on (x1, y1), (x2, y2), the set Π2 = Π \ {(x1, y1), (x2, y2)} is a Poisson point process on {(x, y) : 0 < x < 1, t1 f(x) + t2 g(x1, x) < y < ∞}.

Proof. After conditioning on (x1, y1) we apply 1.4 to the Poisson point process {(x, y − t1 f(x)) : (x, y) ∈ Π1} on (0,1)×[0,∞) and the function g(x1, ·) : (0,1) → [0,∞).

Equipped with these claims we prove the main lemma as follows. Introducing conditional densities

gn+1(x|x1, . . . , xn) = fn+1(x1, . . . , xn, x) / fn(x1, . . . , xn) ,   g1(x) = f1(x)

and a Poisson point process Π on the strip (0,1)×[0,∞), we construct a sequence of points (Xn, Yn) of Π, random variables Tn and random functions Hn as follows:

T1 = min_{(x,y)∈Π} y / g1(x) = Y1 / g1(X1) ;   H1(x) = T1 g1(x) ;

T2 = min_{(x,y)∈Π1} (y − H1(x)) / g2(x|X1) = (Y2 − H1(X2)) / g2(X2|X1) ;   H2(x) = H1(x) + T2 g2(x|X1) ;

. . .

Tn = min_{(x,y)∈Πn−1} (y − Hn−1(x)) / gn(x|X1, . . . , Xn−1) = (Yn − Hn−1(Xn)) / gn(Xn|X1, . . . , Xn−1) ;   Hn(x) = Hn−1(x) + Tn gn(x|X1, . . . , Xn−1) ;

. . .

here Πn stands for Π \ {(X1, Y1), . . . , (Xn, Yn)}. By 1.4(b), X1 and T1 are independent, T1 ∼ Exp(1) and X1 ∼ g1. By 1.4(c) and 1.5(b), conditionally on X1 and T1, Π1 is a Poisson point process on {(x, y) : y > H1(x)}, while X2 and T2 are independent, T2 ∼ Exp(1) and X2 ∼ g2(·|X1). It follows that T1, T2 and (X1, X2) are independent, and the joint distribution of X1, X2 has the density g1(x1) g2(x2|x1) = f2(x1, x2). By 1.5(c), Π2 is a Poisson point process on {(x, y) : y > H2(x)} conditionally, given (X1, Y1) and (X2, Y2).

The same arguments apply for any n. We get two independent sequences, (T1, T2, . . .) and (X1, X2, . . .). Random variables Tn are independent, distributed Exp(1) each. The joint distribution of X1, X2, . . . is equal to µ, since for every n the joint distribution of X1, . . . , Xn has the density fn. Also, Πn is a Poisson point process on {(x, y) : y > Hn(x)} conditionally, given (X1, Y1), . . . , (Xn, Yn).
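The recursion above is effectively an algorithm and can be run as written, once the strip is truncated at a finite height. A minimal sketch (my code; the names and the truncation are mine) taking the conditional densities gn as a callable:

    import numpy as np

    rng = np.random.default_rng(1)

    def enumerate_points(x, y, g, n_steps):
        # x, y: coordinates of a Poisson point process on (0,1) x [0, M).
        # g(past, x): the conditional density g_{n+1}(x | past) for a list
        # past = [X1, ..., Xn], vectorized in x.
        alive = np.ones(len(x), dtype=bool)
        H = np.zeros(len(x))            # H_{n-1} evaluated at each point's x
        X, T = [], []
        for _ in range(n_steps):
            dens = g(X, x)              # g_n(x | X1, ..., X_{n-1})
            ratio = np.where(alive, (y - H) / dens, np.inf)
            i = int(np.argmin(ratio))   # the point reaching the minimum
            T.append(ratio[i]); X.append(x[i])
            alive[i] = False
            H = H + ratio[i] * dens     # H_n = H_{n-1} + T_n g_n
        return np.array(X), np.array(T)

    # Example: the i.i.d. triangular case, g_n(x | past) = 2x for all n.
    M = 500.0
    m = rng.poisson(M)
    xs, ys = rng.uniform(0, 1, m), rng.uniform(0, M, m)
    X, T = enumerate_points(xs, ys, lambda past, x: 2.0 * x, 100)
    print("mean of X: %.3f (expected 2/3)" % X.mean())
    print("mean of T: %.3f (expected 1; i.i.d. Exp(1))" % T.mean())

For genuinely dependent coordinates only g changes; in the infinite strip it is the divergence of the sums of conditional densities (Claim 1.6 below) that guarantees every point is eventually enumerated.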

1.6 Claim. Hn(x) ↑ ∞ for almost all pairs (Π, x).

Proof. By 1.2(b), ∑_n gn+1(x|x1, . . . , xn) = ∞ for almost all x ∈ (0,1) and µ-almost all (x1, x2, . . .). Therefore ∑_n gn+1(x|X1, . . . , Xn) = ∞ for almost all x ∈ (0,1) and almost all Π. It is easy to see that ∑_n cn Tn = ∞ a.s. for each sequence (cn)n such that ∑_n cn = ∞. Taking into account that (T1, T2, . . .) is independent of (X1, X2, . . .) we conclude that ∑_n Tn gn(x|X1, . . . , Xn−1) = ∞ for almost all x and Π. The partial sums of this series are nothing but Hn(x).

Still, we have to prove that the points (Xn, Yn) exhaust the set Π. Of course, a non-random negligible set does not intersect Π a.s.; however, the negligible set {x : lim_n Hn(x) < ∞} is random.

1.7 Claim. The set ∩_n Πn is empty a.s.

Proof. It is sufficient to prove that ∩_n Πn,M = ∅ a.s. for every M ∈ (0,∞), where Πn,M = {(x, y) ∈ Πn : y < M}. Conditionally, given (X1, Y1), . . . , (Xn, Yn), the set Πn,M is a Poisson point process on {(x, y) : Hn(x) < y < M}; the number |Πn,M| of points in Πn,M satisfies

E( |Πn,M| | (X1, Y1), . . . , (Xn, Yn) ) = ∫_0^1 (M − Hn(x))^+ dx .

Therefore

E |Πn,M| = E ∫_0^1 (M − Hn(x))^+ dx .

By 1.6 and the monotone convergence theorem, E ∫_0^1 (M − Hn(x))^+ dx → 0 as n → ∞. Thus, lim_n |Πn,M| = 0 a.s.

Now we are in a position to finish the proof of the main lemma. Applying our construction twice (for µ and for ν) we get two enumerations of a single Poisson point process on the strip,

{(Xn, Yn) : n = 1,2, . . .} = Π = {(X′n, Y′n) : n = 1,2, . . .},

such that the sequence (X1, X2, . . .) is distributed µ and the sequence (X′1, X′2, . . .) is distributed ν. The joint distribution P of these two sequences satisfies the conditions 1.2(c,d,e).

2 Random countable sets

Following the ‘constructive countability’ approach of Kendall [9, Def. 3.3] we treat a random countable subset of (0,1) as

(2.1)   ω ↦ {X1(ω), X2(ω), . . .}

where X1, X2, · · · : Ω → (0,1) are random variables. (To be exact, it would be called a random finite or countable set, since Xn(ω) need not be pairwise distinct.) It may happen that {X1(ω), X2(ω), . . .} = {Y1(ω), Y2(ω), . . .} for almost all ω (the sets are equal, multiplicity does not matter); then we say that the two sequences (Xk)k, (Yk)k of random variables represent the same random countable set, and write {X1, X2, . . .} = {Y1, Y2, . . .}. On the other hand it may happen that the joint distribution of X1, X2, . . . is equal to the joint distribution of some random variables X′1, X′2, · · · : Ω′ → (0,1) on some probability space Ω′; then we may say that the two random countable sets {X1, X2, . . .}, {X′1, X′2, . . .} are identically distributed. We combine these two ideas as follows.

2.2 Definition. Two random countable sets {X1, X2, . . .}, {Y1, Y2, . . .} are identically distributed (in other words, {Y1, Y2, . . .} is distributed like {X1, X2, . . .}), if there exists a probability measure P on the space (0,1)^∞ × (0,1)^∞ such that

(a) the first marginal of P is equal to the joint distribution of X1, X2, . . .;

(b) the second marginal of P is equal to the joint distribution of Y1, Y2, . . .;

(c) P-almost all pairs (x, y), x = (x1, x2, . . .) ∈ (0,1)^∞, y = (y1, y2, . . .) ∈ (0,1)^∞, are such that

{x1, x2, . . .} = {y1, y2, . . .}.

A sufficient condition is given by Main lemma 1.2: if Conditions (a), (b) of Main lemma are satisfied both by the joint distribution of X1, X2, . . . and by the joint distribution of Y1, Y2, . . . then {X1, X2, . . .} and {Y1, Y2, . . .} are identically distributed.

2.3 Remark. The relation defined by 2.2 is transitive. Having a joint distribution of two sequences (Xk)k and (Yk)k and a joint distribution of (Yk)k and (Zk)k we may construct an appropriate joint distribution of three sequences (Xk)k, (Yk)k and (Zk)k; for example, (Xk)k and (Zk)k may be made conditionally independent given (Yk)k.

2.4 Definition. A random countable set {X1, X2, . . .} has the uniform distribution (in other words, is uniform), if {X1, X2, . . .} and {Y1, Y2, . . .} are identically distributed for some (therefore, all) Y1, Y2, . . . whose joint distribution satisfies Conditions (a), (b) of Main lemma 1.2.

The joint distribution of X1, X2, . . . may violate Condition 1.2(b), see 9.8.

If X1, X2, . . . are i.i.d. random variables then the random countable set {X1, X2, . . .} may be called an unordered infinite sample from the corresponding distribution. If the latter has a non-vanishing density on (0,1) then {X1, X2, . . .} has the uniform distribution. A paradox: the distribution of the sample does not depend on the underlying one-dimensional distribution! (See also 9.9.)

2.5 Remark. If the joint distribution of X1, X2, . . . satisfies Condition (a) of Main lemma (but may violate (b)) then the random countable set {X1, X2, . . .} may be treated as a part (subset) of a uniform random countable set, in the following sense.

We say that {X1, X2, . . .} is distributed as a part of {Y1, Y2, . . .} if there exists P satisfying (a), (b) of 2.2 and (c) modified by replacing the equality {x1, x2, . . .} = {y1, y2, . . .} with the inclusion {x1, x2, . . .} ⊂ {y1, y2, . . .}.


Indeed, the proof of Main lemma uses Condition (b) only for exhausting all elements of the Poisson random set.

3 Selectors

A single-element part of a random countable set is of special interest.

3.1 Definition. A selector of a random countable set {X1, X2, . . .} is a probability measure P on the space (0,1)^∞ × (0,1) such that

(a) the first marginal of P is equal to the joint distribution of X1, X2, . . .;

(b) P-almost all pairs (x, z), x = (x1, x2, . . .) ∈ (0,1)^∞, z ∈ (0,1), satisfy

(3.2)   z ∈ {x1, x2, . . .}.

The second marginal of P is called the distribution of the selector.

Less formally, a selector is a randomized choice of a single element. The conditional distribution Px of z given x is a probability measure concentrated on {x1, x2, . . .}. This measure may happen to be a single atom, which leads to a non-randomized selector

(3.3)   z = x_{N(x1,x2,...)} ,

where N is a Borel map (0,1)^∞ → {1,2, . . .}. (See also 3.9.)
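As a toy illustration of a randomized selector (my example, with an arbitrarily chosen conditional distribution): put weight 2^{−k} on xk. For an unordered infinite sample from the uniform distribution this selector is uniform, since each xk is uniform and the random index is independent of x; Counterexample 3.9 below shows that the randomization cannot in general be removed.

    import numpy as np

    rng = np.random.default_rng(2)

    def randomized_selector(x_seq):
        # One draw from the conditional distribution P_x concentrated on
        # {x_1, x_2, ...}: weight 2^{-k} on x_k (an illustrative choice).
        k = rng.geometric(0.5)                  # P(k) = 2^{-k}, k = 1, 2, ...
        return x_seq[min(k, len(x_seq)) - 1]    # x_seq is finite, so truncate

    # The selector of an i.i.d.-uniform sequence is uniform on (0,1).
    zs = [randomized_selector(rng.uniform(0, 1, 64)) for _ in range(20000)]
    print("mean %.3f (0.5), variance %.4f (1/12 = %.4f)"
          % (np.mean(zs), np.var(zs), 1 / 12))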

In order to prove existence of selectors with prescribed distributions we use a deep duality theory for measures with given marginals, due to Kellerer. It holds for a wide class of spaces X1, X2, but we need only two. Below, in 3.4, 3.5 and 3.6, we assume that X1 is either (0,1) or (0,1)^∞, and X2 is either (0,1) or (0,1)^∞. Here is the result used here and once again in Sect. 8.

3.4 Theorem. (Kellerer) Let µ1, µ2 be probability measures on X1, X2 respectively, and B ⊂ X1 × X2 a Borel set. Then

S_{µ1,µ2}(B) = I_{µ1,µ2}(B),

where S_{µ1,µ2}(B) is the supremum of µ(B) over all probability measures µ on X1 × X2 with marginals µ1, µ2, and I_{µ1,µ2}(B) is the infimum of µ1(B1) + µ2(B2) over all Borel sets B1 ⊂ X1, B2 ⊂ X2 such that B ⊂ (B1 × X2) ∪ (X1 × B2).

See [8, Corollary 2.18 and Proposition 3.3].

Note that B need not be closed.

In fact, the infimum I_{µ1,µ2}(B) is always reached [8, Prop. 3.5], but the supremum S_{µ1,µ2}(B) is not always reached [8, Example 2.20].
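In the finite, discrete setting the identity S = I can be checked by brute force; it is then an instance of linear-programming duality (that the infimum may be taken over genuine sets rather than fractional covers follows from total unimodularity of the bipartite covering constraints; this remark, and the code below, are my illustration, not part of the paper):

    import itertools
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(3)
    n = 4
    mu1 = rng.dirichlet(np.ones(n))      # probability measure on X1
    mu2 = rng.dirichlet(np.ones(n))      # probability measure on X2
    B = rng.random((n, n)) < 0.4         # a subset B of X1 x X2

    # S: maximize mu(B) over couplings mu with marginals mu1, mu2 (an LP).
    c = -B.astype(float).ravel()         # linprog minimizes, so negate
    A_eq, b_eq = [], []
    for i in range(n):                   # row sums of mu equal mu1
        row = np.zeros(n * n); row[i * n:(i + 1) * n] = 1.0
        A_eq.append(row); b_eq.append(mu1[i])
    for j in range(n):                   # column sums of mu equal mu2
        col = np.zeros(n * n); col[j::n] = 1.0
        A_eq.append(col); b_eq.append(mu2[j])
    S = -linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                 bounds=(0, None)).fun

    # I: minimize mu1(B1) + mu2(B2) over covers B ⊂ (B1 x X2) ∪ (X1 x B2);
    # for each B1 the smallest admissible B2 is determined by B.
    I = np.inf
    for mask in itertools.product([False, True], repeat=n):
        b1 = np.array(mask)
        b2 = B[~b1].any(axis=0)          # columns hit by uncovered rows
        I = min(I, mu1[b1].sum() + mu2[b2].sum())

    print("S = %.6f, I = %.6f" % (S, I))  # equal up to LP tolerance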

3.5 Remark. (a) If µ1, µ2 are positive (not just probability) measures such that µ1(X1) = µ2(X2) then still S_{µ1,µ2}(B) = I_{µ1,µ2}(B).

(b) S_{µ1,µ2}(B) is equal to the supremum of ν(B) over all positive (not just probability) measures ν on B such that ν1 ≤ µ1 and ν2 ≤ µ2, where ν1, ν2 are the marginals of ν. This new supremum in ν is reached if and only if the original supremum in µ is reached.

3.6 Lemma. Let µ1, µ2 be probability measures on X1, X2 respectively and B ⊂ X1 × X2 a Borel set such that

S_{µ1−ν1,µ2−ν2}(B) = S_{µ1,µ2}(B) − ν(B)

for every positive measure ν on B such that ν1 ≤ µ1 and ν2 ≤ µ2, where ν1, ν2 are the marginals of ν. Then the supremum S_{µ1,µ2}(B) is reached.

(See also 8.12.)

Proof. First, taking a positive measure ν on B such that ν1 ≤ µ1, ν2 ≤ µ2 and ν(B) ≥ (1/2) S_{µ1,µ2}(B) we get

S_{µ1−ν1,µ2−ν2}(B) = S_{µ1,µ2}(B) − ν(B) ≤ (1/2) S_{µ1,µ2}(B).

Second, taking a positive measure ν′ on B such that ν′1 ≤ µ1 − ν1, ν′2 ≤ µ2 − ν2 and ν′(B) ≥ (1/2) S_{µ1−ν1,µ2−ν2}(B) we get (ν+ν′)1 ≤ µ1 and (ν+ν′)2 ≤ µ2, thus,

S_{µ1−ν1−ν′1, µ2−ν2−ν′2}(B) = S_{µ1,µ2}(B) − ν(B) − ν′(B) ≤ (1/4) S_{µ1,µ2}(B).

Continuing this way we get a convergent series of positive measures, ν + ν′ + ν″ + · · ·; its sum is a measure that reaches the supremum indicated in 3.5(b).

3.7 Lemma. Let a random countable set {X1, X2, . . .} satisfy

(3.8)   for every Borel set B ⊂ (0,1) of positive measure, B ∩ {X1, X2, . . .} ≠ ∅ a.s.

Then the random set has a selector distributed uniformly on (0,1).

Proof. We apply Theorem 3.4 to X1 = (0,1)^∞, X2 = (0,1), with µ1 the joint distribution of X1, X2, . . ., µ2 the uniform distribution on (0,1), and B the set of all pairs (x, z) satisfying (3.2). By (3.8), B intersects B1 × B2 for all Borel sets B1 ⊂ X1, B2 ⊂ X2 of positive measure. Therefore I_{µ1,µ2}(B) = 1. By the same argument, all absolutely continuous measures ν1, ν2 on X1, X2 respectively, such that ν1(X1) = ν2(X2), satisfy I_{ν1,ν2}(B) = ν1(X1). By 3.5(a), S_{ν1,ν2}(B) = I_{ν1,ν2}(B). Thus, the condition of Lemma 3.6 is satisfied (by µ1, µ2 and B). By 3.6, some measure P reaches S_{µ1,µ2}(B) = I_{µ1,µ2}(B) = 1 and therefore P is the needed selector.


3.9 Counterexample. In Lemma 3.7 one cannot replace ‘a selector’ with ‘a non-randomized selector (3.3)’. Randomization is essential!

Let Ω = (0,1) (with Lebesgue measure) and {X1(ω), X2(ω), . . .} = (ω/2 + Q) ∩ (0,1) where Q ⊂ R is the set of rational numbers (and ω/2 + Q is its shift by ω/2). Then (3.8) is satisfied (since B + Q is of full measure), but every selector Z : Ω → (0,1) of the form Z(ω) = X_{N(ω)}(ω) has a nonuniform distribution. Proof: let Aq = {ω ∈ (0,1) : Z(ω) − ω/2 = q} for q ∈ Q, then

P( Z ∈ B ) = 2 ∑_{q∈Q} ∫_B 1_{Aq}(2(x−q)) dx ,

which shows that the distribution of Z has a density taking on the values 0, 2, 4, . . . only.

4 Independence

According to (2.1), our ‘random countable set’ {X1, X2, . . .} can be finite, but cannot be empty. This is why in general we cannot treat the intersection, say, {X1, X2, . . .} ∩ (0, 1/2), as a random countable set.

By a random dense countable subset of (0,1) we mean a random countable subset {X1, X2, . . .} of (0,1), dense in (0,1) a.s. Equivalently, P( ∃k : a < Xk < b ) = 1 whenever 0 ≤ a < b ≤ 1. A random dense countable subset of another interval is defined similarly.

It is easy to see that {X1, X2, . . .} ∩ (a, b) is a random dense countable subset of (a, b) whenever {X1, X2, . . .} is a random dense countable subset of (0,1) and (a, b) ⊂ (0,1). We call {X1, X2, . . .} ∩ (a, b) a fragment of {X1, X2, . . .}.

It may happen that two (or more) fragments can be described by independent sequences of random variables; such fragments will be called independent. The definition is formulated below in terms of random variables, but could be reformulated in terms of measures on (0,1)^∞.

4.1 Definition. Let {X1, X2, . . .} be a random dense countable subset of (0,1). We say that two fragments {X1, X2, . . .} ∩ (0, 1/2) and {X1, X2, . . .} ∩ [1/2, 1) of {X1, X2, . . .} are independent, if there exist random variables Y1, Y2, . . . (on some probability space) such that

(a) {Y1, Y2, . . .} is distributed like {X1, X2, . . .};

(b) Y2k−1 < 1/2 and Y2k ≥ 1/2 a.s. for k = 1,2, . . .;

(c) the random sequence (Y2, Y4, Y6, . . .) is independent of the random sequence (Y1, Y3, Y5, . . .).

Similarly we define independence of n fragments {X1, X2, . . .} ∩ [ak−1, ak), k = 1, . . . , n, for any n = 2,3, . . . and any a0, . . . , an such that 0 = a0 < a1 < · · · < an = 1.

4.2 Definition. A random dense countable subset {X1, X2, . . .} of (0,1) satisfies the independence condition, if for every n = 2,3, . . . and every a0, . . . , an such that 0 = a0 < a1 < · · · < an = 1 the n fragments {X1, X2, . . .} ∩ [ak−1, ak) (k = 1, . . . , n) are independent.

Such random dense countable sets are described below, assuming that each Xk has a density; in other words,

(4.3)   for every Borel set B ⊂ (0,1) of measure 0, B ∩ {X1, X2, . . .} = ∅ a.s.

(In contrast to Main lemma, existence of joint densities is not assumed. See also 5.10.)

4.4 Proposition. For every random dense countable set satisfying (4.3) and the independence condition there exists a measurable function r : (0,1) → [0,∞] such that for every Borel set B ⊂ (0,1)

(a) if ∫_B r(x) dx = ∞ then the set B ∩ {X1, X2, . . .} is infinite a.s.;

(b) if ∫_B r(x) dx < ∞ then the set B ∩ {X1, X2, . . .} is finite a.s., and the number of its elements has the Poisson distribution with the mean ∫_B r(x) dx.

Note that r(·) may be infinite. See also 9.5.

The proof is given after some remarks and lemmas.

4.5 Remark. The function r is determined uniquely (up to equality almost everywhere) by the random dense countable set and moreover, by its distribution. That is, if {Y1, Y2, . . .} is distributed like {X1, X2, . . .} and 4.4(a,b) hold for {Y1, Y2, . . .} and another function r1, then r1(x) = r(x) for almost all x ∈ (0,1).

4.6 Remark. The function r is just the sum

r(·) = f1(·) + f2(·) + · · ·

of the densities fk of Xk.

4.7 Remark. For every measurable function r : (0,1) → [0,∞] such that ∫_a^b r(x) dx = ∞ whenever 0 ≤ a < b ≤ 1, there exists a random dense countable set {X1, X2, . . .} satisfying (4.3), the independence condition and 4.4(a,b) (for the given r). See also 9.6 and [10, Sect. 2.5].

If r(x) = ∞ almost everywhere, we take an unordered infinite sample.

If r(x) < ∞ almost everywhere, we take a Poisson point process with the intensity measure r(x) dx.

Otherwise we combine an unordered infinite sample on {x : r(x) = ∞} and a Poisson point process on {x : r(x) < ∞}.
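A concrete instance of the last case can be simulated; the sketch below (my toy code) takes r = ∞ on (0, 1/2) and r = 6 on [1/2, 1), truncating the necessarily infinite part to finitely many points:

    import numpy as np

    rng = np.random.default_rng(4)

    def sample_set(n_dense=10**4):
        # On {r = inf} = (0, 1/2): an unordered infinite sample, truncated
        # to n_dense points; by Section 2 any non-vanishing density gives
        # the same random set, so we take the uniform one.
        dense_part = rng.uniform(0.0, 0.5, n_dense)
        # On {r < inf} = [1/2, 1): a Poisson point process with intensity
        # r(x) dx, here Poisson(6 * 1/2) points placed uniformly.
        finite_part = rng.uniform(0.5, 1.0, rng.poisson(6 * 0.5))
        return np.concatenate([dense_part, finite_part])

    # 4.4(b) for B = [3/4, 1): the count is Poisson with mean 6 * 1/4 = 1.5.
    counts = [np.sum(sample_set() >= 0.75) for _ in range(4000)]
    print("mean %.2f, variance %.2f (both should be near 1.5)"
          % (np.mean(counts), np.var(counts)))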


In order to prove 4.4, for a given B ⊂ (0,1) we denote by ξ(x, y) the (random) number of elements (maybe ∞) in the set B ∩ (x, y) ∩ {X1, X2, . . .} and introduce

α(x, y) = P( ξ(x, y) = 0 ) ,   β(x, y) = E exp(−ξ(x, y))

for 0 ≤ x < y ≤ 1. (Of course, exp(−∞) = 0.) Clearly,

(4.8)   (1 − e^{−1})(1 − α(x, y)) ≤ 1 − β(x, y) ≤ 1 − α(x, y)

for 0 ≤ x < y ≤ 1. By (4.3) and the independence condition,

ξ(x, y) + ξ(y, z) = ξ(x, z) ,   α(x, y) α(y, z) = α(x, z) ,   β(x, y) β(y, z) = β(x, z)

whenever 0 ≤ x < y < z ≤ 1.

4.9 Lemma. If β(0,1) ≠ 0 then β(x−ε, x+ε) → 1 as ε → 0+ for every x ∈ (0,1).

Proof. Let x1 < x2 < . . ., xk → x. Random variables ξ(xk, xk+1) are independent. By Kolmogorov's 0–1 law, the event ξ(xk, x) → 0 is of probability 0 or 1. It cannot be of probability 0, since then ξ(x1, x) = ∞ a.s., which implies β(0,1) = 0. Thus, ξ(xk, x) → 0 a.s., therefore β(xk, x) → 1, that is, β(x−ε, x) → 1 as ε → 0+. Similarly, β(x, x+ε) → 1.

4.10 Lemma. If β(0,1) ≠ 0 then α(0,1) ≠ 0.

Proof. By 4.9, for every x ∈ (0,1) there exists ε > 0 such that β(x−ε, x+ε) > e^{−1}, therefore α(x−ε, x+ε) ≠ 0 by (4.8). (For x = 0, x = 1 we use one-sided neighborhoods.) Choosing a finite covering and using multiplicativity of α we get α(0,1) ≠ 0.

4.11 Lemma. If α(0,1) ≠ 0 then ξ(0,1) has the Poisson distribution with the mean −ln α(0,1).

Proof. By 4.9 and (4.8), α(x−ε, x+ε) → 1. We define a nonatomic finite positive measure µ on [0,1] by

µ([x, y]) = −ln α(x, y)   for 0 ≤ x < y ≤ 1

and introduce a Poisson point process on [0,1] whose intensity measure is µ. Denote by η(x, y) the (random) number of Poisson points on [x, y]; then Eη(x, y) = µ([x, y]) and

P( η(x, y) = 0 ) = exp( −Eη(x, y) ) = α(x, y) = P( ξ(x, y) = 0 ).

In other words, the two random variables ξ(x, y) ∧ 1 and η(x, y) ∧ 1 are identically distributed (of course, a ∧ b = min(a, b)). By independence, for any n the joint distribution of the n random variables ξ((k−1)/n, k/n) ∧ 1 (for k = 1, . . . , n) is equal to the joint distribution of the n random variables η((k−1)/n, k/n) ∧ 1. Taking into account that

ξ(0,1) = lim_{n→∞} ∑_{k=1}^n ( ξ((k−1)/n, k/n) ∧ 1 )

and the same for η, we conclude that ξ(0,1) and η(0,1) are identically distributed.

4.12 Remark. In addition (but we do not need it),

(a) the joint distribution of ξ(r, s) for all rational r, s such that 0 ≤ r < s ≤ 1 (this is a countable family of random variables) is equal to the joint distribution of all η(r, s);

(b) the random finite set B ∩ {X1, X2, . . .} is distributed like the Poisson point process;

(c) the measure µ has the density (f1 + f2 + · · ·) · 1_B, where fk is the density of Xk.

Proof of Proposition 4.4. We take r = f1 + f2 + · · ·, note that ∫_B r(x) dx = E ξ(0,1), and prove (b) first.

(b) Let ∫_B r(x) dx < ∞; then ξ(0,1) < ∞ a.s., therefore β(0,1) ≠ 0. By 4.10, α(0,1) ≠ 0. By 4.11, ξ(0,1) has a Poisson distribution.

(a) Let ∫_B r(x) dx = ∞; then E ξ(0,1) = ∞, thus ξ(0,1) cannot have a Poisson distribution. By the argument used in the proof of (b), β(0,1) = 0. Therefore ξ(0,1) = ∞ a.s.

5 Selectors and independence

We consider a random dense countable subset {X1, X2, . . .} of (0,1), satisfying the independence condition and (3.8). If (4.3) is also satisfied then (3.8) means that the corresponding function r (see 4.4) is infinite almost everywhere.

A uniformly distributed selector exists by 3.7. Moreover, there exists a pair of independent uniformly distributed selectors. It follows via Th. 3.4 from the fact that {X1, X2, . . .} × {X1, X2, . . .} intersects a.s. any given set B ⊂ (0,1)×(0,1) of positive measure. Hint: we may assume that B ⊂ (0, θ)×(θ,1) for some θ ∈ (0,1); consider independent fragments {Y1, Y3, . . .} = (0, θ) ∩ {X1, X2, . . .}, {Y2, Y4, . . .} = [θ,1) ∩ {X1, X2, . . .}; almost surely, the first fragment intersects the first projection of B, and the second fragment intersects the corresponding section of B.

However, we need a stronger statement: for every selector Z1 there exists a selector Z2 distributed uniformly and independent of Z1; here is the exact formulation. (The proof is given after Lemma 5.8.)

5.1 Proposition. Let {X1, X2, . . .} be a random dense countable subset of (0,1) satisfying the independence condition and (3.8). Let a probability measure P1 on (0,1)^∞ × (0,1) be a selector of {X1, X2, . . .} (as defined by 3.1). Then there exists a probability measure P2 on (0,1)^∞ × (0,1)^2 such that, denoting points of (0,1)^∞ × (0,1)^2 by (x, (z1, z2)), we have (w.r.t. P2)

(a) the joint distribution of x and z1 is equal to P1;

(b) the distribution of z2 is uniform on (0,1);

(c) z1, z2 are independent;

(d) z2 ∈ {x1, x2, . . .} a.s. (where (x1, x2, . . .) = x).

Conditioning on z1 decomposes the two-selector problem into a continuum of single-selector problems. In terms of conditional distributions P1(dx|z1), P2(dx dz2|z1) we need the following:

(e) z2 ∈ {x1, x2, . . .} for P2(·|z1)-almost all ((x1, x2, . . .), z2);

(f) the distribution of x according to P2(dx dz2|z1) is P1(dx|z1);

(g) the distribution of z2 according to P2(dx dz2|z1) is uniform on (0,1).

That is, we need a uniformly distributed selector of a random set distributed P1(·|z1). To this end we will transfer (3.8) from the unconditional joint distribution of X1, X2, . . . to their conditional joint distribution P1(·|z1).

5.2 Proposition. Let X1, X2, . . . and Y1, Y2, . . . be as in Def. 4.1, and P( X1 < 1/2 ) > 0. Then for almost all x1 ∈ (0, 1/2) (w.r.t. the distribution of X1), the conditional joint distribution of Y2, Y4, . . . given X1 = x1 is absolutely continuous w.r.t. the (unconditional) joint distribution of Y2, Y4, . . .

The proof is given before Lemma 5.7.

5.3 Counterexample. Condition 4.1(c) is essential for 5.2 (in spite of the fact that Y1, Y3, . . . are irrelevant).

We take independent random variables Z1, Z2, . . . such that each Z2k−1 is uniform on (0,12) and each Z2k is uniform on (12,1). We define Y1, Y2, . . . as follows. First, Y2k−1 = Z2k−1. Second, if the k-th binary digit of 2Y1 is equal to 1 then Y4k−2 = min(Z4k−2, Z4k) and Y4k = max(Z4k−2, Z4k); otherwise (if the digit is 0), Y4k−2 = max(Z4k−2, Z4k) and Y4k = min(Z4k−2, Z4k).

We get a random dense countable set{Y1, Y2, . . .}whose fragments{Y1, Y2, . . .}∩(0,12) = {Y1, Y3, . . .} and {Y1, Y2, . . .} ∩(12,1) = {Y2, Y4, . . .} are independent. However, Y1 is a function of Y2, Y4, . . . (and of course, the conditional joint distribution of Y2, Y4, . . . given Y1 is singular to their unconditional joint distribution).

In order to prove 5.2 we may partition the event X1 < 1/2 into the events X1 = Y2k−1. Within such an event the condition X1 = x1 becomes just Y2k−1 = x1. However, it does not make the matter trivial, since the event X1 = Y2k−1 need not belong to the σ-field generated by Y1, Y3, . . . (nor to the σ-field generated by X1).


digression: nonsingular pairs

Sometimes dependence between two random variables reduces to a joint density (w.r.t. their marginal distributions). Here are two formulations in general terms.

5.4 Lemma. Let (Ω, F, P) be a probability space and C ⊂ Ω a measurable set. The following two conditions on a pair of sub-σ-fields F1, F2 ⊂ F are equivalent:

(a) there exists a measurable function f : Ω × Ω → [0,∞) such that

P(A ∩ B ∩ C) = ∫_{A×B} f(ω1, ω2) P(dω1) P(dω2)   for all A ∈ F1, B ∈ F2;

(b) there exists a measurable function g : C × C → [0,∞) such that

P(A ∩ B ∩ C) = ∫_{(A∩C)×(B∩C)} g(ω1, ω2) P(dω1) P(dω2)   for all A ∈ F1, B ∈ F2.

(Note that f, g may vanish somewhere, and C need not belong to F1 or F2.)

Proof. (b) =⇒ (a): just take f(ω1, ω2) = g(ω1, ω2) for ω1, ω2 ∈ C and 0 otherwise.

(a) =⇒ (b): we consider the conditional probabilities h1 = P( C | F1 ), h2 = P( C | F2 ), note that h1(ω) > 0, h2(ω) > 0 for almost all ω ∈ C and define

g(ω1, ω2) = f(ω1, ω2) / ( h1(ω1) h2(ω2) )   for ω1, ω2 ∈ C.

Then

∫_{(A∩C)×(B∩C)} g(ω1, ω2) P(dω1) P(dω2) = ∫_{A×B} f(ω1, ω2) 1_C(ω1) 1_C(ω2) / ( h1(ω1) h2(ω2) ) P(dω1) P(dω2).

(The integrand is treated as 0 outside C × C.) Assuming that f is (F1 ⊗ F2)-measurable (otherwise f may be replaced with its conditional expectation) we see that the conditional expectation of the integrand, given F1 ⊗ F2, is equal to f(ω1, ω2). Thus, the integral is

· · · = ∫_{A×B} f(ω1, ω2) P(dω1) P(dω2) = P(A ∩ B ∩ C).

5.5 Definition. Let (Ω, F, P) be a probability space and C ⊂ Ω a measurable set. Two sub-σ-fields F1, F2 ⊂ F are a nonsingular pair within C, if they satisfy the equivalent conditions of Lemma 5.4.


5.6 Lemma. (a) Let C1 ⊂ C2. If F1, F2 are a nonsingular pair within C2 then they are a nonsingular pair within C1.

(b) Let C1, C2, . . . be pairwise disjoint and C = C1 ∪ C2 ∪ . . . If F1, F2 are a nonsingular pair within Ck for each k then they are a nonsingular pair within C.

(c) Let E1 ⊂ F be another sub-σ-field such that E1 ⊂ F1 within C in the sense that

∀E ∈ E1 ∃A ∈ F1 (A ∩ C = E ∩ C).

If F1, F2 are a nonsingular pair within C then E1, F2 are a nonsingular pair within C.

Proof. (a) We define two measures µ1, µ2 on (Ω, F1) × (Ω, F2) by µk(Z) = P(Ck ∩ {ω : (ω, ω) ∈ Z}) for k = 1,2. Clearly, µk(A × B) = P(A ∩ B ∩ Ck). Condition 5.4(a) for Ck means absolute continuity of µk (w.r.t. P|F1 × P|F2). However, µ1 ≤ µ2.

(b) Using the first definition, 5.4(a), we just take f = f1 + f2 + · · ·

(c) Immediate, provided that the second definition, 5.4(b), is used.

end of digression

Proof of Prop. 5.2. It is sufficient to prove that the two σ-fields σ(X1) (generated by X1) and σ(Y2, Y4, . . .) are a nonsingular pair within the event C = {X1 < 1/2}. Without loss of generality we assume that Y1, Y3, . . . are pairwise different a.s. (otherwise we skip redundant elements via a random renumbering). We partition C into the events Ck = {X1 = Y2k−1}. Lemma 5.6(b) reduces C to Ck. By 5.6(c) we replace σ(X1) with σ(Y2k−1). By 5.6(a) we replace Ck with the whole Ω. Finally, the σ-fields σ(Y2k−1), σ(Y2, Y4, . . .) are a nonsingular pair within Ω, since they are independent.

5.7 Lemma. Let {X1, X2, . . .} be a random dense countable subset of (0,1) satisfying the independence condition and (3.8). Then for every Borel set B ⊂ (0,1) of positive measure,

P( {X2, X3, . . .} ∩ B ≠ ∅ | X1 ) = 1 a.s.

Proof. First, we assume in addition that mes( B ∩ (1/2, 1) ) > 0 (‘mes’ stands for Lebesgue measure) and P( X1 < 1/2 ) = 1. Introducing Yk according to Def. 4.1 we note that {Y2, Y4, . . .} ∩ B ⊃ {X1, X2, . . .} ∩ B ∩ (1/2, 1) ≠ ∅ a.s. by (3.8). It follows via Prop. 5.2 that P( {Y2, Y4, . . .} ∩ B ≠ ∅ | X1 ) = 1 a.s. Taking into account that {X2, X3, . . .} ⊃ {Y2, Y4, . . .} we get P( {X2, X3, . . .} ∩ B ≠ ∅ | X1 ) = 1 a.s.

Similarly we consider the case mes( B ∩ (0, 1/2) ) > 0 and P( X1 ≥ 1/2 ) = 1.

Assuming both mes( B ∩ (0, 1/2) ) > 0 and mes( B ∩ (1/2, 1) ) > 0 we get the same conclusion for arbitrary distribution of X1.

The same arguments work for any threshold θ ∈ (0,1) instead of 1/2. It remains to note that for every B there exists θ such that both mes( B ∩ (0, θ) ) > 0 and mes( B ∩ (θ, 1) ) > 0.

The claim of Lemma 5.7 is of the form ∀B ( P( . . . | X1 ) = 1 a.s. ), but the following lemma gives more: P( ∀B ( . . . ) | X1 ) = 1 a.s.

5.8 Lemma. Let {X1, X2, . . .} be a random dense countable subset of (0,1) satisfying the independence condition and (3.8). Denote by ν the distribution of X1 and by µ_{x1} the conditional joint distribution of X2, X3, . . . given X1 = x1. (Of course, µ_{x1} is well-defined for ν-almost all x1.) Then ν-almost all x1 ∈ (0,1) are such that for every Borel set B ⊂ (0,1) of positive measure,

µ_{x1}( { (x2, x3, . . .) : {x2, x3, . . .} ∩ B ≠ ∅ } ) = 1.

Proof. The proof of 5.7 needs only a tiny modification, but the last paragraph (about θ) needs some attention. The exceptional set of x1 may depend on θ, which is not an obstacle since we may use only rational θ. Details are left to the reader.

Proof of Prop. 5.1. We apply Lemma 5.8 to the sequence (Z1, X1, X2, . . .) rather than (X1, X2, . . .); here Z1 is the given selector. More formally, we consider the image of the given measure P1 under the map (0,1)^∞ × (0,1) → (0,1)^∞ defined by ((x1, x2, . . .), z1) ↦ (z1, x1, x2, . . .).

Lemma 5.8 introduces ν (the distribution of Z1) and µ_{z1} (the conditional distribution of (X1, X2, . . .) given Z1 = z1), and states that (3.8) is satisfied by µ_{z1} for ν-almost all z1. Applying 3.7 to µ_{z1} we get a probability measure µ̃_{z1} on (0,1)^∞ × (0,1) such that the first marginal of µ̃_{z1} is equal to µ_{z1}, the second marginal of µ̃_{z1} is the uniform distribution on (0,1), and µ̃_{z1}-almost all pairs ((x1, x2, . . .), z2) satisfy z2 ∈ {x1, x2, . . .}.

In order to combine the measures µ̃_{z1} into a measure P2 we need measurability of the map z1 ↦ µ̃_{z1}.

The set of all probability measures on (0,1)^∞ × (0,1) is a standard Borel space (see [7], Th. (17.24) and the paragraph after it), and the map µ ↦ µ(B) is Borel for every Borel set B ⊂ (0,1)^∞ × (0,1). (In fact, these maps generate the Borel σ-field on the space of measures.) It follows easily that the subset M of the space of measures, introduced below, is Borel. Namely, M is the set of all µ such that the second marginal of µ is the uniform distribution on (0,1) and µ is concentrated on the set of ((x1, x2, . . .), z2) such that z2 ∈ {x1, x2, . . .}. Also, the first marginal of µ is a Borel function of µ (which means a Borel map from the space of measures on (0,1)^∞ × (0,1) into the similar space of measures on (0,1)^∞).

The conditional measure µ_{z1} is a ν-measurable function of z1 defined ν-almost everywhere; it may be chosen to be a Borel map from (0,1) to the space of measures on (0,1)^∞. In addition we can ensure that each µ_{z1} is the first marginal of some µ̃_{z1} ∈ M. It follows that these µ̃_{z1} ∈ M can be chosen as a ν-measurable (maybe not Borel, see [13, 5.1.7]) function of z1, by the (Jankov and) von Neumann uniformization theorem, see [7, Sect. 18A] or [13, Sect. 5.5].


Now we combine these µ̃_{z1} into a probability measure P2 on (0,1)^∞ × (0,1)^2 such that, denoting a point of (0,1)^∞ × (0,1)^2 by ((x1, x2, . . .), (z1, z2)), we have: z1 is distributed ν, and P2(dx dz2|z1) = µ̃_{z1}(dx dz2).

It remains to note that P2 satisfies (e), (f), (g) formulated after Prop. 5.1. The first marginal of µ̃_{z1} = P2(·|z1) is equal to µ_{z1} = P1(·|z1), which verifies (f). The second marginal of µ̃_{z1} = P2(·|z1) is the uniform distribution on (0,1), which verifies (g). And z2 ∈ {x1, x2, . . .} almost surely w.r.t. µ̃_{z1} = P2(·|z1), which verifies (e).

Prop. 5.1 is a special case (n = 1) of Prop. 5.9 below; the latter shows that for any n selectors Z1, . . . , Zn there exists a selector Zn+1 distributed uniformly and independent of Z1, . . . , Zn.

5.9 Proposition. Let {X1, X2, . . .} be a random dense countable subset of (0,1) satisfying the independence condition and (3.8). Let n ∈ {1,2, . . .} be given, and Pn be a probability measure on (0,1)^∞ × (0,1)^n such that

(i) the first marginal of Pn is equal to the joint distribution of X1, X2, . . .;

(ii) Pn-almost all pairs (x, z), x = (x1, x2, . . .), z = (z1, . . . , zn), are such that {z1, . . . , zn} ⊂ {x1, x2, . . .}.

Then there exists a probability measure Pn+1 on (0,1)^∞ × (0,1)^{n+1} such that, denoting points of (0,1)^∞ × (0,1)^{n+1} by (x, (z1, . . . , zn+1)), we have (w.r.t. Pn+1)

(a) the joint distribution of x and (z1, . . . , zn) is equal to Pn;

(b) the distribution of zn+1 is uniform on (0,1);

(c) zn+1 is independent of (z1, . . . , zn);

(d) zn+1 ∈ {x1, x2, . . .} a.s. (where (x1, x2, . . .) = x).

The proof, quite similar to the proof of Prop. 5.1, is left to the reader, but some hints follow. Two independent fragments of a random set are used in 5.8, according to the partition of (0,1) into (0, θ) and [θ, 1), where θ ∈ (0,1) is rational. One part contains z1, the other part contains a portion of the given set B of positive measure. Now, dealing with z1, . . . , zn we still partition (0,1) into two parts, but they are not just intervals. Rather, each part consists of finitely many intervals with rational endpoints. Still, the independence condition gives us two independent fragments.

Here is another implication of the independence condition. In some sense the proof below is similar to the proof of 5.1, 5.9. There, (3.8) was transferred to conditional distributions via 5.2. Here we do it with (4.3).

5.10 Lemma. Let {X1, X2, . . .} be a random dense countable subset of (0,1) satisfying the independence condition and (4.3). If P( Xk = Xl ) = 0 whenever k ≠ l then for every n the joint distribution of X1, . . . , Xn is absolutely continuous.

Proof. Once again, I restrict myself to the case n = 2, leaving the general case to the reader.


The marginal (one-dimensional) distribution of any Xn is absolutely continuous due to (4.3). It is sufficient to prove that the conditional distribution of X2 given X1 is absolutely continuous, that is, P( X2 ∈ B | X1 ) = 0 a.s. for all negligible B ⊂ (0,1) simultaneously. By Prop. 5.2 it holds for X1 < 1/2 and B ⊂ (1/2, 1). Similarly, it holds for X1 < θ and B ⊂ (θ, 1), or X1 > θ and B ⊂ (0, θ), for all rational θ simultaneously. Therefore it holds always.

5.11 Remark. In order to have an absolutely continuous distribution of X1, . . . , Xn for a given n, the condition P( Xk = Xl ) = 0 is needed only for k, l ∈ {1, . . . , n}, k ≠ l.

6 Main results

Recall Definitions 4.2 (the independence condition) and 2.4 (the uniform distribution of a random countable set).

6.1 Theorem. A random dense countable subset {X1, X2, . . .} of (0,1), satisfying the independence condition, has the uniform distribution if and only if

(6.2)   P( B ∩ {X1, X2, . . .} ≠ ∅ ) = 0 if mes(B) = 0, and 1 if mes(B) > 0,

for all Borel sets B ⊂ (0,1). (Here ‘mes’ is Lebesgue measure.)

Proof. If it has the uniform distribution then we may assume that X1, X2, . . . are independent, uniform on (0,1), which makes (6.2) evident.

Let (6.2) be satisfied. In order to prove that {X1, X2, . . .} has the uniform distribution, it is sufficient to construct a probability measure µ on (0,1)^∞ × (0,1)^∞ such that the first marginal of µ is the joint distribution of X1, X2, . . ., the second marginal of µ satisfies Conditions (a), (b) of Main lemma 1.2, and {x1, x2, . . .} = {z1, z2, . . .} for µ-almost all ((x1, x2, . . .), (z1, z2, . . .)).

To this end we construct recursively a consistent sequence of probability measures µn on (0,1)^∞ × (0,1)^n (with the prescribed first marginal) such that for all n,

(6.3)   {x1, . . . , xn} ⊂ {z1, . . . , z2n} ⊂ {x1, x2, . . .} for µ2n-almost all ((x1, x2, . . .), (z1, . . . , z2n)),

and

(6.4)   z2n+1 is distributed uniformly and independent of z1, . . . , z2n w.r.t. µ2n+1,

and

(6.5)   z1, . . . , zn are pairwise different
