
Electron. J. Probab. 17 (2012), no. 54, 1–22.

ISSN: 1083-6489. DOI: 10.1214/EJP.v17-2172

Approximative solutions of best choice problems

Andreas Faller

Ludger Rüschendorf

Abstract

We consider the full information best choice problem from a sequence $X_1,\dots,X_n$ of independent random variables. Under the basic assumption of convergence of the corresponding imbedded point processes in the plane to a Poisson process, we establish that the optimal choice problem can be approximated by the optimal choice problem in the limiting Poisson process. This allows us to derive approximations to the optimal choice probability and also to determine approximatively optimal stopping times. An extension of this result to the best $m$-choice problem is also given.

Keywords: best choice problem; optimal stopping; Poisson process.

AMS MSC 2010: 60G40; 62L15.

Submitted to EJP on July 28, 2011, final version accepted on July 13, 2012.

1 Introduction

The best choice problem of a random sequence $X_1,\dots,X_n$ is to find a stopping time $\tau \le n$ which maximizes the best choice probability $P(X_\tau = M_n)$, where $M_n = \max_{1\le i\le n} X_i$, under all stopping times $\tau \le n$. Thus we aim to find an optimal stopping time $T_n \le n$ such that

$$P(X_{T_n} = M_n) = \max P(X_\tau = M_n) \qquad (1.1)$$

over all stopping times $\tau \le n$.

Gilbert and Mosteller (1966) [9] found the solution of the full information best choice problem, where $X_1,\dots,X_n$ are iid with known continuous distribution function $F$. In this case the optimal stopping time is given by

$$T_n = \min\big\{k \le n \,\big|\, X_k = M_k,\; F(X_k) > b_{n-k}\big\}, \qquad (1.2)$$

where $b_0 = 0$ and, for $i = 1, 2, \dots$, $b_i$ is determined by

$$\sum_{j=1}^{i} \binom{i}{j} \big(b_i^{-1} - 1\big)^j j^{-1} = 1.$$

The asymptotic behaviour of $b_i$ is described by $b_i \uparrow 1$, $i(1 - b_i) \to c = 0.8043\dots$ The optimal probability $v_n = P(X_{T_n} = M_n)$ does not depend on $F$, is strictly decreasing in $n$, and has limiting value $v$:

$$v_n \to v = 0.580164\dots \qquad (1.3)$$
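The decision numbers $b_i$ are easy to compute directly from their defining equation. The following sketch (Python; the bisection bounds and tolerance are implementation choices, not from the paper) solves $\sum_{j=1}^i \binom{i}{j}(b_i^{-1}-1)^j j^{-1} = 1$ numerically and checks the asymptotics $i(1-b_i) \to c \approx 0.8043$:

```python
import math

def gm_threshold(i, tol=1e-12):
    """Solve sum_{j=1}^i C(i,j) * (1/b - 1)^j / j = 1 for b_i by bisection.

    The left-hand side is strictly decreasing in b on (0, 1)."""
    def lhs(b):
        x = 1.0 / b - 1.0
        return sum(math.comb(i, j) * x**j / j for j in range(1, i + 1))
    lo, hi = 0.4999, 1.0 - 1e-15   # b_i lies in [0.5, 1)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if lhs(mid) > 1.0 else (lo, mid)
    return 0.5 * (lo + hi)

print(gm_threshold(1))                 # b_1 = 0.5
print(gm_threshold(2))                 # b_2 = 0.6899...
print(200 * (1 - gm_threshold(200)))   # i*(1 - b_i) approaches c = 0.8043...
```

For $i = 1$ the equation reduces to $b^{-1} - 1 = 1$, i.e. $b_1 = 1/2$, which the bisection reproduces.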

University Freiburg, Germany. On June 30, 2011, Andreas Faller died completely unexpectedly. He was an extraordinarily talented young researcher.

University Freiburg, Germany. E-mail: ruschen@stochastik.uni-freiburg.de


A continuous time version of the problem, with a random number of points given by a homogeneous Poisson process with intensity $\lambda$, was studied in Sakaguchi (1976) [21]. As $\lambda \to \infty$ the same limit $v$ appears as limiting optimal choice probability, as observed in Berezovskiĭ and Gnedin (1984) [1] and Gnedin and Sakaguchi (1992) [13]. In Gnedin (1996) [12] this limiting optimal choice probability $v$ was identified as the optimal choice probability in an associated plane Poisson process on $[0,1] \times (-\infty,0]$ with intensity measure $\lambda_{[0,1]} \otimes \lambda_{(-\infty,0]}$. The link with the original problem was established by an explicit imbedding of finite iid sequences in the Poisson process. Multistop best choice problems for Poisson processes were considered in Sakaguchi (1991) [22] and Saario and Sakaguchi (1992) [19].

The approach in Gnedin (1996) [12] was extended in [KR]¹ (2000c) [16] to the best choice problem for (inhomogeneous) discounted sequences $X_i = c_i Y_i$, where $(Y_i)$ are iid and the $c_i$ are constants which imply convergence of the imbedded normalized point processes $N_n = \sum_{i=1}^n \varepsilon_{(\frac{i}{n}, \frac{X_i - b_n}{a_n})}$ to some Poisson process $N$ in the plane. The proofs in that paper make use of Gnedin's (1996) [12] result as well as of some general approximation results in [KR] (2000a) [15]. The aim of this paper is to extend this approach to general inhomogeneous best choice problems for independent sequences under the basic assumption of convergence of the imbedded point processes $N_n$ to some Poisson process $N$ in the plane. Subsequently we also consider an extension to the $m$-choice problem, where $m$ choices described by stopping times $1 \le T_1 < \cdots < T_m \le n$ are allowed and the aim is to find $m$ stopping times $T_1^m < \cdots < T_m^m \le n$ with

$$P\big(X_{T_1^m} \vee \cdots \vee X_{T_m^m} = M_n\big) = \sup P\Big(\bigvee_{i=1}^m X_{T_i} = M_n\Big), \qquad (1.4)$$

the sup being over all stopping times $T_1 < \cdots < T_m \le n$.

For the corresponding generalized Moser problem of maximizing $EX_\tau$ resp. $E\bigvee_{i=1}^m X_{\tau_i}$, a general approximation approach has been developed in [KR] (2000a) and [FR] (2011) [7] for $m = 1$, resp. in [KR] (2002) [18] and [FR] (2011) [6] for $m > 1$; see also Goldstein and Samuel-Cahn (2006) [14]. For a detailed history of this problem we refer to Ferguson (2007) [8] for $m = 1$, resp. to [FR] (2011) [6] in case $m > 1$. Our results for (1.3) are in particular applicable to sequences $X_i = c_i Z_i + d_i$ with iid random sequences $(Z_i)$ and with discount and observation factors $c_i, d_i$. The corresponding results for the Moser type problems for these sequences can be found in [FR] (2011) [6, 7].

There are further interesting types of choice problems, e.g. the expected rank choice problem (see Bruss and Ferguson (1993) [3], Saario and Sakaguchi (1995) [20]), multiple buying-selling problems (Gnedin (1981) [10], Bruss and Ferguson (1997) [4], Bruss (2010) [2]) or multicriteria extensions (Gnedin (1992) [11]), which are not dealt with in this paper.

2 Approximative optimal best choice solution

We consider the optimal best choice problem (1.1) for a sequence $(X_i)$ of independent random variables, i.e. to find optimal stopping times $T_n \le n$ such that

$$P(X_{T_n} = M_n) = \sup_{\tau\le n} P(X_\tau = M_n), \qquad (2.1)$$

¹Kühne and Rüschendorf is abbreviated within this paper as [KR], Faller as [F], and Faller and Rüschendorf as [FR].


over all stopping times $\tau \le n$. The basic assumption in our approach is convergence in distribution of the imbedded planar point process to a Poisson point process $N$,

$$N_n = \sum_{i=1}^n \delta_{(\frac{i}{n},\, X_i^n)} \xrightarrow{d} N \qquad (2.2)$$

on $M_c = [0,1] \times (c,\infty]$. Here $X_i^n = \frac{X_i - b_n}{a_n}$ is a normalization of $X_i$, typically induced from a form of the central limit theorem for maxima, where $a_n > 0$, $b_n \in \mathbb{R}$ and $c \in \mathbb{R} \cup \{-\infty\}$. We consider Poisson processes $N$ restricted to $M_c$, which may have infinite intensity along the lower boundary $[0,1] \times \{c\}$. We assume that the intensity measure $\mu$ of $N$ is a Radon measure on $M_c$ with the relative topology induced by the usual topology on $[0,1] \times \mathbb{R}$, and with $\mu([0,1] \times \{\infty\}) = 0$. Thus any subset of $M_c$ bounded away from the lower boundary contains only finitely many points of $N$, and $N$ has no points with value $\infty$.

Convergence in distribution '$N_n \xrightarrow{d} N$ on $M_c$' means convergence in distribution of the point processes restricted to $M_c$. This convergence is defined on the Polish space of locally finite point measures supplied with the topology of vague convergence. Since the Poisson limit $N$ has no points with infinite value, this definition prevents records of $N_n$ from disappearing to infinity when $N_n \xrightarrow{d} N$. For some general conditions implying this convergence, and for examples, see [KR] (2000a,b) [15, 17] or [FR] (2011) [7]. The typical examples from extreme value theory concern the cases $c = -\infty$ and $c = 0$. We generally assume that the intensity measure $\mu$ is Lebesgue-continuous with density denoted by $f(t,x)$. Thus the Poisson process

$$N = \sum_{k\ge 1} \delta_{(\tau_k,\, Y_k)} \quad \text{on } M_c \qquad (2.3)$$

does not have multiple points.

We also consider the best choice problem for the limiting continuous time Poisson process $N$. An $N$-stopping time $T \colon \Omega \to [0,1]$ is a stopping time w.r.t. the filtration $\mathcal{A}_t := \sigma(N(\cdot \cap [0,t] \times (c,\infty]))$, $t \in [0,1]$. For an $N$-stopping time $T$ we define

$$Y_T = Y_k \ \text{if } T = \tau_k \text{ for some } k, \qquad Y_T = -\infty \ \text{otherwise.}$$

For stopping problems with guarantee value $x \ge c$ we define the stopping value $Y_T = x$ for $T = 1$, which corresponds to replacing all random variables $X_j^n$ by $X_j^n \vee x$ in the point process $N_n$.

An $N$-stopping time $T$ is called an 'optimal best choice stopping time' for $N$ if

$$P\big(Y_T = \sup_k Y_k\big) = \sup P\big(Y_S = \sup_k Y_k\big), \qquad (2.4)$$

the supremum being taken over all $N$-stopping times $S$. This definition of optimality implies that only those stopping times are of interest which stop at some time point $\tau_k$ of $N$.

In the following theorem we derive the optimal stopping time $T$ for the continuous time best choice problem for the Poisson process $N$. Further we show that the best choice problem for $X_1,\dots,X_n$ is approximated by the best choice problem for $N$. This allows us to obtain approximations of the best choice probabilities $v_n = P(X_{T_n} = M_n)$ and to construct asymptotically optimal best choice stopping sequences $\hat T_n$. Our approximation result needs the following intensity condition, which is not necessary when dealing only with the limiting best choice problem (see Section 3).

(I) Intensity Condition: For all $t \in [0,1)$, $\mu((t,1] \times (c,\infty]) = \infty$.


Theorem 2.1. Let the imbedded point process $N_n$ converge in distribution to the Poisson process $N$ on $M_c$ and let the intensity condition (I) hold true. Then we get:

(a) The optimal best choice stopping time $T$ for $N$ is given by

$$T = \inf\Big\{\tau_k \,\Big|\, Y_k = \sup_{\tau_j \in [0,\tau_k]} Y_j,\; Y_k > v(\tau_k)\Big\}, \qquad \inf \emptyset := 1, \qquad (2.5)$$

where the threshold $v \colon [0,1] \to [c,\infty)$ is a solution of the integral equation

$$\int_t^1 \int_{v(t)}^{\infty} e^{\mu((r,1] \times (v(t),y])}\, \mu(dr,dy) = 1 \quad \forall t \in [0,1); \qquad v(1) = c. \qquad (2.6)$$

$v$ is monotonically nonincreasing and can be chosen right continuous. The optimal probability for the best choice problem for $N$ is given by

$$s_1 := P\big(Y_T = \sup_k Y_k\big) = \int_0^1 e^{-\mu([0,r] \times (c,\infty])} \int_{v(r)}^{\infty} e^{-\mu((r,1] \times (y,\infty])}\, \mu(dr,dy)$$
$$\qquad + \int_0^1 \int_c^{v(r_0)} \int_{r_0}^1 e^{-\mu([0,r] \times (y_0,\infty])} \int_{y_0 \vee v(r)}^{\infty} e^{-\mu((r,1] \times (y,\infty])}\, \mu(dr,dy)\, \mu(dr_0,dy_0). \qquad (2.7)$$

(b) Approximation of the optimal best choice probabilities holds true:

$$\lim_{n\to\infty} v_n = \lim_{n\to\infty} P(X_{T_n} = M_n) = s_1.$$

(c) $\hat T_n := \min\{1 \le i \le n \mid X_i = M_i,\; X_i > a_n v(\tfrac{i}{n}) + b_n\}$ defines an asymptotically optimal sequence of stopping times, i.e.

$$\lim_{n\to\infty} P(X_{\hat T_n} = M_n) = s_1.$$

Proof. For $(t,x) \in [0,1) \times [c,\infty)$ we consider the stopping problem after time $tn$ with guarantee value $x$. We want to determine optimal stopping times $T_n(t,x) > tn$ for $X_j^n$, $1 \le j \le n$, which maximize

$$P\big(X_\tau^n = x \vee \max_{tn<j\le n} X_j^n\big)$$

under all stopping times $\tau > tn$. For the proof of the convergence properties we consider the stopping behaviour after time $tn$ here (and not only after time zero) in order to make use of the recursive structure of the optimal stopping times. Define for $i > tn$

$$Z_i^n(t,x) = P\big(X_i^n = x \vee \max_{tn<j\le n} X_j^n \,\big|\, X_1^n,\dots,X_i^n\big).$$

Then we have to maximize $E Z_\tau^n(t,x) = P(X_\tau^n = x \vee \max_{tn<j\le n} X_j^n)$. By the classical recursive equation for optimal stopping of finite sequences the optimal stopping times $T_n(t,x)$ are given by (see e.g. Proposition 2.1 in [FR] (2011) [6]) $T_n(t,x) := T_n^{>tn}(t,x)$, where $T_n^{>n}(t,x) := n$ and

$$T_n^{>k}(t,x) = \min\Big\{k < i \le n \,\Big|\, P\big(X_i^n = x \vee \max_{tn<j\le n} X_j^n \,\big|\, X_1^n,\dots,X_i^n\big) > P\big(X^n_{T_n^{>i}(t,x)} = x \vee \max_{tn<j\le n} X_j^n \,\big|\, X_1^n,\dots,X_i^n\big),\; X_i^n = x \vee \max_{tn<j\le i} X_j^n\Big\}$$
$$= \min\Big\{k < i \le n \,\Big|\, P\big(X_i^n > \max_{i<j\le n} X_j^n \,\big|\, X_i^n\big) > P\big(X^n_{T_n^{>i}(t,x)} = X_i^n \vee \max_{i<j\le n} X_j^n \,\big|\, X_1^n,\dots,X_i^n\big),\; X_i^n = x \vee \max_{tn<j\le i} X_j^n\Big\}.$$


By backward induction in $l = n-1,\dots,\lfloor tn\rfloor$ we obtain for $tn < i \le l$ that $T_n^{>l}(t,x) = T_n^{>l}(\tfrac{i}{n}, X_i^n)$ on $\{X_i^n = x \vee \max_{tn<j\le i} X_j^n\}$, and further that $T_n^{>l}(t,x)$ is independent of $\sigma(X_1^n,\dots,X^n_{\lfloor tn\rfloor})$. Thus

$$T_n(t,x) = \min\Big\{tn < i \le n \,\Big|\, P\big(X_i^n > \max_{i<j\le n} X_j^n \,\big|\, X_i^n\big) > P\big(X^n_{T_n(\frac{i}{n}, X_i^n)} = X_i^n \vee \max_{i<j\le n} X_j^n \,\big|\, X_i^n\big),\; X_i^n = x \vee \max_{tn<j\le i} X_j^n\Big\}$$
$$= \min\Big\{tn < i \le n \,\Big|\, h_n(\tfrac{i}{n}, X_i^n) > g_n(\tfrac{i}{n}, X_i^n),\; X_i^n = x \vee \max_{tn<j\le i} X_j^n\Big\},$$

where

$$g_n(t,x) := P\big(X^n_{T_n(t,x)} = x \vee \max_{tn<j\le n} X_j^n\big), \qquad h_n(t,x) := P\big(\max_{tn<j\le n} X_j^n \le x\big). \qquad (2.8)$$

$h_n$ is monotonically nondecreasing in $(t,x)$. As a consequence we obtain from point process convergence that $h_n$ converges uniformly on compact sets in $[0,1] \times (c,\infty]$ to

$$h(t,x) := P\big(N((t,1] \times (x,\infty]) = 0\big) = e^{-\mu((t,1] \times (x,\infty])}.$$

To prove that $g_n(t,x)$ also converges uniformly on compact sets in $[0,1] \times (c,\infty]$, we decompose $g_n$ into two monotone components:

$$g_n(t,x) = \sup_{\tau > tn \text{ stopping time}} P\big(X_\tau^n = x \vee \max_{tn<j\le n} X_j^n\big)$$
$$= \sup_{\tau > tn} P\big(X_\tau^n = x \vee \max_{tn<j\le n} X_j^n,\; \max_{tn<j\le n} X_j^n > x\big)$$
$$= \sup_{\tau > tn} P\big(X_\tau^n \vee x = \max_{tn<j\le n}(X_j^n \vee x),\; \max_{tn<j\le n} X_j^n > x\big)$$
$$= \sup_{\tau > tn} P\big(X_\tau^n \vee x = \max_{tn<j\le n}(X_j^n \vee x)\big) - P\big(\max_{tn<j\le n} X_j^n \le x\big)$$
$$=: \tilde g_n(t,x) - h_n(t,x). \qquad (2.9)$$

By assumption the point process $N$ has only finitely many points in any subset bounded away from the lower boundary $c$. In consequence, point process convergence $N_n \xrightarrow{d} N$ implies that $\tilde g_n(t,x)$ converges pointwise for all $t,x$ to a limit $\tilde g(t,x)$. A detailed argument for this convergence has been given for the related case of maximizing $EX_\tau^n$ in [FR] (2011) [7, Theorem 4.1] resp. [F] (2009) [5, Satz 4.1]. This argument transfers to the optimal choice case in a similar way. Using monotonicity properties of $\tilde g_n(t,x)$ in $t,x$, we next prove that $\tilde g$ is continuous, which then implies uniform convergence. On the one hand we have for $s < t$ and $x > c$

$$\tilde g_n(s,x) \le \sup_{\tau > sn} P\big(X_\tau^n \vee x \ge \max_{tn<j\le n}(X_j^n \vee x)\big)$$
$$\le \sup_{\tau > sn} P\big(X_\tau^n \vee x \ge \max_{tn<j\le n}(X_j^n \vee x),\; \max_{sn<j\le tn}(X_j^n \vee x) \le \max_{tn<j\le n}(X_j^n \vee x)\big) + P\big(\max_{sn<j\le tn}(X_j^n \vee x) > \max_{tn<j\le n}(X_j^n \vee x)\big)$$
$$\le \tilde g_n(t,x) + P\big(\max_{sn<j\le tn} X_j^n > x\big).$$


On the other hand,

$$\tilde g_n(s,x) \ge \sup_{\tau > sn} P\big(X_\tau^n \vee x \ge \max_{sn<j\le n}(X_j^n \vee x)\big)$$
$$\ge \sup_{\tau > sn} P\big(X_\tau^n \vee x \ge \max_{tn<j\le n}(X_j^n \vee x)\big) - P\big(\max_{sn<j\le tn}(X_j^n \vee x) > \max_{tn<j\le n}(X_j^n \vee x)\big)$$
$$\ge \tilde g_n(t,x) - P\big(\max_{sn<j\le tn} X_j^n > x\big).$$

This implies

$$|\tilde g_n(s,x) - \tilde g_n(t,x)| \le P\big(\max_{sn<j\le tn} X_j^n > x\big),$$

and thus, using that $x > c$, continuity of $\tilde g(t,x)$ in $t$. To prove continuity of $\tilde g$ in $x$ let $x < y$. Then

$$0 \le \tilde g_n(t,y) - \tilde g_n(t,x) \le \sup_{\tau > tn} P\big(X_\tau^n \vee y = \max_{tn<j\le n}(X_j^n \vee y),\; X_\tau^n \vee x < \max_{tn<j\le n}(X_j^n \vee x)\big) \le P\big(x < \max_{tn<j\le n} X_j^n \le y\big).$$

In consequence $\tilde g(t,x)$ is continuous and $g_n \to g$ uniformly on compact subsets of $[0,1] \times (c,\infty)$. Point process convergence and the representation in (2.8) imply that

$$g(t,x) = P\big(Y_{T(t,x)} = x \vee \sup_{t<\tau_k\le 1} Y_k\big)$$

with the $N$-stopping time

$$T(t,x) = \inf\big\{\tau_k > t \,\big|\, Y_k = x \vee \sup_{t<\tau_j\le\tau_k} Y_j,\; h(\tau_k, Y_k) > g(\tau_k, Y_k)\big\}.$$

The argument above also applies to the modified random variables

$$\tilde X_i^n := \sup_{\frac{i-1}{n} < \tau_k \le \frac{i}{n}} Y_k.$$

Defining $g, h$ as above, we obtain in this case $g_n(t,x) \downarrow g(t,x)$, since the discrete stopping problems majorize the continuous time stopping problem. In consequence the $T(t,x)$ are the optimal stopping times for $N$, i.e.

$$g(t,x) = \sup_{\tau > t,\ N\text{-stopping time}} P\big(Y_\tau = x \vee \sup_{t<\tau_k\le 1} Y_k\big).$$

Details of this approximation argument are given in [KR] (2000a) (proof of Theorem 2.5). As a result we get the following estimate:

$$g_n(t,x) \ge P\big(X^n_{\hat T_n(t,x)} = x \vee \max_{tn<j\le n} X_j^n\big) \to g(t,x),$$

with the stopping times

$$\hat T_n(t,x) := \min\big\{tn < i \le n \,\big|\, X_i^n = x \vee \max_{tn<j\le i} X_j^n,\; h(\tfrac{i}{n}, X_i^n) > g(\tfrac{i}{n}, X_i^n)\big\}.$$

Thus the limit $g(t,x)$ is the same for $(X_i^n)$ and for $(\tilde X_i^n)$.


The arguments above in the case $x > c$ can also be extended to the case $x = c$. Note that for $x > c$

$$g(t,x) \le P\big(Y_{T(t,x)} = \sup_{t<\tau_k\le 1} Y_k\big) \le \sup_{T > t,\ N\text{-stopping time}} P\big(Y_T = \sup_{t<\tau_k\le 1} Y_k\big) \le g(t,x) + h(t,x).$$

From the intensity assumption (I) and since $h(t,c) = 0$, we obtain

$$g(t,c) := \lim_{x\downarrow c} g(t,x) = P\big(Y_{T(t,c)} = \sup_{t<\tau_k\le 1} Y_k\big) = \sup_{\tau > t,\ N\text{-stopping time}} P\big(Y_\tau = \sup_{t<\tau_k\le 1} Y_k\big).$$

Since for $x > c$

$$g_n(t,x) \le g_n(t,c) \le g_n(t,x) + h_n(t,x),$$

it follows that

$$\lim_{n\to\infty} g_n(t,c) = g(t,c).$$

By assumption (I), $P(N((t,1] \times (c,\infty]) \ge 1) > 0$ for $t \in [0,1)$. Thus $g(t,x)$ is monotonically nonincreasing in $x$ with $g(t,c) > 0$ and $g(t,\infty) = 0$. Also by (I), $h(t,x)$ is monotonically nondecreasing in $x$ with $h(t,c) = 0$ and $h(t,\infty) = 1$. In consequence there exists $v(t) \in (c,\infty)$ with

$$h(t, v(t)) = g(t, v(t)). \qquad (2.10)$$

This implies the representation

$$T(t,x) = \inf\big\{\tau_k > t \,\big|\, Y_k = x \vee \sup_{t<\tau_j\le\tau_k} Y_j,\; Y_k > v(\tau_k)\big\}.$$

We next prove that $v$ is monotonically nonincreasing. Since $g(t,x) - h(t,x)$ is monotonically nonincreasing in $x$ for any $t$, it is sufficient to show that for $s < t$

$$g(s, v(t)) - h(s, v(t)) \ge 0. \qquad (2.11)$$

To that aim we get for $s < t$

$$g(t,x) - g(s,x) \le P\big(Y_{T(t,x)} = x \vee \sup_{\tau_k>t} Y_k\big) - P\big(Y_{T(t,x)} = x \vee \sup_{\tau_k>s} Y_k\big)$$
$$\le P\big(x \vee \sup_{s<\tau_k\le t} Y_k > Y_{T(t,x)} = x \vee \sup_{\tau_k>t} Y_k\big) \le P\big(\sup_{s<\tau_k\le t} Y_k > x\big)\, g(t,x).$$

The first inequality results from the definition of $g(s,x)$, the second is immediate, and the third is a consequence of the independence of the $(Y_k)$. On the other hand,

$$h(t,x) - h(s,x) = P\big(\sup_{s<\tau_k\le t} Y_k > x\big)\, h(t,x).$$

Taking the difference of these (in)equalities and using (2.10), we obtain for $x = v(t)$ the claim in (2.11).


Monotonicity of $v$ allows us to determine $g(t,x)$. Note that for $x \ge v(t)$

$$g(t,x) = P\Big(\exists n_0\ \forall n\ge n_0\ \exists \tfrac{i_n}{2^n} \in (t,1]\ \exists \tfrac{j_n}{2^n} > x:\ N\big((\tfrac{i_n-1}{2^n}, \tfrac{i_n}{2^n}] \times (\tfrac{j_n-1}{2^n}, \tfrac{j_n}{2^n}]\big) = 1,\ N\big((t, \tfrac{i_n-1}{2^n}] \times (x,\infty]\big) = 0,\ \text{and } N\big((\tfrac{i_n}{2^n}, 1] \times (\tfrac{j_n}{2^n}, \infty]\big) = 0\Big)$$
$$= \int_t^1 e^{-\mu((t,r] \times (x,\infty])} \int_x^{\infty} e^{-\mu((r,1] \times (y,\infty])}\, \mu(dr,dy),$$

and in case $x \le v(t)$

$$g(t,x) = P\Big(\exists n_0\ \forall n\ge n_0\ \exists \tfrac{i_n}{2^n} \in (t,1]\ \exists \tfrac{j_n}{2^n} > x:\ N\big((\tfrac{i_n-1}{2^n}, \tfrac{i_n}{2^n}] \times (\tfrac{j_n-1}{2^n}, \tfrac{j_n}{2^n}]\big) = 1 \text{ for } \tfrac{j_n}{2^n} > v(\tfrac{i_n}{2^n}),\ N\big((t, \tfrac{i_n-1}{2^n}] \times (x,\infty]\big) = 0,\ \text{and } N\big((\tfrac{i_n}{2^n}, 1] \times (\tfrac{j_n}{2^n}, \infty]\big) = 0\Big)$$
$$\quad + P\Big(\exists n_0\ \forall n\ge n_0\ \exists\, t < \tfrac{i_n'}{2^n} < \tfrac{i_n}{2^n} \le 1\ \exists\, x < \tfrac{j_n'}{2^n} < \tfrac{j_n}{2^n}:\ N\big((\tfrac{i_n-1}{2^n}, \tfrac{i_n}{2^n}] \times (\tfrac{j_n-1}{2^n}, \tfrac{j_n}{2^n}]\big) = 1 \text{ for } \tfrac{j_n}{2^n} > v(\tfrac{i_n}{2^n}),\ N\big((\tfrac{i_n'-1}{2^n}, \tfrac{i_n'}{2^n}] \times (\tfrac{j_n'-1}{2^n}, \tfrac{j_n'}{2^n}]\big) = 1 \text{ for } \tfrac{j_n'}{2^n} < v(\tfrac{i_n'}{2^n}),\ N\big((t, \tfrac{i_n-1}{2^n}] \times (\tfrac{j_n'}{2^n},\infty]\big) = 0,\ \text{and } N\big((\tfrac{i_n}{2^n}, 1] \times (\tfrac{j_n}{2^n},\infty]\big) = 0\Big)$$
$$= \int_t^1 e^{-\mu((t,r] \times (x,\infty])} \int_{x\vee v(r)}^{\infty} e^{-\mu((r,1] \times (y,\infty])}\, \mu(dr,dy)$$
$$\quad + \int_t^{v^{-1}(x)} \int_x^{v(r_0)} \int_{r_0}^1 e^{-\mu((t,r] \times (y_0,\infty])} \int_{y_0\vee v(r)}^{\infty} e^{-\mu((r,1] \times (y,\infty])}\, \mu(dr,dy)\, \mu(dr_0,dy_0).$$

This implies

$$g(t, v(t)) = \int_t^1 e^{-\mu((t,r] \times (v(t),\infty])} \int_{v(t)}^{\infty} e^{-\mu((r,1] \times (y,\infty])}\, \mu(dr,dy).$$

By condition (2.10) this is equal to $h(t, v(t)) = e^{-\mu((t,1] \times (v(t),\infty])}$, and as a consequence we obtain the equation in (2.6).

Remark (triangular sequences). As remarked by a reviewer, the proof of Theorem 2.1 extends in the same way to the optimal stopping of triangular sequences $(X_i^n)$ with point process convergence as in (2.2). The asymptotically optimal sequence of stopping times in Theorem 2.1 (c) is then given by

$$\hat T_n := \min\big\{1 \le i \le n \,\big|\, X_i^n = M_i^n,\; X_i^n > v(\tfrac{i}{n})\big\}, \qquad (2.12)$$

i.e.

$$\lim_{n\to\infty} P\big(X^n_{\hat T_n} = M^n\big) = s_1. \qquad (2.13)$$

Here $M^n = \max_{1\le i\le n} X_i^n$ and $M_i^n = \max_{1\le j\le i} X_j^n$.
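As a concrete numerical illustration of the rule in Theorem 2.1 (c), consider iid uniform $X_i$ on $[0,1]$ with the standard normalization $a_n = 1/n$, $b_n = 1$ (this is the $\Psi_1$ case treated in Example 2.2 below, where $F(x) = -x$ and $A(t) = 1-t$, so that $v(t) = -d/(1-t)$ with $d = 0.8043\dots$); the threshold $a_n v(i/n) + b_n$ then becomes $1 - d/(n-i)$. A minimal Monte-Carlo sketch (Python; the sample sizes and seed are arbitrary implementation choices, not from the paper):

```python
import random

random.seed(1)
D = 0.8043522                      # the constant d of (2.16) in Example 2.2
n, trials, hits = 800, 3000, 0

for _ in range(trials):
    xs = [random.random() for _ in range(n)]
    best = max(xs)
    record = float("-inf")
    for i, x in enumerate(xs, start=1):
        if x > record:             # x is a running maximum (a record)
            record = x
            # accept a record beating the threshold 1 - d/(n - i);
            # at i = n accept any record (the threshold is void there)
            if i == n or x > 1.0 - D / (n - i):
                hits += x == best
                break

print(hits / trials)               # close to the limiting value 0.5801642...
```

With $n = 800$ the empirical success frequency should already be close to the limiting optimal choice probability $s_1 = 0.5801642\dots$ of Example 2.2.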

In general the optimal best choice probability $s_1$ in (2.7) has to be evaluated numerically. Certain classes of intensity functions, however, allow an explicit evaluation.

Example 2.2. Consider densities of the intensity measure $\mu$ of the form

$$f(t,y) = -a(t)\, F'(y), \qquad (2.14)$$

where $a \colon [0,1] \to [0,\infty]$ is continuous and integrable, $a$ is not identically zero in any neighbourhood of 1, and $F \colon [c,\infty] \to \mathbb{R}$ is monotonically nonincreasing and continuous with $\lim_{x\downarrow c} F(x) = \infty$ and $F(\infty) = 0$. Defining $A(t) := \int_t^1 a(s)\,ds$, we obtain

$$\int_t^1 \int_x^{\infty} e^{\mu((r,1] \times (x,y])}\, \mu(dr,dy) = \int_0^{A(t)F(x)} \frac{e^y - 1}{y}\, dy.$$

In consequence $v$ solves the equation

$$F(v(t)) = \frac{d}{A(t)}, \qquad (2.15)$$

where $d = 0.8043522\dots$ is the unique solution of

$$\int_0^d \frac{e^y - 1}{y}\, dy = 1. \qquad (2.16)$$
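The constant $d$ can be computed from (2.16) by bisection, using the termwise-integrated series $\int_0^u \frac{e^y-1}{y}\,dy = \sum_{k\ge 1} \frac{u^k}{k\cdot k!}$. A short sketch (Python; the truncation length and bracket are implementation choices):

```python
def G(u, terms=60):
    """Integral_0^u (e^y - 1)/y dy via the series sum_{k>=1} u^k / (k * k!)."""
    total, fact = 0.0, 1.0
    for k in range(1, terms + 1):
        fact *= k                    # fact == k!
        total += u**k / (k * fact)
    return total

lo, hi = 0.0, 2.0                    # G(0) = 0 < 1 and G(2) > 1
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if G(mid) < 1.0 else (lo, mid)

d = 0.5 * (lo + hi)
print(d)                             # 0.8043522...
```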

The asymptotic optimal choice probability can be obtained from (2.7) by some calculation as

$$s_1 = e^{-d} + (e^d - 1 - d) \int_d^{\infty} \frac{e^{-y}}{y}\, dy = 0.5801642\dots$$

This is identical to the asymptotic optimal choice probability in the iid case (see (1.3)). In particular, in the case of the three extreme value distribution types $\Lambda$, $\Phi_\alpha$ and $\Psi_\alpha$, one gets for the limiting Poisson processes intensities with densities $f(t,x)$ of the form $f(t,x) = -F'(x)$, with $F(x) = e^{-x}$ in case $\Lambda$; $F(x) = x^{-\alpha}$, $x > 0$, in case $\Phi_\alpha$, $\alpha > 0$; and $F(x) = (-x)^\alpha$ for $x \le 0$, $F(x) = 0$ for $x > 0$, in case $\Psi_\alpha$, $\alpha > 0$. Thus these cases fit the form in (2.14). Also the example of a best choice problem for $X_i = c_i Y_i$ for some iid sequence, dealt with in [KR] (2000c), fits this condition.
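The stated value of $s_1$ can be checked numerically. The sketch below (Python) approximates $\int_d^\infty e^{-y}/y\,dy$ by composite Simpson quadrature, truncating the integral at $y = 40$, where the remaining tail is below $10^{-18}$:

```python
import math

def simpson(f, a, b, m=20000):
    """Composite Simpson rule with m (even) subintervals."""
    h = (b - a) / m
    odd = sum(f(a + (2 * k - 1) * h) for k in range(1, m // 2 + 1))
    even = sum(f(a + 2 * k * h) for k in range(1, m // 2))
    return (f(a) + f(b) + 4 * odd + 2 * even) * h / 3

d = 0.8043522                                       # the root of (2.16)
E1 = simpson(lambda y: math.exp(-y) / y, d, 40.0)   # int_d^infty e^{-y}/y dy
s1 = math.exp(-d) + (math.exp(d) - 1 - d) * E1
print(s1)                                           # 0.5801642...
```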

3 Poisson processes with finite intensities

Gnedin and Sakaguchi (1992) [13] considered the best choice problem for an iid sequence $(Y_k)$ with distribution function $F$, arriving at Poisson distributed time points. For continuous $F$ this can be described by a planar Poisson process $N$ with density $f(t,y) = a(t)F'(y)$, where $a \colon [0,1] \to [0,\infty]$ is continuous and integrable. $N$ does not fulfill the infinite intensity condition (I) in Section 2. For the best choice problem in Poisson processes our method of proof can be modified to deal also with the case of finite intensity.

Let $N = \sum_{k\ge 1} \delta_{(\tau_k,\, Y_k)}$ be a Poisson process on $[0,1] \times (c,\infty]$ with finite Lebesgue-continuous intensity measure, i.e. $\mu$ satisfies

$(I_f)$ Finite Intensity Condition: $\mu([0,1) \times (c,\infty]) < \infty$.

Then the following modifications of the proof of Theorem 2.1 allow us to solve this case.

Note that under condition $(I_f)$ it is no longer true that $h(t,c) = 0$, and thus in general one can no longer find for every $t \in [0,1)$ an $x > c$ such that $h(t,x) < g(t,x)$. This property holds true only on $[0,t_0)$ with

$$t_0 := \sup\Big\{t \in [0,1] \,\Big|\, \int_t^1 \int_c^{\infty} e^{\mu((r,1] \times (c,y])}\, \mu(dr,dy) \ge 1\Big\}. \qquad (3.1)$$

This can be seen as follows: for $t \in [0,t_0)$ and $x$ close to $c$ one has

$$g(t,x) \ge P\big(Y_{\tilde T(t,x)} = \sup_{\tau_k>t} Y_k \vee x\big) > h(t,x)$$

with the stopping time $\tilde T(t,x) := \inf\{\tau_k > t \mid Y_k > x\}$. If for some $t \notin [0,t_0)$ there existed some $v(t) \in (c,\infty)$ with $h(t,v(t)) = g(t,v(t))$, then $v(t) = c$ would give a contradiction.

So far we have obtained optimality of the stopping times $T(t,x)$ for $x > c$. We next consider the case $x = c$. Since $N$ has only finitely many points in $[0,1] \times (c,\infty)$, it follows that

$$g(t,x) = P\big(Y_{T(t,x)} = \sup_{\tau_k>t} Y_k > x\big) \xrightarrow[x\downarrow c]{} P\big(Y_{T(t,c)} = \sup_{\tau_k>t} Y_k > c\big) = P\big(Y_{T(t,c)} = \sup_{\tau_k>t} Y_k\big) - h(t,c).$$


The inequality

$$\sup_{T > t,\ N\text{-stopping time}} P\big(Y_T = \sup_{\tau_k>t} Y_k\big) \le \sup_{T > t,\ N\text{-stopping time}} P\big(Y_T \vee x = \sup_{\tau_k>t} Y_k \vee x\big) = g(t,x) + h(t,x)$$

then implies optimality of $T(t,c)$, and we obtain the following result.

Theorem 3.1. Let the Poisson process satisfy the finite intensity condition $(I_f)$. Then the optimal choice stopping time for $N$ is given by

$$T = \inf\big\{\tau_k \,\big|\, Y_k = \sup_{\tau_j\in[0,\tau_k]} Y_j,\; Y_k > v(\tau_k)\big\},$$

where $v \colon [0,1] \to [c,\infty)$ is a solution of the integral equation

$$\int_t^1 \int_{v(t)}^{\infty} e^{\mu((r,1] \times (v(t),y])}\, \mu(dr,dy) = 1 \quad \forall t \in [0,t_0); \qquad v(t) = c \quad \forall t \in [t_0,1].$$

$v$ is monotonically nonincreasing and can be chosen right continuous. The optimal choice probability is given by

$$s_1 := P\big(Y_T = \sup_k Y_k,\; T < 1\big) = \int_0^1 e^{-\mu([0,r] \times (c,\infty])} \int_{v(r)}^{\infty} e^{-\mu((r,1] \times (y,\infty])}\, \mu(dr,dy)$$
$$\qquad + \int_0^{t_0} \int_c^{v(r_0)} \int_{r_0}^1 e^{-\mu([0,r] \times (y_0,\infty])} \int_{y_0\vee v(r)}^{\infty} e^{-\mu((r,1] \times (y,\infty])}\, \mu(dr,dy)\, \mu(dr_0,dy_0).$$

Example 3.2. In the case of the finite intensity measure $\mu$ with density $f(t,y) = a(t)F'(y)$ as in Gnedin and Sakaguchi (1992) [13], let $F(c) = 0$ and $A(t) := \int_t^1 a(s)\,ds$. Then we obtain

$$\int_t^1 \int_x^{\infty} e^{\mu((r,1] \times (x,y])}\, \mu(dr,dy) = \int_0^{A(t)(1-F(x))} \frac{e^y - 1}{y}\, dy.$$

Thus we get $t_0 = 0$ and $v \equiv c$ if $A(0) \le d$, where $d$ is the constant given in (2.16).

If $A(0) > d$, then $t_0$ is the smallest point satisfying $A(t_0) = d$, and for $t \in [0,t_0)$, $v(t)$ is a solution of the equation

$$F(v(t)) = 1 - \frac{d}{A(t)}.$$

Some detailed calculations yield in this case the optimal choice probability $s_1$ as

$$s_1 = \begin{cases} e^{-A(0)} \displaystyle\int_0^{A(0)} \frac{e^y - 1}{y}\, dy, & \text{if } A(0) \le d,\\[2mm] e^{-d} + (e^d - 1 - d) \displaystyle\int_d^{A(0)} \frac{e^{-y}}{y}\, dy, & \text{if } A(0) > d. \end{cases}$$

This coincides with the results obtained in Gnedin and Sakaguchi (1992) [13].
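Both branches of this formula are straightforward to evaluate numerically; they meet continuously at $A(0) = d$ (where both give $e^{-d}$) and increase towards the infinite-intensity value $0.5801642\dots$ as $A(0) \to \infty$. A sketch (Python; the quadrature routine and its parameters are implementation choices):

```python
import math

D = 0.8043522                        # the constant d of (2.16)

def simpson(f, a, b, m=20000):
    """Composite Simpson rule with m (even) subintervals."""
    h = (b - a) / m
    odd = sum(f(a + (2 * k - 1) * h) for k in range(1, m // 2 + 1))
    even = sum(f(a + 2 * k * h) for k in range(1, m // 2))
    return (f(a) + f(b) + 4 * odd + 2 * even) * h / 3

def s1_finite(A0):
    """Optimal choice probability as a function of the total intensity A(0)."""
    if A0 <= D:
        # expm1 keeps (e^y - 1)/y accurate near y = 0
        return math.exp(-A0) * simpson(lambda y: math.expm1(y) / y, 1e-12, A0)
    return math.exp(-D) + (math.exp(D) - 1 - D) * simpson(
        lambda y: math.exp(-y) / y, D, A0)

print(s1_finite(D))                  # e^{-d} = 0.4473... (branches agree here)
print(s1_finite(20.0))               # already close to 0.5801642...
```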


4 The optimal m-choice problem

In this section we consider the optimal $m$-choice problem (1.4) for independent sequences $(X_i)$. Let $X_i^n = \frac{X_i - b_n}{a_n}$ denote the normalized version as in Section 2. Similarly we consider the optimal $m$-choice problem for continuous time Poisson processes in the plane defined by

$$P\Big(\bigvee_{i=1}^m Y_{T_i^m} = \sup_k Y_k\Big) = \sup_{\substack{0\le T_1<\cdots<T_m\le 1\\ T_i\ N\text{-stopping times}}} P\Big(\bigvee_{i=1}^m Y_{T_i} = \sup_k Y_k\Big) =: s_m, \qquad (4.1)$$

and call $(T_i^m)$ (in the discrete case $(T_i^{n,m})$) optimal $m$-choice stopping times. The condition $t \le T_1 < \cdots < T_m \le 1$ means that $T_{i-1} < T_i$ on $\{T_{i-1} < 1\}$ and $T_i = 1$ on $\{T_{i-1} = 1\}$.

The following lemma gives a characterization of optimal $m$-choice stopping times in the discrete case.

Lemma 4.1. Define for $(t,x) \in [0,1) \times [c,\infty)$

$$g_n^m(t,x) := \sup_{\substack{tn<T_1<\cdots<T_m\le n\\ \text{stopping times}}} P\big(X_{T_1}^n \vee \cdots \vee X_{T_m}^n = x \vee \max_{tn<j\le n} X_j^n\big)$$

and

$$h_n^1(t,x) := P\big(\max_{tn<j\le n} X_j^n \le x\big), \qquad h_n^m(t,x) := g_n^{m-1}(t,x) + h_n^1(t,x).$$

Then

$$g_n^m(t,x) = P\big(X^n_{T_1^{n,m}(t,x)} \vee \cdots \vee X^n_{T_m^{n,m}(t,x)} = x \vee \max_{tn<j\le n} X_j^n\big)$$

with optimal stopping times given by

$$T_1^{n,m}(t,x) := \min\big\{tn < i \le n-m+1 \,\big|\, h_n^m(\tfrac{i}{n}, X_i^n) > g_n^m(\tfrac{i}{n}, X_i^n),\; X_i^n = x \vee \max_{tn<j\le i} X_j^n\big\},$$
$$T_l^{n,m}(t,x) := \min\big\{T_{l-1}^{n,m}(t,x) < i \le n-m+l \,\big|\, h_n^{m-l+1}(\tfrac{i}{n}, X_i^n) > g_n^{m-l+1}(\tfrac{i}{n}, X_i^n),\; X_i^n = x \vee \max_{tn<j\le i} X_j^n\big\} \qquad (4.2)$$

for $2 \le l \le m$.

Proof. Let $tn < S \le n-m+1$ be a stopping time and let $Z \le \max_{tn<j\le S} X_j^n$ be a random variable. Furthermore, for $1 \le i \le n$ let $M_i^n$ be $\sigma(X_{i+1}^n,\dots,X_n^n)$-measurable with $M_i^n \le X_{i+1}^n \vee M_{i+1}^n$ and $P(M_i^n = x) = 0$ for all $x > c$. In order to maximize $P(Z \vee X_T^n \vee M_T^n = x \vee \max_{tn<j\le n} X_j^n)$ w.r.t. all stopping times $T$ with $S < T \le n-m+1$, we define

$$Y_i := P\big(Z \vee X_i^n \vee M_i^n = x \vee \max_{tn<j\le n} X_j^n \,\big|\, X_1^n,\dots,X_i^n\big).$$

Thus we have to maximize $EY_T = P(Z \vee X_T^n \vee M_T^n = x \vee \max_{tn<j\le n} X_j^n)$. The optimal


stopping times are given by $T_n(t,x) = T_n^{>tn}(t,x)$ with

$$T_n^{>k}(t,x) = \min\Big\{k < i \le n-m+1 \,\Big|\, P\big(Z \vee X_i^n \vee M_i^n = x \vee \max_{tn<j\le n} X_j^n \,\big|\, X_1^n,\dots,X_i^n\big) > P\big(Z \vee X^n_{T_n^{>i}(t,x)} \vee M^n_{T_n^{>i}(t,x)} = x \vee \max_{tn<j\le n} X_j^n \,\big|\, X_1^n,\dots,X_i^n\big),\; X_i^n = x \vee \max_{tn<j\le i} X_j^n\Big\}$$
$$= \min\Big\{k < i \le n-m+1 \,\Big|\, P\big(X_i^n \vee M_i^n = X_i^n \vee \max_{i<j\le n} X_j^n \,\big|\, X_i^n\big) > P\big(X^n_{T_n^{>i}(t,x)} \vee M^n_{T_n^{>i}(t,x)} = X_i^n \vee \max_{i<j\le n} X_j^n \,\big|\, X_i^n\big),\; X_i^n = x \vee \max_{tn<j\le i} X_j^n\Big\}.$$

For the second equality we use that $P(X_i^n = X_j^n > c) = 0$ for $i \ne j$, and thus $X_i^n = x \vee \max_{tn<j\le i} X_j^n > x$ is strictly larger than $Z$; hence $Z$ cannot be the maximum. In consequence we get for $k \ge S$

$$T_n^{>k}(t,x) = \min\big\{k < i \le n-m+1 \,\big|\, \hat h_n(\tfrac{i}{n}, X_i^n) > \hat g_n(\tfrac{i}{n}, X_i^n),\; X_i^n = x \vee \max_{tn<j\le i} X_j^n\big\}, \qquad (4.3)$$

where

$$\hat h_n(t,x) := P\big(x \vee M^n_{\lfloor tn\rfloor} = x \vee \max_{tn<j\le n} X_j^n\big) = P\big(M^n_{\lfloor tn\rfloor} = x \vee \max_{tn<j\le n} X_j^n\big) + h_n^1(t,x),$$
$$\hat g_n(t,x) := P\big(X^n_{T_n(t,x)} \vee M^n_{T_n(t,x)} = x \vee \max_{tn<j\le n} X_j^n\big).$$

By induction we obtain from (4.3) the representation in (4.2). Define for $m = 1$: $M_i^n := -\infty$, for $m = 2$: $M_i^n := X^n_{T_n^{>i}(t,x)}$, etc.

Theorem 4.2 (Approximative solution of the best $m$-choice problem). Let $N_n \xrightarrow{d} N$ on $M_c = [0,1] \times (c,\infty]$ and let $N$ satisfy the intensity condition (I).

a) The optimal $m$-choice stopping times for $N$ are given by $T_1^m(0,c),\dots,T_m^m(0,c)$, where

$$T_1^m(t,x) = \inf\big\{\tau_k > t \,\big|\, Y_k = x \vee \sup_{\tau_j\in(t,\tau_k]} Y_j,\; Y_k > v_m(\tau_k)\big\},$$
$$T_l^m(t,x) = \inf\big\{\tau_k > T_{l-1}^m(t,x) \,\big|\, Y_k = x \vee \sup_{\tau_j\in(t,\tau_k]} Y_j,\; Y_k > v_{m-l+1}(\tau_k)\big\}$$

for $2 \le l \le m$. The thresholds $v_m(t)$ are solutions of the equations

$$g^m(t, v_m(t)) = h^m(t, v_m(t)) \quad \text{for } t \in [0,1); \qquad v_m(1) = c, \qquad (4.4)$$

with

$$g^m(t,x) := P\big(Y_{T_1^m(t,x)} \vee \cdots \vee Y_{T_m^m(t,x)} = x \vee \sup_{t<\tau_k\le 1} Y_k\big)$$

and

$$h^1(t,x) := e^{-\mu((t,1] \times (x,\infty])} \quad \text{for } m = 1, \qquad h^m(t,x) := g^{m-1}(t,x) + h^1(t,x) \quad \text{for } m \ge 2.$$

$v_m \colon [0,1] \to [c,\infty)$ is monotonically nonincreasing and can be chosen right continuous. Furthermore, $v_m(t) \le v_{m-1}(t)$ for $t \in [0,1]$ ($m \ge 2$).


b) $\lim_{n\to\infty} P\big(X_{T_1^{n,m}} \vee \cdots \vee X_{T_m^{n,m}} = M_n\big) = s_m = g^m(0,c)$.

c) The stopping times $\hat T_l^m = \hat T_l^{n,m}$ defined by

$$\hat T_1^{n,m} := \min\big\{1 \le i \le n-m+1 \,\big|\, X_i = M_i,\; X_i > a_n v_m(\tfrac{i}{n}) + b_n\big\},$$
$$\hat T_l^{n,m} := \min\big\{\hat T_{l-1}^{n,m} < i \le n-m+l \,\big|\, X_i = M_i,\; X_i > a_n v_{m-l+1}(\tfrac{i}{n}) + b_n\big\} \qquad (4.5)$$

for $2 \le l \le m$ are approximative optimal $m$-choice stopping times, i.e.

$$\lim_{n\to\infty} P\big(X_{\hat T_1^{n,m}} \vee \cdots \vee X_{\hat T_m^{n,m}} = M_n\big) = s_m. \qquad (4.6)$$

Proof. The proof of b) and c) is similar to the corresponding part of Theorem 2.1. We therefore concentrate on the proof of a).

As in the proof of Theorem 2.1 we obtain

$$g^m(t,x) = \sup_{\substack{t<T_1<\cdots<T_m\le 1\\ N\text{-stopping times}}} P\big(Y_{T_1} \vee \cdots \vee Y_{T_m} = x \vee \sup_{t<\tau_k\le 1} Y_k\big). \qquad (4.7)$$

Furthermore we note that $v_m$ is continuous at 1, $\lim_{t\uparrow 1} v_m(t) = c$. If not, then $v_m(t_n) \to d > c$ for some sequence $t_n \uparrow 1$, and thus $g^m(t_n, v_m(t_n)) \to 0$. The inequality $h^m(t_n, v_m(t_n)) \ge h^1(t_n, v_m(t_n)) \to e^{-0} = 1 > 0$ then yields a contradiction.

We have to show that for $m \ge 2$, $v_m \le v_{m-1}$. Using the characterization of $v_m$ in (4.4), this inequality follows from

$$g^m(t,x) - h^m(t,x) \le g^{m-1}(t,x) - h^{m-1}(t,x) \qquad (4.8)$$

for $x = v_m(t)$. For $m = 2$ we obtain from (4.7)

$$g^2(t,x) \le 2 g^1(t,x),$$

which implies (4.8). For $m \ge 3$, (4.8) is equivalent to

$$g^m(t,x) - g^{m-1}(t,x) \le g^{m-1}(t,x) - g^{m-2}(t,x). \qquad (4.9)$$

We prove (4.9) by induction. By the induction hypothesis we assume that $v_{m-1} \le v_{m-2} \le \dots \le v_1$ and that all of the $v_i$ are monotonically nonincreasing. Also we have $\lim_{t\uparrow 1} v_m(t) = c$. Assume that $v_m(s) > v_{m-1}(s)$ for some $s \in [0,1)$. Since $\lim_{t\uparrow 1} v_m(t) = \lim_{t\uparrow 1} v_{m-1}(t) = c$ and since $v_{m-1}$ is monotonically nonincreasing, there exists some $t \in [s,1)$ with $v_m(t) > v_{m-1}(t)$ and $v_m(t) \ge v_m(r)$ for all $r \ge t$. We establish (4.9) for $x = v_m(t)$.

From the definition of the thresholds it follows that $T_k^m(t,x) \le T_k^{m-1}(t,x) \le T_k^{m-2}(t,x)$ for all $1 \le k \le m-2$. We define the stopping time

$$\hat T(t,x) := T_1^m(t,x)\,\chi_{\{T_1^m(t,x) < T_1^{m-2}(t,x)\}} + \sum_{i=2}^{m-2} T_i^m(t,x)\,\chi_{\{T_1^m(t,x)=T_1^{m-2}(t,x)\}\cap\cdots\cap\{T_{i-1}^m(t,x)=T_{i-1}^{m-2}(t,x)\}\cap\{T_i^m(t,x)<T_i^{m-2}(t,x)\}}$$
$$\qquad + T_{m-1}^m(t,x)\,\chi_{\{T_1^m(t,x)=T_1^{m-2}(t,x)\}\cap\cdots\cap\{T_{m-2}^m(t,x)=T_{m-2}^{m-2}(t,x)\}}.$$

$\hat T(t,x)$ only stops at time points $T_1^m(t,x),\dots,T_m^m(t,x)$, but not at time points $T_1^{m-2}(t,x),\dots,T_{m-2}^{m-2}(t,x)$ when these are $< 1$. We now obtain from (4.7) the inequality

$$g^m(t,x) - g^{m-1}(t,x) \le P\big(Y_{T_1^m(t,x)} \vee \cdots \vee Y_{T_m^m(t,x)} = x \vee \sup_{t<\tau_k\le 1} Y_k\big) - P\big(Y_{T_1^{m-2}(t,x)} \vee \cdots \vee Y_{T_{m-2}^{m-2}(t,x)} = x \vee \sup_{t<\tau_k\le 1} Y_k\big) - P\big(Y_{\hat T(t,x)} = x \vee \sup_{t<\tau_k\le 1} Y_k\big)$$
$$= P\big(Y_{T_1^m(t,x)} \vee \cdots \vee Y_{T_m^m(t,x)} = x \vee \sup_{t<\tau_k\le 1} Y_k,\; Y_{\hat T(t,x)} < x \vee \sup_{t<\tau_k\le 1} Y_k\big) - g^{m-2}(t,x)$$
$$\le P\big(Y_{T_1(t,x)} \vee \cdots \vee Y_{T_{m-1}(t,x)} = x \vee \sup_{t<\tau_k\le 1} Y_k\big) - g^{m-2}(t,x) \le g^{m-1}(t,x) - g^{m-2}(t,x),$$

with the stopping times

$$T_1(t,x) := T_1^m(t,x)\,\chi_{\{T_1^m(t,x)\ne\hat T(t,x)\}} + T_2^m(t,x)\,\chi_{\{T_1^m(t,x)=\hat T(t,x)\}},$$
$$T_i(t,x) := T_i^m(t,x)\,\chi_{\{T_1^m(t,x)\ne\hat T(t,x)\}\cap\cdots\cap\{T_i^m(t,x)\ne\hat T(t,x)\}} + T_{i+1}^m(t,x)\,\chi_{\{T_1^m(t,x)=\hat T(t,x)\}\cup\cdots\cup\{T_i^m(t,x)=\hat T(t,x)\}}$$

for $i = 2,\dots,m-1$.

Thus (4.9) holds true for $x = v_m(t)$ and therefore $v_m(t) \le v_{m-1}(t)$, a contradiction. This implies the statement $v_m \le v_{m-1}$.

In the final step we prove that $v_m$ is monotonically nonincreasing. Since $g^m(t,x) - h^m(t,x)$ is monotonically nonincreasing in $x$ for any $t$, it suffices to prove that for $s < t$

$$g^m(s, v_m(t)) - h^m(s, v_m(t)) \ge 0. \qquad (4.10)$$

We define for $(s,x) \in [0,1) \times (c,\infty]$

$$A_{(s,x)} := \big\{Y_{T_1^m(s,x)} \vee \cdots \vee Y_{T_m^m(s,x)} = x \vee \sup_{s<\tau_k\le 1} Y_k,\; Y_{T_1^{m-1}(s,x)} \vee \cdots \vee Y_{T_{m-1}^{m-1}(s,x)} < x \vee \sup_{s<\tau_k\le 1} Y_k\big\}.$$

Then

$$g^m(s,x) - g^{m-1}(s,x) = P(A_{(s,x)}),$$

and for $s < t$

$$P(A_{(t,x)}) - P(A_{(s,x)}) = P\big(A_{(s,x)},\, N((s,t] \times (x,\infty]) = 0\big) + P(A_{(t,x)})\, P\big(N((s,t] \times (x,\infty]) \ge 1\big) - P(A_{(s,x)})$$
$$= -P\big(A_{(s,x)},\, N((s,t] \times (x,\infty]) \ge 1\big) + P(A_{(t,x)})\big(1 - e^{-\mu((s,t] \times (x,\infty])}\big) \le \frac{P(A_{(t,x)})}{h^1(t,x)}\big(h^1(t,x) - h^1(s,x)\big).$$

With $x = v_m(t)$ we obtain the inequality in (4.10), and thus monotonicity of $v_m$.

To calculate the optimal thresholds we need to calculate the densities of record stopping times with general threshold $v$.

Lemma 4.3. Let $v \colon [0,1] \to [c,\infty)$ be monotonically nonincreasing and right continuous with $v > c$ on $[0,1)$. Define the record stopping time associated to $v$ for $(t,x) \in [0,1) \times (c,\infty)$ by

$$T(t,x) := \inf\big\{\tau_k > t \,\big|\, Y_k = x \vee \sup_{\tau_j\in(t,\tau_k]} Y_j,\; Y_k > v(\tau_k)\big\},$$
