Electronic Journal of Probability

Vol. 11 (2006), Paper no. 20, pages 486–512.

Journal URL
http://www.math.washington.edu/~ejpecp/

Convex concentration inequalities and forward-backward stochastic calculus

Thierry KLEIN a), Yutao MA b) c), Nicolas PRIVAULT c)

Abstract

Given $(M_t)_{t\in\mathbb{R}_+}$ and $(M^*_t)_{t\in\mathbb{R}_+}$ respectively a forward and a backward martingale with jumps and continuous parts, we prove that $E[\phi(M_t + M^*_t)]$ is non-increasing in $t$ when $\phi$ is a convex function, provided the local characteristics of $(M_t)_{t\in\mathbb{R}_+}$ and $(M^*_t)_{t\in\mathbb{R}_+}$ satisfy some comparison inequalities. We deduce convex concentration inequalities and deviation bounds for random variables admitting a predictable representation in terms of a Brownian motion and a not necessarily independent jump component.

Key words: convex concentration inequalities, forward-backward stochastic calculus, deviation inequalities, Clark formula, Brownian motion, jump processes.

AMS 2000 Subject Classification: Primary 60F99, 60F10, 60H07, 39B62.

Submitted to EJP on January 10 2005, final version accepted June 8 2006.

a) Institut de Mathématiques - L.S.P., Université Paul Sabatier, 31062 Toulouse, France.

tklein@cict.fr

b) School of Mathematics and Statistics, Wuhan University, 430072 Hubei, P.R. China.

yutao.ma@univ-lr.fr

c) Département de Mathématiques, Université de La Rochelle, 17042 La Rochelle, France.

nicolas.privault@univ-lr.fr


1 Introduction

Two random variables $F$ and $G$ satisfy a convex concentration inequality if

$$E[\phi(F)] \le E[\phi(G)] \qquad (1.1)$$

for all convex functions $\phi : \mathbb{R} \to \mathbb{R}$. By a classical argument, the application of (1.1) to $\phi(x) = \exp(\lambda x)$, $\lambda > 0$, entails the deviation bound

$$P(F \ge x) \le \inf_{\lambda > 0} E[e^{\lambda(F-x)} 1_{\{F \ge x\}}] \le \inf_{\lambda > 0} E[e^{\lambda(F-x)}] \le \inf_{\lambda > 0} E[e^{\lambda(G-x)}], \qquad x > 0, \qquad (1.2)$$

hence the deviation probabilities for $F$ can be estimated via the Laplace transform of $G$; see [2], [3], [15] for more results on this topic. In particular, if $G$ is Gaussian then Theorem 3.11 of [15] shows moreover that

$$P(F \ge x) \le \frac{e^2}{2} P(G \ge x), \qquad x > 0.$$
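For Gaussian $G$, the rightmost infimum in (1.2) can be computed in closed form: if $G \sim N(0, \sigma^2)$ then $\inf_{\lambda > 0} E[e^{\lambda(G-x)}] = \inf_{\lambda > 0} e^{\lambda^2 \sigma^2/2 - \lambda x} = e^{-x^2/(2\sigma^2)}$, attained at $\lambda = x/\sigma^2$. A minimal numeric sketch (not from the paper; the grid bounds are illustrative choices) checking this optimization:

```python
import math

def chernoff_gaussian(x, sigma):
    """Minimize lambda -> exp(lambda^2 sigma^2 / 2 - lambda x) over a
    grid of lambda > 0, approximating inf_{lambda>0} E[exp(lambda(G-x))]
    for G ~ N(0, sigma^2)."""
    return min(math.exp(0.5 * (lam * sigma) ** 2 - lam * x)
               for lam in [i * 1e-4 for i in range(1, 200001)])

# Compare with the closed form exp(-x^2 / (2 sigma^2)).
x, sigma = 2.0, 1.5
closed_form = math.exp(-x**2 / (2 * sigma**2))
assert abs(chernoff_gaussian(x, sigma) - closed_form) < 1e-6
```

The grid search recovers the Gaussian tail bound because the exponent is a strictly convex function of $\lambda$ with a unique interior minimum.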

On the other hand, if $F$ is more convex concentrated than $G$ then $E[F] = E[G]$, as follows from taking successively $\phi(x) = x$ and $\phi(x) = -x$, and applying the convex concentration inequality to $\phi(x) = x \log x$ we get

$$\mathrm{Ent}[F] = E[F \log F] - E[F] \log E[F] = E[F \log F] - E[G] \log E[G] \le E[G \log G] - E[G] \log E[G] = \mathrm{Ent}[G],$$

hence a logarithmic Sobolev inequality of the form $\mathrm{Ent}[G] \le \mathcal{E}(G, G)$ implies $\mathrm{Ent}[F] \le \mathcal{E}(G, G)$.

In this paper we obtain convex concentration inequalities for the sum $M_t + M^*_t$, $t \in \mathbb{R}_+$, of a forward and a backward martingale with jumps and continuous parts. Namely we prove that $M_t + M^*_t$ is more concentrated than $M_s + M^*_s$ if $t \ge s \ge 0$, i.e.

$$E[\phi(M_t + M^*_t)] \le E[\phi(M_s + M^*_s)], \qquad 0 \le s \le t,$$

for all convex functions $\phi : \mathbb{R} \to \mathbb{R}$, provided the local characteristics of $(M_t)_{t\in\mathbb{R}_+}$ and $(M^*_t)_{t\in\mathbb{R}_+}$ satisfy the comparison inequalities assumed in Theorem 3.2 below. If further $E[M^*_t \mid \mathcal{F}^M_t] = 0$, $t \in \mathbb{R}_+$, where $(\mathcal{F}^M_t)_{t\in\mathbb{R}_+}$ denotes the filtration generated by $(M_t)_{t\in\mathbb{R}_+}$, then Jensen's inequality yields

$$E[\phi(M_t)] \le E[\phi(M_s + M^*_s)], \qquad 0 \le s \le t,$$

and if in addition we have $M_0 = 0$, then

$$E[\phi(M_T)] \le E[\phi(M^*_0)], \qquad T \ge 0. \qquad (1.3)$$

In other terms, we will show that a random variable $F$ is more concentrated than $M^*_0$:

$$E[\phi(F - E[F])] \le E[\phi(M^*_0)],$$


provided certain assumptions are made on the processes appearing in the predictable representation of $F - E[F] = M_T$ in terms of a point process and a Brownian motion.

Consider for example a random variable $F$ represented as

$$F = E[F] + \int_0^{+\infty} H_t \, dW_t + \int_0^{+\infty} J_t \, (dZ_t - \lambda_t \, dt),$$

where $(Z_t)_{t\in\mathbb{R}_+}$ is a point process with compensator $(\lambda_t)_{t\in\mathbb{R}_+}$, $(W_t)_{t\in\mathbb{R}_+}$ is a standard Brownian motion, and $(H_t)_{t\in\mathbb{R}_+}$, $(J_t)_{t\in\mathbb{R}_+}$ are predictable square-integrable processes satisfying $J_t \le k$, $dP\,dt$-a.e., and

$$\int_0^{+\infty} |H_t|^2 \, dt \le \beta^2 \quad \text{and} \quad \int_0^{+\infty} |J_t|^2 \lambda_t \, dt \le \alpha^2, \qquad P\text{-a.s.}$$

By applying (1.3) or Theorem 4.1-ii) below to forward and backward martingales of the form

$$M_t = E[F] + \int_0^t H_u \, dW_u + \int_0^t J_u \, (dZ_u - \lambda_u \, du), \qquad t \in \mathbb{R}_+,$$

and

$$M^*_t = \hat{W}_{\beta^2} - \hat{W}_{V^2(t)} + k \big( \hat{N}_{\alpha^2/k^2} - \hat{N}_{U^2(t)/k^2} \big) - (\alpha^2 - U^2(t))/k, \qquad t \in \mathbb{R}_+,$$

where $(\hat{W}_t)_{t\in\mathbb{R}_+}$, $(\hat{N}_t)_{t\in\mathbb{R}_+}$ are a Brownian motion and a left-continuous standard Poisson process, $\beta \ge 0$, $\alpha \ge 0$, $k > 0$, and $(V(t))_{t\in\mathbb{R}_+}$, $(U(t))_{t\in\mathbb{R}_+}$ are suitable random time changes, it will follow in particular that $F$ is more concentrated than

$$M^*_0 = \hat{W}_{\beta^2} + k \hat{N}_{\alpha^2/k^2} - \alpha^2/k,$$

i.e.

$$E[\phi(F - E[F])] \le E\big[ \phi\big( \hat{W}_{\beta^2} + k \hat{N}_{\alpha^2/k^2} - \alpha^2/k \big) \big] \qquad (1.4)$$

for all convex functions $\phi$ such that $\phi'$ is convex.

From (1.2) and (1.4) we get

$$P(F - E[F] \ge x) \le \inf_{\lambda > 0} \exp\left( \frac{\alpha^2}{k^2} (e^{\lambda k} - \lambda k - 1) + \frac{\beta^2 \lambda^2}{2} - \lambda x \right),$$

i.e.

$$P(F - E[F] \ge x) \le \exp\left( \frac{x}{k} - \frac{\beta^2 \lambda_0(x)}{2k} (2 - k \lambda_0(x)) - (x + \alpha^2/k) \lambda_0(x) \right),$$

where $\lambda_0(x) > 0$ is the unique solution of

$$e^{k \lambda_0(x)} + \frac{k \lambda_0(x) \beta^2}{\alpha^2} - 1 = \frac{kx}{\alpha^2}.$$

When $H_t = 0$, $t \in \mathbb{R}_+$, we can take $\beta = 0$; then $\lambda_0(x) = k^{-1} \log(1 + xk/\alpha^2)$ and this implies the Poisson tail estimate

$$P(F - E[F] \ge y) \le \exp\left( \frac{y}{k} - \left( \frac{y}{k} + \frac{\alpha^2}{k^2} \right) \log\left( 1 + \frac{ky}{\alpha^2} \right) \right), \qquad y > 0. \qquad (1.5)$$
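When $\beta > 0$ the equation defining $\lambda_0(x)$ has no closed form, but its left-hand side is strictly increasing in $\lambda_0$, so it can be solved by bisection; for $\beta = 0$ the solution reduces to the logarithm above. A minimal sketch of this computation (not from the paper; the parameter values are illustrative):

```python
import math

def lambda0(x, alpha, beta, k, tol=1e-12):
    """Solve exp(k*l) + k*l*beta^2/alpha^2 - 1 = k*x/alpha^2 for l > 0
    by bisection; the left-hand side is strictly increasing in l."""
    f = lambda l: math.exp(k * l) + k * l * beta**2 / alpha**2 - 1 - k * x / alpha**2
    lo, hi = 0.0, 1.0
    while f(hi) < 0:          # grow the bracket until it contains the root
        hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

# With beta = 0 the root is k^{-1} log(1 + x k / alpha^2), as in (1.5).
x, alpha, k = 3.0, 1.0, 0.5
assert abs(lambda0(x, alpha, 0.0, k) - math.log(1 + x * k / alpha**2) / k) < 1e-9
```

Adding a Gaussian component ($\beta > 0$) makes the left-hand side larger for each $\lambda_0$, so the root, and hence the optimal exponent, decreases.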


Such an inequality has been proved in [1], [19], using (modified) logarithmic Sobolev inequalities and the Herbst method, when $Z_t = N_t$, $t \in \mathbb{R}_+$, is a Poisson process, under different hypotheses on the predictable representation of $F$ via the Clark formula, cf. Section 6. When $J_t \lambda_t = 0$, $t \in \mathbb{R}_+$, we recover classical Gaussian estimates, which can be independently obtained from the expression of continuous martingales as time-changed Brownian motions.

We proceed as follows. In Section 3 we present convex concentration inequalities for martingales. In Sections 4 and 5 these results are applied to derive convex concentration inequalities with respect to Gaussian and Poisson distributions. In Section 6 we consider the case of predictable representations obtained from the Clark formula. The proofs of the main results are formulated using forward/backward stochastic calculus and arguments of [10]. Section 7 deals with an application to normal martingales, and in the appendix (Section 8) we prove the forward-backward Itô type change of variable formula which is used in the proof of our convex concentration inequalities. See [4] for a reference where forward Itô calculus with respect to Brownian motion has been used for the proof of logarithmic Sobolev inequalities on path spaces.

2 Notation

Let $(\Omega, \mathcal{F}, P)$ be a probability space equipped with an increasing filtration $(\mathcal{F}_t)_{t\in\mathbb{R}_+}$ and a decreasing filtration $(\mathcal{F}^*_t)_{t\in\mathbb{R}_+}$. Consider $(M_t)_{t\in\mathbb{R}_+}$ an $\mathcal{F}_t$-forward martingale and $(M^*_t)_{t\in\mathbb{R}_+}$ an $\mathcal{F}^*_t$-backward martingale. We assume that $(M_t)_{t\in\mathbb{R}_+}$ has right-continuous paths with left limits, and that $(M^*_t)_{t\in\mathbb{R}_+}$ has left-continuous paths with right limits. Denote respectively by $(M^c_t)_{t\in\mathbb{R}_+}$ and $(M^{*c}_t)_{t\in\mathbb{R}_+}$ the continuous parts of $(M_t)_{t\in\mathbb{R}_+}$ and $(M^*_t)_{t\in\mathbb{R}_+}$, and by

$$\Delta M_t = M_t - M_{t^-}, \qquad \Delta^* M^*_t = M^*_t - M^*_{t^+},$$

their forward and backward jumps. The processes $(M_t)_{t\in\mathbb{R}_+}$ and $(M^*_t)_{t\in\mathbb{R}_+}$ have jump measures

$$\mu(dt, dx) = \sum_{s > 0} 1_{\{\Delta M_s \ne 0\}} \delta_{(s, \Delta M_s)}(dt, dx) \quad \text{and} \quad \mu^*(dt, dx) = \sum_{s > 0} 1_{\{\Delta^* M^*_s \ne 0\}} \delta_{(s, \Delta^* M^*_s)}(dt, dx),$$

where $\delta_{(s,x)}$ denotes the Dirac measure at $(s, x) \in \mathbb{R}_+ \times \mathbb{R}$. Denote by $\nu(dt, dx)$ and $\nu^*(dt, dx)$ the $(\mathcal{F}_t)_{t\in\mathbb{R}_+}$- and $(\mathcal{F}^*_t)_{t\in\mathbb{R}_+}$-dual predictable projections of $\mu(dt, dx)$ and $\mu^*(dt, dx)$, i.e.

$$\int_0^t \int_{-\infty}^{+\infty} f(s, x) \, (\mu(ds, dx) - \nu(ds, dx)) \quad \text{and} \quad \int_t^{+\infty} \int_{-\infty}^{+\infty} g(s, x) \, (\mu^*(ds, dx) - \nu^*(ds, dx))$$

are respectively $\mathcal{F}_t$-forward and $\mathcal{F}^*_t$-backward local martingales for all sufficiently integrable $\mathcal{F}_t$-predictable, resp. $\mathcal{F}^*_t$-predictable, processes $f$, resp. $g$. The quadratic variations $([M, M]_t)_{t\in\mathbb{R}_+}$, $([M^*, M^*]_t)_{t\in\mathbb{R}_+}$ are defined as the limits in uniform convergence in probability

$$[M, M]_t = \lim_{n \to \infty} \sum_{i=1}^{n} |M_{t^n_i} - M_{t^n_{i-1}}|^2 \quad \text{and} \quad [M^*, M^*]_t = \lim_{n \to \infty} \sum_{i=0}^{n-1} |M^*_{t^n_i} - M^*_{t^n_{i+1}}|^2,$$

for all refining sequences $\{0 = t^n_0 \le t^n_1 \le \cdots \le t^n_{k_n} = t\}$, $n \ge 1$, of partitions of $[0, t]$ tending to the identity. We then let $M^d_t = M_t - M^c_t$, $M^{*d}_t = M^*_t - M^{*c}_t$,

$$[M^d, M^d]_t = \sum_{0 < s \le t} |\Delta M_s|^2, \qquad [M^{*d}, M^{*d}]_t = \sum_{0 \le s < t} |\Delta^* M^*_s|^2,$$

and

$$\langle M^c, M^c \rangle_t = [M, M]_t - [M^d, M^d]_t, \qquad \langle M^{*c}, M^{*c} \rangle_t = [M^*, M^*]_t - [M^{*d}, M^{*d}]_t,$$

$t \in \mathbb{R}_+$. Note that $([M, M]_t)_{t\in\mathbb{R}_+}$, $(\langle M, M \rangle_t)_{t\in\mathbb{R}_+}$, $([M^*, M^*]_t)_{t\in\mathbb{R}_+}$ and $(\langle M^*, M^* \rangle_t)_{t\in\mathbb{R}_+}$ are $\mathcal{F}_t$-adapted, but $([M^*, M^*]_t)_{t\in\mathbb{R}_+}$ and $(\langle M^*, M^* \rangle_t)_{t\in\mathbb{R}_+}$ are not $\mathcal{F}^*_t$-adapted. The pairs

$$(\nu(dt, dx), \langle M^c, M^c \rangle) \quad \text{and} \quad (\nu^*(dt, dx), \langle M^{*c}, M^{*c} \rangle)$$

are called the local characteristics of $(M_t)_{t\in\mathbb{R}_+}$ and $(M^*_t)_{t\in\mathbb{R}_+}$, cf. [8] in the forward case. Denote by $(\langle M^d, M^d \rangle_t)_{t\in\mathbb{R}_+}$, $(\langle M^{*d}, M^{*d} \rangle_t)_{t\in\mathbb{R}_+}$ the conditional quadratic variations of $(M^d_t)_{t\in\mathbb{R}_+}$, $(M^{*d}_t)_{t\in\mathbb{R}_+}$, with

$$d\langle M^d, M^d \rangle_t = \int_{\mathbb{R}} |x|^2 \, \nu(dt, dx) \quad \text{and} \quad d\langle M^{*d}, M^{*d} \rangle_t = \int_{\mathbb{R}} |x|^2 \, \nu^*(dt, dx).$$

The conditional quadratic variations $(\langle M, M \rangle_t)_{t\in\mathbb{R}_+}$, $(\langle M^*, M^* \rangle_t)_{t\in\mathbb{R}_+}$ of $(M_t)_{t\in\mathbb{R}_+}$ and $(M^*_t)_{t\in\mathbb{R}_+}$ satisfy

$$\langle M, M \rangle_t = \langle M^c, M^c \rangle_t + \langle M^d, M^d \rangle_t \quad \text{and} \quad \langle M^*, M^* \rangle_t = \langle M^{*c}, M^{*c} \rangle_t + \langle M^{*d}, M^{*d} \rangle_t,$$

$t \in \mathbb{R}_+$. In the sequel, given $\eta$, resp. $\eta^*$, a forward, resp. backward, adapted and sufficiently integrable process, the notation $\int_0^t \eta_u \, dM_u$, resp. $\int_t^{+\infty} \eta^*_u \, d^* M^*_u$, will respectively denote the right-, resp. left-, continuous version of the indefinite stochastic integral, i.e. we have

$$\int_0^t \eta_u \, dM_u = \int_0^{t^+} \eta_u \, dM_u \quad \text{and} \quad \int_t^{+\infty} \eta^*_u \, d^* M^*_u = \int_{t^-}^{+\infty} \eta^*_u \, d^* M^*_u, \qquad t \in \mathbb{R}_+, \quad dP\text{-a.e.}$$

3 Convex concentration inequalities for martingales

In the sequel we assume that

$(M_t)_{t\in\mathbb{R}_+}$ is an $\mathcal{F}^*_t$-adapted, $\mathcal{F}_t$-forward martingale, (3.1)

and

$(M^*_t)_{t\in\mathbb{R}_+}$ is an $\mathcal{F}_t$-adapted, $\mathcal{F}^*_t$-backward martingale, (3.2)

whose characteristics have the form

$$\nu(du, dx) = \nu_u(dx) \, du \quad \text{and} \quad \nu^*(du, dx) = \nu^*_u(dx) \, du, \qquad (3.3)$$

and

$$d\langle M^c, M^c \rangle_t = |H_t|^2 \, dt \quad \text{and} \quad d\langle M^{*c}, M^{*c} \rangle_t = |H^*_t|^2 \, dt, \qquad (3.4)$$

where $(H_t)_{t\in\mathbb{R}_+}$, $(H^*_t)_{t\in\mathbb{R}_+}$ are respectively predictable with respect to $(\mathcal{F}_t)_{t\in\mathbb{R}_+}$ and $(\mathcal{F}^*_t)_{t\in\mathbb{R}_+}$. Hypotheses (3.1) and (3.2) may seem artificial but they are actually crucial to the proofs of our main results. Indeed, Theorem 3.2 and Theorem 3.3 are based on a forward/backward Itô type change of variable formula (Theorem 8.1 below) for $(M_t, M^*_t)_{t\in\mathbb{R}_+}$, in which (3.1) and (3.2) are needed in order to make sense of the integrals

$$\int_{s^+}^t \phi'(M_{u^-} + M^*_u) \, dM_u \quad \text{and} \quad \int_s^t \phi'(M_u + M^*_{u^+}) \, d^* M^*_u.$$

Note that in our main applications (see Sections 4, 5, 6 and 7), these hypotheses are fulfilled by construction of $\mathcal{F}_t$ and $\mathcal{F}^*_t$.

Recall the following lemma.

Lemma 3.1. Let $m_1$, $m_2$ be two measures on $\mathbb{R}$ such that $m_1([x, \infty)) \le m_2([x, \infty)) < \infty$, $x \in \mathbb{R}$. Then for all non-decreasing and $m_1$, $m_2$-integrable functions $f$ on $\mathbb{R}$ we have

$$\int_{-\infty}^{+\infty} f(x) \, m_1(dx) \le \int_{-\infty}^{+\infty} f(x) \, m_2(dx).$$

If $m_1$, $m_2$ are probability measures then the above property corresponds to stochastic domination for random variables with respective laws $m_1$, $m_2$.
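For discrete probability measures, the hypothesis of Lemma 3.1 is first-order stochastic dominance, and the conclusion can be checked directly. A small sketch (not from the paper), with two hypothetical three-point laws:

```python
def tail(m, x):
    """m([x, infinity)) for a discrete measure m given as {atom: mass}."""
    return sum(w for a, w in m.items() if a >= x)

def integral(f, m):
    """Integral of f against the discrete measure m."""
    return sum(f(a) * w for a, w in m.items())

# m2 dominates m1: m1([x, inf)) <= m2([x, inf)) for every x.
m1 = {0: 0.5, 1: 0.3, 2: 0.2}
m2 = {0: 0.2, 1: 0.3, 2: 0.5}
atoms = sorted(set(m1) | set(m2))
assert all(tail(m1, x) <= tail(m2, x) for x in atoms)

# Conclusion of Lemma 3.1 for a few non-decreasing test functions.
for f in (lambda x: x, lambda x: x**3, lambda x: min(x, 1.5)):
    assert integral(f, m1) <= integral(f, m2)
```

The same comparison fails for non-monotone $f$ in general, which is why the lemma is stated for non-decreasing functions only.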

Theorem 3.2. Let

$$\bar{\nu}_u(dx) = x \, \nu_u(dx), \qquad \bar{\nu}^*_u(dx) = x \, \nu^*_u(dx), \qquad u \in \mathbb{R}_+,$$

and assume that:

i) $\bar{\nu}_u([x, \infty)) \le \bar{\nu}^*_u([x, \infty)) < \infty$, $x \in \mathbb{R}$, $u \in \mathbb{R}_+$, and

ii) $|H_u| \le |H^*_u|$, $dP \, du$-a.e.

Then we have

$$E[\phi(M_t + M^*_t)] \le E[\phi(M_s + M^*_s)], \qquad 0 \le s \le t, \qquad (3.5)$$

for all convex functions $\phi : \mathbb{R} \to \mathbb{R}$.

Next is a different version of the same result, under $L^2$ hypotheses.


Theorem 3.3. Let

$$\tilde{\nu}_u(dx) = |x|^2 \nu_u(dx) + |H_u|^2 \delta_0(dx), \qquad \tilde{\nu}^*_u(dx) = |x|^2 \nu^*_u(dx) + |H^*_u|^2 \delta_0(dx), \qquad u \in \mathbb{R}_+,$$

and assume that

$$\tilde{\nu}_u([x, \infty)) \le \tilde{\nu}^*_u([x, \infty)) < \infty, \qquad x \in \mathbb{R}, \ u \in \mathbb{R}_+. \qquad (3.6)$$

Then we have

$$E[\phi(M_t + M^*_t)] \le E[\phi(M_s + M^*_s)], \qquad 0 \le s \le t, \qquad (3.7)$$

for all convex functions $\phi : \mathbb{R} \to \mathbb{R}$ such that $\phi'$ is convex.

Remark 3.4. Note that in both theorems, $(M_t)_{t \ge 0}$ and $(M^*_t)_{t \ge 0}$ do not have to be independent.

In the proofs we may assume that $\phi$ is $C^2$, since a convex $\phi$ can be approximated by an increasing sequence of $C^2$ convex Lipschitz functions, and the results can then be extended to the general case by an application of the monotone convergence theorem. In order to prove Theorem 3.2 and Theorem 3.3, we apply Itô's formula for forward/backward martingales (Theorem 8.1 in the Appendix, Section 8) to $f(x_1, x_2) = \phi(x_1 + x_2)$:

$$\begin{aligned}
\phi(M_t + M^*_t) = {} & \phi(M_s + M^*_s) + \int_{s^+}^t \phi'(M_{u^-} + M^*_u) \, dM_u + \frac{1}{2} \int_s^t \phi''(M_u + M^*_u) \, d\langle M^c, M^c \rangle_u \\
& + \sum_{s < u \le t} \big( \phi(M_{u^-} + M^*_u + \Delta M_u) - \phi(M_{u^-} + M^*_u) - \Delta M_u \, \phi'(M_{u^-} + M^*_u) \big) \\
& - \int_s^t \phi'(M_u + M^*_{u^+}) \, d^* M^*_u - \frac{1}{2} \int_s^t \phi''(M_u + M^*_u) \, d\langle M^{*c}, M^{*c} \rangle_u \\
& - \sum_{s \le u < t} \big( \phi(M_u + M^*_{u^+} + \Delta^* M^*_u) - \phi(M_u + M^*_{u^+}) - \Delta^* M^*_u \, \phi'(M_u + M^*_{u^+}) \big),
\end{aligned}$$

$0 \le s \le t$, where $d$ and $d^*$ denote the forward and backward Itô differentials, respectively defined as the limits of the Riemann sums

$$\sum_{i=1}^{n} (M_{t^n_i} - M_{t^n_{i-1}}) \, \phi'(M_{t^n_{i-1}} + M^*_{t^n_{i-1}}) \quad \text{and} \quad \sum_{i=0}^{n-1} (M^*_{t^n_i} - M^*_{t^n_{i+1}}) \, \phi'(M_{t^n_{i+1}} + M^*_{t^n_{i+1}})$$

for all refining sequences $\{s = t^n_0 \le t^n_1 \le \cdots \le t^n_{k_n} = t\}$, $n \ge 1$, of partitions of $[s, t]$ tending to the identity.

Proof of Theorem 3.2. Taking expectations on both sides of the above Itô formula we get

$$\begin{aligned}
E[\phi(M_t + M^*_t)] = {} & E[\phi(M_s + M^*_s)] + \frac{1}{2} E\left[ \int_s^t \phi''(M_u + M^*_u) \, d(\langle M^c, M^c \rangle_u - \langle M^{*c}, M^{*c} \rangle_u) \right] \\
& + E\left[ \int_s^t \int_{-\infty}^{+\infty} \big( \phi(M_u + M^*_u + x) - \phi(M_u + M^*_u) - x \, \phi'(M_u + M^*_u) \big) \, \nu_u(dx) \, du \right] \\
& - E\left[ \int_s^t \int_{-\infty}^{+\infty} \big( \phi(M_u + M^*_u + x) - \phi(M_u + M^*_u) - x \, \phi'(M_u + M^*_u) \big) \, \nu^*_u(dx) \, du \right] \\
= {} & E[\phi(M_s + M^*_s)] + \frac{1}{2} E\left[ \int_s^t \phi''(M_u + M^*_u) (|H_u|^2 - |H^*_u|^2) \, du \right] \\
& + E\left[ \int_s^t \int_{-\infty}^{+\infty} \varphi(x, M_u + M^*_u) \, (\bar{\nu}_u(dx) - \bar{\nu}^*_u(dx)) \, du \right],
\end{aligned}$$

where

$$\varphi(x, y) = \frac{\phi(x + y) - \phi(y) - x \phi'(y)}{x}, \qquad x, y \in \mathbb{R}.$$

The conclusion follows from the hypotheses and the fact that, since $\phi$ is convex, the function $x \mapsto \varphi(x, y)$ is non-decreasing in $x \in \mathbb{R}$ for all $y \in \mathbb{R}$. □

Proof of Theorem 3.3. Using the following version of Taylor's formula,

$$\phi(y + x) = \phi(y) + x \phi'(y) + |x|^2 \int_0^1 (1 - \tau) \, \phi''(y + \tau x) \, d\tau, \qquad x, y \in \mathbb{R},$$

which is valid for all $C^2$ functions $\phi$, we get

$$\begin{aligned}
E[\phi(M_t + M^*_t)] = {} & E[\phi(M_s + M^*_s)] + \frac{1}{2} E\left[ \int_s^t \phi''(M_u + M^*_u) (|H_u|^2 - |H^*_u|^2) \, du \right] \\
& + E\left[ \int_s^t \int_{-\infty}^{+\infty} |x|^2 \int_0^1 (1 - \tau) \, \phi''(M_u + M^*_u + \tau x) \, d\tau \, \nu_u(dx) \, du \right] \\
& - E\left[ \int_s^t \int_{-\infty}^{+\infty} |x|^2 \int_0^1 (1 - \tau) \, \phi''(M_u + M^*_u + \tau x) \, d\tau \, \nu^*_u(dx) \, du \right] \\
= {} & E[\phi(M_s + M^*_s)] + E\left[ \int_0^1 (1 - \tau) \int_s^t \int_{-\infty}^{+\infty} \phi''(M_u + M^*_u + \tau x) \, (\tilde{\nu}_u(dx) - \tilde{\nu}^*_u(dx)) \, du \, d\tau \right],
\end{aligned}$$

and the conclusion follows from the hypothesis and the fact that the convexity of $\phi'$ implies that $\phi''$ is non-decreasing. □

Note that if $\phi$ is $C^2$ and $\phi''$ is also convex, then it suffices to assume that $\tilde{\nu}_u$ is more convex concentrated than $\tilde{\nu}^*_u$, instead of hypothesis (3.6), in Theorem 3.3.

Remark 3.5. In case $|H_t| = |H^*_t|$ and $\nu_t = \nu^*_t$, $dP \, dt$-a.e., from the proofs of Theorem 3.2 and Theorem 3.3 we get the identity

$$E[\phi(M_t + M^*_t)] = E[\phi(M_s + M^*_s)], \qquad 0 \le s \le t, \qquad (3.8)$$

for all sufficiently integrable functions $\phi : \mathbb{R} \to \mathbb{R}$.

In particular, Relation (3.8) extends its natural counterpart in the independent increment case: given $(Z_s)_{s\in[0,t]}$, $(\tilde{Z}_s)_{s\in[0,t]}$ two independent copies of a Lévy process without drift, define the backward martingale $(Z^*_s)_{s\in[0,t]}$ as $Z^*_s = \tilde{Z}_{t-s}$, $s \in [0, t]$; then by convolution, $E[\phi(Z_s + Z^*_s)] = E[\phi(Z_t)]$ clearly does not depend on $s \in [0, t]$.

Remark 3.6. If $\phi$ is non-decreasing, the proofs and statements of Theorem 3.2, Theorem 3.3, Corollary 3.9 and Corollary 3.8 extend to semimartingales $(\hat{M}_t)_{t\in\mathbb{R}_+}$, $(\hat{M}^*_t)_{t\in\mathbb{R}_+}$ represented as

$$\hat{M}_t = M_t + \int_0^t \alpha_s \, ds \quad \text{and} \quad \hat{M}^*_t = M^*_t + \int_t^{+\infty} \beta_s \, ds, \qquad (3.9)$$

provided $(\alpha_t)_{t\in\mathbb{R}_+}$, $(\beta_t)_{t\in\mathbb{R}_+}$ are respectively $\mathcal{F}_t$- and $\mathcal{F}^*_t$-adapted with $\alpha_t \le \beta_t$, $dP \, dt$-a.e.

Let now $(\mathcal{F}^M_t)_{t\in\mathbb{R}_+}$, resp. $(\mathcal{F}^{*M^*}_t)_{t\in\mathbb{R}_+}$, denote the forward, resp. backward, filtration generated by $(M_t)_{t\in\mathbb{R}_+}$, resp. $(M^*_t)_{t\in\mathbb{R}_+}$.

Corollary 3.7. Under the hypotheses of Theorem 3.2, if further $E[M^*_t \mid \mathcal{F}^M_t] = 0$, $t \in \mathbb{R}_+$, then

$$E[\phi(M_t)] \le E[\phi(M_s + M^*_s)], \qquad 0 \le s \le t. \qquad (3.10)$$

Proof. From (3.5) we get

$$E[\phi(M_s + M^*_s)] \ge E[\phi(M_t + M^*_t)] = E\big[ E[\phi(M_t + M^*_t) \mid \mathcal{F}^M_t] \big] \ge E\big[ \phi\big( M_t + E[M^*_t \mid \mathcal{F}^M_t] \big) \big] = E[\phi(M_t)],$$

$0 \le s \le t$, where we used Jensen's inequality. □

In particular, if $M_0 = E[M_t]$ is deterministic (or $\mathcal{F}^M_0$ is the trivial $\sigma$-field), Corollary 3.7 shows that $M_t - E[M_t]$ is more concentrated than $M^*_0$:

$$E[\phi(M_t - E[M_t])] \le E[\phi(M^*_0)], \qquad t \ge 0.$$

The filtrations $(\mathcal{F}_t)_{t\in\mathbb{R}_+}$ and $(\mathcal{F}^*_t)_{t\in\mathbb{R}_+}$ considered in Theorem 3.2 can be taken as $\mathcal{F}_t = \mathcal{F}^{*M^*}_0 \vee \mathcal{F}^M_t$, $\mathcal{F}^*_t = \mathcal{F}^M_\infty \vee \mathcal{F}^{*M^*}_t$, $t \in \mathbb{R}_+$, provided $(M_t)_{t\in\mathbb{R}_+}$ and $(M^*_t)_{t\in\mathbb{R}_+}$ are independent.

In this case, if additionally we have $M^*_T = 0$, then $E[M^*_t \mid \mathcal{F}^M_t] = E[M^*_t] = E[M^*_T] = 0$, $0 \le t \le T$, hence the hypothesis of Corollary 3.7 is also satisfied. However the independence of $\langle M, M \rangle_t$ from $\langle M^*, M^* \rangle_t$, $t \in \mathbb{R}_+$, is not compatible (except in particular situations) with the assumptions imposed in Theorem 3.2.

In applications to convex concentration inequalities between random variables (admitting a predictable representation) and Poisson or Gaussian random variables, the independence of $(M_t)_{t\in\mathbb{R}_+}$ from $(M^*_t)_{t\in\mathbb{R}_+}$ will not be required; see Sections 4 and 5.


The case of bounded jumps

Assume now that $\nu^*(dt, dx)$ has the form

$$\nu^*(dt, dx) = \lambda^*_t \, \delta_k(dx) \, dt, \qquad (3.11)$$

where $k \in \mathbb{R}_+$ and $(\lambda^*_t)_{t\in\mathbb{R}_+}$ is a positive $\mathcal{F}^*_t$-predictable process. Let

$$\lambda_{1,t} = \int_{-\infty}^{+\infty} x \, \nu_t(dx), \qquad \lambda^2_{2,t} = \int_{-\infty}^{+\infty} |x|^2 \, \nu_t(dx), \qquad t \in \mathbb{R}_+,$$

denote respectively the compensator and quadratic variation of the jump part of $(M_t)_{t\in\mathbb{R}_+}$, under the respective assumptions

$$\int_{-\infty}^{+\infty} |x| \, \nu_t(dx) < \infty \quad \text{and} \quad \int_{-\infty}^{+\infty} |x|^2 \, \nu_t(dx) < \infty, \qquad (3.12)$$

$t \in \mathbb{R}_+$, $P$-a.s.

Corollary 3.8. Assume that $(M_t)_{t\in\mathbb{R}_+}$ and $(M^*_t)_{t\in\mathbb{R}_+}$ have jump characteristics satisfying (3.11) and (3.12), that $(M_t)_{t\in\mathbb{R}_+}$ is $\mathcal{F}^*_t$-adapted, and that $(M^*_t)_{t\in\mathbb{R}_+}$ is $\mathcal{F}_t$-adapted. Then we have

$$E[\phi(M_t + M^*_t)] \le E[\phi(M_s + M^*_s)], \qquad 0 \le s \le t, \qquad (3.13)$$

for all convex functions $\phi : \mathbb{R} \to \mathbb{R}$, provided any of the three following conditions is satisfied:

i) $0 \le \Delta M_t \le k$, $dP \, dt$-a.e., and $|H_t| \le |H^*_t|$, $\lambda_{1,t} \le k \lambda^*_t$, $dP \, dt$-a.e.;

ii) $\Delta M_t \le k$, $dP \, dt$-a.e., and $|H_t| \le |H^*_t|$, $\lambda^2_{2,t} \le k^2 \lambda^*_t$, $dP \, dt$-a.e.;

iii) $\Delta M_t \le 0$, $dP \, dt$-a.e., and $|H_t|^2 + \lambda^2_{2,t} \le |H^*_t|^2 + k^2 \lambda^*_t$, $dP \, dt$-a.e.;

with moreover $\phi'$ convex in cases ii) and iii).

Proof. The conditions $0 \le \Delta M_t \le k$, $\Delta M_t \le k$ and $\Delta M_t \le 0$ are respectively equivalent to $\nu_t([0, k]^c) = 0$, $\nu_t((k, \infty)) = 0$ and $\nu_t((0, \infty)) = 0$, hence under condition i) the result follows from Theorem 3.2, and under conditions ii)-iii) it is an application of Theorem 3.3. □

For example we may take $(M_t)_{t\in\mathbb{R}_+}$ and $(M^*_t)_{t\in\mathbb{R}_+}$ of the form

$$M_t = M_0 + \int_0^t H_s \, dW_s + \int_0^t \int_{-\infty}^{+\infty} x \, (\mu(ds, dx) - \nu_s(dx) \, ds), \qquad t \ge 0, \qquad (3.14)$$

where $(W_t)_{t\in\mathbb{R}_+}$ is a standard Brownian motion, and

$$M^*_t = \int_t^{+\infty} H^*_s \, dW^*_s + k \left( Z^*_t - \int_t^{+\infty} \lambda^*_s \, ds \right), \qquad (3.15)$$

where $(W^*_t)_{t\in\mathbb{R}_+}$ is a backward Brownian motion and $(Z^*_t)_{t\in\mathbb{R}_+}$ is a backward point process with intensity $(\lambda^*_t)_{t\in\mathbb{R}_+}$. However, in Section 5 we will consider an example for which the decomposition (3.15) does not hold.


The case of point processes

In particular, $(M_t)_{t\in\mathbb{R}_+}$ and $(M^*_t)_{t\in\mathbb{R}_+}$ can be taken as

$$M_t = M_0 + \int_0^t H_s \, dW_s + \int_0^t J_s \, (dZ_s - \lambda_s \, ds), \qquad t \in \mathbb{R}_+, \qquad (3.16)$$

and

$$M^*_t = \int_t^{+\infty} H^*_s \, dW^*_s + \int_t^{+\infty} J^*_s \, (d^* Z^*_s - \lambda^*_s \, ds), \qquad t \in \mathbb{R}_+, \qquad (3.17)$$

where $(W_t)_{t\in\mathbb{R}_+}$ is a standard Brownian motion, $(Z_t)_{t\in\mathbb{R}_+}$ is a point process with intensity $(\lambda_t)_{t\in\mathbb{R}_+}$, $(W^*_t)_{t\in\mathbb{R}_+}$ is a backward standard Brownian motion, and $(Z^*_t)_{t\in\mathbb{R}_+}$ is a backward point process with intensity $(\lambda^*_t)_{t\in\mathbb{R}_+}$, and $(H_t)_{t\in\mathbb{R}_+}$, $(J_t)_{t\in\mathbb{R}_+}$, resp. $(H^*_t)_{t\in\mathbb{R}_+}$, $(J^*_t)_{t\in\mathbb{R}_+}$, are predictable with respect to $(\mathcal{F}_t)_{t\in\mathbb{R}_+}$, resp. $(\mathcal{F}^*_t)_{t\in\mathbb{R}_+}$.

In this case, taking

$$\nu(dt, dx) = \lambda_t \, \delta_{J_t}(dx) \, dt \quad \text{and} \quad \nu^*(dt, dx) = \lambda^*_t \, \delta_{J^*_t}(dx) \, dt \qquad (3.18)$$

in Theorems 3.2 and 3.3 yields the following corollary.

Corollary 3.9. Let $(M_t)_{t\in\mathbb{R}_+}$, $(M^*_t)_{t\in\mathbb{R}_+}$ have the jump characteristics (3.18), and assume that $(M_t)_{t\in\mathbb{R}_+}$ is $\mathcal{F}^*_t$-adapted and $(M^*_t)_{t\in\mathbb{R}_+}$ is $\mathcal{F}_t$-adapted. Then we have

$$E[\phi(M_t + M^*_t)] \le E[\phi(M_s + M^*_s)], \qquad 0 \le s \le t, \qquad (3.19)$$

for all convex functions $\phi : \mathbb{R} \to \mathbb{R}$, provided any of the three following conditions is satisfied:

i) $0 \le J_t \le J^*_t$, $dP \, dt$-a.e., and $|H_t| \le |H^*_t|$, $\lambda_t J_t \le \lambda^*_t J^*_t$, $dP \, dt$-a.e.;

ii) $J_t \le J^*_t$, $dP \, dt$-a.e., and $|H_t| \le |H^*_t|$, $\lambda_t |J_t|^2 \le \lambda^*_t |J^*_t|^2$, $dP \, dt$-a.e.;

iii) $J_t \le 0 \le J^*_t$, $dP \, dt$-a.e., and $|H_t|^2 + \lambda_t |J_t|^2 \le |H^*_t|^2 + \lambda^*_t |J^*_t|^2$, $dP \, dt$-a.e.;

with moreover $\phi'$ convex in cases ii) and iii).

Note that condition i) in Corollary 3.9 can be replaced with the stronger condition:

i′) $0 \le J_t \le J^*_t$, $dP \, dt$-a.e., and $|H_t| \le |H^*_t|$, $\lambda_t \le \lambda^*_t$, $dP \, dt$-a.e.


4 Application to point processes

Let $(W_t)_{t\in\mathbb{R}_+}$ and $(Z_t)_{t\in\mathbb{R}_+}$ be a standard Brownian motion and a point process, generating a filtration $(\mathcal{F}^M_t)_{t\in\mathbb{R}_+}$. We will assume that $(W_t)_{t\in\mathbb{R}_+}$ is also an $\mathcal{F}^M_t$-Brownian motion and that $(Z_t)_{t\in\mathbb{R}_+}$ has compensator $(\lambda_t)_{t\in\mathbb{R}_+}$ with respect to $(\mathcal{F}^M_t)_{t\in\mathbb{R}_+}$, which does not in general require the independence of $(W_t)_{t\in\mathbb{R}_+}$ from $(Z_t)_{t\in\mathbb{R}_+}$. Consider $F$ a random variable with the representation

$$F = E[F] + \int_0^{+\infty} H_t \, dW_t + \int_0^{+\infty} J_t \, (dZ_t - \lambda_t \, dt), \qquad (4.1)$$

where $(H_t)_{t\in\mathbb{R}_+}$ is a square-integrable $\mathcal{F}^M_t$-predictable process and $(J_t)_{t\in\mathbb{R}_+}$ is an $\mathcal{F}^M_t$-predictable process which is either square-integrable, or positive and integrable. Theorem 4.1 is a consequence of Corollary 3.9 above, and shows that the possible dependence of $(W_t)_{t\in\mathbb{R}_+}$ and $(Z_t)_{t\in\mathbb{R}_+}$ can be decoupled in terms of independent Gaussian and Poisson random variables.

Note that inequality (4.2) below is weaker than (4.3), but it holds for a wider class of functions, i.e. for all convex functions instead of all convex functions having a convex derivative.

Theorem 4.1. Let $F$ have the representation (4.1):

$$F = E[F] + \int_0^{+\infty} H_t \, dW_t + \int_0^{+\infty} J_t \, (dZ_t - \lambda_t \, dt),$$

and let $\tilde{N}(c)$, $W(\beta^2)$ be independent random variables with compensated Poisson law of intensity $c > 0$ and centered Gaussian law with variance $\beta^2 \ge 0$, respectively.

i) Assume that $0 \le J_t \le k$, $dP \, dt$-a.e., for some $k > 0$, and let

$$\beta_1^2 = \left\| \int_0^{+\infty} |H_t|^2 \, dt \right\|_\infty \quad \text{and} \quad \alpha_1 = \left\| \int_0^{+\infty} J_t \lambda_t \, dt \right\|_\infty.$$

Then we have

$$E[\phi(F - E[F])] \le E\big[ \phi\big( W(\beta_1^2) + k \tilde{N}(\alpha_1/k) \big) \big], \qquad (4.2)$$

for all convex functions $\phi : \mathbb{R} \to \mathbb{R}$.

ii) Assume that $J_t \le k$, $dP \, dt$-a.e., for some $k > 0$, and let

$$\beta_2^2 = \left\| \int_0^{+\infty} |H_t|^2 \, dt \right\|_\infty \quad \text{and} \quad \alpha_2^2 = \left\| \int_0^{+\infty} |J_t|^2 \lambda_t \, dt \right\|_\infty.$$

Then we have

$$E[\phi(F - E[F])] \le E\big[ \phi\big( W(\beta_2^2) + k \tilde{N}(\alpha_2^2/k^2) \big) \big], \qquad (4.3)$$

for all convex functions $\phi : \mathbb{R} \to \mathbb{R}$ such that $\phi'$ is convex.

iii) Assume that $J_t \le 0$, $dP \, dt$-a.e., and let

$$\beta_3^2 = \left\| \int_0^{+\infty} |H_t|^2 \, dt + \int_0^{+\infty} |J_t|^2 \lambda_t \, dt \right\|_\infty.$$

Then we have

$$E[\phi(F - E[F])] \le E\big[ \phi(W(\beta_3^2)) \big], \qquad (4.4)$$

for all convex functions $\phi : \mathbb{R} \to \mathbb{R}$ such that $\phi'$ is convex.


Proof. Consider the $\mathcal{F}^M_t$-martingale

$$M_t = E[F \mid \mathcal{F}^M_t] - E[F] = \int_0^t H_s \, dW_s + \int_0^t J_s \, (dZ_s - \lambda_s \, ds), \qquad t \ge 0,$$

and let $(\hat{N}_s)_{s\in\mathbb{R}_+}$, $(\hat{W}_s)_{s\in\mathbb{R}_+}$ respectively denote a left-continuous standard Poisson process and a standard Brownian motion which are assumed to be mutually independent, and also independent of $(\mathcal{F}^M_s)_{s\in\mathbb{R}_+}$.

i)-ii) For $p = 1, 2$, let the filtrations $(\mathcal{F}_t)_{t\in\mathbb{R}_+}$ and $(\mathcal{F}^*_t)_{t\in\mathbb{R}_+}$ be defined by

$$\mathcal{F}^*_t = \mathcal{F}^M_\infty \vee \sigma\big( \hat{W}_{\beta_p^2} - \hat{W}_{V_p^2(s)}, \ \hat{N}_{\alpha_p^p/k^p} - \hat{N}_{U_p^p(s)/k^p} \ : \ s \ge t \big)$$

and $\mathcal{F}_t = \sigma(\hat{W}_s, \hat{N}_s : s \ge 0) \vee \mathcal{F}^M_t$, $t \in \mathbb{R}_+$, and let

$$M^*_t = \hat{W}_{\beta_p^2} - \hat{W}_{V_p^2(t)} + k \big( \hat{N}_{\alpha_p^p/k^p} - \hat{N}_{U_p^p(t)/k^p} \big) - (\alpha_p^p - U_p^p(t))/k^{p-1}, \qquad (4.5)$$

where

$$V_p^2(t) = \int_0^t |H_s|^2 \, ds \quad \text{and} \quad U_p^p(t) = \int_0^t J_s^p \lambda_s \, ds, \qquad P\text{-a.s.}, \quad t \ge 0.$$

Then $(M^*_t)_{t\in\mathbb{R}_+}$ satisfies the hypotheses of Corollary 3.9-i)-ii), as well as the condition $E[M^*_t \mid \mathcal{F}^M_t] = 0$, $t \in \mathbb{R}_+$, with $H^*_s = H_s$, $J^*_s = k$, $\lambda^*_s = J_s^p \lambda_s / k^p$, $dP \, ds$-a.e., hence

$$E[\phi(M_t)] \le E[\phi(M^*_0)],$$

and letting $t$ go to infinity we obtain (4.2) and (4.3), respectively for $p = 1$ and $p = 2$.

iii) Let

$$M^*_s = \hat{W}_{\beta_3^2} - \hat{W}_{U_3^2(s)}, \qquad (4.6)$$

where

$$U_3^2(s) = \int_0^s |H_u|^2 \, du + \int_0^s |J_u|^2 \lambda_u \, du, \qquad P\text{-a.s.}$$

Then $(M^*_t)_{t\in\mathbb{R}_+}$ satisfies the hypotheses of Corollary 3.9-iii) with $|H^*_s|^2 = |H_s|^2 + |J_s|^2 \lambda_s$ and $\lambda^*_s = J^*_s = 0$, $dP \, ds$-a.e., hence

$$E[\phi(M_t)] \le E[\phi(M^*_0)],$$

and letting $t$ go to infinity we obtain (4.4). □

Remark 4.2. The proof of Theorem 4.1 can also be obtained from Corollary 3.8.

Proof. Let

$$\mu(dt, dx) = \sum_{\Delta Z_s \ne 0} \delta_{(s, J_s)}(dt, dx), \qquad \nu_t(dx) = \lambda_t \, \delta_{J_t}(dx).$$

i)-ii) In both cases $p = 1, 2$, let $(\mathcal{F}_t)_{t\in\mathbb{R}_+}$, $(\mathcal{F}^*_t)_{t\in\mathbb{R}_+}$ and $(M^*_t)_{t\in\mathbb{R}_+}$ be defined as in (4.5), with $V_p^2(t) = \int_0^t |H_s|^2 \, ds$ and $U_p^p(t) = \int_0^t |J_s|^p \lambda_s \, ds$, $P$-a.s., $t \ge 0$. Then $(M^*_t)_{t\in\mathbb{R}_+}$ satisfies the hypotheses of Corollary 3.8-i)-ii), with $H^*_s = H_s$, $\lambda^*_s = |J_s|^p \lambda_s / k^p$, $dP \, ds$-a.e.

iii) Let $(M^*_s)_{s\in\mathbb{R}_+}$ be defined as in (4.6), and let $U_3^2(s) = \int_0^s |H_u|^2 \, du + \int_0^s |J_u|^2 \lambda_u \, du$, $|H^*_s|^2 = |H_s|^2 + |J_s|^2 \lambda_s$ and $\nu^*_s = 0$, $dP \, ds$-a.e. □


In the pure jump case, Theorem 4.1-ii) yields

$$P(M_T \ge y) \le \exp\left( \frac{y}{k} - \left( \frac{y}{k} + \frac{\alpha_2^2}{k^2} \right) \log\left( 1 + \frac{ky}{\alpha_2^2} \right) \right) \le \exp\left( -\frac{y}{2k} \log\left( 1 + \frac{ky}{\alpha_2^2} \right) \right),$$

$y > 0$, with $\alpha_2^2 = \|\langle M, M \rangle_T\|_\infty$, cf. Theorem 23.17 of [9], although some differences in the hypotheses make the results not directly comparable: here no lower bound is assumed on jump sizes, and the presence of a continuous component is treated in a different way.
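The second inequality above amounts, with $u = y/k$ and $v = \alpha_2^2/k^2$, to $u - (u + v)\log(1 + u/v) \le -(u/2)\log(1 + u/v)$, i.e. to the standard bound $\log(1 + r) \ge 2r/(2 + r)$ with $r = u/v$. A small grid check (not from the paper; the parameter values are illustrative):

```python
import math

def exponents(y, k, alpha2):
    """Exponents of the two tail bounds in the pure jump case;
    alpha2 stands for alpha_2^2 (the squared quantity)."""
    u, v = y / k, alpha2 / k**2
    first = u - (u + v) * math.log(1 + u / v)    # sharper exponent
    second = -(u / 2) * math.log(1 + u / v)      # weaker, simpler exponent
    return first, second

# The first exponent never exceeds the second on a grid of y values.
for y in [0.1 * i for i in range(1, 200)]:
    f, s = exponents(y, k=0.5, alpha2=1.3)
    assert f <= s + 1e-15
```

Both exponents are negative for $y > 0$, and the gap between them grows with $y$, so the simpler bound only loses a constant factor in the exponent.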

The results of this section and the next one apply directly to solutions of stochastic differential equations such as

$$dX_t = a(t, X_t) \, dW_t + b(t, X_t) \, (dZ_t - \lambda_t \, dt),$$

with $H_t = a(t, X_t)$, $J_t = b(t, X_t)$, $t \in \mathbb{R}_+$, for which the hypotheses can be formulated directly on the coefficients $a(\cdot, \cdot)$, $b(\cdot, \cdot)$, without explicit knowledge of the solution.

5 Application to Poisson random measures

Since a large family of point processes can be represented as stochastic integrals with respect to Poisson random measures (see e.g. [7], Section 4, Ch. XIV), it is natural to investigate the consequences of Theorem 3.2 in the setting of Poisson random measures. Let $\sigma$ be a Radon measure on $\mathbb{R}^d$, diffuse on $\mathbb{R}^d \setminus \{0\}$, such that $\sigma(\{0\}) = 1$ and

$$\int_{\mathbb{R}^d \setminus \{0\}} (|x|^2 \wedge 1) \, \sigma(dx) < \infty,$$

and consider a random measure $\omega(dt, dx)$ of the form

$$\omega(dt, dx) = \sum_{i \in \mathbb{N}} \delta_{(t_i, x_i)}(dt, dx),$$

identified with its (locally finite) support $\{(t_i, x_i)\}_{i\in\mathbb{N}}$. We assume that $\omega(dt, dx)$ is Poisson distributed with intensity $dt \, \sigma(dx)$ on $\mathbb{R}_+ \times \mathbb{R}^d \setminus \{0\}$, and consider a standard Brownian motion $(W_t)_{t\in\mathbb{R}_+}$, independent of $\omega(dt, dx)$, under a probability $P$ on $\Omega$. Let

$$\mathcal{F}_t = \sigma\big( W_s, \ \omega([0, s] \times A) \ : \ 0 \le s \le t, \ A \in \mathcal{B}_b(\mathbb{R}^d \setminus \{0\}) \big), \qquad t \in \mathbb{R}_+,$$

where $\mathcal{B}_b(\mathbb{R}^d \setminus \{0\}) = \{A \in \mathcal{B}(\mathbb{R}^d \setminus \{0\}) : \sigma(A) < \infty\}$. The stochastic integral of a square-integrable $\mathcal{F}_t$-predictable process $u \in L^2(\Omega \times \mathbb{R}_+ \times \mathbb{R}^d, dP \times dt \times d\sigma)$ is written as

$$\int_0^{+\infty} u(t, 0) \, dW_t + \int_{\mathbb{R}_+ \times \mathbb{R}^d \setminus \{0\}} u(t, x) \, (\omega(dt, dx) - \sigma(dx) \, dt), \qquad (5.1)$$

and satisfies the Itô isometry

$$E\left[ \left( \int_0^{+\infty} u(t, 0) \, dW_t + \int_0^{+\infty} \int_{\mathbb{R}^d \setminus \{0\}} u(t, x) \, (\omega(dt, dx) - \sigma(dx) \, dt) \right)^2 \right]$$

$$= E\left[ \int_0^{+\infty} u^2(t, 0) \, dt \right] + E\left[ \int_{\mathbb{R}_+ \times \mathbb{R}^d \setminus \{0\}} u^2(t, x) \, \sigma(dx) \, dt \right] = E\left[ \int_{\mathbb{R}_+ \times \mathbb{R}^d} u^2(t, x) \, \sigma(dx) \, dt \right]. \qquad (5.2)$$

Recall that due to the Itô isometry, the predictable and adapted versions of $u$ can be used indifferently in the stochastic integral (5.1), cf. p. 199 of [5] for details. When $u \in L^2(\mathbb{R}_+ \times \mathbb{R}^d, dt \times d\sigma)$, the characteristic function of

$$I_1(u) := \int_0^{+\infty} u(t, 0) \, dW_t + \int_{\mathbb{R}_+ \times \mathbb{R}^d \setminus \{0\}} u(t, x) \, (\omega(dt, dx) - \sigma(dx) \, dt)$$

is given by the Lévy-Khintchine formula

$$E\big[ e^{i I_1(u)} \big] = \exp\left( -\frac{1}{2} \int_0^{+\infty} u^2(t, 0) \, dt + \int_{\mathbb{R}_+ \times \mathbb{R}^d \setminus \{0\}} \big( e^{i u(t, x)} - 1 - i u(t, x) \big) \, \sigma(dx) \, dt \right).$$

Theorem 5.1. Let $F$ have the representation

$$F = E[F] + \int_0^{+\infty} H_s \, dW_s + \int_0^{+\infty} \int_{\mathbb{R}^d \setminus \{0\}} J_{u,x} \, (\omega(du, dx) - \sigma(dx) \, du),$$

where $(H_t)_{t\in\mathbb{R}_+} \in L^2(\Omega \times \mathbb{R}_+)$ and $(J_{t,x})_{(t,x)\in\mathbb{R}_+\times\mathbb{R}^d}$ are $\mathcal{F}_t$-predictable, with $(J_{t,x})_{(t,x)\in\mathbb{R}_+\times\mathbb{R}^d} \in L^1(\Omega \times \mathbb{R}_+ \times \mathbb{R}^d \setminus \{0\}, dP \times dt \times d\sigma)$ and $(J_{t,x})_{(t,x)\in\mathbb{R}_+\times\mathbb{R}^d} \in L^2(\Omega \times \mathbb{R}_+ \times \mathbb{R}^d \setminus \{0\}, dP \times dt \times d\sigma)$, respectively in (i) and in (ii)-(iii) below.

i) Assume that $0 \le J_{u,x} \le k$, $dP \, \sigma(dx) \, du$-a.e., for some $k > 0$, and let

$$\beta_1^2 = \left\| \int_0^{+\infty} |H_u|^2 \, du \right\|_\infty \quad \text{and} \quad \alpha_1(x) = \left\| \int_0^{+\infty} J_{u,x} \, du \right\|_\infty, \qquad \sigma(dx)\text{-a.e.}$$

Then we have

$$E[\phi(F - E[F])] \le E\left[ \phi\left( W(\beta_1^2) + k \tilde{N}\left( \int_{\mathbb{R}^d \setminus \{0\}} \frac{\alpha_1(x)}{k} \, \sigma(dx) \right) \right) \right],$$

for all convex functions $\phi : \mathbb{R} \to \mathbb{R}$.

ii) Assume that $J_{u,x} \le k$, $dP \, \sigma(dx) \, du$-a.e., for some $k > 0$, and let

$$\beta_2^2 = \left\| \int_0^{+\infty} |H_u|^2 \, du \right\|_\infty \quad \text{and} \quad \alpha_2^2(x) = \left\| \int_0^{+\infty} |J_{u,x}|^2 \, du \right\|_\infty, \qquad \sigma(dx)\text{-a.e.}$$

Then we have

$$E[\phi(F - E[F])] \le E\left[ \phi\left( W(\beta_2^2) + k \tilde{N}\left( \int_{\mathbb{R}^d \setminus \{0\}} \frac{\alpha_2^2(x)}{k^2} \, \sigma(dx) \right) \right) \right],$$

for all convex functions $\phi : \mathbb{R} \to \mathbb{R}$ such that $\phi'$ is convex.
