Electronic Journal of Probability

Vol. 11 (2006), Paper no. 10, pages 276–300.

Journal URL: http://www.math.washington.edu/~ejpecp/

## Computation of Greeks using Malliavin’s calculus in jump type market models

Marie-Pierre BAVOUZET-MOREL
INRIA Rocquencourt, Projet MATHFI
Domaine de Voluceau-Rocquencourt, 78150 Le Chesnay
marie-pierre.bavouzet@inria.fr

Marouen MESSAOUD
IXIS, 47 quai d'Austerlitz, 75648 Paris Cedex 13
and INRIA Rocquencourt, Projet MATHFI
mmessaoud@ixis-cib.com

Abstract

We use the Malliavin calculus for Poisson processes in order to compute sensitivities of European and Asian options whose underlying follows a jump type diffusion.

The main point is to establish an integration by parts formula (similar to the one in the Malliavin calculus) for a general multidimensional random variable whose law is absolutely continuous with a differentiable density. We give an explicit expression of the differential operators involved in this formula, which makes it possible to simulate them and consequently to run a Monte Carlo algorithm.

Key words: Malliavin calculus, Monte-Carlo algorithm, Euler scheme, compound Poisson process, sensitivity analysis, European options, Asian options.

AMS 2000 Subject Classification: Primary 60H07, 60J75, 65C05.

Submitted to EJP on October 7, 2005. Final version accepted on February 28, 2006.

### 1 Introduction

In recent years, following the pioneering papers [9], [8], a lot of work has been done on numerical applications of the stochastic variational calculus (Malliavin calculus). This mainly concerns applications in mathematical finance: computation of conditional expectations (which appear, for example, in American option pricing) and of sensitivities (the so-called Greeks). The models at hand are usually log-normal type diffusions, for which one may use the standard Malliavin calculus. But nowadays people are more and more interested in jump type diffusions (see [5]), and then one has to use the stochastic variational calculus corresponding to Poisson point processes. Such a calculus has already been developed in [2] for the noise coming from the jump amplitudes and in [4] for the jump times. More recently, [3] gives a unified approach using the language of Dirichlet forms. Another point of view, based on chaos decomposition, may be found in [7], [1], [12], [14] and [11].

The aim of our paper is to give a concrete application of the Malliavin calculus approach to sensitivity computations (Greeks) for jump diffusion models. We give two examples: in the first one, we use the Malliavin calculus with respect to the jump amplitudes. In the second one, we add a Brownian part and differentiate with respect to both the jump amplitudes and the Wiener increments.

The Malliavin integration by parts formula used in this paper is an elementary one. Notice that any numerical scheme used in a Monte Carlo algorithm appears as a function of a finite number of random variables $H_1, \ldots, H_n$. It turns out that, if the law of these random variables is absolutely continuous with respect to the Lebesgue measure and has a smooth density, then the strategy developed in the Malliavin calculus may be implemented for $H_1, \ldots, H_n$ and an integration by parts formula is derived. This is not specific to the Brownian motion or to a Poisson point process but represents an elementary abstract calculus.

The paper is organized as follows. In Section 2 we present the problem: we give the stochastic equation satisfied by the underlying asset, which is a one-dimensional jump diffusion (one may consider multi-dimensional diffusions as well, but we stay in the one-dimensional setting to avoid technical difficulties). Then we make precise how an integration by parts formula may be used in the Monte Carlo algorithm to compute sensitivities. In Section 3 we give the integration by parts formula. In Section 4 we use the Malliavin calculus with respect to the jump amplitudes in order to compute sensitivities. In Section 5 we consider a Merton model in which stochastic integrals driven by a Brownian motion and by a compound Poisson process appear.

We have the choice of using the Malliavin calculus with respect to the Brownian motion, to the jump amplitudes, or to both of them. We implement the three algorithms and compare their empirical variances. It turns out that the best-performing algorithm (the one with the smallest variance) is the one based on both the Brownian motion and the jump amplitudes. So the numerical experiments (there is no theoretical result) indicate that one should use the integration by parts formula with respect to all the noise available in the model. The numerical results are contained in Section 6. We compare the Malliavin method with the finite difference method. For European and Asian call options the results are rather similar, but for digital options the results given by the Malliavin method are much better than those given by the finite difference method. This can be explained by the stronger irregularity of the payoff function.

### 2 The problem

We compute the sensitivities of an option with payoff $\phi$, where the asset $(S_t)_{t\ge 0}$ follows a one-dimensional jump diffusion driven by a compound Poisson process. Let us make our problem precise.

Let $N$ be a compound Poisson process. We denote by $(T_i)_{i\in\mathbb{N}}$ the jump times of $t \to N_t$, and by $(J_t)_{t\ge 0}$ the corresponding standard Poisson process with parameter $\lambda > 0$; that is, $J_t$ counts the number of jumps up to time $t$ and, for all $n \ge 1$, $T_n - T_{n-1}$ has exponential distribution with parameter $\lambda$. For all $i \in \mathbb{N}^*$, we define $\Delta_i = N_{T_i} - N_{T_{i-1}}$, so $\Delta_i$ represents the jump amplitude at time $T_i$. The random variables $(\Delta_i)_{i\in\mathbb{N}^*}$ are independent and identically distributed. We assume that the law of $\Delta_i$ is absolutely continuous with respect to the Lebesgue measure and we denote by $\rho$ its density, that is, $\Delta_i \sim \rho(a)\,da$ for all $i \in \mathbb{N}^*$.
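As an illustration of this setup, the following sketch (our own, not from the paper) simulates the jump times $T_i$ on $[0, T]$ with exponential($\lambda$) inter-arrival times, together with i.i.d. jump amplitudes; we take $\rho$ to be the standard normal density, as in the numerical section.

```python
import numpy as np

def simulate_compound_poisson(lam, T, rng):
    """Jump times on [0, T] (exponential(lam) inter-arrivals) and
    i.i.d. N(0, 1) jump amplitudes Delta_i."""
    times = []
    t = rng.exponential(1.0 / lam)   # scale = 1/lam, i.e. parameter lam
    while t < T:
        times.append(t)
        t += rng.exponential(1.0 / lam)
    times = np.array(times)
    amplitudes = rng.standard_normal(len(times))
    return times, amplitudes

def N_t(t, times, amplitudes):
    """Value of the compound Poisson process N at time t."""
    return amplitudes[times <= t].sum()

rng = np.random.default_rng(0)
times, amps = simulate_compound_poisson(lam=2.0, T=1.0, rng=rng)
```

Here `simulate_compound_poisson` and `N_t` are hypothetical helper names; the path of $N$ is piecewise constant and jumps by $\Delta_i$ at each $T_i$.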

We deal with two models. In the first one, the asset $S_t$ is a pure jump diffusion, solution of the SDE

$$S_t = \beta + \sum_{i=1}^{J_t} c(T_i, \Delta_i, S_{T_i^-}) + \int_0^t b(u, S_u)\,du. \qquad (1)$$

In the second one, we add a Brownian part:

$$S_t = \beta + \sum_{i=1}^{J_t} c(T_i, \Delta_i, S_{T_i^-}) + \int_0^t b(u, S_u)\,du + \int_0^t \sigma(u, S_u)\,dW_u. \qquad (2)$$

Our aim is to compute $\partial_\beta E(\phi(S_T))$ (respectively $\partial_\beta E\big(\phi\big(\frac{1}{T}\int_0^T S_u\,du\big)\big)$), that is, the Delta of a European (respectively Asian) option with payoff $\phi$. In several papers (such as [8], [9], [10]), a Malliavin calculus approach is used. We will follow a similar strategy in our frame, using Malliavin calculus for jump type diffusions. In the case of the European option, for example, we write

$$\frac{\partial}{\partial\beta} E(\phi(S_T)) = E\Big(\phi'(S_T)\frac{\partial S_T}{\partial\beta}\Big) = E\Big(\phi'(S_T)\frac{\partial S_T}{\partial\beta}\,1_{\{J_T=0\}}\Big) + \sum_{n=1}^{\infty} E\Big(E\Big(\phi'(S_T)\frac{\partial S_T}{\partial\beta}\,\Big|\,\sigma(T_i, i\in\mathbb{N})\Big)1_{\{J_T=n\}}\Big).$$

On {J_{T} = 0} there is no jump on ]0, T]. Thus, in the case of the first model (1), S_{T} and ^{∂S}_{∂β}^{T}
solve a deterministic integral equation. In the examples that we consider in this paper, the
solutions of these equations are explicit, so that this term is explicitly known. Concerning the
Merton’s model (2), we use the integration by parts formula of the standard Malliavin calculus
(based on the Wiener increments) and we obtain

E

φ^{0}(ST)∂ST

∂β 1_{{J}_{T}_{=0}}

= E φ(ST)H_{0}^{W}(ST, ∂_{β}ST)1_{{J}_{T}_{=0}}

.

Assume now that $J_T = n \neq 0$. Then, using an integration by parts formula on $\{J_T = n\}$, we write

$$E\Big(E\Big(\phi'(S_T)\frac{\partial S_T}{\partial\beta}\,\Big|\,\sigma(T_i, i\in\mathbb{N})\Big)1_{\{J_T=n\}}\Big) = E\Big(E\Big(\phi(S_T)\,H_n\Big(S_T, \frac{\partial S_T}{\partial\beta}\Big)\,\Big|\,\sigma(T_i, i\in\mathbb{N})\Big)1_{\{J_T=n\}}\Big),$$

where $H_n$ is a weight involving the "Malliavin derivatives" of $S_T$ and of $\frac{\partial S_T}{\partial\beta}$. These differential operators are similar to those in [2], but the frame here is much simpler, since there is no accumulation of small jumps. So we will derive this integration by parts formula using elementary arguments. We obtain

$$\sum_{n=1}^{\infty} E\Big(E\Big(\phi'(S_T)\frac{\partial S_T}{\partial\beta}\,\Big|\,\sigma(T_i, i\in\mathbb{N})\Big)1_{\{J_T=n\}}\Big) = E\Big(\phi(S_T)\,H_{J_T}\Big(S_T, \frac{\partial S_T}{\partial\beta}\Big)1_{\{J_T\ge 1\}}\Big).$$
In order to employ this formula in a Monte-Carlo algorithm, we proceed as follows: we simulate the jump times $(T_n^i)_{n\in\mathbb{N}}$, $i = 1, \ldots, M$, and the jump amplitudes $(\Delta_n^i)_{n\in\mathbb{N}}$, $i = 1, \ldots, M$, and we compute the corresponding $J_T^i$, $S_T^i$ and $H^i_{J_T^i}$. Then we write

$$E\Big(\phi(S_T)\,H_{J_T}\Big(S_T, \frac{\partial S_T}{\partial\beta}\Big)1_{\{J_T\ge 1\}}\Big) \simeq \frac{1}{M}\sum_{i=1}^{M} \phi(S_T^i)\,H^i_{J_T^i}\,1_{\{J_T^i\ge 1\}}.$$

In the case of the Merton model we proceed as follows: we first simulate the jump times $T_i$, $i \in \mathbb{N}$. Once they are fixed, we construct an Euler scheme in which the times $T_i$, $i \in \mathbb{N}$, are included in the discretization grid. Then we simulate the jump amplitudes and the Brownian increments, and we thus perform the Monte-Carlo algorithm.
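The grid construction just described (simulate the $T_i$ first, then refine a uniform grid so that every $T_i$ is a grid point) can be sketched as follows; `build_grid` is a hypothetical helper, not from the paper.

```python
import numpy as np

def build_grid(jump_times, T, n_steps):
    """Merge the simulated jump times into a uniform grid on [0, T],
    so that each T_i coincides with some grid point t_{m_i}."""
    base = np.linspace(0.0, T, n_steps + 1)
    # union1d returns the sorted union without duplicates
    return np.union1d(base, np.asarray(jump_times))

grid = build_grid([0.31, 0.77], T=1.0, n_steps=10)
```

The Euler scheme is then run on `grid`, applying the jump $c(T_i, \Delta_i, \hat S_{T_i^-})$ exactly at the grid points that equal a $T_i$.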

### 3 Malliavin calculus for simple functionals

On a probability space $(\Omega, \mathcal{F}, P)$ we consider a sequence of independent random variables $(V_n, n \in \mathbb{N}^*)$. We suppose that for all $n \ge 1$, $V_n$ has moments of any order.

Hypothesis 3.1 For every $n \in \mathbb{N}^*$, the law of $V_n$ is absolutely continuous with respect to the Lebesgue measure and has a density $p_n$ which is continuously differentiable on $\mathbb{R}$ and such that, for all $k \in \mathbb{N}$, $\lim_{y\to\pm\infty} |y|^k p_n(y) = 0$. We also assume that $\partial_y \ln(p_n(y)) = \frac{\partial_y p_n(y)}{p_n(y)}$ has at most polynomial growth.

We introduce some notation. For $k \ge 1$, we denote by $C_\uparrow^k(\mathbb{R}^n)$ the space of functions $f: \mathbb{R}^n \to \mathbb{R}$ which are $k$ times differentiable and such that $f$ and its derivatives up to order $k$ have at most polynomial growth. For a multi-index $\alpha = (\alpha_1, \ldots, \alpha_k)$ we denote $\partial_\alpha^k f = \frac{\partial^k f}{\partial x_{\alpha_1} \cdots \partial x_{\alpha_k}}$.

In the case $k = 0$, $C_\uparrow^0(\mathbb{R}^n)$ denotes the set of functions $f: \mathbb{R}^n \to \mathbb{R}$ which are continuous with respect to each argument and have at most polynomial growth.

We define now the simple functionals and the simple processes.

A random variable $F$ is called a simple functional if there exist some $n \in \mathbb{N}^*$ and some measurable function $f: \mathbb{R}^n \to \mathbb{R}$ such that

$$F = f(V_1, \ldots, V_n).$$

We denote by $S_{(n,k)}$ the space of the simple functionals such that $f \in C_\uparrow^k(\mathbb{R}^n)$.

A simple process of length $n$ is a sequence of random variables $U = (U_i)_{i\le n}$ such that

$$U_i = u_i(V_1(\omega), \ldots, V_n(\omega)).$$

We denote by $P_{(n,k)}$ the space of the simple processes of length $n$ such that $u_i \in C_\uparrow^k(\mathbb{R}^n)$, $i = 1, \ldots, n$. Note that if $U \in P_{(n,k)}$ then $U_i \in S_{(n,k)}$, $i \in \mathbb{N}^*$.

We now define the differential operators which appear in the Malliavin calculus.

The Malliavin derivative $D: S_{(n,1)} \to P_{(n,0)}$: if $F = f(V_1, \ldots, V_n)$, then

$$D_i F := \partial_i f(V_1(\omega), \ldots, V_n(\omega)) = \frac{\partial f}{\partial x_i}(V_1(\omega), \ldots, V_n(\omega)), \qquad DF := (D_i F)_{i\le n} \in P_{(n,0)}.$$

The Skorohod integral $\delta: P_{(n,1)} \to S_{(n,0)}$:

$$\delta(U) := -\sum_{i=1}^n \big(D_i U_i + \theta_i(V_i)\,U_i\big) = -\sum_{i=1}^n \Big(\frac{\partial u_i}{\partial x_i}(V_1, \ldots, V_n) + \theta_i(V_i)\,u_i(V_1, \ldots, V_n)\Big),$$

with

$$\theta_i(y) := \partial_y \ln(p_i)(y) = \frac{p_i'(y)}{p_i(y)} \ \text{ if } p_i(y) > 0, \qquad \theta_i(y) := 0 \ \text{ if } p_i(y) = 0.$$

Remark 3.1 For every $p \in \mathbb{N}$ and for every $F \in S_{(n,1)}$ and $U \in P_{(n,1)}$, one has

$$E\Big(\Big(\sum_{j=1}^n |D_j F|^2\Big)^p\Big) < \infty \quad \text{and} \quad E\big(|\delta(U)|^p\big) < \infty.$$

This comes from the following facts: $E|V_i|^p < \infty$ for all $i$, and $f$, $u_i$ and $\partial\ln(p_i)$, $i = 1, \ldots, n$, have at most polynomial growth.

The operator $\delta$ is the adjoint of $D$ (this is the raison d'être of $\theta_i$):

Proposition 3.1 Let $F \in S_{(n,1)}$ and $U \in P_{(n,1)}$. Then

$$E(\langle DF, U\rangle) = E(F\,\delta(U)), \qquad (3)$$

where $\langle\cdot,\cdot\rangle$ is the scalar product in $\mathbb{R}^n$.
Proof. One writes

$$E(\langle DF, U\rangle) = E\Big(\sum_{i=1}^n D_i F\,U_i\Big) = \sum_{i=1}^n E\big((u_i\,\partial_i f)(V_1, \ldots, V_n)\big)$$
$$= \sum_{i=1}^n \int_{\mathbb{R}^n} (u_i\,\partial_i f)(a_1, \ldots, a_n)\,p_1(a_1)\cdots p_n(a_n)\,da_1\cdots da_n$$
$$= -\sum_{i=1}^n \int_{\mathbb{R}^n} f(a_1, \ldots, a_n)\Big(\partial_i u_i + u_i\,\frac{p_i'}{p_i}\Big)\,p_i(a_i)\prod_{j\neq i} p_j(a_j)\,da_1\cdots da_n,$$

the last equality being obtained by integration by parts, where we have used that for all $k \in \mathbb{N}$, $\lim_{y\to\pm\infty} |y|^k p_i(y) = 0$, $\frac{p_i'}{p_i} \in C_\uparrow^0(\mathbb{R})$ and $f, u_i \in C_\uparrow^1(\mathbb{R}^n)$. Coming back to expectations, we obtain (3).
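The duality relation (3) is easy to check numerically in the simplest case $n = 1$ with $V \sim \mathcal{N}(0,1)$, for which $\theta(y) = -y$. The sketch below (our own example; $f(v) = v^2$ and $u(v) = v$ are arbitrary test choices) evaluates both sides of (3) by Gauss–Hermite quadrature.

```python
import numpy as np

# Gauss-Hermite rule: int g(x) e^{-x^2} dx ~ sum_i w_i g(x_i).
# For V ~ N(0,1), substitute v = sqrt(2) x and divide by sqrt(pi).
x, w = np.polynomial.hermite.hermgauss(40)
v = np.sqrt(2.0) * x
E = lambda g: np.dot(w, g(v)) / np.sqrt(np.pi)

f = lambda v: v**2               # F = f(V), a simple functional
df = lambda v: 2.0 * v           # DF
u = lambda v: v                  # U = u(V), a simple process (n = 1)
du = lambda v: np.ones_like(v)
theta = lambda v: -v             # p'(y)/p(y) for the N(0,1) density

delta_u = lambda v: -(du(v) + theta(v) * u(v))   # Skorohod integral of U

lhs = E(lambda v: df(v) * u(v))       # E(<DF, U>)
rhs = E(lambda v: f(v) * delta_u(v))  # E(F delta(U))
```

Both sides equal $E(2V^2) = 2$ here; since all integrands are polynomials, the quadrature is exact.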

We now introduce the Ornstein–Uhlenbeck operator $L = \delta(D): S_{(n,2)} \to S_{(n,0)}$:

$$L_i F := D_i D_i F + \theta_i\,D_i F, \quad i = 1, \ldots, n,$$
$$LF := -\sum_{i=1}^n L_i F = -\sum_{i=1}^n \big(\partial_i(\partial_i f)(V_1, \ldots, V_n) + \theta_i(V_i)\,(\partial_i f)(V_1, \ldots, V_n)\big). \qquad (4)$$

Remark 3.2 Since $\theta_i$ has at most polynomial growth and $f \in C_\uparrow^2(\mathbb{R}^n)$, one has $E|LF|^p < \infty$ for all $p$.

Suppose that $F, G \in S_{(n,2)}$. Then, as an immediate consequence of the duality relation (3), one obtains

$$E(F\,LG) = E(\langle DF, DG\rangle) = E(G\,LF). \qquad (5)$$
Moreover, the standard differential calculus rules give the following chain rule.

Lemma 3.1 (i) Let $\phi \in C_b^1(\mathbb{R}^d)$ and let $F = (F^1, \ldots, F^d)$, $F^i \in S_{(n,1)}$. Then $\phi(F) \in S_{(n,1)}$ and

$$D\phi(F) = \sum_{k=1}^d \partial_k\phi(F)\,DF^k. \qquad (6)$$

(ii) Let $F, G \in S_{(n,2)}$. Then

$$L(FG) = F\,LG + G\,LF - 2\,\langle DF, DG\rangle. \qquad (7)$$

Finally, given $F = (F^1, \ldots, F^d)$, $F^i \in S_{(n,1)}$, we introduce the Malliavin covariance matrix

$$\sigma_F^{ij} = \langle DF^i, DF^j\rangle = \sum_{p=1}^n \big(\partial_p f^i\,\partial_p f^j\big)(V_1, \ldots, V_n),$$

where $F^i = f^i(V_1, \ldots, V_n)$.

We are now able to state the integration by parts formula.

Theorem 3.1 Let $F = (F^1, \ldots, F^d) \in S_{(n,2)}^d$ and $G \in S_{(n,1)}$. We assume that the matrix $\sigma_F$ is invertible and denote $\gamma_F := \sigma_F^{-1}$. We also suppose that

$$E\big((\det\gamma_F)^4\big) < \infty. \qquad (8)$$

Then for every smooth function $\phi: \mathbb{R}^d \to \mathbb{R}$ and every $i = 1, \ldots, d$,

$$E(\partial_i\phi(F)\,G) = E(\phi(F)\,H^i(F, G)), \qquad (9)$$

with $H^i(F, G) \in L^1$ and

$$H^i(F, G) = \sum_{j=1}^d \big(G\,\gamma_F^{ji}\,LF^j - \gamma_F^{ji}\,\langle DF^j, DG\rangle - G\,\langle DF^j, D\gamma_F^{ji}\rangle\big). \qquad (10)$$
Proof. Using the chain rule we obtain, for each $j = 1, \ldots, d$,

$$\langle D\phi(F), DF^j\rangle = \sum_{r=1}^n D_r\phi(F)\,D_r F^j = \sum_{r=1}^n \sum_{i=1}^d \partial_i\phi(F)\,D_r F^i\,D_r F^j = \sum_{i=1}^d \partial_i\phi(F)\,\sigma_F^{ij},$$

so that $\partial_i\phi(F) = \sum_{j=1}^d \langle D\phi(F), DF^j\rangle\,\gamma_F^{ji}$. Moreover, using (7),

$$\langle D\phi(F), DF^j\rangle = \frac{1}{2}\big(-L(\phi(F)F^j) + \phi(F)\,LF^j + F^j\,L(\phi(F))\big).$$

Then the duality relation gives

$$E(\partial_i\phi(F)\,G) = \frac{1}{2}\,E\Big(G\sum_{j=1}^d \big(-L(\phi(F)F^j) + \phi(F)\,LF^j + F^j\,L(\phi(F))\big)\gamma_F^{ji}\Big)$$
$$= \sum_{j=1}^d \frac{1}{2}\,E\Big[\phi(F)\big(-F^j\,L(\gamma_F^{ji}G) + G\,\gamma_F^{ji}\,LF^j + L(G\,F^j\gamma_F^{ji})\big)\Big]$$
$$= \sum_{j=1}^d E\Big[\phi(F)\big(G\,\gamma_F^{ji}\,LF^j - \langle DF^j, D(G\,\gamma_F^{ji})\rangle\big)\Big].$$

By hypothesis (8), the above expectations are finite, so the proof is complete.
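In the scalar case $d = n = 1$ with $G = 1$ and $V \sim \mathcal{N}(0,1)$, the weight (10) reduces to $H = \gamma_F\,LF - \langle DF, D\gamma_F\rangle$, and identity (9) can be checked by quadrature. The function $f$ and payoff $\phi$ below are our own test choices, not from the paper.

```python
import numpy as np

x, wq = np.polynomial.hermite.hermgauss(80)
v = np.sqrt(2.0) * x                     # nodes for V ~ N(0,1)
E = lambda g: np.dot(wq, g(v)) / np.sqrt(np.pi)

f  = lambda v: v + 0.01 * v**3           # F = f(V); f' = 1 + 0.03 v^2 > 0
f1 = lambda v: 1.0 + 0.03 * v**2         # DF
f2 = lambda v: 0.06 * v                  # D(DF)
phi  = lambda s: (s - 1.0)**2
dphi = lambda s: 2.0 * (s - 1.0)

theta  = lambda v: -v                            # N(0,1) log-density derivative
LF     = lambda v: -(f2(v) + theta(v) * f1(v))   # see (4)
gamma  = lambda v: 1.0 / f1(v)**2                # gamma_F = sigma_F^{-1}
Dgamma = lambda v: -2.0 * f2(v) / f1(v)**3       # D gamma_F
H      = lambda v: gamma(v) * LF(v) - f1(v) * Dgamma(v)   # weight (10), G = 1

lhs = E(lambda v: dphi(f(v)))        # E(phi'(F) G)
rhs = E(lambda v: phi(f(v)) * H(v))  # E(phi(F) H(F, G))
```

Since $E f(V) = 0$, the left side equals $-2$ exactly; the right side matches it up to quadrature error, which is tiny here because the integrand is analytic with poles far from the real axis.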

### 4 Integration by parts with respect to the jump amplitudes

In this section we use the integration by parts formula (9) with respect to $\Delta_i \sim \rho(a)\,da$, $i \in \mathbb{N}^*$, which are the jump amplitudes of $(N_t)_{t\ge 0}$. Let $(S_t)_{t\ge 0}$ be the solution of the SDE

$$S_t = \beta + \sum_{i=1}^{J_t} c(T_i, \Delta_i, S_{T_i^-}) + \int_0^t b(r, S_r)\,dr. \qquad (11)$$

We work under the following hypotheses.

Hypothesis 4.1 $\rho$ is continuously differentiable, $\frac{\rho'}{\rho}$ has at most polynomial growth on $\mathbb{R}$ and, for all $k \in \mathbb{N}$, $\lim_{y\to\pm\infty} |y|^k\rho(y) = 0$.

Hypothesis 4.2 The functions $(a, x) \to c(t, a, x)$ and $x \to b(t, x)$ are twice continuously differentiable and there exist a constant $K > 0$ and $\alpha \in \mathbb{N}$ such that:

i) $|c(t, a, x)| \le K(1 + |a|)^\alpha(1 + |x|)$ and $|b(t, x)| \le K(1 + |x|)$;

ii) $|\partial_x c(t, a, x)| + |\partial_x^2 c(t, a, x)| + |\partial_a c(t, a, x)| \le K(1 + |a|)^\alpha$ and $|\partial_x b(t, x)| + |\partial_x^2 b(t, x)| \le K$.

Hypothesis 4.3 There exists $\eta > 0$ such that

$$|\partial_a c(t, a, x)| \ge \eta > 0.$$
#### 4.1 The deterministic equation

We introduce the following deterministic equation. We fix some $0 < u_1 < \ldots < u_n < T$ and denote $u = (u_1, \ldots, u_n)$. We also fix $a = (a_1, \ldots, a_n) \in \mathbb{R}^n$. To $u$ and $a$ we associate the equation

$$s_t = \beta + \sum_{i=1}^{J_t(u)} c(u_i, a_i, s_{u_i^-}) + \int_0^t b(r, s_r)\,dr, \quad 0 \le t \le T, \qquad (12)$$

where $J_t(u) = k$ if $u_k \le t < u_{k+1}$.

We use this deterministic equation in order to express $S_t$ as a simple functional: on $\{J_t = n\}$ one has

$$S_t = s_t(T_1, \ldots, T_n, \Delta_1, \ldots, \Delta_n),$$
$$\partial_{\Delta_i} S_t = \partial_{a_i} s_t(T_1, \ldots, T_n, \Delta_1, \ldots, \Delta_n),$$
$$\partial^2_{\Delta_j,\Delta_i} S_t = \partial^2_{a_j,a_i} s_t(T_1, \ldots, T_n, \Delta_1, \ldots, \Delta_n),$$
$$\partial_\beta S_t = \partial_\beta s_t(T_1, \ldots, T_n, \Delta_1, \ldots, \Delta_n).$$
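Numerically, $s_t$ can be obtained by integrating $b$ between consecutive jump times and applying the jump $c(u_i, a_i, s_{u_i^-})$ at each $u_i$. The following sketch does this with a crude Euler integration between jumps; the solver name, the step count, and the linear drift used in the example are our own choices.

```python
import numpy as np

def solve_s(beta, jump_times, amps, T, b, c, n_sub=2000):
    """Solve the deterministic equation (12): Euler-integrate s' = b(t, s)
    between consecutive jump times, then apply the jump c(u_i, a_i, s-)."""
    s, t = beta, 0.0
    events = list(zip(jump_times, amps)) + [(T, None)]
    for (u, a) in events:
        dt = (u - t) / n_sub
        for k in range(n_sub):
            s += b(t + k * dt, s) * dt   # drift between jumps
        t = u
        if a is not None:
            s += c(u, a, s)              # jump of size c(u_i, a_i, s_{u_i^-})
    return s

# Example: b(t, s) = -kappa * s and c(t, a, x) = a (additive jumps),
# for which the solution is explicit and can serve as a consistency check.
kappa = 0.5
b = lambda t, s: -kappa * s
c = lambda t, a, x: a
sT = solve_s(beta=1.0, jump_times=[0.4], amps=[0.3], T=1.0, b=b, c=c)
```

With one jump of size $0.3$ at $u_1 = 0.4$, the exact value is $s_T = (\beta e^{-\kappa u_1} + 0.3)\,e^{-\kappa(T - u_1)}$, which the Euler sweep reproduces up to discretization error.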

So in order to compute the derivatives of $S_t$, we have to compute those of $s_t$. Under Hypothesis 4.2, we may perform these computations by differentiating (12) with respect to $a_j$, $j = 1, \ldots, n$. We obtain:

• For $i \le J_t(u)$, $\partial_{a_i} s_t$ satisfies the equation

$$\partial_{a_i} s_t = \partial_a c(u_i, a_i, s_{u_i^-}) + \sum_{j=i+1}^{J_t(u)} \partial_x c(u_j, a_j, s_{u_j^-})\,\partial_{a_i} s_{u_j^-} + \int_{u_i}^t \partial_x b(r, s_r)\,\partial_{a_i} s_r\,dr. \qquad (13)$$

•

$$\partial^2_{a_i} s_t = \partial_a^2 c(u_i, a_i, s_{u_i^-}) + \sum_{j=i+1}^{J_t(u)} \partial_x^2 c(u_j, a_j, s_{u_j^-})\big(\partial_{a_i} s_{u_j^-}\big)^2 + \sum_{j=i+1}^{J_t(u)} \partial_x c(u_j, a_j, s_{u_j^-})\,\partial^2_{a_i} s_{u_j^-} + \int_{u_i}^t \partial_x b(r, s_r)\,\partial^2_{a_i} s_r\,dr + \int_{u_i}^t \partial_x^2 b(r, s_r)\,(\partial_{a_i} s_r)^2\,dr. \qquad (14)$$

• For $i < j$,

$$\partial^2_{a_j,a_i} s_t = \partial^2_{a,x} c(u_j, a_j, s_{u_j^-})\,\partial_{a_i} s_{u_j^-} + \sum_{k=j+1}^{J_t(u)} \partial_x^2 c(u_k, a_k, s_{u_k^-})\,\partial_{a_i} s_{u_k^-}\,\partial_{a_j} s_{u_k^-} + \sum_{k=j+1}^{J_t(u)} \partial_x c(u_k, a_k, s_{u_k^-})\,\partial^2_{a_j,a_i} s_{u_k^-} + \int_{u_j}^t \partial_x b(r, s_r)\,\partial^2_{a_j,a_i} s_r\,dr + \int_{u_j}^t \partial_x^2 b(r, s_r)\,\partial_{a_i} s_r\,\partial_{a_j} s_r\,dr, \qquad (15)$$

and for $i > j$ we obtain $\partial^2_{a_j,a_i} s_t$ by symmetry.

• $\partial_\beta s_t$ is given by

$$\partial_\beta s_t = 1 + \sum_{i=1}^{J_t(u)} \partial_x c(u_i, a_i, s_{u_i^-})\,\partial_\beta s_{u_i^-} + \int_0^t \partial_x b(r, s_r)\,\partial_\beta s_r\,dr. \qquad (16)$$
Lemma 4.1 (i) The function $(a_1, \ldots, a_n) \to s_t(u_1, \ldots, u_n, a_1, \ldots, a_n)$ belongs to $C_\uparrow^2(\mathbb{R}^n)$, and $(a_1, \ldots, a_n) \to \partial_\beta s_t(u_1, \ldots, u_n, a_1, \ldots, a_n)$ belongs to $C_\uparrow^1(\mathbb{R}^n)$.

(ii) On $\{J_T = n\}$, $|\partial_{a_n} s_T|^2 \ge \eta^2 e^{-2TK} > 0$.

Proof. (i) Proceed by induction, using Hypothesis 4.2 in (12), (13), (14), (15) and (16).

(ii) Using (13), $\partial_{a_n} s_T = \partial_a c(t_n, a_n, s_{t_n^-}) + \int_{t_n}^T \partial_x b(r, s_r)\,\partial_{a_n} s_r\,dr$, so

$$|\partial_{a_n} s_T| = \big|\partial_a c(t_n, a_n, s_{t_n^-})\big|\exp\Big(\int_{t_n}^T \partial_x b(r, s_r)\,dr\Big) \ge \eta\,e^{-TK}.$$

#### 4.2 Integration by parts formula

We come back to the problem developed in Section 2 and we deal with equation (9). In the case of European options, we write

$$E\Big(\phi'(S_T)\frac{\partial S_T}{\partial\beta}\,1_{\{J_T=n\}}\Big) = E\Big(E\Big(\phi'(S_T)\frac{\partial S_T}{\partial\beta}\,\Big|\,\sigma(T_i, i\in\mathbb{N})\Big)1_{\{J_T=n\}}\Big) = E\big(h_n(T_1, \ldots, T_n)\,1_{\{J_T=n\}}\big),$$

with

$$h_n(t_1, \ldots, t_n) = E\Big(\phi'\big(s_T(t_1, \ldots, t_n, \Delta_1, \ldots, \Delta_n)\big)\,\frac{\partial s_T}{\partial\beta}(t_1, \ldots, t_n, \Delta_1, \ldots, \Delta_n)\Big).$$

We fix $n \ge 1$ and $t_1, \ldots, t_n$ and we apply Theorem 3.1 with

$$F = s_T(t_1, \ldots, t_n, \Delta_1, \ldots, \Delta_n) \quad \text{and} \quad G = \partial_\beta s_T(t_1, \ldots, t_n, \Delta_1, \ldots, \Delta_n).$$

By Lemma 4.1, we know that $F \in S_{(n,2)}$ and $G \in S_{(n,1)}$. So we have to verify that $\sigma_F$ is invertible and that (8) holds, and then we compute

$$H_n(F, G) = G\,\gamma_F\,LF - \gamma_F\,\langle DF, DG\rangle - G\,\langle DF, D\gamma_F\rangle. \qquad (17)$$
We have

$$\sigma_F = \sum_{i=1}^n |\partial_{a_i} s_T(t_1, \ldots, t_n, \Delta_1, \ldots, \Delta_n)|^2 \ge |\partial_{a_n} s_T(t_1, \ldots, t_n, \Delta_1, \ldots, \Delta_n)|^2 \ge \eta^2 e^{-2TK},$$

so we can use Theorem 3.1 and we obtain the integration by parts formula (9). For the computation of the terms involved in the right hand side of (17):

• we use (13) for $DF$ and $\gamma_F$, (16) for $G$, (13) and (14) for $LF$, and (13), (14) and (15) for $D\gamma_F$;

• we differentiate (16) with respect to $a_j$ to obtain $DG$.

In the case of Asian options, the numerical example treated in this paper (see Section 6.3) allows us to write, for each fixed $n \in \mathbb{N}^*$, on $\{J_T = n\}$,

$$I_T := \frac{1}{T}\int_0^T S_u\,du = i_T(T_1, \ldots, T_n, \Delta_1, \ldots, \Delta_n),$$

where $(a_1, \ldots, a_n) \to i_T(u_1, \ldots, u_n, a_1, \ldots, a_n)$ belongs to $C^2(\mathbb{R}^n)$ and $|\partial_{a_n} i_T|^2 \ge \delta_n > 0$. Then we can proceed exactly as before and obtain (17) with

$$F = i_T(t_1, \ldots, t_n, \Delta_1, \ldots, \Delta_n) \quad \text{and} \quad G = \partial_\beta i_T(t_1, \ldots, t_n, \Delta_1, \ldots, \Delta_n).$$

### 5 Merton Process and Euler scheme

In this section we deal with the Merton model:

$$S_t = \beta + \sum_{i=1}^{J_t} c(T_i, \Delta_i, S_{T_i^-}) + \int_0^t b(u, S_u)\,du + \int_0^t \sigma(u, S_u)\,dW_u,$$

where $\rho$, $b$ and $c$ satisfy the previous Hypotheses 4.1, 4.2 and 4.3. Moreover, we assume:

Hypothesis 5.1 The function $x \to \sigma(u, x)$ is twice continuously differentiable and there exist two constants $C > 0$ and $\epsilon > 0$ such that:

i) $|\sigma(u, x)| \le C(1 + |x|)$;

ii) $|\partial_x \sigma(u, x)| + |\partial_x^2 \sigma(u, x)| \le C$;

iii) $|\sigma(u, x)| \ge \epsilon$.

We will present two alternative calculi for this model: the first is based on the Brownian motion only, and the second on both the Brownian motion and the jump amplitudes.

Suppose that the jump times $T_1 < \ldots < T_n$ are given (this means that we have already simulated $T_1, \ldots, T_n$ in the Monte-Carlo algorithm). We include them in the discretization grid: we consider a time grid $0 = t_0 < t_1 < \ldots < t_m < \ldots < t_M = T$ and we assume that $T_i = t_{m_i}$, $i = 1, \ldots, n$, for some $m_1 < \ldots < m_n$. For $t > 0$, we denote $m(t) = m$ if $t_m \le t < t_{m+1}$. Then the corresponding Euler scheme is given by

$$\hat S_t = \beta + \sum_{i=1}^{J_t} c(T_i, \Delta_i, \hat S_{T_i^-}) + \sum_{k=0}^{m(t)-1} \sigma(t_k, \hat S_{t_k})(W_{t_{k+1}} - W_{t_k}) + \sum_{k=0}^{m(t)-1} b(t_k, \hat S_{t_k})(t_{k+1} - t_k).$$
This corresponds to the deterministic equation

$$\hat s_t = \beta + \sum_{i=1}^{J_t} c(u_i, a_i, \hat s_{u_i^-}) + \sum_{k=0}^{m(t)-1} \sigma(t_k, \hat s_{t_k})\,\Delta_k w + \sum_{k=0}^{m(t)-1} b(t_k, \hat s_{t_k})(t_{k+1} - t_k), \qquad (18)$$

where we have denoted $\Delta_k w = w_{t_{k+1}} - w_{t_k}$. So on $\{J_t = k\}$ one has

$$\hat S_t = \hat s_t(T_1, \ldots, T_k, \Delta_1, \ldots, \Delta_k, \Delta_0 W, \ldots, \Delta_{m(t)-1} W),$$

where $\Delta_k W = W_{t_{k+1}} - W_{t_k}$. Moreover we have

$$\partial_{\Delta_i}\hat S_t = \partial_{a_i}\hat s_t(T_1, \ldots, T_k, \Delta_1, \ldots, \Delta_k, \Delta_0 W, \ldots, \Delta_{m(t)-1} W),$$
$$\partial^2_{\Delta_j,\Delta_i}\hat S_t = \partial^2_{a_j,a_i}\hat s_t(T_1, \ldots, T_k, \Delta_1, \ldots, \Delta_k, \Delta_0 W, \ldots, \Delta_{m(t)-1} W),$$
$$\partial_\beta\hat S_t = \partial_\beta\hat s_t(T_1, \ldots, T_k, \Delta_1, \ldots, \Delta_k, \Delta_0 W, \ldots, \Delta_{m(t)-1} W),$$
$$\partial_{\Delta_k W}\hat S_t = \partial_{\Delta_k w}\hat s_t(T_1, \ldots, T_k, \Delta_1, \ldots, \Delta_k, \Delta_0 W, \ldots, \Delta_{m(t)-1} W).$$

The first derivatives of $\hat s_t$ satisfy the following equations:

$$\partial_{a_i}\hat s_t = \partial_a c(u_i, a_i, \hat s_{u_i^-}) + \sum_{k=i+1}^{J_t} \partial_x c(u_k, a_k, \hat s_{u_k^-})\,\partial_{a_i}\hat s_{u_k^-} + \sum_{k=i}^{m(t)-1} \partial_x b(t_k, \hat s_{t_k})\,\partial_{a_i}\hat s_{t_k}\,\delta_k + \sum_{k=i}^{m(t)-1} \partial_x \sigma(t_k, \hat s_{t_k})\,\partial_{a_i}\hat s_{t_k}\,\Delta_k w, \qquad (19)$$

$$\partial_{\Delta_i w}\hat s_t = \sigma(t_i, \hat s_{t_i}) + \sum_{k=1}^{J_t} \partial_x c(u_k, a_k, \hat s_{u_k^-})\,\partial_{\Delta_i w}\hat s_{u_k^-} + \sum_{k=i}^{m(t)-1} \partial_x \sigma(t_k, \hat s_{t_k})\,\partial_{\Delta_i w}\hat s_{t_k}\,\Delta_k w + \sum_{k=i}^{m(t)-1} \partial_x b(t_k, \hat s_{t_k})\,\partial_{\Delta_i w}\hat s_{t_k}\,\delta_k, \qquad (20)$$

where $\delta_k = t_{k+1} - t_k$.

For the higher order derivatives one may derive similar equations. Now we have the choice of using the integration by parts formula from Section 3 with respect to the $\Delta_i$, to the $\Delta_i W$, or to both, and in each case the differential operators take different forms. For example, on the set $\{J_t = k\}$,

$$\sigma_t^{J,W} := \sum_{i=1}^k |\partial_{a_i}\hat s_t|^2 + \sum_{i=0}^{m(t)-1} |\partial_{\Delta_i w}\hat s_t|^2, \qquad (21)$$
$$\sigma_t^J := \sum_{i=1}^k |\partial_{a_i}\hat s_t|^2, \qquad (22)$$
$$\sigma_t^W := \sum_{i=0}^{m(t)-1} |\partial_{\Delta_i w}\hat s_t|^2. \qquad (23)$$
The other differential operators change in a similar way. Note that $\Delta_i W \sim \mathcal{N}(0, t_{i+1} - t_i)$, so the corresponding $L_i^W$ is given by

$$L_i^W\hat s_t = \partial^2_{\Delta_i w}\hat s_t + \theta_i^W\,\partial_{\Delta_i w}\hat s_t, \quad \text{with } \theta_i^W = -\frac{\Delta_i W}{t_{i+1} - t_i}.$$

Then the Ornstein–Uhlenbeck operators are

$$L^W\hat S_t = -\sum_{i=0}^{m(t)-1} L_i^W\hat s_t, \qquad L^J\hat S_t = -\sum_{i=1}^{J_t} \big(\partial^2_{\Delta_i}\hat s_t + \theta_i^J\,\partial_{\Delta_i}\hat s_t\big), \qquad L^{J,W}\hat S_t = L^J\hat S_t + L^W\hat S_t.$$

Notice that if $m = m(t)$,

$$\sigma_t^{J,W} \ge \sigma_t^W \ge \big|\partial_{\Delta_{m-1} W}\hat s_t\big|^2 = \big|\sigma(t_{m-1}, \hat s_{t_{m-1}})\big|^2 \ge \epsilon^2 > 0.$$

This allows us to use the integration by parts formula corresponding to the Brownian motion only, or to both Brownian motion and jump amplitudes (the first case leads to the same calculus as in [6] and [13]). It is more delicate to prove the non degeneracy condition (8) if we only use the jumps.

### 6 Numerical results

We compute here the Delta of European and Asian options (the call option with payoff $\phi(x) = (x - K)_+$ and the digital option with payoff $\phi(x) = 1_{x\ge K}$) with underlying $(S_t)_{t\ge 0}$ following a jump type diffusion model.

Let $N$ be a compound Poisson process with jump intensity $\lambda$. For all $i \in \mathbb{N}^*$, we denote by $\Delta_i$ the jump amplitude of $N$ at the jump time $T_i$. We suppose that $\Delta_i \sim \mathcal{N}(0, 1)$, $i \ge 1$.

We deal with two different jump diffusion models. The first one is motivated by Vasicek's model for interest rates (but we consider a jump process instead of a Brownian motion):

$$S_t = S_0 - \int_0^t r(S_u - \alpha)\,du + \sigma\sum_{i=1}^{J_t} \Delta_i. \qquad (25)$$

The second one is of Black-Scholes type:

$$S_t = S_0 + \int_0^t r\,S_u\,du + \sigma\sum_{i=1}^{J_t} S_{T_i^-}\,\Delta_i. \qquad (26)$$

We compare the different Monte Carlo estimators of the Delta, obtained by using

• Finite difference method

• Malliavin calculus

• localized Malliavin calculus.

Remark 6.1 Values of the parameters. The numerical results are given in Figures 3, 5, 4, 6, 7 and 8. We denote by $\sigma_V$ and $\sigma_{BS}$ the parameters corresponding to the Vasicek model (25) and the geometric model (26) respectively. We choose 'large' values for $\sigma_V$ ($\sigma_V = 20$ in the numerical results) in order to fit the values usually used by practitioners in the Vasicek model. On the contrary, we take small values for $\sigma_{BS}$ ($\sigma_{BS} = 0.2$ in the numerical results) in order to fit the usual values of the volatility in the Black-Scholes model. All other parameters ($S_0$, $K$, $r$, $\lambda$, ...) are the same for both models.

#### 6.1 Localization method

For European and Asian call options, we use the same variance reduction method as the one introduced in [9]. For European options, sensitivity analysis using Malliavin calculus leads to terms such as $\phi(S_T)\,H_n(S_T, \frac{\partial S_T}{\partial\beta})$ (take $I_T$ instead of $S_T$ in the case of Asian options), which may have a large variance. It is possible to avoid this problem by using a localizing function which vanishes outside an interval $[K - \delta, K + \delta]$, for some $\delta > 0$. Let us introduce some notation:

$$B_\delta(s) := \begin{cases} 0 & \text{if } s \le K - \delta, \\ \dfrac{s - (K - \delta)}{2\delta} & \text{if } s \in [K - \delta, K + \delta], \\ 1 & \text{if } s \ge K + \delta, \end{cases}$$

and

$$G_\delta(t) := \int_{-\infty}^t B_\delta(s)\,ds = \begin{cases} 0 & \text{if } t \le K - \delta, \\ \dfrac{(t - (K - \delta))^2}{4\delta} & \text{if } t \in [K - \delta, K + \delta], \\ t - K & \text{if } t \ge K + \delta. \end{cases}$$

Note that $B_\delta$ is the derivative of $G_\delta$. We define

$$F_\delta(t) := (t - K)_+ - G_\delta(t) = \begin{cases} 0 & \text{if } t \le K - \delta, \\ -\dfrac{(t - (K - \delta))^2}{4\delta} & \text{if } t \in [K - \delta, K], \\ t - K - \dfrac{(t - (K - \delta))^2}{4\delta} & \text{if } t \in [K, K + \delta], \\ 0 & \text{if } t \ge K + \delta. \end{cases}$$

Since $F_\delta(S_T) + G_\delta(S_T) = (S_T - K)_+$, we have on $\{J_T = n\}$

$$\partial_\beta E[(S_T - K)_+] = \frac{\partial}{\partial\beta}E[G_\delta(S_T)] + \frac{\partial}{\partial\beta}E[F_\delta(S_T)] = E[B_\delta(S_T)\,\partial_\beta S_T] + E[F_\delta(S_T)\,H_n(S_T, \partial_\beta S_T)].$$

Since $F_\delta$ vanishes outside $[K - \delta, K + \delta]$, the value of the second expectation does not blow up as $H_n$ increases.
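The three localizing functions can be coded directly from their piecewise definitions; the sketch below is our own, with `np.clip` used for $B_\delta$, and the identity $F_\delta + G_\delta = (t - K)_+$ serving as a consistency check.

```python
import numpy as np

def B_delta(s, K, d):
    """Ramp from 0 to 1 over [K - d, K + d]."""
    return np.clip((np.asarray(s, dtype=float) - (K - d)) / (2.0 * d), 0.0, 1.0)

def G_delta(t, K, d):
    """Antiderivative of B_delta vanishing at -infinity."""
    t = np.asarray(t, dtype=float)
    return np.where(t <= K - d, 0.0,
           np.where(t >= K + d, t - K, (t - (K - d))**2 / (4.0 * d)))

def F_delta(t, K, d):
    """Localized remainder (t - K)_+ - G_delta(t), coded piecewise."""
    t = np.asarray(t, dtype=float)
    q = (t - (K - d))**2 / (4.0 * d)
    return np.where(t <= K - d, 0.0,
           np.where(t <= K, -q,
           np.where(t <= K + d, t - K - q, 0.0)))
```

In the Monte-Carlo estimator, $G_\delta$ is differentiated pathwise (giving the $B_\delta$ term) while the Malliavin weight multiplies only the localized part $F_\delta$.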

#### 6.2 Finite Difference method

Arbitrage theory gives an expression for the price $u(t, x)$ of an option with underlying $S$ and payoff $\phi$ as the expected value

$$u(0, S_0) = E[\phi(S_T)\,|\,S_0].$$

To compute the Delta, the finite difference method differentiates using the Taylor expansion of the price with respect to $S_0$. Indeed, we shift $S_0$ by $\epsilon$ and compute the new price $u(0, S_0 + \epsilon)$; the first term of the Taylor expansion of the price around $S_0$ then gives

$$\frac{\partial u(0, S_0)}{\partial S_0} \simeq \frac{u(0, S_0 + \epsilon) - u(0, S_0 - \epsilon)}{2\epsilon}.$$

We choose the symmetric estimator and we use the same simulated paths in the two 'shifted' expectations in order to reduce the variance.
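As a minimal sketch of this estimator (our own helper name, not from the paper):

```python
def delta_fd(price, s0, eps):
    """Symmetric finite difference estimator of the Delta. `price` maps an
    initial value S_0 to the (Monte-Carlo) price; for variance reduction it
    should reuse the same simulated paths at s0 + eps and s0 - eps."""
    return (price(s0 + eps) - price(s0 - eps)) / (2.0 * eps)

# sanity check on a quadratic "price", for which the symmetric
# difference is exact up to rounding: d/ds s^2 = 2 s
est = delta_fd(lambda s: s * s, s0=3.0, eps=0.1)
```

For a smooth price the bias of this estimator is $O(\epsilon^2)$, but for irregular payoffs (such as the digital option) the variance behaves badly as $\epsilon \to 0$, which is what the numerical comparison with the Malliavin method illustrates.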

#### 6.3 Malliavin estimator

We recall that we have computed the Delta estimator using the Malliavin integration by parts formula (see Section 2):

$$E\Big(\phi'(S_T)\frac{\partial S_T}{\partial\beta}\Big) \simeq \frac{1}{M}\sum_{i=1}^M \Big(\phi(S_T^i)\,H^i_{J_T^i}(S_T^i, \partial_\beta S_T^i)\,1_{\{J_T^i\ge 1\}} + \phi'(S_T^i)\,\partial_\beta S_T^i\,1_{\{J_T^i=0\}}\Big).$$
We compute here the Malliavin weights $H^i_{J_T^i}(S_T^i, \partial_\beta S_T^i)$.

• We first study the diffusion process defined by (25). We have an explicit expression of $S_T$ on $\{J_T = n\}$:

$$S_T = S_0 e^{-rT} + \alpha(1 - e^{-rT}) + \sigma\sum_{j=1}^n \Delta_j e^{-r(T - T_j)}. \qquad (27)$$

In order to compute the Malliavin derivatives of $S_T$ involved in the weight $H_{J_T}$, we differentiate (27) with respect to the jump amplitudes. Thus, for all $1 \le i \le n$, one has

$$D_i S_T = \sigma e^{-r(T - T_i)}, \qquad D^2_{ii} S_T = 0, \qquad Y_T := \frac{\partial S_T}{\partial S_0} = e^{-rT}, \qquad D_i Y_T = 0,$$

and the covariance matrix is

$$\sigma_T = \sum_{j=1}^n |D_j S_T|^2 = \sigma^2\sum_{j=1}^n e^{-2r(T - T_j)}.$$

Then $\gamma_T = \frac{1}{\sigma_T}$, so $D_i\gamma_T = 0$ for all $1 \le i \le n$.

Since $\frac{\partial\ln\rho(\Delta)}{\partial\Delta} = -\Delta$, one has

$$LS_T = -\sum_{j=1}^n D_j S_T\,\frac{\partial\ln\rho(\Delta_j)}{\partial\Delta_j} = \sum_{j=1}^n \sigma\,e^{-r(T - T_j)}\,\Delta_j.$$

Finally, using equation (10), on $\{J_T = n\}$,

$$H_n(S_T, Y_T) = \frac{\sum_{j=1}^n e^{rT_j}\Delta_j}{\sigma\sum_{j=1}^n e^{2rT_j}}. \qquad (28)$$
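Formula (28) can be checked numerically for $n = 1$: conditionally on $T_1 = t_1$, the identity $E(\phi'(S_T)Y_T) = E(\phi(S_T)H_1(S_T, Y_T))$ should hold, where the expectation is over $\Delta_1 \sim \mathcal{N}(0,1)$. The parameter values and the test payoff $\phi(x) = x^2$ below are our own choices; with a polynomial payoff the Gauss–Hermite quadrature is exact.

```python
import numpy as np

S0, alpha, r, sig, T, t1 = 100.0, 90.0, 0.05, 20.0, 1.0, 0.4

x, wq = np.polynomial.hermite.hermgauss(20)
d1 = np.sqrt(2.0) * x                      # quadrature nodes for Delta_1 ~ N(0,1)
E = lambda vals: np.dot(wq, vals) / np.sqrt(np.pi)

# (27) with a single jump at time t1, and the weight (28) with n = 1
ST = S0*np.exp(-r*T) + alpha*(1.0 - np.exp(-r*T)) + sig*d1*np.exp(-r*(T - t1))
YT = np.exp(-r*T)                          # dS_T/dS_0
H1 = (np.exp(r*t1)*d1) / (sig*np.exp(2.0*r*t1))

phi  = lambda s: s**2
dphi = lambda s: 2.0*s

lhs = E(dphi(ST) * YT)      # E(phi'(S_T) Y_T | T_1 = t1)
rhs = E(phi(ST) * H1)       # E(phi(S_T) H_1 | T_1 = t1)
```

Both sides equal $2\big(S_0 e^{-rT} + \alpha(1 - e^{-rT})\big)e^{-rT}$ for this payoff, so the match is to machine precision.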

• We now study the jump diffusion process defined by (26). On $\{J_T = n\}$ we have

$$S_T = S_0 e^{rT}\prod_{j=1}^n (1 + \sigma\Delta_j),$$

so, for all $1 \le i \le n$,

$$D_i S_T = \frac{\sigma\,S_T}{1 + \sigma\Delta_i} = \sigma\,S_0 e^{rT}\prod_{j=1,\,j\neq i}^n (1 + \sigma\Delta_j).$$

Notice that if $1 + \sigma\Delta_i = 0$ then $S_T = 0$, so we employ the convention $\frac{0}{0} = 0$; this is just a matter of notation. Let us define

$$A_\sigma = \sum_{j=1}^n \frac{1}{(1 + \sigma\Delta_j)^2}, \qquad (29)$$
$$B_\sigma = \sum_{j=1}^n \frac{\Delta_j}{1 + \sigma\Delta_j}, \qquad (30)$$
$$C_\sigma = \sum_{j=1}^n \frac{1}{(1 + \sigma\Delta_j)^4}. \qquad (31)$$

Then we get, for all $1 \le i \le n$,

$$D^2_{ii} S_T = 0, \qquad Y_T = \frac{S_T}{S_0}, \qquad D_i Y_T = \frac{\sigma\,S_T}{S_0(1 + \sigma\Delta_i)},$$
$$\sigma_T = \sigma^2 S_T^2\sum_{j=1}^n \frac{1}{(1 + \sigma\Delta_j)^2} = \sigma^2 S_T^2 A_\sigma, \qquad (32)$$
$$D_i\sigma_T = \frac{2\sigma^3 S_T^2}{1 + \sigma\Delta_i}\Big(A_\sigma - \frac{1}{(1 + \sigma\Delta_i)^2}\Big), \qquad D_i\gamma_T = -\frac{D_i\sigma_T}{\sigma_T^2}.$$

Hence, for European options,

$$H_n^E(S_T, Y_T) = \frac{B_\sigma}{\sigma S_0 A_\sigma} + \frac{1}{S_0} - \frac{2C_\sigma}{S_0 A_\sigma^2}. \qquad (33)$$
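A useful consistency check on (33) is to assemble the weight from the generic formula (10), with $F = S_T$ and $G = Y_T$, out of the individual pieces $D_i S_T$, $D_i Y_T$, $LS_T$, $\sigma_T$ and $D_i\gamma_T$, and compare it with the closed form; the parameter values below are our own.

```python
import numpy as np

rng = np.random.default_rng(1)
S0, r, sig, T = 100.0, 0.05, 0.2, 1.0
n = 4
D = rng.standard_normal(n)                  # jump amplitudes Delta_j

ST = S0*np.exp(r*T)*np.prod(1.0 + sig*D)
A = np.sum(1.0/(1.0 + sig*D)**2)            # (29)
B = np.sum(D/(1.0 + sig*D))                 # (30)
C = np.sum(1.0/(1.0 + sig*D)**4)            # (31)

# pieces of the generic weight (10): F = S_T, G = Y_T = S_T/S_0
DS  = sig*ST/(1.0 + sig*D)                  # D_i S_T
YT  = ST/S0
DY  = sig*ST/(S0*(1.0 + sig*D))             # D_i Y_T
LS  = np.sum(D*DS)                          # L S_T (D^2_{ii} S_T = 0, theta = -Delta)
sigT  = sig**2 * ST**2 * A                  # (32)
gamma = 1.0/sigT
DsigT = (2.0*sig**3*ST**2/(1.0 + sig*D)) * (A - 1.0/(1.0 + sig*D)**2)
Dgam  = -DsigT/sigT**2                      # D_i gamma_T

H_generic = YT*gamma*LS - gamma*np.dot(DS, DY) - YT*np.dot(DS, Dgam)
H_closed  = B/(sig*S0*A) + 1.0/S0 - 2.0*C/(S0*A**2)   # (33)
```

The two evaluations agree to machine precision, which confirms the algebra leading from (10) to (33).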

For Asian options, on $\{J_T = n\}$ we have

$$I_T := \frac{1}{T}\int_0^T S_u\,du = \sum_{j=0}^n \frac{1}{T}\int_{T_j}^{T_{j+1}} S_u\,du,$$

with the convention $T_0 = 0$ and $T_{n+1} = T$. For all $t \in [T_j, T_{j+1}[$,

$$S_t = S_{T_j} + \int_{T_j}^t r\,S_u\,du, \quad \text{so } S_t = S_{T_j}\,e^{r(t - T_j)},$$

and we obtain

$$I_T = \frac{1}{rT}\sum_{j=0}^n S_{T_j}\big(e^{r(T_{j+1} - T_j)} - 1\big).$$

Then $I_T = i_T(T_1, \ldots, T_n, \Delta_1, \ldots, \Delta_n)$, where, for all $t \in [0, T]$, on $\{J_t = m\}$,

$$i_t(u_1, \ldots, u_m, a_1, \ldots, a_m) = \frac{1}{rt}\sum_{j=0}^m s_{u_j}(u_1, \ldots, u_j, a_1, \ldots, a_j)\big(e^{r(u_{j+1} - u_j)} - 1\big).$$

So, for all $1 \le i \le n$, we have

$$D_i I_T = \frac{\sigma}{T}K_{i,T}, \quad \text{where } K_{i,T} := \frac{1}{1 + \sigma\Delta_i}\int_{T_i}^T S_u\,du.$$

Then we get

D^{2}_{ii}IT = 1
r T

n

X

j=0

D^{2}_{ii}STj

e^{r}^{(T}^{j+1}^{−T}^{j}^{)}−1

= 0
Z_{T} := ∂I_{T}

∂S_{0} = 1
T

Z T 0

Y_{u}du= I_{T}
S_{0}
D_{i}Z_{T} = σ

T S_{0}K_{i,T}
σ_{I}_{T} =

n

X

j=1

|D_{j}I_{T}|^{2} = σ^{2}
T^{2}

n

X

j=1

K_{j,T}^{2} (34)

DiγIT = −2γ_{I}^{2}_{T}

n

X

j=1

DjITD^{2}_{ij}IT,
with

D_{ij}^{2}I_{T} =

0 if i=j

σ^{2}

T(1+σ∆j)K_{i,T} if i > j

D^{2}_{ji}IT if i < j (by symmetry).
Hence, for Asian options,

$$H_n^A(I_T, Z_T) = -\frac{1}{S_0} + \frac{K_{0,T}}{\sigma S_0 K}\sum_{j=1}^n \Delta_j K_{j,T} + \frac{4\sigma}{K}\sum_{\substack{i,j=1 \\ i\neq j}}^n \frac{K_{j,T}^2\,K_{i,T}}{1 + \sigma\Delta_i}, \qquad (35)$$

where $K = \sum_{j=1}^n K_{j,T}^2$ and $K_{j,T} = \dfrac{1}{1 + \sigma\Delta_j}\displaystyle\int_{T_j}^T S_u\,du$.
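The closed form $I_T = \frac{1}{rT}\sum_j S_{T_j}\big(e^{r(T_{j+1}-T_j)} - 1\big)$ for model (26) is easy to validate against a direct Riemann sum of $\frac{1}{T}\int_0^T S_u\,du$ along the piecewise-exponential path; the jump times, amplitudes and parameters below are our own test values.

```python
import numpy as np

S0, r, sig, T = 100.0, 0.05, 0.2, 1.0
jumps = np.array([0.3, 0.8])                 # T_1, T_2
amps  = np.array([0.5, -0.4])                # Delta_1, Delta_2

# S_{T_j} just after each jump, following (26):
# between jumps S grows at rate r, then jumps by sigma * S_{T_j^-} * Delta_j
nodes = np.concatenate(([0.0], jumps))       # T_0 = 0, T_1, ..., T_n
S_nodes = [S0]
prev_t, prev_S = 0.0, S0
for Tj, Dj in zip(jumps, amps):
    S_minus = prev_S * np.exp(r * (Tj - prev_t))
    prev_S = S_minus * (1.0 + sig * Dj)
    prev_t = Tj
    S_nodes.append(prev_S)
S_nodes = np.array(S_nodes)

ends = np.append(jumps, T)                   # T_1, ..., T_n, T_{n+1} = T
IT_closed = np.sum(S_nodes * (np.exp(r * (ends - nodes)) - 1.0)) / (r * T)

# Riemann-sum check of (1/T) int_0^T S_u du on a fine grid
u = np.linspace(0.0, T, 200001)
j = np.searchsorted(jumps, u, side="right")  # number of jumps up to u
S_u = S_nodes[j] * np.exp(r * (u - nodes[j]))
IT_riemann = np.sum((S_u[1:] + S_u[:-1]) * np.diff(u)) / 2.0 / T
```

Only the jump times and amplitudes enter this computation, which is the point made above: the Monte-Carlo estimator of the Delta needs no simulation beyond $(T_i, \Delta_i)$.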

After simulating trajectories of the specified diffusion, we can compute the Malliavin weights by (28), (33) and (35). Note that we only need to know the amplitudes and the times of the jumps to build a Monte-Carlo estimator of the Delta.