
Volume 2011, Article ID 354171, 28 pages, doi:10.1155/2011/354171

Research Article

Sample-Path Large Deviations in Credit Risk

V. J. G. Leijdekker,¹,² M. R. H. Mandjes,¹,³,⁴ and P. J. C. Spreij¹

¹ Korteweg-de Vries Institute for Mathematics, University of Amsterdam, P.O. Box 94248, 1090 GE Amsterdam, The Netherlands
² ABN AMRO, HQ2057, Gustav Mahlerlaan 10, 1082 PP Amsterdam, The Netherlands
³ EURANDOM, P.O. Box 513, 5600 MB Eindhoven, The Netherlands
⁴ CWI, P.O. Box 94079, 1090 GB Amsterdam, The Netherlands

Correspondence should be addressed to P. J. C. Spreij, spreij@uva.nl

Received 5 July 2011; Accepted 20 September 2011

Academic Editor: Ying U. Hu

Copyright © 2011 V. J. G. Leijdekker et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

The event of large losses plays an important role in credit risk. As these large losses are typically rare, and portfolios usually consist of a large number of positions, large deviation theory is the natural tool to analyze the tail asymptotics of the probabilities involved. We first derive a sample-path large deviation principle (LDP) for the portfolio's loss process, which enables the computation of the logarithmic decay rate of the probabilities of interest. In addition, we derive exact asymptotic results for a number of specific rare-event probabilities, such as the probability of the loss process exceeding some given function.

1. Introduction

For financial institutions, such as banks and insurance companies, it is of crucial importance to accurately assess the risk of their portfolios. These portfolios typically consist of a large number of obligors, such as mortgages, loans, or insurance policies, and therefore it is computationally infeasible to treat each individual object in the portfolio separately. As a result, attention has shifted to measures that characterize the risk of the portfolio as a whole; see, for example, [1] for general principles concerning managing credit risk. The best-known metric is the so-called value at risk (see [2]), which measures the minimum amount of money that can be lost with α percent certainty over some given period. Several other measures have been proposed, such as economic capital, the risk-adjusted return on capital (RAROC), or expected shortfall, which is a coherent risk measure [3]. Each of these measurements is applicable to market risk as well as credit risk. Measures such as loss-given


default (LGD) and exposure at default (EAD) are measures that purely apply to credit risk. These and other measures are discussed in detail in, for example, [4].

The currently existing methods mainly focus on the distribution of the portfolio loss up to a given point in time (e.g., one year into the future). It can be argued, however, that in many situations it makes more sense to use probabilities that involve the cumulative loss process, say {L(t) : t ≥ 0}. Highly relevant, for instance, is the event that L(·) ever exceeds a given function ζ(·) within a certain time window, for example, between now and one year ahead, that is, an event of the type

{∃ t ≤ T : L(t) ≥ ζ(t)}.  (1.1)

It is clear that measures of the latter type are intrinsically harder to analyze, as it no longer suffices to know the marginal distribution of the loss process at a given point in time; for instance, the event (1.1) actually corresponds to the union of the events {L(t) ≥ ζ(t)} for t ≤ T, and its probability will depend on the law of L(·) as a process on [0, T].

In line with the remarks made above, earlier papers on applications of large-deviation theory to credit risk mainly address the asymptotics of the distribution of the loss process at a single point in time; see, for example, [5, 6]. The former paper considers, in addition, the probability that the increments of the loss process exceed a certain level.

Other approaches to quantifying the tail distribution of the losses have been taken by [7], who use extreme-value theory (see [8] for a background), and by [9, 10], where the authors consider saddle point approximations to the tails of the loss distribution. Numerical and simulation techniques for credit risk can be found in, for example, [11]. The first contribution of our work concerns a so-called sample-path large deviation principle (LDP) for the average cumulative losses for large portfolios. Loosely speaking, such an LDP means that, with L_n(·) denoting the loss process when n obligors are involved, we can compute the logarithmic asymptotics (for n large) of the average (or normalized) loss process L_n(·)/n being in a set of trajectories A:

lim_{n→∞} (1/n) log P( (1/n) L_n(·) ∈ A );  (1.2)

we could, for instance, pick a set A that corresponds to the event (1.1). Most of the sample-path LDPs that have been developed so far involve stochastic processes with independent or nearly independent increments; see, for instance, the results by Mogul'skiĭ for random walks [12], de Acosta for Lévy processes [13], and Chang [14] for weakly correlated processes;

results for processes with a stronger correlation structure are restricted to special classes of processes, such as Gaussian processes; see, for example, [15]. It is observed that our loss process is not covered by these results, and therefore new theory had to be developed. The proof of our LDP relies on "classical" large deviation results, such as Cramér's theorem, Sanov's theorem, and Mogul'skiĭ's theorem; in addition, the concept of epi-convergence [16] is relied upon.

Our second main result focuses specifically on the event (1.1) of ever, before some time horizon T, exceeding a given barrier function ζ(·). Whereas we so far considered the inherently imprecise logarithmic asymptotics of the type displayed in (1.2), we can now compute so-called exact asymptotics: we identify an explicit function f(n) such that f(n)/p_n → 1 as n → ∞, where p_n is the probability of our interest. As is known from the literature, it is in general substantially harder to find exact asymptotics than logarithmic asymptotics.


The proof of our result uses the fact that, after discretizing time, the contribution of just a single time epoch dominates, in the sense that there is a t* such that

P( (1/n) L_n(t*) ≥ ζ(t*) ) / p_n → 1,  with  p_n := P( ∃t : (1/n) L_n(t) ≥ ζ(t) ).  (1.3)

This t* can be interpreted as the most likely epoch of exceeding ζ(·).

Turning back to the setting of credit risk, both of the results we present are derived in a setup where all obligors in the portfolio are i.i.d., in the sense that they behave independently and are stochastically identical. A third contribution of our work concerns a discussion of how to extend our results to cases where the obligors are dependent, meaning that they, in the terminology of [5], react to the same "macroenvironmental" variable, conditional upon which they are independent again. We also treat the case of obligor heterogeneity: we show how to extend the results to the situation of multiple classes of obligors.

The paper is structured as follows. In Section 2 we introduce the loss process and describe the scaling under which we work. We also recapitulate a couple of relevant large-deviation results. Our first main result, the sample-path LDP for the cumulative loss process, is stated and proved in Section 3. Special attention is paid to easily checkable sufficient conditions under which this result holds. As argued above, the LDP is a generally applicable result, as it yields an expression for the decay rate of any probability that depends on the entire sample path. Then, in Section 4, we derive the exact asymptotic behavior of the probability that, at some point in time, the loss exceeds a certain threshold, that is, the asymptotics of p_n as defined in (1.3). After this we derive a similar result for the increments of the loss process. Finally, in Section 5, we discuss a number of possible extensions of the results we have presented. Special attention is given to allowing dependence between obligors, and to different classes of obligors, each having its own specific distributional properties. In the appendix we have collected a number of results from the literature in order to keep the exposition of the paper self-contained.

2. Notation and Definitions

The portfolios of banks and insurance companies are typically very large: they may consist of several thousands of assets. It is therefore computationally impossible to estimate the risks for each element, or obligor, in a portfolio. This explains why one attempts to assess the aggregated losses resulting from defaults (for example, bankruptcies, failures to repay loans, or insurance claims) for the portfolio as a whole. The risk in the portfolio is then measured through this aggregate loss process. In the following sections we introduce the loss process and the portfolio constituents more formally.

2.1. Loss Process

Let (Ω, F, P) be the probability space on which all random variables below are defined. We assume that the portfolio consists of n obligors, and we denote the default time of obligor i


by τ_i. Further, we write U_i for the loss incurred on a default of obligor i. We then define the cumulative loss process L_n as

L_n(t) := Σ_{i=1}^{n} U_i Z_i(t),  (2.1)

where Z_i(t) = 1_{{τ_i ≤ t}} is the default indicator of obligor i. We assume that the loss amounts U_i ≥ 0 are i.i.d., and that the default times τ_i ≥ 0 are i.i.d. as well. In addition, we assume that the loss amounts and the default times are mutually independent. In the remainder of this paper, U and Z(t) denote generic random variables with the same distribution as the U_i and Z_i(t), respectively.
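To make the setup concrete, the following minimal simulation sketch generates one path of L_n on the grid {1, . . . , N}. It is not taken from the paper; the grid size, the default probabilities, and the exponential loss distribution are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def simulate_loss_path(n, N, p, loss_sampler):
    """One path of the cumulative loss process L_n on {1, ..., N}.

    p[j-1] = P(tau = j); residual mass 1 - sum(p) means 'no default by N'.
    """
    p = np.asarray(p, dtype=float)
    # default epoch per obligor: 1..N, or 0 for 'no default by time N'
    tau = rng.choice(np.arange(N + 1), size=n,
                     p=np.concatenate(([1.0 - p.sum()], p)))
    U = loss_sampler(n)               # i.i.d. losses, independent of tau
    defaulted = tau > 0
    return np.array([U[defaulted & (tau <= t)].sum() for t in range(1, N + 1)])

# illustrative parameters: N = 10 periods, n = 1000 obligors, Exp(1) losses
N, n = 10, 1000
p = np.full(N, 0.05)
path = simulate_loss_path(n, N, p, lambda m: rng.exponential(1.0, m))
print(path / n)                       # normalized loss path L_n(t)/n
```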

Throughout this paper we assume that the defaults can only occur on the time grid ℕ; in Section 5 we discuss how to deal with default epochs taking continuous values. In some cases we explicitly consider a finite time grid, say {1, 2, . . . , N}. The extension of the results we derive to a more general grid {0 < t_1 < t_2 < · · · < t_N} is completely trivial. The distribution of the default times is, for each j, denoted by

p_j := P(τ = j),  (2.2)

F_j := P(τ ≤ j) = Σ_{i=1}^{j} p_i.  (2.3)

Given the distribution of the loss amounts U_i and the default times τ_i, our goal is to investigate the loss process. Many of the techniques that have been developed so far first fix a time T (typically one year), and then study stochastic properties of the cumulative loss at time T, that is, L_n(T). Measures such as value at risk and economic capital are examples of these "one-dimensional" characteristics. Many interesting measures, however, involve properties of the entire path of the loss process rather than those of just one time epoch, examples being the probability that L_n(·) exceeds some barrier function ζ(·) for some t smaller than the horizon T, or the probability that the loss stays above a certain level during a certain period. The event corresponding to the former probability might require the bank to attract more capital, or worse, it might lead to the bankruptcy of this bank. The event corresponding to the latter probability might also lead to bankruptcy, as a long period of stress may have substantial negative implications. We conclude that having a handle on these probabilities is a useful instrument when assessing the risks involved in the bank's portfolios.

As mentioned above, the number of obligors n in a portfolio is typically very large, thus prohibiting analyses based on the specific properties of the individual obligors. Instead, it is more natural to study the asymptotic behavior of the loss process as n → ∞. One could rely on a central-limit-theorem-based approach, but in this paper we focus on rare events, using the theory of large deviations.

In the following subsection we provide some background on large-deviation theory, and we define a number of quantities that are used in the remainder of this paper.


2.2. Large Deviation Principle

In this section we give a short introduction to the theory of large deviations. Here, in an abstract setting, the limiting behavior of a family of probability measures {μ_n} on the Borel sets B of a complete separable metric space (a Polish space) (X, d) is studied as n → ∞. This behavior is referred to as the large deviation principle (LDP), and it is characterized in terms of a rate function. The LDP states lower and upper exponential bounds for the value that the measures μ_n assign to sets in the topological space X. Below we state the definition of the rate function, which has been taken from [17].

Definition 2.1. A rate function is a lower semicontinuous mapping I : X → [0, ∞], meaning that for all α ∈ [0, ∞) the level set Ψ_I(α) := {x | I(x) ≤ α} is a closed subset of X. A good rate function is a rate function for which all the level sets are compact subsets of X.

With the definition of the rate function in mind, we state the large deviation principle for the sequence of measures {μ_n}.

Definition 2.2. We say that {μ_n} satisfies the large deviation principle with rate function I if

(i) (upper bound) for any closed set F ⊆ X,

lim sup_{n→∞} (1/n) log μ_n(F) ≤ −inf_{x∈F} I(x);  (2.4)

(ii) (lower bound) for any open set G ⊆ X,

lim inf_{n→∞} (1/n) log μ_n(G) ≥ −inf_{x∈G} I(x).  (2.5)

We say that a family of random variables X = {X_n}, with values in X, satisfies an LDP with rate function I_X(·) if and only if the laws {μ_{X_n}} satisfy an LDP with rate function I_X, where μ_{X_n} is the law of X_n.

The so-called Fenchel-Legendre transform plays an important role in expressions for the rate function. Let, for an arbitrary random variable X, the logarithmic moment generating function (sometimes referred to as the cumulant generating function) be given by

Λ_X(θ) := log M_X(θ) = log E[e^{θX}] ≤ ∞,  (2.6)

for θ ∈ ℝ. The Fenchel-Legendre transform Λ*_X of Λ_X is then defined by

Λ*_X(x) := sup_θ {θx − Λ_X(θ)}.  (2.7)

We sometimes say that Λ*_X is the Fenchel-Legendre transform of X.


The LDP of Definition 2.2 provides upper and lower bounds for the log-asymptotic behavior of the measures μ_n. In the case of the loss process (2.1), fixed at some time t, we can easily establish an LDP by an application of Cramér's theorem (Theorem A.1). This theorem yields that the rate function is given by Λ*_{UZ(t)}(·), the Fenchel-Legendre transform of the random variable UZ(t).

The results we present in this paper involve either Λ*_U(·) (Section 3), which corresponds to the i.i.d. loss amounts U_i only, or Λ*_{UZ(t)}(·) (Section 4), which corresponds to those loss amounts up to time t. In the following section we derive an LDP for the whole path of the loss process, which can be considered as an extension of Cramér's theorem.
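Closed-form expressions for Λ*_X are rarely available, but since (2.7) is a one-dimensional concave maximization it is straightforward to evaluate numerically. A minimal sketch follows; the Exp(1) loss distribution and the default probability F_t = 0.3 are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def legendre_transform(log_mgf, x, lo=-50.0, hi=50.0):
    """Numerically evaluate Lambda*(x) = sup_theta { theta * x - Lambda(theta) }."""
    res = minimize_scalar(lambda th: -(th * x - log_mgf(th)),
                          bounds=(lo, hi), method="bounded")
    return -res.fun

# Illustration: U ~ Exp(1) and default by time t with probability F_t, so that
# Lambda_{UZ(t)}(theta) = log((1 - F_t) + F_t / (1 - theta)) for theta < 1.
F_t = 0.3
log_mgf_UZ = lambda th: np.log((1 - F_t) + F_t / (1 - th))

# Restrict the search to theta < 1, where the MGF of Exp(1) is finite.
print(legendre_transform(log_mgf_UZ, 0.8, hi=0.999))   # = Lambda*_{UZ(t)}(0.8)
```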

3. A Sample-Path Large Deviation Result

In the previous section we introduced the large deviation principle. In this section we derive a sample-path LDP for the cumulative loss process (2.1). We consider the exponential decay of the probability that the path of the loss process L_n(·) is in some set A, as the size n of the portfolio tends to infinity.

3.1. Assumptions

In order to state a sample-path LDP, we need to define the topology that we work with. To this end we define the space S of all nonnegative and nondecreasing functions on T_N = {1, 2, . . . , N},

S := { f : T_N → ℝ₊ | 0 ≤ f(i) ≤ f(i+1) for i < N }.  (3.1)

This set is identified with the space ℝ^N_≤ := {x ∈ ℝ^N | 0 ≤ x_i ≤ x_{i+1} for i < N}. The topology on this space is the one induced by the supremum norm

‖f‖ = max_{i=1,...,N} f(i).  (3.2)

As we work on a finite-dimensional space, the choice of the norm is not important, as any other norm on S would result in the same topology. We use the supremum norm, as this is convenient in some of the proofs in this section.

We identify the space of all probability measures on T_N with the simplex Φ:

Φ := { ϕ ∈ ℝ^N | Σ_{i=1}^{N} ϕ_i = 1, ϕ_i ≥ 0 for i ≤ N }.  (3.3)

For a given ϕ ∈ Φ we denote the cumulative distribution function by ψ, that is,

ψ_i = Σ_{j=1}^{i} ϕ_j, for i ≤ N;  (3.4)

note that ψ ∈ S and ψ_N = 1.


Furthermore, we consider the loss amounts U_i as introduced in Section 2.1, a ϕ ∈ Φ with cdf ψ, and a sequence of ϕ^n ∈ Φ, each with cdf ψ^n, such that ϕ^n → ϕ as n → ∞, meaning that ϕ^n_i → ϕ_i for all i ≤ N. We define two families of measures μ_n and ν_n:

μ_n(A) := P( ( (1/n) Σ_{j=1}^{⌊ψ_i n⌋} U_j )_{i=1}^{N} ∈ A ),  (3.5)

ν_n(A) := P( ( (1/n) Σ_{j=1}^{⌊ψ^n_i n⌋} U_j )_{i=1}^{N} ∈ A ),  (3.6)

where A ∈ B := B(ℝ^N) and ⌊x⌋ := sup{k ∈ ℕ | k ≤ x}. Below we state an assumption under which the main result of this section holds. This assumption refers to the definition of exponential equivalence, which can be found in Definition A.2.

Assumption 1. Let ϕ, ϕ^n be as above. We assume that ϕ^n → ϕ and, moreover, that the measures μ_n and ν_n, as defined in (3.5) and (3.6), respectively, are exponentially equivalent.

From Assumption 1 we learn that the differences between the two measures μ_n and ν_n go to zero at a "superexponential" rate. In the next section, in Lemma 3.3, we provide a sufficient condition, easy to check, under which this assumption holds.

3.2. Main Result

The assumptions and definitions in the previous sections allow us to state the main result of this section. We show that the average loss process L_n(·)/n satisfies a large deviation principle as in Definition 2.2. It is noted that various expressions for the associated rate function can be found. Directly from the multivariate version of Cramér's theorem [17, Section 2.2.2], it is seen that, under appropriate conditions, a large deviations principle applies with rate function

I(x) = sup_{θ∈ℝ^N} { Σ_{j=1}^{N} θ_j x_j − log E exp( Σ_{j=1}^{N} θ_j V_j ) },  (3.7)

where V_j is a generic random variable distributed as U_i Z_i(j). In this paper we choose to work with another rate function, which has the important advantage that it gives considerably more precise insight into the system conditional on the rare event of interest occurring. We return to this issue in greater detail in Remark 3.6.

The large deviations principle allows us to approximate a large variety of probabilities related to the average loss process, such as the probability that the loss process stays above a certain time-dependent level, or the probability that the loss process exceeds a certain level before some given point in time.


Theorem 3.1. With Φ as in (3.3) and under Assumption 1, the average loss process L_n(·)/n satisfies an LDP with rate function I_{U,p}. Here, for x ∈ ℝ^N_≤, I_{U,p} is given by

I_{U,p}(x) := inf_{ϕ∈Φ} Σ_{i=1}^{N} ϕ_i [ log( ϕ_i / p_i ) + Λ*_U( Δx_i / ϕ_i ) ],  (3.8)

with Δx_i := x_i − x_{i−1} and x_0 := 0.

Observing the rate function of this sample-path LDP, we see that the effects of the default times τ_i and the loss amounts U_i are nicely decoupled into the two terms in the rate function: one involving the distribution of the default epoch τ (the "Sanov term", cf. [17, Theorem 6.2.10]), the other involving the incurred loss size U (the "Cramér term", cf. [17, Theorem 2.2.3]). Observe that we recover Cramér's theorem by considering a time grid consisting of a single time point, which means that Theorem 3.1 extends Cramér's result. We also remark that, informally speaking, the optimizing ϕ ∈ Φ in (3.8) can be interpreted as the "most likely" distribution of the loss epoch, given that the path of L_n(·)/n is close to x.

As a sanity check we calculate the value of the rate function I_{U,p}(x̄) for the "average path" of L_n(·)/n, given by x̄_j = E[U] F_j for j ≤ N, where F_j is the cumulative distribution of the default times as given in (2.3); this path should give a rate function equal to 0. To see this, we first remark that clearly I_{U,p}(x) ≥ 0 for all x, since both the Sanov term and the Cramér term are nonnegative. This yields the following chain of inequalities:

0 ≤ I_{U,p}(x̄) = inf_{ϕ∈Φ} Σ_{i=1}^{N} ϕ_i [ log( ϕ_i / p_i ) + Λ*_U( E[U] p_i / ϕ_i ) ]

≤ Σ_{i=1}^{N} p_i [ log( p_i / p_i ) + Λ*_U( E[U] p_i / p_i ) ]  (choosing ϕ = p)

= Σ_{i=1}^{N} p_i Λ*_U(E[U]) = Λ*_U(E[U]) = 0,  (3.9)

where we have used that, for E[U] < ∞, it always holds that Λ*_U(E[U]) = 0 (cf. [17, Lemma 2.2.5]). The inequalities above thus show that if the "average path" x̄ lies in the set of interest, then the corresponding decay rate is 0, meaning that the probability of interest decays subexponentially.
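An explicit expression for (3.8) is usually not available, but the infimum over Φ is a low-dimensional constrained minimization that standard numerical optimizers handle well. A minimal sketch, assuming Exp(1) losses and a three-point grid (illustrative choices, not from the paper), which also reproduces the sanity check above by returning approximately 0 for the average path:

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

def lambda_star_U(x, log_mgf, hi=0.999):
    """Fenchel-Legendre transform Lambda*_U(x) by 1-d maximization, cf. (2.7)."""
    res = minimize_scalar(lambda th: -(th * x - log_mgf(th)),
                          bounds=(-50.0, hi), method="bounded")
    return -res.fun

def rate_function(x, p, log_mgf):
    """Evaluate I_{U,p}(x) of (3.8) by minimizing over the simplex Phi."""
    N = len(x)
    dx = np.diff(np.concatenate(([0.0], x)))          # increments Delta x_i
    def objective(phi):
        return sum(phi[i] * (np.log(phi[i] / p[i])
                             + lambda_star_U(dx[i] / phi[i], log_mgf))
                   for i in range(N))
    cons = ({"type": "eq", "fun": lambda phi: phi.sum() - 1.0},)
    res = minimize(objective, x0=np.asarray(p), bounds=[(1e-6, 1.0)] * N,
                   constraints=cons, method="SLSQP")
    return res.fun

# Sanity check: with U ~ Exp(1) (E U = 1, Lambda_U(theta) = -log(1 - theta)),
# the average path x_j = E[U] F_j must yield a rate of approximately 0.
p = np.array([0.2, 0.3, 0.5])        # default-time distribution on {1, 2, 3}
log_mgf = lambda th: -np.log1p(-th)  # valid for theta < 1
print(rate_function(np.cumsum(p), p, log_mgf))
```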

In the proof of Theorem 3.1 we use the following lemma, which is related to the concept of epi-convergence, extensively discussed in [16]. After this proof, in which we use a "bare hands" approach, we discuss alternative, more sophisticated ways to establish Theorem 3.1.

Lemma 3.2. Let f_n, f : D → ℝ, with D ⊂ ℝ^m compact. Assume that for all x ∈ D and for all x_n → x in D we have

lim sup_{n→∞} f_n(x_n) ≤ f(x).  (3.10)

Then we have

lim sup_{n→∞} sup_{x∈D} f_n(x) ≤ sup_{x∈D} f(x).  (3.11)

Proof. Let f̄_n = sup_{x∈D} f_n(x) and f̄ = sup_{x∈D} f(x). Consider a subsequence f̄_{n_k} → lim sup_{n→∞} f̄_n. Let ε > 0 and choose x_{n_k} such that f̄_{n_k} < f_{n_k}(x_{n_k}) + ε for all k. By the compactness of D, there exists a limit point x ∈ D such that along a subsequence x_{n_{k_j}} → x. By the hypothesis (3.10) we then have

lim sup_{n→∞} f̄_n ≤ lim sup_{j→∞} f_{n_{k_j}}( x_{n_{k_j}} ) + ε ≤ f(x) + ε ≤ f̄ + ε.  (3.12)

Let ε ↓ 0 to obtain the result.

Proof of Theorem 3.1. We start by establishing an identity from which we show both bounds. We need to calculate the probability

P( (1/n) L_n(·) ∈ A ) = P( ( (1/n) L_n(1), . . . , (1/n) L_n(N) ) ∈ A ),  (3.13)

for certain A ∈ B. For each point j on the time grid T_N we record in the "default counter" K_{n,j} ∈ {0, . . . , n} the number of defaults at time j:

K_{n,j} := #{ i ∈ {1, . . . , n} | τ_i = j }.  (3.14)

These counters allow us to rewrite the probability as

P( (1/n) L_n(·) ∈ A ) = E[ P( ( (1/n) Σ_{j=1}^{K_{n,1}} U_j, . . . , (1/n) Σ_{j=1}^{K_{n,1}+···+K_{n,N}} U_j ) ∈ A | K_n ) ]

= Σ_{k_1+···+k_N=n} P( K_{n,i} = k_i for i ≤ N ) × P( ( (1/n) Σ_{j=1}^{m_i} U_j )_{i=1}^{N} ∈ A ),  (3.15)

where m_i := Σ_{j=1}^{i} k_j and the loss amounts U_j have been ordered, such that the first U_j correspond to the losses at time 1, and so forth.

Upper Bound

Starting from Equality (3.15), let us first establish the upper bound of the LDP. To this end, let F be a closed set and consider the decay rate

lim sup_{n→∞} (1/n) log P( (1/n) L_n(·) ∈ F ).  (3.16)


An application of Lemma A.3 together with (3.15) implies that (3.16) equals

lim sup_{n→∞} max_{k_i : Σ_i k_i = n} (1/n) [ log P( K_{n,i}/n = k_i/n, i ≤ N ) + log P( ( (1/n) Σ_{j=1}^{m_i} U_j )_{i=1}^{N} ∈ F ) ].  (3.17)

Next, we replace the dependence on n in the maximization by maximizing over the set Φ as in (3.3). In addition, we replace the k_i in (3.17) by

ϕ_{n,i} := ( ⌊ψ_i n⌋ − ⌊ψ_{i−1} n⌋ ) / n,  (3.18)

where the ψ_i have been defined in (3.4). As a result, (3.16) reads

lim sup_{n→∞} sup_{ϕ∈Φ} (1/n) [ log P( K_{n,i}/n = ϕ_{n,i}, i ≤ N ) + log P( ( (1/n) Σ_{j=1}^{⌊ψ_i n⌋} U_j )_{i=1}^{N} ∈ F ) ].  (3.19)

Note that (3.16) equals (3.19), since for each n and each vector (k_1, . . . , k_N) ∈ ℕ^N with Σ_{i=1}^{N} k_i = n there is a ϕ ∈ Φ with ϕ_i = k_i/n. On the other hand, by rounding off the ϕ_i we only cover outcomes of this form.

We can bound the first term in this expression from above using Lemma A.5, which implies that the decay rate (3.16) is majorized by

lim sup_{n→∞} sup_{ϕ∈Φ} [ −Σ_{i=1}^{N} ϕ_{n,i} log( ϕ_{n,i} / p_i ) + (1/n) log P( ( (1/n) Σ_{j=1}^{⌊ψ_i n⌋} U_j )_{i=1}^{N} ∈ F ) ].  (3.20)

Now note that calculating the lim sup in the previous expression is not straightforward, due to the supremum over Φ. The idea is therefore to interchange the supremum and the lim sup by using Lemma 3.2. To apply this lemma we first introduce

f_n(ϕ) := −Σ_{i=1}^{N} ϕ_{n,i} log( ϕ_{n,i} / p_i ) + (1/n) log P( ( (1/n) Σ_{j=1}^{⌊ψ_i n⌋} U_j )_{i=1}^{N} ∈ F ),

f(ϕ) := −inf_{x∈F} Σ_{i=1}^{N} ϕ_i [ log( ϕ_i / p_i ) + Λ*_U( Δx_i / ϕ_i ) ],  (3.21)


and note that Φ is a compact subset of ℝ^N. We have to show that for any sequence ϕ^n → ϕ Condition (3.10) is satisfied, that is,

lim sup_{n→∞} f_n(ϕ^n) ≤ f(ϕ),  (3.22)

such that the conditions of Lemma 3.2 are satisfied. We observe, with ϕ^n_{n,i} as in (3.18) and ψ^n_i as in (3.4) with ϕ replaced by ϕ^n, that

lim sup_{n→∞} f_n(ϕ^n) ≤ lim sup_{n→∞} ( −Σ_{i=1}^{N} ϕ^n_{n,i} log( ϕ^n_{n,i} / p_i ) ) + lim sup_{n→∞} (1/n) log P( ( (1/n) Σ_{j=1}^{⌊ψ^n_i n⌋} U_j )_{i=1}^{N} ∈ F ).  (3.23)

Since ϕ^n → ϕ, and since ϕ^n_{n,i} differs by at most 1/n from ϕ^n_i, it immediately follows that ϕ^n_{n,i} → ϕ_i. For an arbitrary continuous function g we thus have g(ϕ^n_{n,i}) → g(ϕ_i). This implies that

lim sup_{n→∞} ( −Σ_{i=1}^{N} ϕ^n_{n,i} log( ϕ^n_{n,i} / p_i ) ) = −Σ_{i=1}^{N} ϕ_i log( ϕ_i / p_i ).  (3.24)

Inequality (3.22) is established once we have shown that

lim sup_{n→∞} (1/n) log P( ( (1/n) Σ_{j=1}^{⌊ψ^n_i n⌋} U_j )_{i=1}^{N} ∈ F ) ≤ −inf_{x∈F} Σ_{i=1}^{N} ϕ_i Λ*_U( Δx_i / ϕ_i ).  (3.25)

By Assumption 1, we can exploit the exponential equivalence, together with Theorem A.7, to see that (3.25) holds as soon as we have that

lim sup_{n→∞} (1/n) log P( ( (1/n) Σ_{j=1}^{⌊ψ_i n⌋} U_j )_{i=1}^{N} ∈ F ) ≤ −inf_{x∈F} Σ_{i=1}^{N} ϕ_i Λ*_U( Δx_i / ϕ_i ).  (3.26)

But this inequality is a direct consequence of Lemma A.6, and we conclude that (3.25) holds. Combining (3.24) with (3.25) yields

lim sup_{n→∞} f_n(ϕ^n) ≤ −Σ_{i=1}^{N} ϕ_i log( ϕ_i / p_i ) − inf_{x∈F} Σ_{i=1}^{N} ϕ_i Λ*_U( Δx_i / ϕ_i )

= −inf_{x∈F} Σ_{i=1}^{N} ϕ_i [ log( ϕ_i / p_i ) + Λ*_U( Δx_i / ϕ_i ) ] = f(ϕ),  (3.27)


so that indeed the conditions of Lemma 3.2 are satisfied, and therefore

lim sup_{n→∞} sup_{ϕ∈Φ} f_n(ϕ) ≤ sup_{ϕ∈Φ} f(ϕ) = sup_{ϕ∈Φ} ( −inf_{x∈F} Σ_{i=1}^{N} ϕ_i [ log( ϕ_i / p_i ) + Λ*_U( Δx_i / ϕ_i ) ] )

= −inf_{x∈F} inf_{ϕ∈Φ} Σ_{i=1}^{N} ϕ_i [ log( ϕ_i / p_i ) + Λ*_U( Δx_i / ϕ_i ) ] = −inf_{x∈F} I_{U,p}(x).  (3.28)

This establishes the upper bound of the LDP.

Lower Bound

To complete the proof, we need to establish the corresponding lower bound. Let G be an open set and consider

lim inf_{n→∞} (1/n) log P( (1/n) L_n(·) ∈ G ).  (3.29)

We apply Equality (3.15) to this lim inf, with A replaced by G, and we observe that this sum is larger than the largest term in the sum, which shows that (where we directly switch to the enlarged space Φ) the decay rate (3.29) majorizes

lim inf_{n→∞} sup_{ϕ∈Φ} (1/n) [ log P( (1/n) K_{n,i} = ϕ_{n,i}, i ≤ N ) + log P( ( (1/n) Σ_{j=1}^{⌊ψ_i n⌋} U_j )_{i=1}^{N} ∈ G ) ].  (3.30)

Observe that for any sequence of functions h_n(·) it holds that lim inf_n sup_x h_n(x) ≥ lim inf_n h_n(x̃) for all x̃, so that we obtain the evident inequality

lim inf_{n→∞} sup_x h_n(x) ≥ sup_x lim inf_{n→∞} h_n(x).  (3.31)

This observation yields that the decay rate of interest (3.29) is not smaller than

sup_{ϕ∈Φ} ( lim inf_{n→∞} (1/n) log P( (1/n) K_{n,i} = ϕ_{n,i}, i ≤ N ) + lim inf_{n→∞} (1/n) log P( ( (1/n) Σ_{j=1}^{⌊ψ_i n⌋} U_j )_{i=1}^{N} ∈ G ) ),  (3.32)


where we have used that lim inf_n (x_n + y_n) ≥ lim inf_n x_n + lim inf_n y_n. We apply Lemma A.5 to the first lim inf in (3.32), leading to

lim inf_{n→∞} (1/n) log P( (1/n) K_{n,i} = ϕ_{n,i}, i ≤ N ) ≥ lim inf_{n→∞} ( −Σ_{i=1}^{N} ϕ_{n,i} log( ϕ_{n,i} / p_i ) − (N/n) log(n+1) ) = −Σ_{i=1}^{N} ϕ_i log( ϕ_i / p_i ),  (3.33)

since log(n+1)/n → 0 as n → ∞. The second lim inf in (3.32) can be bounded from below by an application of Lemma A.6. Since G is an open set, this lemma yields

lim inf_{n→∞} (1/n) log P( ( (1/n) Σ_{j=1}^{⌊ψ_i n⌋} U_j )_{i=1}^{N} ∈ G ) ≥ −inf_{x∈G} Σ_{i=1}^{N} ϕ_i Λ*_U( (x_i − x_{i−1}) / ϕ_i ).  (3.34)

Upon combining (3.33) and (3.34), we see that we have established the lower bound:

lim inf_{n→∞} (1/n) log P( (1/n) L_n(·) ∈ G ) ≥ −inf_{ϕ∈Φ} inf_{x∈G} Σ_{i=1}^{N} ϕ_i [ log( ϕ_i / p_i ) + Λ*_U( (x_i − x_{i−1}) / ϕ_i ) ] = −inf_{x∈G} I_{U,p}(x).  (3.35)

This completes the proof of the theorem.

In order to apply Theorem 3.1, one needs to check that Assumption 1 holds; in general, this can be a quite cumbersome exercise. In Lemma 3.3 below, we provide a sufficient, easy-to-check condition under which this assumption holds.

Lemma 3.3. Assume that Λ_U(θ) < ∞ for all θ ∈ ℝ. Then Assumption 1 holds.

Remark 3.4. The assumption made in Lemma 3.3, that is, that the logarithmic moment generating function is finite everywhere, is a common assumption in large deviations theory. We remark that, for instance, Mogul'skiĭ's theorem [17, Theorem 5.1.2] also relies on this assumption; this theorem is a sample-path LDP for

Y_n(t) := (1/n) Σ_{i=1}^{⌊nt⌋} X_i,  (3.36)

on the interval [0, 1]. In Mogul'skiĭ's result the X_i are assumed to be i.i.d.; in our model we have that L_n(t)/n = (1/n) Σ_{i=1}^{n} U_i Z_i(t), so that our sample-path result clearly does not fit into the setup of Mogul'skiĭ's theorem.


Remark 3.5. In Lemma 3.3 it was assumed that Λ_U(θ) < ∞ for all θ ∈ ℝ, but an equivalent condition is

lim_{x→∞} Λ*_U(x)/x = ∞.  (3.37)

In other words, this alternative condition can be used instead of the condition stated in Lemma 3.3. To see that both requirements are equivalent, make the following observations. Lemma A.4 states that (3.37) is implied by the assumption in Lemma 3.3. In order to prove the converse, assume that (3.37) holds, and that there is a 0 < θ_0 < ∞ for which Λ_U(θ_0) = ∞. Without loss of generality we can assume that Λ_U(θ) is finite for θ < θ_0 and infinite for θ ≥ θ_0. For x > E[U], the Fenchel-Legendre transform is then given by

Λ*_U(x) = sup_{0<θ<θ_0} { θx − Λ_U(θ) }.  (3.38)

Since U ≥ 0 and Λ_U(0) = 0, we know that Λ_U(θ) ≥ 0 for 0 < θ < θ_0, and hence

Λ*_U(x)/x ≤ θ_0,  (3.39)

which contradicts the assumption that this ratio tends to infinity as x → ∞, thus establishing the equivalence.

Proof of Lemma 3.3. Let ϕ^n → ϕ for some sequence of ϕ^n ∈ Φ and ϕ ∈ Φ. We introduce two families of random vectors {Y_n} and {Z_n},

Y_n := ( (1/n) Σ_{j=1}^{⌊ψ_i n⌋} U_j )_{i=1}^{N},  Z_n := ( (1/n) Σ_{j=1}^{⌊ψ^n_i n⌋} U_j )_{i=1}^{N},  (3.40)

which have laws μ_n and ν_n, respectively, as in (3.5)-(3.6). Since ϕ^n → ϕ, we know that for any ε > 0 there exists an M_ε such that for all n > M_ε we have max_i |ϕ^n_i − ϕ_i| < ε/N, and thus |ψ^n_i − ψ_i| < ε.

We have to show that for any δ > 0,

lim sup_{n→∞} (1/n) log P( ‖Y_n − Z_n‖ > δ ) = −∞.  (3.41)

For i ≤ N, consider the absolute difference between Y_{n,i} and Z_{n,i}, that is,

|Y_{n,i} − Z_{n,i}| = | (1/n) Σ_{j=1}^{⌊ψ_i n⌋} U_j − (1/n) Σ_{j=1}^{⌊ψ^n_i n⌋} U_j |.  (3.42)


Next we have that for any n > M_ε it holds that |nψ_i − nψ^n_i| < nε, which yields for all i the upper bound

| ⌊ψ_i n⌋ − ⌊ψ^n_i n⌋ | < nε + 2,  (3.43)

since the rounded numbers differ by at most 1 from their real counterparts. This means that the difference of the two sums in (3.42) can be bounded by at most ⌈nε⌉ + 2 elements of the U_j, which are for convenience denoted by Ũ_j. Recalling that the U_j are nonnegative, we obtain

max_{i=1,...,N} | (1/n) Σ_{j=1}^{⌊ψ_i n⌋} U_j − (1/n) Σ_{j=1}^{⌊ψ^n_i n⌋} U_j | ≤ (1/n) Σ_{j=1}^{⌈nε⌉+2} Ũ_j.  (3.44)

Next we bound the probability that the difference exceeds δ, using the above inequality:

P( ‖Y_n − Z_n‖ > δ ) ≤ P( (1/n) Σ_{j=1}^{⌈nε⌉+2} Ũ_j > δ ) ≤ ( E[exp(θU_1)] )^{⌈nε⌉+2} e^{−nδθ},  (3.45)

where the last inequality follows from the Chernoff bound [17, Eqn. (2.2.12)], for arbitrary θ > 0. Taking the log of this probability, dividing by n, and taking the lim sup on both sides results in

lim sup_{n→∞} (1/n) log P( ‖Y_n − Z_n‖ > δ ) ≤ ε Λ_{U_1}(θ) − δθ.  (3.46)

By the assumption, Λ_{U_1}(θ) < ∞ for all θ. Thus, letting ε → 0 yields

lim sup_{n→∞} (1/n) log P( ‖Y_n − Z_n‖ > δ ) ≤ −δθ.  (3.47)

As θ was arbitrary, the exponential equivalence follows by letting θ → ∞.

Remark 3.6. Large deviations analysis provides us with insight into the behavior of the system conditional on the rare event under consideration happening. In this remark we compare the insight gained from the rate functions (3.7) and (3.8). We consider the decay rate of the probability of the rare event that the average loss process L_n(·)/n is in the set A, obtained by minimizing the rate function over x ∈ A, where x* denotes the optimizing argument.

Let, for ease, the random vector (U_i Z_i(1), . . . , U_i Z_i(N)) have a density, given by f(y_1, . . . , y_N). Then well-known large deviations reasoning yields that, conditional on the rare event A, the vector (U_i Z_i(1), . . . , U_i Z_i(N)) behaves as being sampled from an exponentially twisted distribution with density

f*(y_1, . . . , y_N) = f(y_1, . . . , y_N) · e^{Σ_{j=1}^{N} θ*_j y_j} / E[exp( Σ_{j=1}^{N} θ*_j V_j )],  (3.48)

where θ* is the optimizing argument in (3.7) with x = x*.


Importantly, the rate function we identified in (3.8) gives more detailed information on the system conditional on being in the rare set A. The default times of the individual obligors are to be sampled from the distribution (ϕ*_1, . . . , ϕ*_N), with ϕ* ∈ Φ the optimizing argument in (3.8), whereas the claim size of an obligor defaulting at time i has density

f*_U(y) = f_U(y) e^{θ*_i y} / E[e^{θ*_i U}],  (3.49)

where f_U(·) denotes the density of U, and

θ*_i := arg sup_θ { θ Δx*_i / ϕ*_i − Λ_U(θ) }.  (3.50)

The rate functions (3.7) and (3.8) are of comparable complexity, as both correspond to an N-dimensional optimization, where (3.8) also involves the evaluation of the Fenchel-Legendre transform Λ*_U(·), which is a single-dimensional maximization of low computational complexity.
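The twisted distributions in (3.48)-(3.50) suggest a natural sampling scheme for the conditional system. A minimal sketch under the assumption of Exp(1) losses, for which twisting with θ < 1 again gives an exponential distribution, now with rate 1 − θ; the vectors phi_star and theta_star below are hypothetical optimizers, in practice obtained from the minimization in (3.8) and from (3.50).

```python
import numpy as np

rng = np.random.default_rng(seed=7)

def sample_conditional_path(n, phi_star, theta_star):
    """Sample one normalized loss path under the change of measure of Remark 3.6:
    default epochs i.i.d. from phi*, and the loss of an obligor defaulting at
    time i drawn from the twisted density f_U(y) e^{theta*_i y} / E e^{theta*_i U}."""
    N = len(phi_star)
    tau = rng.choice(np.arange(1, N + 1), size=n, p=phi_star)
    # Exp(1) twisted by theta is Exp(1 - theta); numpy takes the mean as scale.
    U = rng.exponential(1.0 / (1.0 - theta_star[tau - 1]))
    return np.array([U[tau <= t].sum() for t in range(1, N + 1)]) / n

phi_star = np.array([0.1, 0.3, 0.6])     # hypothetical optimizer of (3.8)
theta_star = np.array([0.2, 0.4, 0.5])   # hypothetical tilts from (3.50)
print(sample_conditional_path(10_000, phi_star, theta_star))
```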

We conclude this section with some examples.

Example 3.7. Assume that the loss amounts have finite support, say on the interval [0, u]. Then we clearly have, for θ ≥ 0,

Λ_U(θ) = log E[e^{θU}] ≤ θu < ∞,  (3.51)

while Λ_U(θ) ≤ 0 for θ < 0. So for any distribution with finite support, the assumption of Lemma 3.3 is satisfied, and thus Theorem 3.1 holds. Here the i.i.d. default times τ_i can have an arbitrary discrete distribution on the time grid {1, . . . , N}.

In practical applications, one always chooses a distribution with finite support for the loss amounts, since the exposure to every obligor is finite. Theorem 3.1 thus clearly holds for any realistic model of the loss given default.

An explicit expression for the rate function (3.8), or even for the Fenchel-Legendre transform, is usually not available. On the other hand, one can use numerical optimization techniques to calculate these quantities.

We next present an example to which Lemma 3.3 applies.

Example 3.8. Assume that the loss amount U is measured in a certain unit, and takes the values u, 2u, . . . for some u > 0. Assume that it has a distribution of Poisson type with parameter λ > 0, in the sense that, for i = 0, 1, . . .,

P( U = (i+1)u ) = e^{−λ} λ^i / i!.  (3.52)


It is then easy to check that Λ_U(θ) = θu + λ(e^{θu} − 1), which is finite for all θ. Further calculations yield

Λ*_U(x) = (x/u − 1) log( (1/λ)(x/u − 1) ) − (x/u − 1) + λ,  (3.53)

for all x > u, and ∞ otherwise. Dividing this expression by x and letting x → ∞, we observe that the resulting ratio tends to ∞. As a consequence, Remark 3.5 now entails that Theorem 3.1 applies. It can also be argued that for any distribution U with tail behavior comparable to that of a Poisson distribution, Theorem 3.1 applies as well.
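A quick numerical check of this example; the unit u and the parameter λ are illustrative choices, and the numerically computed transform is compared against the closed form (3.53):

```python
import numpy as np
from scipy.optimize import minimize_scalar

u, lam = 1.0, 2.0   # illustrative unit size and Poisson parameter

def lambda_star_numeric(x):
    """Lambda*_U(x) with Lambda_U(theta) = theta*u + lam*(exp(theta*u) - 1)."""
    obj = lambda th: -(th * x - (th * u + lam * (np.exp(th * u) - 1.0)))
    return -minimize_scalar(obj, bounds=(-50.0, 50.0), method="bounded").fun

def lambda_star_closed(x):
    """The closed form (3.53), valid for x > u."""
    r = x / u - 1.0
    return r * np.log(r / lam) - r + lam

for x in [5.0, 50.0, 500.0]:
    print(x, lambda_star_numeric(x), lambda_star_closed(x),
          lambda_star_numeric(x) / x)   # the ratio grows, cf. (3.37)
```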

4. Exact Asymptotic Results

In the previous section we established a sample-path large deviation principle on a finite time grid; this LDP provides us with the logarithmic asymptotics of the probability that the sample path of L_n(·)/n is contained in a given set, say A. The results presented in this section are different in several ways. In the first place, we derive exact asymptotics rather than logarithmic asymptotics. In the second place, our time domain is not assumed to be finite; instead, we consider all integer numbers, ℕ. The price to be paid is that we restrict ourselves to special sets A, namely those corresponding to the loss process (or the increment of the loss process) exceeding a given function. We work under the setup that we introduced in Section 2.1.

4.1. Crossing a Barrier

In this section we consider the asymptotic behavior of the probability that the loss process at some point in time is above a time-dependent level ζ. More precisely, we consider the set

A := { f : ℕ → ℝ₊ | ∃t ∈ ℕ : f(t) ≥ ζ(t) },  (4.1)

for some function ζ(t) satisfying

ζ(t) > E[U Z(t)] = E[U] F_t  for all t ∈ ℕ,  (4.2)

with F_t as in (2.3). If we considered a function ζ that does not satisfy (4.2), we would not be in a large deviations setting, in the sense that the probability of the event {L_n(·)/n ∈ A} converges to 1 by the law of large numbers. In order to obtain a more interesting result, we thus limit ourselves to levels that satisfy (4.2). For such levels we state the first main result of this section.

Theorem 4.1. Assume that

there is a unique t* ∈ ℕ such that I_{UZ(t*)} = min_{t∈ℕ} I_{UZ(t)},  (4.3)

and that

lim inf_{t→∞} I_{UZ(t)} / log t > 0,  (4.4)

where I_{UZ(t)} := sup_θ { θζ(t) − Λ_{UZ(t)}(θ) } = Λ*_{UZ(t)}(ζ(t)). Then

P( (1/n) L_n(·) ∈ A ) = e^{−n I_{UZ(t*)}} (C/√n) ( 1 + O(1/n) ),  (4.5)

for A as in (4.1), and σ is such that Λ′_{UZ(t*)}(σ) = ζ(t*). The constant C follows from the Bahadur-Rao theorem (Theorem A.8), with C = C(UZ(t*), ζ(t*)).

Before proving the result, which relies on arguments similar to those in [18], we first discuss the meaning and implications of Theorem 4.1. In addition, we reflect on the role played by the assumptions. We do so through a sequence of remarks.

Remark 4.2. Comparing Theorem 4.1 to the Bahadur-Rao theorem (Theorem A.8), we observe that the probability of a sample mean exceeding a rare value has the same type of decay as the probability of our interest (i.e., the probability that the normalized loss process L_n(·)/n ever exceeds some function ζ). This decay looks like C e^{−nI}/√n for positive constants C and I. This similarity can be explained as follows.

First, note that the probability of our interest is actually the probability of a union of events. Evidently, this probability is larger than the probability of any of the events in this union, and hence also larger than the largest among these:

P( (1/n) L_n(·) ∈ A ) ≥ sup_{t∈ℕ} P( (1/n) L_n(t) ≥ ζ(t) ).  (4.6)

Theorem 4.1 indicates that the inequality in (4.6) is actually tight under the conditions stated. Informally, this means that the contribution of the maximizing t in the right-hand side of (4.6), say t*, dominates the contributions of the other time epochs as n grows large. This essentially says that, given that the rare event under consideration occurs, with overwhelming probability it happens at time t*.
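The dominant epoch t* and the associated decay rate are easy to locate numerically by minimizing I_{UZ(t)} = Λ*_{UZ(t)}(ζ(t)) over the grid. A sketch under illustrative assumptions (Exp(1) losses, geometric-type default times, and a linear barrier; none of these choices are from the paper):

```python
import numpy as np
from scipy.optimize import minimize_scalar

p_def = 0.1
F = lambda t: 1.0 - (1.0 - p_def) ** t      # P(tau <= t)
zeta = lambda t: 0.5 + 0.05 * t             # barrier with zeta(t) > E[U] F(t)

def I_UZ(t):
    """I_{UZ(t)} = Lambda*_{UZ(t)}(zeta(t)) for U ~ Exp(1), where
    Lambda_{UZ(t)}(theta) = log((1 - F(t)) + F(t) / (1 - theta)), theta < 1."""
    obj = lambda th: -(th * zeta(t)
                       - np.log((1.0 - F(t)) + F(t) / (1.0 - th)))
    return -minimize_scalar(obj, bounds=(0.0, 0.999), method="bounded").fun

ts = np.arange(1, 200)
rates = np.array([I_UZ(t) for t in ts])
t_star = ts[np.argmin(rates)]
print(t_star, rates.min())   # most likely crossing epoch t* and I_{UZ(t*)}
```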

As is clear from the statement of Theorem 4.1, two assumptions are needed to prove the claim; we now briefly comment on the role played by these.

Remark 4.3. Assumption (4.3) is needed to make sure that there is not a time epoch t, different from t*, having a contribution of the same order as t*. It can be verified from our proof that if the uniqueness assumption is not met, the probability under consideration remains asymptotically proportional to e^{−nI}/√n, but we lack a clean expression for the proportionality constant.


Assumption (4.4) has to be imposed to make sure that the contribution of the "upper tail", that is, of the time epochs t ∈ {t*+1, t*+2, . . .}, can be neglected; more formally, we should have

P( ∃t ∈ {t*+1, t*+2, . . .} : (1/n) L_n(t) ≥ ζ(t) ) = o( P( (1/n) L_n(·) ∈ A ) ).  (4.7)

In order to achieve this, the probability that the normalized loss process exceeds ζ for large t should be sufficiently small.

Remark 4.4. We now comment on what Assumption (4.4) means. Clearly,

Λ_{UZ(t)}(θ) = log( P(τ ≤ t) E[e^{θU}] + P(τ > t) ) ≤ log E[e^{θU}],  (4.8)

as θ ≥ 0; the limiting value as t grows is actually log E[e^{θU}] if P(τ < ∞) = 1. This entails that

I_{UZ(t)} = Λ*_{UZ(t)}(ζ(t)) ≥ Λ*_U(ζ(t)) = sup_θ { θζ(t) − log E[e^{θU}] }.  (4.9)

We observe that Assumption (4.4) is fulfilled if lim inf_{t→∞} Λ*_U(ζ(t))/log t > 0, which turns out to be valid under extremely mild conditions. Indeed, relying on Lemma A.4, we have that in great generality Λ*_U(x)/x → ∞ as x → ∞. Then clearly any ζ(t) for which lim inf_t ζ(t)/log t > 0 satisfies Assumption (4.4), since

lim inf_{t→∞} Λ*_U(ζ(t))/log t = lim inf_{t→∞} ( Λ*_U(ζ(t))/ζ(t) ) · ( ζ(t)/log t ).  (4.10)

Alternatively, if U is exponentially distributed with parameter λ (which does not satisfy the conditions of Lemma A.4), then Λ*_U(t) = λt − 1 − log(λt), so that for ζ(t) = log t we have

lim inf_{t→∞} Λ*_U(log t)/log t = λ > 0.  (4.11)

Barrier functions ζ that grow at a rate slower than log t, such as log log t, are in this setting clearly not allowed.
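A tiny numeric illustration of (4.11); λ is an arbitrary illustrative rate, and the convergence of the ratio to λ is logarithmically slow:

```python
import numpy as np

lam = 2.0                                                # illustrative rate
lam_star = lambda x: lam * x - 1.0 - np.log(lam * x)     # Lambda*_U(x), x > 0

for t in [1e2, 1e4, 1e8, 1e16]:
    x = np.log(t)
    print(f"t = {t:.0e}:  Lambda*_U(log t)/log t = {lam_star(x) / x:.3f}")
# the printed ratio increases towards lam = 2.0, in line with (4.11)
```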

Proof of Theorem 4.1. We start by rewriting the probability of interest as

P( (1/n) L_n(·) ∈ A ) = P( ∃t ∈ ℕ : L_n(t) ≥ nζ(t) ).  (4.12)

For an arbitrary instant k ∈ ℕ we have

P( ∃t ∈ ℕ : L_n(t) ≥ nζ(t) ) ≤ P( ∃t ≤ k : L_n(t) ≥ nζ(t) ) + P( ∃t > k : L_n(t) ≥ nζ(t) ).  (4.13)
