
Performance Evaluation
based on Stochastic Analysis

Hiroshi Toyoizumi

April 1, 2010


Contents

1 Basics in Probability Theory
1.1 Why Probability?
1.2 Probability Space
1.3 Conditional Probability and Independence
1.4 Random Variables
1.5 Expectation, Variance and Standard Deviation
1.6 How to Make a Random Variable
1.7 News-vendor Problem, "How many should you buy?"
1.8 Covariance and Correlation
1.9 Value at Risk
1.10 References
1.11 Exercises

2 Markov chain
2.1 Series of Random Variables
2.2 Discrete-time Markov chain
2.3 Time Evolution of Markov Chain
2.4 Stationary State
2.5 Matrix Representation
2.6 Stock Price Dynamics Evaluation Based on Markov Chain
2.7 Google's PageRank and Markov Chain

3 Birth and Death Process and Poisson Process
3.1 Definition of Birth and Death Process
3.2 Differential Equations of Birth and Death Process
3.3 Infinitesimal Generator
3.4 System Dynamics
3.5 Poisson Process
3.6 Why do we use the Poisson process?
3.7 Z-transform of Poisson process
3.8 Independent Increment
3.9 Interarrival Time
3.10 Memory-less property of Poisson process
3.11 PASTA: Poisson Arrivals See Time Averages
3.12 Exercise

4 Introduction to Queueing Systems
4.1 Foundation of Performance Evaluation
4.2 Starbucks
4.3 Specification of queueing systems
4.4 Little's formula
4.5 Lindley equation and Loynes variable
4.6 Exercise

5 PASTA
5.1 What will be seen by customers
5.2 Stochastic Intensity
5.3 Lack of Bias Assumption
5.4 Reversed Stochastic Intensity
5.5 Poisson arrivals see time averages
5.6 Exercise

6 M/M/1 Queue
6.1 M/M/1 queue as a birth and death process
6.2 Utilization
6.3 Waiting Time Estimation
  6.3.1 Waiting Time by Little's Formula
  6.3.2 Waiting Time Distribution of M/M/1 Queues
6.4 Example
6.5 Exercise

7 Reversibility
7.1 Output from Queue
7.2 Reversibility
  7.2.1 Definition of Reversibility
7.3 Output from M/M/1 queue
7.4 Exercise

8 Network of Queues
8.1 Open Network of queues
8.2 Global Balance of Network
8.3 Traffic Equation
8.4 Product Form Solution
8.5 Exercises

9 Examples of Queueing System Comparison
9.1 Single Server vs Tandem Servers
9.2 M/M/2 queue
9.3 M/M/1 vs M/M/2
9.4 Two M/M/1 vs One M/M/2
9.5 Exercises

10 Stochastic Integral
10.1 Diffusion Process
10.2 Information
10.3 Definition of Stochastic Integral
10.4 Martingale
10.5 Calculus of Stochastic Integral

11 Examples of Stochastic Integral
11.1 Evaluation of E[W(t)^4]
11.2 Evaluation of E[e^{αW(t)}]

12 Differential Equations
12.1 Ordinary Differential Equation
12.2 Geometric Brownian Motion
12.3 Stochastic Process and Partial Differential Equation

13 Portfolio Dynamics
13.1 Portfolio Model
13.2 Rate of Return of Stock and Risk-free Asset

14 Pricing via Arbitrage

Bibliography

T. Bjork. Arbitrage Theory in Continuous Time. Oxford Finance. Oxford University Press, 2nd edition, 2004.

R. Durrett. Probability: Theory and Examples. Thomson Learning, 1991.

M. El-Taha and S. Stidham, Jr. Sample-Path Analysis of Queueing Systems. Kluwer Academic Publishers, 1999.

P. Glynn, B. Melamed, and W. Whitt. Estimating customer and time averages, 1993. URL citeseer.nj.nec.com/glynn93estimating.html.

A. Langville and C. Meyer. Google's PageRank and Beyond: The Science of Search Engine Rankings. Princeton University Press, 2006.

B. Melamed and W. Whitt. On arrivals that see time averages: a martingale approach. J. of Applied Probability, 27:376-384, 1990.

B. Melamed and D. Yao. The ASTA property, 1995. URL citeseer.nj.nec.com/melamed95asta.html.

B. Oksendal. Stochastic Differential Equations: An Introduction with Applications. Springer-Verlag, 2003.

P. Bremaud, R. Kannurpatti, and R. Mazumdar. Event and time averages: a review. Adv. Appl. Prob., 24:377-411, 1992.

S. M. Ross. Applied Probability Models with Optimization Applications. Dover Publications, 1992.

N. N. Taleb. Fooled by Randomness: The Hidden Role of Chance in the Markets and in Life. Random House Trade Paperbacks, 2005.

H. Toyoizumi. Applied probability and mathematical finance theory, 2008.


Chapter 1

Basics in Probability Theory

1.1 Why Probability?

Example 1.1. Here are some examples where we use probability:

• Lottery.
• Weather forecast.
• Gambling.
• Baseball.
• Life insurance.
• Finance.

Problem 1.1. Name a couple of other examples where you could use probability theory.

Since our intuition sometimes misleads us about random phenomena, we need to handle them with extreme care, within a rigorous mathematical framework called probability theory (see Exercise 1.1).

1.2 Probability Space

Be patient in learning the basic terminology of probability theory. To determine the probabilistic structure, we need a probability space, which consists of a sample space, a probability measure, and a family of (well-behaved) sets of events.


Definition 1.1 (Sample space). The set of all events is called the sample space, and we write it as Ω. Each element ω ∈ Ω is called an event.

Example 1.2 (Lottery). Here's an example of a lottery.

• The sample space Ω is {first prize, second prize, ..., lose}.
• An event ω can be first prize, second prize, ..., lose, and so on.

Sometimes it is easier to use sets of events in the sample space Ω.

Example 1.3 (Sets in a lottery). The following is an example in the Ω of Example 1.2:

W = {win} = {first prize, second prize, ..., sixth prize}, (1.1)
L = {lose}. (1.2)

Thus, we can ask "what is the probability of a win?" instead of "what is the probability that we get either first prize, second prize, ..., or sixth prize?".

Example 1.4 (Coin tosses). Let us consider tossing a coin 10 times. Then, by writing "up = 1" and "down = 0", the corresponding sample space Ω is

Ω = {ω = (x_1, x_2, ..., x_10) : x_i = 0 or 1}. (1.3)

Problem 1.2. Find the set S = {the number of ups is 5} in Example 1.4.

Definition 1.1 (Probability measure). The probability of A, P(A), is defined for each subset A of the sample space Ω, if the following are satisfied:

1. 0 ≤ P(A) ≤ 1 for all A ⊂ Ω.
2. P(Ω) = 1.
3. For any sequence of mutually exclusive events A_1, A_2, ...,

P(⋃_{i=1}^∞ A_i) = ∑_{i=1}^∞ P(A_i). (1.4)


Mathematically, any function P which satisfies Definition 1.1 can be regarded as a probability. In other words, we need to be careful to select the function that is suitable as the probability of the phenomenon at hand.

Example 1.5 (Probability measures in lottery). Suppose we have a lottery with 10 first prizes, 20 second prizes, ..., 60 sixth prizes out of 1000 tickets in total. Then we have a probability measure P defined by

P(n) = P(win the n-th prize) = n/100, (1.5)
P(0) = P(lose) = 79/100. (1.6)

It is easy to see that P satisfies Definition 1.1. According to the definition of P, we can calculate the probability of a set of events:

P(W) = the probability of a win
     = P(1) + P(2) + ··· + P(6)
     = 21/100.

Of course, you can cheat your customers by saying you have 100 first prizes instead of 10. Then your customers might have a different P satisfying Definition 1.1. Thus, it is quite important to select an appropriate probability measure. Selecting the probability measure is a bridge between the physical world and the mathematical world. Don't use the wrong bridge!
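To make this bridge concrete, here is a minimal Python sketch that encodes the measure of Example 1.5, checks the axioms of Definition 1.1 numerically, and recomputes P(W):

    # Lottery of Example 1.5: prize n (n = 1..6) has 10n tickets out of 1000.
    counts = {n: 10 * n for n in range(1, 7)}
    counts[0] = 1000 - sum(counts.values())        # 790 losing tickets

    P = {n: c / 1000 for n, c in counts.items()}   # probability of each outcome

    # Axioms of Definition 1.1: 0 <= P(A) <= 1 and P(Omega) = 1.
    assert all(0 <= p <= 1 for p in P.values())
    assert abs(sum(P.values()) - 1) < 1e-12

    # Additivity over the mutually exclusive prizes gives P(W).
    print(sum(P[n] for n in range(1, 7)))          # 0.21 = 21/100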

Problem 1.3. In Example 1.4, find the appropriate probability measure P on Ω. For example, calculate

P(S), (1.7)

where S = {the number of ups is 5}.


1.3 Conditional Probability and Independence

Now we introduce one of the most useful and probably most difficult concepts of probability theory.

Definition 1.2 (Conditional probability). Define the probability of B given A by

P(B|A) = P(B & A)/P(A) = P(B ∩ A)/P(A). (1.8)

We can use the conditional probability to calculate complex probabilities. It is actually the only tool we can rely on. Be sure to note that the conditional probability P(B|A) is different from the ordinary probability P(B).

Example 1.6 (Lottery). Let W = {win} and F = {first prize} in Example 1.5. Then we have the conditional probability

P(F|W) = the probability of winning the 1st prize given you win the lottery
       = P(F ∩ W)/P(W) = P(F)/P(W)
       = (10/1000)/(210/1000) = 1/21
       ≠ 10/1000 = P(F).

Remark 1.2. Sometimes Definition 1.2 is regarded as a theorem and called Bayes' rule, but here we use it as the definition of conditional probability.
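As a numerical sanity check of Example 1.6, the following Monte Carlo sketch (with the ticket counts of Example 1.5) estimates P(F|W) by counting first prizes among winning draws; the estimate should approach 1/21 ≈ 0.048, not P(F) = 0.01:

    import random

    random.seed(1)
    # One label per ticket: prize n has 10n tickets, 0 means "lose".
    tickets = [n for n in range(1, 7) for _ in range(10 * n)] + [0] * 790

    wins = first = 0
    for _ in range(10**6):
        omega = random.choice(tickets)   # draw one ticket uniformly
        if omega != 0:                   # event W = {win}
            wins += 1
            if omega == 1:               # event F = {first prize}
                first += 1

    print(first / wins)                  # ~ 1/21 = P(F)/P(W)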

Problem 1.4 (False positives). Answer the following:

1. Suppose there are illegal acts in one in 10000 companies on average. You, as an accountant, audit companies. The audit contains some uncertainty: there is a 1% chance that a normal company is declared to have some problem. Find the probability that a company declared to have a problem is actually illegal.

2. Suppose you are tested for a disease that strikes 1 in 1000 of the population. The test has 5% false positives, meaning that even if you do not have the disease, there is a 5% chance you are diagnosed with it. A medical operation will cure the disease, but of course operations can go wrong. Given that your test result is positive, what can you say about your situation?


Problem 1.5. In Example 1.4, find the conditional probability

P(A|S), (1.9)

where A = {the first 4 tosses are all up} and S = {the number of ups is 5}.

Definition 1.3 (Independence). Two sets of events A and B are said to be independent if

P(A & B) = P(A ∩ B) = P(A)P(B). (1.10)

Theorem 1.1 (Conditional probability of independent events). Suppose A and B are independent. Then the conditional probability of B given A is equal to the probability of B.

Proof. By Definition 1.2, we have

P(B|A) = P(B ∩ A)/P(A) = P(B)P(A)/P(A) = P(B),

where we used the independence of A and B.

Example 1.7 (Two independent dice). Of course, two dice are independent. So

P(the number on the first die is even and the one on the second is odd)
  = P(the number on the first die is even) P(the number on the second die is odd)
  = (1/2) · (1/2).

Example 1.8 (More on two dice). Even though the two dice are independent, you can find dependent events. For example,

P(the first die is bigger than the second, while the second one is even) = ?

How about the following?

P(the sum of the two dice is even, while the second one is odd) = ?


1.4 Random Variables

The name "random variable" has a strange and stochastic history. Despite this fragile history, the invention of the random variable has certainly contributed a great deal to probability theory.

Definition 1.4 (Random variable). A random variable X = X(ω) is a real-valued function on Ω, whose value is assigned to each outcome of the experiment (event).

Remark 1.3. Note that a probability and a random variable are NOT the same! Random variables are functions of events, while a probability is a number. To avoid confusion, we usually use capital letters for random variables.

Example 1.9 (Lottery). A random variable X can be designed to formulate a lottery:

• X = 1 when we get the first prize,
• X = 2 when we get the second prize, and so on.

Example 1.10 (Bernoulli random variable). Let X be a random variable with

X = { 1 with probability p,
      0 with probability 1 − p, (1.11)

for some p ∈ [0, 1]. The random variable X is said to be a Bernoulli random variable. A coin toss is a typical example of a Bernoulli random variable with p = 1/2.

Sometimes we use random variables to indicate sets of events. For example, instead of describing the set of events in which we win the first prize, we write {ω ∈ Ω : X(ω) = 1}, or simply {X = 1}.

Definition 1.5 (Probability distribution). The probability distribution function F(x) is defined by

F(x) = P{X ≤ x}. (1.12)


The probability distribution function fully determines the probability structure of a random variable X. Sometimes it is convenient to consider the probability density function instead of the probability distribution.

Definition 1.6 (Probability density function). The probability density function f(x) is defined by

f(x) = dF(x)/dx = dP{X ≤ x}/dx. (1.13)

Sometimes we use dF(x) = dP{X ≤ x} = P(X ∈ (x, x + dx]) even when F(x) has no derivative.

Lemma 1.1. For a (well-behaved) set A,

P{X ∈ A} = ∫_A dP{X ≤ x} = ∫_A f(x) dx. (1.14)

Problem 1.6. Let X be a uniformly distributed random variable on [100, 200]. Then the distribution function is

F(x) = P{X ≤ x} = (x − 100)/100, (1.15)

for x ∈ [100, 200].

• Draw the graph of F(x).
• Find the probability density function f(x).

1.5 Expectation, Variance and Standard Deviation

Let X be a random variable. Then we have some basic tools to evaluate the random variable X. First we have the most important measure: the expectation, or mean, of X.

Definition 1.7 (Expectation).

E[X] = ∫_{−∞}^{∞} x dP{X ≤ x} = ∫_{−∞}^{∞} x f(x) dx. (1.16)

Remark 1.4. For a discrete random variable, we can rewrite (1.16) as

E[X] = ∑_n x_n P[X = x_n]. (1.17)

Lemma 1.2. Let (X_n)_{n=1,...,N} be a sequence of possibly correlated random variables. Then we can change the order of summation and expectation:

E[X_1 + ··· + X_N] = E[X_1] + ··· + E[X_N]. (1.18)

Proof. See Exercise 1.6.

E[X] gives you the expected value of X, but X fluctuates around E[X]. So we need to measure the strength of this stochastic fluctuation. The natural choice may be X − E[X]. Unfortunately, the expectation of X − E[X] is always equal to zero (why?). Thus, we need the variance of X, which is indeed the second moment around E[X].

Definition 1.8 (Variance).

Var[X] = E[(X − E[X])^2]. (1.19)

Lemma 1.3. We have an alternative way to calculate Var[X]:

Var[X] = E[X^2] − E[X]^2. (1.20)

Proof. See Exercise 1.6.

Unfortunately, the variance Var[X] has the dimension of X^2. So, in some cases, it is inappropriate to use the variance. Thus, we need the standard deviation σ[X], which has the same order as X.

Definition 1.9 (Standard deviation).

σ[X] = (Var[X])^{1/2}. (1.21)

Example 1.11 (Bernoulli random variable). Let X be a Bernoulli random variable with P[X = 1] = p and P[X = 0] = 1 − p. Then we have

E[X] = 1·p + 0·(1 − p) = p, (1.22)
Var[X] = E[X^2] − E[X]^2 = E[X] − E[X]^2 = p(1 − p). (1.23)


In many cases, we need to deal with two or more random variables. When these random variables are independent, we are very lucky, and we can get many useful results. Otherwise...

Definition 1.2. We say that two random variables X and Y are independent when the sets {X ≤ x} and {Y ≤ y} are independent for all x and y. In other words, when X and Y are independent,

P(X ≤ x, Y ≤ y) = P(X ≤ x)P(Y ≤ y). (1.24)

Lemma 1.4. For any pair of independent random variables X and Y, we have

E[XY] = E[X]E[Y],
Var[X + Y] = Var[X] + Var[Y].

Proof. Extending the definition of the expectation, we have a double integral:

E[XY] = ∫∫ xy dP(X ≤ x, Y ≤ y).

Since X and Y are independent, we have P(X ≤ x, Y ≤ y) = P(X ≤ x)P(Y ≤ y). Thus,

E[XY] = ∫∫ xy dP(X ≤ x) dP(Y ≤ y)
      = ∫ x dP(X ≤ x) ∫ y dP(Y ≤ y)
      = E[X]E[Y].

Using the first part, it is easy to check the second part (see Exercise 1.9).

Example 1.12 (Binomial random variable). Let X be a random variable with

X = ∑_{i=1}^n X_i, (1.25)

where the X_i are independent Bernoulli random variables with mean p. The random variable X is said to be a binomial random variable. The mean and variance of X can be obtained easily by using Lemma 1.4:

E[X] = np, (1.26)
Var[X] = np(1 − p). (1.27)

Problem 1.7. Let X be the number of ups in 10 coin tosses.

1. Find E[X] and Var[X] using Example 1.12.


1.6 How to Make a Random Variable

Suppose we would like to simulate a random variable X which has a distribution F(x). The following theorem will help us.

Theorem 1.2. Let U be a random variable which has a uniform distribution on [0, 1], i.e.,

P[U ≤ u] = u. (1.28)

Then, the random variable X = F^{−1}(U) has the distribution F(x).

Proof.

P[X ≤ x] = P[F^{−1}(U) ≤ x] = P[U ≤ F(x)] = F(x). (1.29)
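Theorem 1.2 is exactly how random variates are generated in practice (inverse transform sampling). Below is a minimal Python sketch, taking as target the uniform distribution on [100, 200] of Problem 1.6, for which F^{−1}(u) = 100 + 100u:

    import random

    def sample_X():
        # Theorem 1.2: X = F^{-1}(U) with U uniform on [0, 1].
        # Here F(x) = (x - 100)/100 on [100, 200], so F^{-1}(u) = 100 + 100u.
        u = random.random()
        return 100 + 100 * u

    samples = [sample_X() for _ in range(100000)]
    # Check one point of the distribution: P{X <= 150} should be ~0.5.
    print(sum(x <= 150 for x in samples) / len(samples))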

1.7 News-vendor Problem, "How many should you buy?"

Suppose you are assigned to sell newspapers. Every morning you buy in x newspapers at the price a. You can sell the newspapers at the price a + b to your customers. You should decide the number x of newspapers to buy in. If the number of those who buy a newspaper is less than x, you will be left with piles of unsold newspapers. When there are more buyers than x, you lose the opportunity of selling more newspapers. Thus, there seems to be an optimal x that maximizes your profit.

Let X be the demand for newspapers, which is not known when you buy them in. Suppose you buy x newspapers, and check whether it is profitable to buy an additional ∆x newspapers. If the demand X is larger than x + ∆x, the additional newspapers will pay off and you gain b∆x, but if X is smaller than x + ∆x, you will lose a∆x. Thus, the expected additional profit is

E[profit from additional ∆x newspapers] = b∆x P{X ≥ x + ∆x} − a∆x P{X < x + ∆x}
                                        = b∆x − (a + b)∆x P{X < x + ∆x}.

Whenever this is positive, you should increase the stock. Thus the optimal stock x should satisfy the equilibrium equation

P{X < x + ∆x} = b/(a + b), (1.30)

for all ∆x > 0. Letting ∆x → 0, we have

P{X ≤ x} = b/(a + b). (1.31)

Using the distribution function F(x) = P{X ≤ x} and its inverse F^{−1}, we have

x = F^{−1}(b/(a + b)). (1.32)

Using this x, we can maximize the profit of news-vendors.
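Here is a small Python sketch of (1.32). The numbers are purely illustrative assumptions: buy price a = 60, margin b = 40, and a demand uniform on [100, 200] (so F^{−1}(q) = 100 + 100q, as in Problem 1.6):

    def optimal_stock(a, b, F_inv):
        # News-vendor critical fractile (1.32): x = F^{-1}(b / (a + b)).
        return F_inv(b / (a + b))

    # Assumed demand: uniform on [100, 200].
    F_inv = lambda q: 100 + 100 * q

    # Hypothetical prices: buy at a = 60, sell at a + b = 100.
    print(optimal_stock(60, 40, F_inv))   # 140.0 newspapers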

Problem 1.8. Suppose you are a newspaper vendor. You buy a newspaper at the price of 70 and sell it at 100. The demand X has the following uniform distribution:

P{X ≤ x} = (x − 100)/100, (1.33)

for x ∈ [100, 200]. Find the optimal stock for you.

1.8 Covariance and Correlation

When we have two or more random variables, it is natural to consider the relations between these random variables. But how? The answer is the following.

Definition 1.10 (Covariance). Let X and Y be two (possibly not independent) random variables. Define the covariance of X and Y by

Cov[X, Y] = E[(X − E[X])(Y − E[Y])]. (1.34)

Thus, the covariance measures the product of the fluctuations around the means. If the fluctuations tend to be in the same direction, we have a larger covariance.

Example 1.13 (The covariance of a pair of independent random variables). Let X_1 and X_2 be independent random variables. The covariance of X_1 and X_2 is

Cov[X_1, X_2] = E[X_1 X_2] − E[X_1]E[X_2] = 0,

since X_1 and X_2 are independent. Thus, more generally, if two random variables are independent, their covariance is zero. (The converse is not always true. Give some example!)

Now, let Y = X_1 + X_2. How about the covariance of X_1 and Y?

Cov[X_1, Y] = E[X_1 Y] − E[X_1]E[Y]
            = E[X_1(X_1 + X_2)] − E[X_1]E[X_1 + X_2]
            = E[X_1^2] − E[X_1]^2
            = Var[X_1] > 0.

Thus, the covariance of X_1 and Y is positive, as can be expected.

It is easy to see that we have

Cov[X, Y] = E[XY] − E[X]E[Y], (1.35)

which is sometimes useful for calculations. Unfortunately, the covariance has the order of XY, which is not convenient when we compare the strength of correlation among different pairs of random variables. Don't worry: we have the correlation function, which is normalized by the standard deviations.

Definition 1.11 (Correlation). Let X and Y be two (possibly not independent) random variables. Define the correlation of X and Y by

ρ[X, Y] = Cov[X, Y] / (σ[X]σ[Y]). (1.36)

Lemma 1.5. For any pair of random variables, we have

−1 ≤ ρ[X, Y] ≤ 1. (1.37)

Proof. See Exercise 1.11.
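The following numpy sketch reproduces Example 1.13 numerically (with assumed standard normal X_1, X_2): their sample covariance is near zero, Cov[X_1, X_1 + X_2] is near Var[X_1], and the correlation (1.36) stays inside [−1, 1]:

    import numpy as np

    rng = np.random.default_rng(0)
    x1 = rng.normal(size=100000)       # independent random variables
    x2 = rng.normal(size=100000)
    y = x1 + x2

    def cov(a, b):
        # Sample version of Cov[X,Y] = E[XY] - E[X]E[Y], eq. (1.35).
        return np.mean(a * b) - np.mean(a) * np.mean(b)

    print(cov(x1, x2))                              # ~0 (independence)
    print(cov(x1, y), np.var(x1))                   # both ~Var[X1] = 1
    print(cov(x1, y) / (np.std(x1) * np.std(y)))    # rho ~ 1/sqrt(2)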

1.9 Value at Risk

Suppose we have one stock with its current value x_0. The value of the stock fluctuates. Let X_1 be the value of this stock tomorrow. The rate of return R can be defined by

R = (X_1 − x_0) / x_0. (1.38)


Problem 1.9. Why do we assume the rate of return R to be a normal random variable, instead of the stock price X_1 itself?

We need to evaluate the uncertain risk of this future stock.

Definition 1.12 (Value at Risk). The future risk of a property can be evaluated by the Value at Risk (VaR) z_α: the decrease of the value of the property in the worst case, which has probability α, i.e.,

P{X_1 − x_0 ≥ −z_α} = α, (1.39)

or

P{z_α ≥ x_0 − X_1} = α. (1.40)

In short, our damage is limited to z_α with probability α.

Figure 1.1: VaR. Adapted from http://www.nomura.co.jp/terms/english/v/var.html

By the definition of the rate of return (1.38), we have

P{R ≥ −z_α/x_0} = α, (1.41)

or

P{R ≤ −z_α/x_0} = 1 − α. (1.42)

Since R is assumed to be a normal random variable, using the fact that

Z = (R − µ)/σ (1.43)

is a standard normal random variable, where µ = E[R] and σ = √Var[R], we have

1 − α = P{R ≤ −z_α/x_0} = P{Z ≤ (−z_α/x_0 − µ)/σ}. (1.44)

Since the distribution function of the standard normal random variable is symmetric, we have

α = P{Z ≤ (z_α/x_0 + µ)/σ}. (1.45)

Set x_α as

α = P{Z ≤ x_α}, (1.46)

or

x_α = F^{−1}(α), (1.47)

which can be found in any standard statistics textbook. From (1.45) we have

(z_α/x_0 + µ)/σ = x_α, (1.48)

or

z_α = x_0(F^{−1}(α)σ − µ). (1.49)
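A minimal sketch of (1.49) in Python, with illustrative numbers that are assumptions of ours; scipy's norm.ppf plays the role of F^{−1}:

    from scipy.stats import norm

    def value_at_risk(x0, mu, sigma, alpha):
        # Eq. (1.49): z_alpha = x0 (F^{-1}(alpha) sigma - mu).
        return x0 * (norm.ppf(alpha) * sigma - mu)

    # Hypothetical stock: price 1000, daily mean return 0.1%, volatility 2%.
    # With probability alpha = 0.95, the one-day loss is limited to z_alpha.
    print(value_at_risk(1000, 0.001, 0.02, 0.95))   # about 31.9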

Now consider the case when we have n stocks in our portfolio. Each stock has its rate of return over one day,

(R_1, R_2, ..., R_n). (1.50)

Thus, the rate of return of our portfolio R is estimated by

R = c_1 R_1 + c_2 R_2 + ··· + c_n R_n, (1.51)

where the c_i are the weights of the stocks in the portfolio.

Let q_0 be the value of the portfolio today, and Q_1 the one for tomorrow. The value at risk (VaR) z_α of our portfolio is given by

P{Q_1 − q_0 ≥ −z_α} = α. (1.52)

We need to evaluate E[R] and Var[R]. It is tempting to assume that R is a normal random variable with

µ = E[R] = ∑_{i=1}^n E[R_i], (1.53)
σ^2 = Var[R] = ∑_{i=1}^n Var[R_i]. (1.54)

This is true if R_1, ..., R_n are independent. Generally, however, there may be some correlation among the stocks in the portfolio. If we neglect it, we will underestimate the risk.

We assume the vector

(R_1, R_2, ..., R_n) (1.55)

is a multivariate normal random variable; then the estimated rate of return of our portfolio R turns out to be a normal random variable again [Toyoizumi, 2008, p.7].

Problem 1.10. Estimate Var[R] when we have only two different stocks, i.e., R = R_1 + R_2, using ρ[R_1, R_2] defined in (1.36).

Using the µ and σ of the overall rate of return R, we can evaluate the VaR z_α just as in (1.49).

1.10 References


1.11 Exercises

Exercise 1.1. Find an example where our intuition leads to a mistake in random phenomena.

Exercise 1.2. Define a probability space according to the following steps:

1. Take one random phenomenon, and describe its sample space, events, and probability measure.
2. Define a random variable of the above phenomenon.
3. Derive the probability distribution function and the probability density.
4. Give a couple of examples of sets of events.

Exercise 1.3. Explain the meaning of (1.4) using Example 1.2.

Exercise 1.4. Check that P defined in Example 1.5 satisfies Definition 1.1.

Exercise 1.5. Calculate both sides in Example 1.8. Check that these events are dependent, and explain why.

Exercise 1.6. Prove Lemmas 1.2 and 1.3 using Definition 1.7.

Exercise 1.7. Prove Lemma 1.4.

Exercise 1.8. Let X be a Bernoulli random variable with parameter p. Draw the graphs of E[X], Var[X], and σ[X] against p. How can you evaluate X?

Exercise 1.9. Prove Var[X + Y] = Var[X] + Var[Y] for any pair of independent random variables X and Y.

Exercise 1.10 (Binomial random variable). Let X be a random variable with

X = ∑_{i=1}^n X_i, (1.56)

where the X_i are independent Bernoulli random variables with mean p. The random variable X is said to be a binomial random variable. Find the mean and variance of X.

Exercise 1.11. Prove that for any pair of random variables we have −1 ≤ ρ[X, Y] ≤ 1.

Chapter 2

Markov chain

As we saw in Chapter 1, concepts like probability spaces and random variables are quite useful. However, they are not enough. If you need to model physical phenomena, you soon realize that you need the concept of time in your model, but a single random variable is not good for modeling the time evolution of stochastic systems.

A Markov chain is a good answer to this demand!

2.1 Series of Random Variables

Wait a minute: you might use a simple series of random variables instead of a Markov chain...

Let X_1, X_2, X_3, ... be a sequence of independent and identically distributed random variables. This is the simplest model introducing "time" into stochastic systems.

Example 2.1. Let X_i be the result of the i-th coin toss. Then the resulting sequence X_1, X_2, X_3, ... represents the coin-toss evolution.

Example 2.2. Consider the ups and downs of a stock price. Let X_i be 1 if the stock price went up, and 0 if it went down. Then the resulting sequence X_1, X_2, X_3, ... represents the stock price dynamics.

2.2 Discrete-time Markov chain

A Markov chain is one of the most basic tools for investigating the dynamical features of stochastic phenomena. Roughly speaking, a Markov chain is used as a first-order approximation of any stochastic process.

Definition 2.1 (Rough definition of a Markov process). A stochastic process X(t) is said to be a Markov chain when its future dynamics depends probabilistically only on the current state (or position), not on the past trajectory.

Example 2.3. The following are examples of Markov chains:

• stock prices,
• Brownian motion,
• the queue length at an ATM,
• the stock quantity in storage,
• genes in a genome,
• population,
• traffic on the internet.

Definition 2.1 (Discrete-time Markov chain). (X_n) is said to be a discrete-time Markov chain if

• the state space is at most countable,
• state transitions occur only at discrete instants,

and the dynamics depends probabilistically only on the current position, i.e.,

P[X_{n+1} = x_{n+1} | X_n = x_n, ..., X_1 = x_1] = P[X_{n+1} = x_{n+1} | X_n = x_n]. (2.1)

Definition 2.2 ((Time-homogeneous) 1-step transition probability).

p_ij ≡ P[X_{n+1} = j | X_n = i]. (2.2)

This is the probability that the next state is j, given that the current state is i. Similarly, we can define the m-step transition probability:

p^m_ij ≡ P[X_{n+m} = j | X_n = i]. (2.3)

We can always calculate the m-step transition probability by

p^m_ij = ∑_k p^{m−1}_ik p_kj. (2.4)

Problem 2.1. Consider coin tosses, and let X_n be the number of ups up to the n-th trial.


2.3 Time Evolution of Markov Chain

Let π_i(n) be the probability that the state at time n is i, i.e.,

π_i(n) = P{X_n = i}. (2.5)

Theorem 2.1 (Time evolution of a Markov chain). The time evolution of a Markov chain can be described by using its transition probabilities:

π_j(n) = ∑_i π_i(n−1) p_ij. (2.6)

Problem 2.2. Use the definition of conditional probability to show Theorem 2.1.

2.4 Stationary State

The initial distribution is the probability distribution of the initial state. The initial state may be decided by the roll of a die, for example...

Definition 2.3 (Stationary distribution). The probability distribution π_j is said to be a stationary distribution when, if the initial distribution is π_j, the future state probability distribution is also π_j:

P{X_0 = j} = π_j ⟹ P{X_n = j} = π_j. (2.7)

In Markov chain analysis, finding the stationary distribution is quite important. Once we find the stationary distribution, the analysis is almost finished.


2.5 Matrix Representation

Definition 2.4 (Transition (probability) matrix).

P ≡ ( p_11 p_12 ... p_1m
      p_21 p_22 ... p_2m
      ...  ...  ... ...
      p_m1 p_m2 ... p_mm )  (2.8)

This is the matrix of the probabilities of moving from a state i to another state j.

Definition 2.5 (State probability vector at time n).

π(n) ≡ (π_1(n), π_2(n), ...), (2.9)

where π_i(n) = P[X_n = i] is the probability that the state is i at time n.

Theorem 2.2 (Time evolution of a Markov chain).

π(n) = π(0)P^n. (2.10)

Given the initial distribution, we can always find the probability distribution at any time in the future.
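A minimal numpy sketch of Theorem 2.2; the 2-state transition matrix here is a hypothetical one, chosen only for illustration:

    import numpy as np

    # Hypothetical transition matrix; each row sums to one.
    P = np.array([[0.9, 0.1],
                  [0.5, 0.5]])
    pi0 = np.array([1.0, 0.0])    # initially in state 1

    # pi(n) = pi(0) P^n (Theorem 2.2)
    print(pi0 @ np.linalg.matrix_power(P, 5))   # distribution after 5 steps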

Problem 2.3. Suppose we have a transition matrix P such that

P = ( 1/3 2/3
      3/4 1/4 ). (2.11)

Given the initial state X_0 = 1, find the probability distribution of X_1.

Theorem 2.3 (Stationary distribution of a Markov chain). When a Markov chain has the stationary distribution π, then

π = πP. (2.12)


2.6 Stock Price Dynamics Evaluation Based on Markov Chain

Suppose the up and down movements of a stock price can be modeled by a Markov chain. There are three possibilities: (1) up, (2) down and (3) hold. The price fluctuation tomorrow depends on today's movement. Assume the following transition probabilities:

P = ( 0   3/4 1/4
      1/4 0   3/4
      1/4 1/4 1/2 )  (2.13)

For example, if today's movement is "up", then the probability of "down" tomorrow is 3/4.

Problem 2.5.

1. Find the steady-state distribution.
2. Is it a good idea to hold this stock in the long run?

Solution:

1. Solve

π = πP, (2.14)

i.e.,

(π_1, π_2, π_3) = (π_1, π_2, π_3) ( 0   3/4 1/4
                                    1/4 0   3/4
                                    1/4 1/4 1/2 ). (2.15)

Using the fact that probabilities sum to one (π_1 + π_2 + π_3 = 1), (2.15) can be solved, and

(π_1, π_2, π_3) = (1/5, 7/25, 13/25). (2.16)
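The stationary equation (2.14) can also be solved numerically. This sketch replaces one balance equation by the normalization condition and checks (2.16):

    import numpy as np

    P = np.array([[0,   3/4, 1/4],
                  [1/4, 0,   3/4],
                  [1/4, 1/4, 1/2]])

    # pi = pi P  <=>  (P^T - I) pi^T = 0, plus normalization sum(pi) = 1.
    A = P.T - np.eye(3)
    A[-1, :] = 1.0                      # replace last equation by sum = 1
    pi = np.linalg.solve(A, [0.0, 0.0, 1.0])
    print(pi)                           # [0.2, 0.28, 0.52] = (1/5, 7/25, 13/25)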


2.7 Google's PageRank and Markov Chain

Google uses an innovative concept called PageRank to quantify the importance of web pages. (PageRank is actually Page's rank, not the rank of pages, as written in http://www.google.co.jp/intl/ja/press/funfacts.html: "The basis of Google's search technology is called PageRank, and assigns an 'importance' value to each page on the web and gives it a rank to determine how useful it is. However, that's not why it's called PageRank. It's actually named after Google co-founder Larry Page.") PageRank can be understood via Markov chains. Let us take a look at a simple example based on [Langville and Meyer, 2006, Chapter 4].

Suppose we have 6 web pages on the internet. Each web page has some links to other pages, as shown in Figure 2.1. For example, the web page indexed by 1 refers to 2 and 3, and is referred back by 3.

Figure 2.1: Web pages and their links. Adapted from [Langville and Meyer, 2006, p.32]

Using this link information, Google decides the importance of web pages. Here's how.

Assume you are reading web page 3. This page contains 3 links to other web pages, and you jump to one of the linked pages by pure chance. That means your next page may be 2 with probability 1/3. You hop around the web pages according to this rule, or transition probability. Since your hops are governed by a Markov chain, you can write the state transition probability P as

P = (p_ij) =
( 0   1/2 1/2 0   0   0
  0   0   0   0   0   0
  1/3 1/3 0   0   1/3 0
  0   0   0   0   1/2 1/2
  0   0   0   1/2 0   1/2
  0   0   0   1   0   0 ),  (2.17)

where

p_ij = P{next click is web page j | reading page i} (2.18)
     = { 1/(the number of links outgoing from page i)   if i has a link to j,
         0                                               otherwise. (2.19)

Starting from web page 3, you hop around our web universe, and eventually you may reach a steady state. The PageRank of web page i is nothing but the steady-state probability that you are reading page i. The web page where you are most likely to stay gets the highest PageRank.

Problem 2.6. Are there any flaws in this scheme? What will happen in the long run?

When you happen to visit a web page with no outgoing links, you may jump to a web page completely unrelated to the current one. In this case, your next page is decided purely at random. For example, web page 2 has no outgoing links; if you are at 2, the next stop is selected at random.

Further, even when the current web page has some outgoing links, you may go to a completely unrelated web page. We should take such behavior into account: with probability 1/10, you jump to a random page, regardless of the page links.

Thus the transition probability is modified. When i has at least one outgoing link,

p_ij = (9/10) · (1 / the number of links outgoing from page i) + (1/10) · (1/6)   if i has a link to j,

and p_ij = (1/10) · (1/6) otherwise. (2.20)

On the other hand, when i has no outgoing link, we have

p_ij = 1/6. (2.21)

The new transition probability matrix P is

P = (1/60) ( 1  28 28  1  1  1
             10 10 10 10 10 10
             19 19  1  1 19  1
             1   1  1  1 28 28
             1   1  1 28  1 28
             1   1  1 55  1  1 ). (2.22)

Problem 2.7. Answer the following:

1. Verify (2.22).
2. Compute π(1) using

π(1) = π(0)P, (2.23)

given that initially you are at web page 3.

By Theorem 2.3, we can find the stationary probability π satisfying

π = πP. (2.24)

It turns out to be

π = (0.0372, 0.0539, 0.0415, 0.375, 0.205, 0.286), (2.25)

so web page 4 gets the highest PageRank.
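The stationary distribution (2.25) can be reproduced by power iteration, i.e., by applying (2.23) repeatedly; a short numpy sketch:

    import numpy as np

    # The modified transition matrix (2.22), times 60.
    P = np.array([[ 1, 28, 28,  1,  1,  1],
                  [10, 10, 10, 10, 10, 10],
                  [19, 19,  1,  1, 19,  1],
                  [ 1,  1,  1,  1, 28, 28],
                  [ 1,  1,  1, 28,  1, 28],
                  [ 1,  1,  1, 55,  1,  1]]) / 60

    pi = np.array([0, 0, 1, 0, 0, 0], dtype=float)  # start at web page 3
    for _ in range(1000):
        pi = pi @ P                                  # pi(n+1) = pi(n) P
    print(pi.round(4))   # ~ (0.0372, 0.0539, 0.0415, 0.375, 0.205, 0.286)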

Chapter 3

Birth and Death Process and Poisson Process

As we saw in Chapter 2, we can analyze complicated systems using Markov chains. Essentially, Markov chains can be analyzed by solving matrix equations. However, instead of solving matrix equations, we may obtain fruitful analytical results by using a simple variant of Markov chains.

3.1 Definition of Birth and Death Process

The birth and death process is a special continuous-time Markov chain, and the very basis of standard queueing theory. The process allows two kinds of state transitions:

• {X(t) = j → j+1}: birth,
• {X(t) = j → j−1}: death.

Moreover, the process allows no twins and no simultaneous birth and death. Thus, for example,

P{a birth and a death at t} = 0. (3.1)

Definition 3.1 (Birth and death process). Define X(t) to be a birth and death process with transition rates

P[X(t + ∆t) = j+1 | X(t) = j] = λ_j ∆t + o(∆t),
P[X(t + ∆t) = j−1 | X(t) = j] = µ_j ∆t + o(∆t).


3.2 Differential Equations of Birth and Death Process

The dynamics of the birth and death process is described by a system of differential equations. Using the Markov property, for a sufficiently small ∆t, we have

P_j(t + ∆t) = P_j(t){1 − (λ_j + µ_j)∆t} + P_{j−1}(t) λ_{j−1} ∆t + P_{j+1}(t) µ_{j+1} ∆t + o(∆t). (3.2)

Dividing both sides by ∆t and letting ∆t → 0, we have

(d/dt) P_j(t) = λ_{j−1} P_{j−1}(t) − (λ_j + µ_j) P_j(t) + µ_{j+1} P_{j+1}(t), (3.3)

for j ≥ 1. For j = 0, we have

(d/dt) P_0(t) = −λ_0 P_0(t) + µ_1 P_1(t). (3.4)

Problem 3.1. Can you solve this system of differential equations? If not, what kind of data do you need?

3.3 Infinitesimal Generator

Unlike the discrete-time case, we need the transition rates q_ij for continuous-time Markov chains. It is also much more convenient to use the matrix form. Let q_ij be

q_{j,j+1} = lim_{∆t→0} (1/∆t) P{X(t + ∆t) = j+1 | X(t) = j} = λ_j, (3.5)
q_{j,j−1} = lim_{∆t→0} (1/∆t) P{X(t + ∆t) = j−1 | X(t) = j} = µ_j. (3.6)

A birth and death process is described by its infinitesimal generator Q as

Q = ( −λ_0   λ_0
       µ_1  −(λ_1 + µ_1)   λ_1
             µ_2  −(λ_2 + µ_2)   λ_2
                   ...    ...    ... )  (3.7)


3.4 System Dynamics

Definition 3.2 (State probability). The state of the system at time t can be described by the infinite-dimensional vector

P(t) = (P_0(t), P_1(t), ···), (3.8)

where P_k(t) = P[X(t) = k].

The dynamics of the state is described by the matrix differential equation

dP(t)/dt = P(t)Q. (3.9)

Formally, the differential equation can be solved by

P(t) = P(0) e^{Qt}, (3.10)

where e^{Qt} is the matrix exponential defined by

e^{Qt} = ∑_{n=0}^∞ (Qt)^n / n!. (3.11)

Remark 3.1. It is hard to solve this system equation, since it is indeed an infinite-dimensional equation. (If you are brave enough to solve it, please let me know!)
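Numerically, one standard workaround is to truncate the state space at some level K and evaluate (3.10) with a matrix exponential. A sketch with constant rates λ and µ (illustrative values of our choosing), using scipy:

    import numpy as np
    from scipy.linalg import expm

    lam, mu, K = 1.0, 2.0, 50           # illustrative rates and truncation

    # Truncated infinitesimal generator (3.7) on states 0..K.
    Q = np.zeros((K + 1, K + 1))
    for j in range(K + 1):
        if j < K:
            Q[j, j + 1] = lam           # birth rate lambda_j
        if j > 0:
            Q[j, j - 1] = mu            # death rate mu_j
        Q[j, j] = -Q[j].sum()           # rows of a generator sum to zero

    P0 = np.zeros(K + 1)
    P0[0] = 1.0                          # start with an empty system
    Pt = P0 @ expm(Q * 50.0)             # P(t) = P(0) e^{Qt} at t = 50
    print(Pt[:5])                        # ~ (1 - rho) rho^k with rho = 1/2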

3.5 Poisson Process

Definition 3.3 (Poisson process). The Poisson process is a special birth and death process with the following parameters:

• µ_k = 0: no deaths,
• λ_k = λ: constant birth rate.

Then the corresponding system equations are

dP_k(t)/dt = −λ P_k(t) + λ P_{k−1}(t)   for k ≥ 1 (internal states), (3.12)
dP_0(t)/dt = −λ P_0(t)   (boundary state). (3.13)

Also, the initial condition is

P_k(0) = { 1   k = 0,
           0   otherwise, (3.14)

which means no population initially.

Now we can solve the equations by iteration:

P_0(t) = e^{−λt}, (3.15)
P_1(t) = λt e^{−λt}, (3.16)
···
P_k(t) = ((λt)^k / k!) e^{−λt}, (3.17)

which is the Poisson distribution!

Problem 3.2. Show that (3.17) satisfies (3.12).

Theorem 3.1. The population at time t of a constant-rate pure birth process has a Poisson distribution.
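Theorem 3.1 can be checked by simulation: generate the births from exponential interarrival times (see Section 3.9) and compare the empirical distribution of the population at time t with (3.17). A sketch:

    import math
    import random

    random.seed(1)
    lam, t, runs = 2.0, 3.0, 100000

    def count_births(lam, t):
        # Add up exponential interarrival times until time t is passed.
        n, clock = 0, random.expovariate(lam)
        while clock < t:
            n += 1
            clock += random.expovariate(lam)
        return n

    counts = [count_births(lam, t) for _ in range(runs)]
    for k in range(5):
        empirical = counts.count(k) / runs
        poisson = (lam * t)**k / math.factorial(k) * math.exp(-lam * t)
        print(k, round(empirical, 4), round(poisson, 4))   # should agree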

3.6 Why do we use the Poisson process?

• It IS a Poisson process!
• It is EASY to use the Poisson process!

Theorem 3.2 (The law of Poisson small numbers). The counting process of the number of independent rare events is (approximately) a Poisson process.

If we have many users who use one common system, but not so often, then the input to the system can be modeled as a Poisson process.

Let us summarize the Poisson process as an input to the system:

1. A(t) is the number of arrivals during [0, t).
2. The probability that the number of arrivals in [0, t) is k is given by

P[A(t) = k] = P_k(t) = ((λt)^k / k!) e^{−λt}.

3. The mean number of arrivals in [0, t) is given by

E[A(t)] = λt, (3.18)

where λ is the arrival rate.

3.7 Z-transform of Poisson process

The z-transform is a very useful tool for investigating stochastic processes. Here are some examples for the Poisson process:

E[z^{A(t)}] = e^{−λt(1−z)}, (3.19)
E[A(t)] = (d/dz) E[z^{A(t)}] |_{z=1} = λt, (3.20)
Var[A(t)] = Find it! (3.21)

3.8 Independent Increment

Theorem 3.3 (Independent increments).

P[A(t) = k | A(s) = m] = P_{k−m}(t − s).

The arrivals after time s are independent of the past.

3.9 Interarrival Time

Let T be the interarrival time of the Poisson process. Then

P[T ≤ t] = 1 − P_0(t) = 1 − e^{−λt}: the exponential distribution. (3.22)

3.10 Memory-less property of Poisson process

Theorem 3.4 (Memoryless property). Let T have the exponential distribution (3.22). Then

P[T ≤ t + s | T > t] = P[T ≤ s]. (3.23)

Proof.

P[T ≤ t + s | T > t] = P[t < T ≤ t + s] / P[T > t] = 1 − e^{−λs}. (3.24)

3.11 PASTA: Poisson Arrivals See Time Averages

Poisson arrivals will see the time average of the system. This is quite important for performance evaluation.

3.12 Exercise

1. Find an example of a Markov chain which does not have a stationary distribution.

2. In the setting of Section 2.6, answer the following:

(a) Prove (2.16).
(b) When you are sure that your friend was in Aizu-Wakamatsu initially on Monday, where do you have to go on the following Wednesday to join her? Describe why.
(c) When you do not know when and where she starts, which place do you have to go to join her? Describe why.

3. Show E[A(t)] = λt when A(t) is a Poisson process, i.e.,

P[A(t) = k] = ((λt)^k / k!) e^{−λt}.

4. When A(t) is a Poisson process, calculate Var[A(t)] using the z-transform.

5. Make the graph of a Poisson process and the exponential distribution, using Mathematica.


6. Let T be the (exponentially distributed) interarrival time of a Poisson process.

(a) What are the mean and variance of T?
(b) What is the Laplace transform of T? (E[e^{−sT}])

Chapter 4

Introduction to Queueing Systems

4.1 Foundation of Performance Evaluation

The queueing system is the key mathematical concept for the evaluation of systems. The features of queueing systems are:

1. Public: many customers share the system.
2. Limited resources: there are not enough resources if all the customers try to use them simultaneously.
3. Randomness: uncertainty in the customers' behavior.

Many customers use the limited resources at the same time, in a random environment. Thus, we need to estimate the performance in order to balance the quality of service and the resources.

Example 4.1. Here are some examples of queueing systems:

• a manufacturing system,
• the cashier at a Starbucks coffee shop,
• the machines in a lab room,
• the internet.


4.2 Starbucks

Suppose you are the manager of a Starbucks coffee shop. You need to estimate the customer satisfaction of your shop. How can you observe the customer satisfaction? How can you improve the performance effectively and efficiently?

Problem 4.1. By the way, most Starbucks coffee shops have a pick-up station as well as a cashier. Can you give some comments about the performance of this system?

4.3 Specification of queueing systems

System description requirements of queueing systems:

• the arrival process (or input),
• the service time,
• the number of servers,
• the service order.

Let C_n be the n-th customer to arrive at the system. Assume the customer C_n arrives at time T_n, and let X_n = T_{n+1} − T_n be the n-th interarrival time. Suppose the customer C_n requires the amount of time S_n to finish its service. We call S_n the service time of C_n.

We assume that both X_n and S_n are random variables with distribution functions P{X_n ≤ x} and P{S_n ≤ x}.

Definition 4.1. Let us define some terminology:

• E[X_n] = 1/λ: the mean interarrival time.
• E[S_n] = 1/µ: the mean service time.
• ρ = λ/µ: the mean utilization, the ratio of the input to the service. We often assume ρ < 1 for stability.

Let W_n be the waiting time of the n-th customer. Define the sojourn time Y_n of C_n by

Y_n = W_n + S_n. (4.1)


4.4 Little's formula

This is one of the "musts" of performance evaluation. No assumption is needed to prove it!

Definition 4.1. Here are some more definitions:

• A(t): the number of arrivals in [0, t).
• D(t): the number of departures in [0, t).
• R(t): the sum of the time spent by the customers arrived before t.
• N(t): the number of customers in the system at time t.

Little's formula relates the mean sojourn time to the mean queue length.

Theorem 4.1 (Little's formula).

E[N(t)] = λE[Y].

Proof. Seeing the figure below, it is easy to find that

∑_{n=0}^{A(t)} Y_n = ∫_0^t N(s) ds = R(t). (4.2)

Dividing both sides by A(t) and letting t → ∞, we have

E[Y] = lim_{t→∞} (1/A(t)) ∑_{n=0}^{A(t)} Y_n = lim_{t→∞} (t/A(t)) · (1/t) ∫_0^t N(s) ds = E[N(t)]/λ, (4.3)

since λ = lim_{t→∞} A(t)/t.

Example 4.2 (Starbucks coffee shop). Estimate the sojourn time of customers, Y, at the service counter.

[Figure: the cumulative number of arrivals A(t) and departures D(t) plotted against time t.]

Then, we may find the average number of customers in the system,

E[N(t)] = 3. (4.4)

Also, by counting all the orders served, we can estimate the arrival rate of customers, say

λ = 100. (4.5)

Thus, using Little's formula, we have the mean sojourn time of customers in the Starbucks coffee shop,

E[Y] = E[N]/λ = 0.03. (4.6)

Example 4.3 (Exercise room). Estimate the number of students in the room.

• E[Y] = 1: the average time a student spends in the room (hours).
• λ = 10: the average rate of incoming students (students/hour).
• E[N(t)] = λE[Y] = 10: the average number of students in the room.

Example 4.4 (Toll gate). Estimate the time to pass the gate.

• E[N(t)] = 100: the average number of cars waiting (cars).
• λ = 10: the average rate of incoming cars (cars/hour).
• E[Y] = E[N]/λ = 10: the average time to pass the gate (hours).
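Little's formula can also be seen in simulation. The sketch below (with assumed Poisson arrivals, λ = 1, and exponential services, µ = 2) builds a FIFO single-server queue with the waiting-time recursion of Section 4.5, and compares the time average of N(s) with λE[Y]; the two printed values should agree, illustrating (4.2) and (4.3):

    import random

    random.seed(2)
    lam, mu, n = 1.0, 2.0, 200000

    t, w, sum_Y, events = 0.0, 0.0, 0.0, []
    for _ in range(n):
        s = random.expovariate(mu)        # service time S_n
        sum_Y += w + s                    # sojourn time Y_n = W_n + S_n
        events.append((t, +1))            # arrival
        events.append((t + w + s, -1))    # departure
        x = random.expovariate(lam)       # interarrival time X_n
        w = max(w + s - x, 0.0)           # waiting time of the next customer
        t += x

    # Integrate the step function N(s) over [0, T].
    events.sort()
    T = events[-1][0]
    area, last, level = 0.0, 0.0, 0
    for time, step in events:
        area += level * (time - last)
        last, level = time, level + step

    print(area / T)                # time average of N(s)
    print((n / T) * (sum_Y / n))   # lambda * E[Y], equal by Little's formula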

4.5 Lindley equation and Loynes variable

Here's the "Newton equation" of queueing systems.

Theorem 4.2 (Lindley equation). For a one-server queue with first-in-first-out service discipline, the waiting times of the customers can be obtained by the following iteration:

W_{n+1} = max(W_n + S_n − X_n, 0). (4.7)

The Lindley equation governs the dynamics of the queueing system. Although it may be hard to believe, the following alternative is sometimes much easier to handle for the waiting time.

Theorem 4.3(Loyne’s variable). Given that W1=0, the waiting time Wncan be

also expressed by

Wn= max j=0,1,...,n−1{

n1

i=j

(SiXi),0}. (4.8)

Proof. Use induction. It is clear that W_1 = 0. Assume the theorem holds for n. Then, by the Lindley equation,

W_{n+1} = max(W_n + S_n − X_n, 0)
        = max( max_{j=1,...,n−1} { ∑_{i=j}^{n−1} (S_i − X_i), 0 } + S_n − X_n, 0 )
        = max( max_{j=1,...,n−1} { ∑_{i=j}^{n} (S_i − X_i), S_n − X_n }, 0 )
        = max_{j=0,1,...,n} { ∑_{i=j}^{n} (S_i − X_i), 0 }.
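A quick numerical check that the recursion (4.7) and the Loynes expression (4.8) give the same waiting times, for arbitrary (here: assumed exponential) service and interarrival samples:

    import random

    random.seed(3)
    n = 10
    S = [random.expovariate(2.0) for _ in range(n)]   # service times S_i
    X = [random.expovariate(1.0) for _ in range(n)]   # interarrival times X_i

    # Waiting times by the Lindley recursion (4.7), with W_1 = 0.
    W = [0.0]
    for i in range(n - 1):
        W.append(max(W[-1] + S[i] - X[i], 0.0))

    # The same waiting times by the Loynes expression (4.8).
    for k in range(n):
        sums = [sum(S[i] - X[i] for i in range(j, k)) for j in range(k)]
        assert abs(W[k] - max(sums + [0.0])) < 1e-9
    print(W)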

4.6 Exercise

1. Restaurant management

Your friend owns a restaurant. He wants to estimate how long each customer spends in his restaurant during lunch time, and he is asking you to help him. (Note that the average sojourn time is one of the most important indices for operating restaurants.) Your friend says he knows that:

• the average number of customers in his restaurant is 10, and
• the average rate of incoming customers is 15 per hour.

Estimate the average sojourn time of the customers.


2. Modelling a PC

Consider how you can model a PC as a queueing system for estimating its performance. Describe what corresponds to the following terminology of queueing theory:

• customers
• arrival
• service
• the number of customers in the system

3. Web site administration

You are responsible for operating a big WWW site. A vendor of PC servers proposes two plans, which have the same cost. Which plan do you choose? Describe the reason for your choice.

• Use 10 moderate-speed servers.
• Use one high-speed server.


Chapter 5

PASTA

5.1 What will be seen by customers

In general, the system seen by customers at arrival is not always the time average. Let L_n be the number of customers in the system seen by the n-th arrival, and L(t) be the number of customers in the system at time t. Since the means can be approximated by sample averages, we have

E[L(t)] ≈ (1/T) ∫_0^T L(s) ds, (5.1)

and

E[L_n] ≈ (1/N) ∑_{n=0}^N L_n, (5.2)

for large T and N. However, in general,

E[L_n] ≠ E[L(t)], (5.3)

even in the stationary state of the system.

Example 5.1 (Pair arrivals). Assume pairs of customers (always two!) arrive at a system, and the pairs arrive according to a Poisson process. Then the evenly-indexed customers always see one more customer in the system than the oddly-indexed customers. Thus,

E[L_{2k}] = E[L_{2k−1}] + 1 = E[L(t)] + 1, (5.4)

where we used "PASTA" (explained below) in the second equality.


5.2 Stochastic Intensity

Let us define the so-called stochastic intensity for stationary counting processes. Let T_n be the time of the n-th event and A(t) be its counting process, i.e., the number of events in (0, t].

Definition 5.1 (Intensity of a counting process). The intensity of the stationary counting process A(t) is defined by

λ = E[A(t + s) − A(t)] / s. (5.5)

Definition 5.2 (The stochastic intensity of A(t)). The stochastic intensity of the counting process A(t) given L(t) is defined by

λ(t) = lim_{s→0} E[A(t + s) − A(t) | L(t)] / s = lim_{s→0} P[A(t + s) − A(t) = 1 | L(t)] / s. (5.6)

The stochastic intensity λ(t) is a random variable representing the intensity conditioned on the state of the system just before time t.

Theorem 5.1 (The relation between the intensity and the stochastic intensity).

E[λ(t)] = λ. (5.7)

Proof.

E[λ(t)] = E[ lim_{s→0} E[A(t + s) − A(t) | L(t)] / s ]
        = lim_{s→0} E[ E[A(t + s) − A(t) | L(t)] ] / s
        = lim_{s→0} E[A(t + s) − A(t)] / s
        = λ.

Example 5.2 (Stochastic intensity of the departure process of an M/M/1 queue). Consider an M/M/1 queue with arrival rate λ and service rate µ. Then the stochastic intensity of the departure process D(t) is

λ(t) = µ 1{L(t) > 0}. (5.8)

Let ρ = λ/µ. Since we know that P[L(t) > 0] = ρ, we have E[λ(t)] = µρ = λ, in agreement with Theorem 5.1.


5.3 Lack of Bias Assumption

Let us consider the difference between the number of customers L seen just before the event time T_n and its time average.

Theorem 5.2 (Cross-covariance equation, P. Bremaud and Mazumdar [1992]).

E[L(T_n−)] − E[L(t)] = Cov[L(t), λ(t)] / λ. (5.9)

Proof. For a small s,

E[L(t−) | A(t, t+s] = 1] = E[L(t−) 1{A(t,t+s]=1}] / P[A(t,t+s] = 1]
                         = E[L(t−) P[A(t,t+s] = 1 | L(t−)]] / P[A(t,t+s] = 1].

Letting s → 0 on both sides, we have

E[L(T_n−)] = E[L(t)λ(t)] / λ, (5.10)

where λ = E[λ(1)] = E[A(1)] is the intensity of A(t). Thus, we have

E[L(T_n−)] − E[L(t)] = Cov[L(t), λ(t)] / λ. (5.11)

Thus, when Cov[L(0), λ(0)] = 0, the events will see the time average. The condition Cov[L(0), λ(0)] = 0 is known as the lack of bias assumption (LBA), and finding processes with LBA has been studied intensively in the context of ASTA (arrivals see time averages): El-Taha and Stidham [1999]; Melamed and Yao [1995]; Glynn et al. [1993]; Melamed and Whitt [1990]; Bremaud and Mazumdar [1992].

5.4 Reversed Stochastic Intensity

Definition 5.3 (Reversed stochastic intensity). Define the reversed stochastic intensity λ_D(t) of D(t) by

λ_D(t) = lim_{s→0} E[D(t) − D(t − s) | L(t+)] / s. (5.12)

Intuitively, λ_D(t) can be regarded as the stochastic intensity conditioned on the number of users left behind at a departure. Then, by using (5.9) for this reversed process, we have

E[L(D_n+)] − E[L(t)] = Cov[L(t+), λ_D(t)] / λ, (5.13)

since E[D(1)] = E[A(1)] = λ for stationary systems.

Remark 5.1. If the arrival process is simple and the sojourn time of each user is independent, then E[L(D_n+)] = E[L(T_n−)], and we have Cov[L(0−), λ(0)] = Cov[L(0+), λ_D(0)].

5.5 Poisson arrivals see time averages

It is known that Poisson arrivals see time averages. We call this property PASTA (not an Italian dish, though)!

Theorem 5.3. Poisson arrivals see the time average of the system.

Proof. Since the Poisson process loses the memory of the past, we have

λ(t) = lim_{s→0} E[A(t + s) − A(t) | L(t)] / s
     = lim_{s→0} E[A(t + s) − A(t)] / s
     = λ,

which is constant. Thus,

Cov[L(0), λ(0)] = Cov[L(0), λ] = 0.

Hence, E[L(T_n−)] = E[L(t)].


5.6 Exercise

1. For an M/M/2 queue,

Chapter 6

M/M/1 Queue

The most important queueing process!

• Arrival: Poisson process
• Service: exponential distribution
• Servers: one
• Waiting room (or buffer): infinite

6.1 M/M/1 queue as a birth and death process

Let N(t) be the number of customers in the system (including the one being served). N(t) is the birth and death process with

• λ_k = λ: constant arrival rate,
• µ_k = µ: constant service rate,

i.e.,

P(exactly one customer arrives in [t, t + ∆t]) = λ∆t, (6.1)
P(given at least one customer in the system, exactly one service completes in [t, t + ∆t]) = µ∆t. (6.2)

An M/M/1 queue is the birth and death process with constant birth rate (arrival rate) and constant death rate (service rate).


Theorem 6.1 (Steady-state probability of the M/M/1 queue). When ρ < 1,

p_k = P[N(t) = k] = (1 − ρ)ρ^k, (6.3)

where ρ = λ/µ.

Proof. The balance equation is

λ p_{k−1} = µ p_k. (6.4)

Solving this,

p_k = (λ/µ) p_{k−1} = (λ/µ)^k p_0. (6.5)

Then, by the fact that ∑ p_k = 1, we have

p_k = (1 − ρ)ρ^k. (6.6)

It is easy to see that

E[N(t)] = ρ / (1 − ρ). (6.7)
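A two-line numerical check of (6.3) and (6.7), for an illustrative utilization of our choosing:

    rho = 0.6                                      # must satisfy rho < 1

    p = [(1 - rho) * rho**k for k in range(1000)]  # steady state (6.3)
    print(sum(p))                                  # ~1: a proper distribution
    print(sum(k * pk for k, pk in enumerate(p)))   # ~ rho/(1 - rho) = 1.5, (6.7)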

6.2 Utilization

The quantity ρ is said to be the utilization of the system. Since p_0 is the probability that the system is empty,

ρ = 1 − p_0 (6.8)

is the probability that the server is busy.

When ρ ≥ 1, the system is unstable. Indeed, the number of customers in the system will go to ∞.

6.3 Waiting Time Estimation

Updating...
