
the solution of the equation is given by

X = P^-1 [ tij/(1 - λivj) ] R. (30)

Here tij is the (i, j)-element of T defined as

T = PWR^-1, (31)

and P and R are non-singular matrices diagonalizing U and V respectively.

(b) Under the assumption (29), the solution (30) is unique.

(c) Under the assumption

1 > |λivj|, (i = 1, ... , m; j = 1, ... , n), (32)

the matrix series

X = Σk=0∞ U^kWV^k = W + UWV + U^2WV^2 + ... (33)

converges and is equal to the solution (30).

(d) When m = n, U = V', and W is positive definite, X of (33) with U = V' is positive definite under the assumption (32).

Proof. (a) Under the assumption (29), X of (30) is defined. Substituting this X into the left hand side of the equation (28), we have

X - UXV = P^-1 [ tij/(1 - λivj) ] R - P^-1 Λ [ tij/(1 - λivj) ] M R = P^-1 [ tij ] R = P^-1 T R = W.

Here Λ = PUP^-1 and M = RVR^-1, Λ is a diagonal matrix with diagonal elements λ1, ... , λm, and M is a diagonal matrix with diagonal elements v1, ... , vn,

showing that (30) is a solution for (28).
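The computation in (a) can be checked numerically. In the following sketch the random matrices U, V, W and the spectral rescaling that keeps every product λivj away from 1 are illustrative assumptions, not taken from the text; the code builds X from (30)-(31) and checks the residual of (28).

```python
import numpy as np

# Sketch: verify the closed-form solution of X - U X V = W given in (a).
# U, V, W are random illustrative matrices, rescaled so that every
# product lam_i * v_j stays away from 1.
rng = np.random.default_rng(0)
m, n = 3, 4
U = rng.normal(size=(m, m))
U *= 0.8 / np.max(np.abs(np.linalg.eigvals(U)))   # spectral radius 0.8
V = rng.normal(size=(n, n))
V *= 0.8 / np.max(np.abs(np.linalg.eigvals(V)))
W = rng.normal(size=(m, n))

lam, Pinv = np.linalg.eig(U)      # U = Pinv diag(lam) Pinv^-1
v, Rinv = np.linalg.eig(V)        # V = Rinv diag(v) Rinv^-1
P = np.linalg.inv(Pinv)           # then Lam = P U P^-1 is diagonal
R = np.linalg.inv(Rinv)           # and  M  = R V R^-1 is diagonal

T = P @ W @ Rinv                  # T = P W R^-1, as in (31)
Xtilde = T / (1.0 - np.outer(lam, v))   # elementwise t_ij / (1 - lam_i v_j)
X = Pinv @ Xtilde @ R             # X = P^-1 [ . ] R, as in (30)

residual = np.linalg.norm(X - U @ X @ V - W)
print(residual)                   # of the order of machine precision
```

The eigendecomposition may be complex for a real non-symmetric matrix; the residual is still real and tiny.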

(b) Suppose that there are two solutions X1 and X2. Then the two equations X1 - UX1V = W and X2 - UX2V = W hold. Subtracting one from the other, we get (X1-X2) - U(X1-X2)V = 0, or

Z - UZV = 0, (35)

where Z = X1 - X2. And (35) can be transformed into

0 = P(Z - UZV)R^-1 = PZR^-1 - (PUP^-1)(PZR^-1)(RVR^-1). (36)

If we let Q = [ qij ] = PZR^-1 or P^-1QR = Z, and make use of the relations Λ = PUP^-1 and M = RVR^-1, then the last equality of (36) is transformed into an equation for Q,

Q - ΛQM = 0,

which is rewritten as

qij - λiqijvj = 0, (i = 1, ... , m; j = 1, ... , n),

or

(1 - λivj)qij = 0.

This gives the solution

qij = 0, i.e. Q = 0,

under the assumption (29) 1 ≠ λivj. So we have Z = P^-1QR = P^-1 0 R = 0.

This implies X1 - X2 = 0, and therefore the uniqueness follows.

(c) If we substitute (33) into the left hand side of the equation (28), then, provided that (33) converges, we have

X - UXV = Σk=0∞ U^kWV^k - Σk=1∞ U^kWV^k = W.

So it is seen that (33) is a solution for (28). Making use of P and R mentioned above, (33) is transformed into

X = Σk=0∞ P^-1 Λ^k (PWR^-1) M^k R. (37)

In view of (31) T = PWR^-1, (37) is rewritten as

X = P^-1 ( Σk=0∞ Λ^k T M^k ) R, (38)

and the series in (38) is rewritten as

Σk=0∞ Yk

by letting Yk = Λ^k T M^k.

Now, if we consider the series

Y = Σk=0∞ Yk = Σk=0∞ Λ^k T M^k, (39)

then yij, the (i, j)-element of Y, is given as

yij = Σk=0∞ λi^k tij vj^k = tij Σk=0∞ (λivj)^k,

which converges under the assumption (32) 1 > |λivj|, and the converging value is

y*ij = tij/(1 - λivj).

Therefore the series (39) converges under the assumption (32), and the converging value is

Y* = [ tij/(1 - λivj) ].

So it is seen that the converging value of (38), or (37), is P^-1 Y* R, which is equal to (30).
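Part (c) can also be checked numerically: the partial sums of the series (33) converge to a solution of (28) once the eigenvalue products are below one in modulus. The random matrices and the rescaling that enforces the assumption (32) are illustrative, not from the text.

```python
import numpy as np

# Sketch: partial sums of the series (33), X = sum_k U^k W V^k, converge
# to a solution of X - U X V = W when 1 > |lam_i v_j|.
rng = np.random.default_rng(1)
m, n = 3, 3
U = rng.normal(size=(m, m))
U *= 0.7 / np.max(np.abs(np.linalg.eigvals(U)))   # spectral radius 0.7
V = rng.normal(size=(n, n))
V *= 0.7 / np.max(np.abs(np.linalg.eigvals(V)))
W = rng.normal(size=(m, n))

X = np.zeros((m, n))
term = W.copy()
for _ in range(400):      # accumulate U^k W V^k for k = 0, 1, 2, ...
    X += term
    term = U @ term @ V

residual = np.linalg.norm(X - U @ X @ V - W)
print(residual)
```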

(d) The result (c) holds under the assumption (32) 1 > |λivj| for the case where m = n, U = V', and vi = λi, (i = 1, ... , m). So, we have that the series (33)

X = Σk=0∞ U^kW(U')^k,

with U = V', converges under the assumption (32).

If we consider the quadratic form ξ'Xξ of X for a non-zero vector ξ' = [ ξ1, ... , ξm ], then

ξ'Xξ = Σk=0∞ ξ'U^kW(U')^kξ. (40)

This series also converges under the assumption (32). Letting ξk = (U')^kξ, we have from (40),

ξ'Xξ = ξ'Wξ + Σk=1∞ ξk'Wξk. (41)

Having assumed that W is positive definite, the first term of the right hand side of (41) is positive, and the other terms are at least non-negative.

So, the right hand side of (41), and hence ξ'Xξ, is positive. ■

7 Consistent Multivariate Distribution

In this section we make use of the results in Section 6 to obtain multivariate normal distributions consistent with given conditional normal distributions.


Lemma 13 Consider three random vectors X = (p × 1), Y = (q × 1) and Z = (r × 1). Letting the conditional distribution of X given Y = y be N(Qy, R) and that of Y given Z = z be N(Sz, T), we have that the conditional distribution of X given Z = z is N(QSz, R + QTQ').

Proof. The density functions of the two conditional distributions are

f(x|y) = c1 exp{ -(1/2)(x - Qy)'R^-1(x - Qy) } and g(y|z) = c2 exp{ -(1/2)(y - Sz)'T^-1(y - Sz) },

where c1 = (2π)^-p/2 |R|^-1/2 and c2 = (2π)^-q/2 |T|^-1/2. Then, we have

f(x|y)g(y|z) = c1c2 exp{ -(1/2)D },

where

D = (x - Qy)'R^-1(x - Qy) + (y - Sz)'T^-1(y - Sz).

Integrating this with respect to y, we obtain


In view of Lemma 9 (e), we get

After rearrangement, we obtain

where

Furthermore, R + QTQ' is also positive definite given that R and T are positive definite. In view of Lemma 9 (b),

In view of Lemma 9 (a),

and


we have

(2π)^-p/2 |R + QTQ'|^-1/2 exp{ -(1/2)(x - QSz)'(R + QTQ')^-1(x - QSz) },

which is the density function of N(QSz, R + QTQ'). ■
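Lemma 13 can be checked in the scalar case p = q = r = 1 by direct numerical integration; the parameter values below are illustrative assumptions, not from the text.

```python
import numpy as np

# Sketch of Lemma 13 for scalars: integrating the N(Qy, R) density in x
# against the N(Sz, T) density in y should give the N(QSz, R + QTQ')
# density evaluated at x.
def npdf(x, mean, var):
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

Q, R, S, T, z = 0.7, 1.3, -0.5, 0.8, 2.0
x = 1.1
y = np.linspace(-30.0, 30.0, 200001)   # integration grid for y
dy = y[1] - y[0]
lhs = np.sum(npdf(x, Q * y, R) * npdf(y, S * z, T)) * dy
rhs = npdf(x, Q * S * z, R + Q * T * Q)
diff = abs(lhs - rhs)
print(lhs, rhs)    # the two values agree
```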

Lemma 14 Letting the conditional distribution of X = (p × 1) given W = (q × 1) be N(Qw, R) and the marginal distribution of W be N(0, T), we have that the marginal distribution of X is N(0, R + QTQ').

Proof. If we let S = 0 in Lemma 13, then we get the result. ■

Lemma 15 For two random vectors X and W, define the kernel function H

H(x, w) = G(x - Qw : R) (42)

with a positive definite matrix R = (p × p), and the function

f(x) = G(x : R + QTQ'), (43)

where the matrix T = (p × p) is defined by a matrix equation

T - QTQ' = R. (44)

Then f(x) of (43) is a solution of an integral equation

f(x) = ∫ H(x, w)f(w) dw. (45)

Proof. Lemma 14 shows that a conditional distribution N(Qw, R) and a marginal distribution g(w) = N(0, T) yield a marginal distribution f(x) = N(0, R + QTQ'). In other words, Lemma 14 implies that if we let H(x, w) = G(x - Qw : R), g(w) = G(w : T) = N(0, T) and f(x) = G(x : R + QTQ') = N(0, R + QTQ'), then an integral equation

f(x) = ∫ H(x, w)g(w) dw (46)

holds. Therefore, if we can find T satisfying (44), then T = R + QTQ' and the equation (46) is rewritten as

f(x) = ∫ H(x, w)G(w : R + QTQ') dw. (47)

Letting f(w) = G(w : R + QTQ'), (47) turns out to be (45). And it is seen that f(x) = G(x : R + QTQ') = N(0, R + QTQ') is a solution for (45) if T satisfies (44). Constant multiples of this f(x) can also be solutions for the integral equation, but they cannot be density functions. ■
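Since (44) takes the form X - UXV = W with U = Q, V = Q' and W = R, the matrix T can be obtained from the series of Section 6, and the fixed-point identity R + QTQ' = T behind Lemma 15 can be checked numerically. The matrices Q and R below are illustrative assumptions.

```python
import numpy as np

# Sketch for Lemma 15: solve T - Q T Q' = R by the series T = sum_k Q^k R (Q')^k
# and verify the fixed-point identity R + Q T Q' = T, i.e. the kernel
# N(Qw, R) maps N(0, T) to itself.
rng = np.random.default_rng(2)
p = 3
Q = rng.normal(size=(p, p))
Q *= 0.5 / np.linalg.norm(Q, 2)   # spectral norm 0.5, so the series converges
A = rng.normal(size=(p, p))
R = A @ A.T + np.eye(p)           # positive definite R

T = np.zeros((p, p))
term = R.copy()
for _ in range(200):              # accumulate Q^k R (Q')^k
    T += term
    term = Q @ term @ Q.T

resid = np.linalg.norm(R + Q @ T @ Q.T - T)
print(resid)
```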

It is shown that solving the integral equation (47) reduces to solving the matrix equation (44). The equation (44) takes the form X - UXV = W, so the results obtained for this matrix equation in Section 6 will be used in the next subsection.

7.2 Derivation of the marginal distribution

Based on the previous results, we obtain one of the main propositions.

It might be helpful to repeat that the conditional distribution of X given Y = y is (25) N(μ1 + B(y - μ2), Σ11), and the conditional distribution of Y given X = x is (26) N(μ2 + B̃(x - μ1), Σ22). In addition, it is assumed p ≦ q. These are our starting point.

Lemma 16 For the parameters of (25) and (26), BB̃ and B̃B are assumed to be diagonalizable, with the characteristic roots being λ1, ... , λp for BB̃. Furthermore, it is assumed that the characteristic roots satisfy the assumption 1 > |λiλj|, (i, j = 1, ... , p), and that Σ11 and Σ22 are positive definite. Then we have the propositions (a) and (b) below.

(a) The marginal distribution of X consistent with (25) and (26) is f(x)

= N(0, Σ*11). Here Σ*11 is a solution of the matrix equation

Σ*11 - (BB̃)Σ*11(BB̃)' = Σ11 + BΣ22B',

where P is a non-singular matrix diagonalizing BB̃.

(b) The marginal distribution of Y consistent with (25) and (26) is h(y) = N(0, Σ*22). Here Σ*22 is a solution of the matrix equation

Σ*22 - (B̃B)Σ*22(B̃B)' = Σ22 + B̃Σ11B̃',

so that Σ*22 satisfies

Σ*22 = Σ22 + B̃Σ*11B̃',

where Q is a non-singular matrix diagonalizing B̃B.

Proof. (a) Assume that μ1 = 0 and μ2 = 0. Given (25) and (26), we get from Lemma 13 that the kernel function H of the integral equation (27) is given as

H(x, w) = G(x - Cw : Σ0),

where

C = BB̃ and Σ0 = Σ11 + BΣ22B'.

For the integral equation (27), we have, from Lemma 15, that the function

f(x) = G(x : Σ*11)

is a solution for (27), if Σ*11 is a solution of the matrix equation
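The construction can be sketched numerically for the zero-mean case. Below, Bt stands for the regression matrix of the conditional distribution (26) of Y given X; the names and the random matrices are illustrative assumptions. Applying Lemma 14 to the two conditionals in turn gives an iteration whose limit is the covariance of the marginal of X, and the limit solves a matrix equation of the form (44).

```python
import numpy as np

# Sketch for Lemma 16 (a): iterate
#   S <- Sigma11 + B (Sigma22 + Bt S Bt') B'
# (Lemma 14 applied to (26), then to (25)); the limit S should satisfy
#   S - (B Bt) S (B Bt)' = Sigma11 + B Sigma22 B'.
rng = np.random.default_rng(3)
p, q = 2, 3
B = rng.normal(size=(p, q))
B *= 0.5 / np.linalg.norm(B, 2)    # keep the eigenvalue products below 1
Bt = rng.normal(size=(q, p))
Bt *= 0.5 / np.linalg.norm(Bt, 2)
A1 = rng.normal(size=(p, p)); Sigma11 = A1 @ A1.T + np.eye(p)
A2 = rng.normal(size=(q, q)); Sigma22 = A2 @ A2.T + np.eye(q)

S = np.zeros((p, p))
for _ in range(300):
    S = Sigma11 + B @ (Sigma22 + Bt @ S @ Bt.T) @ B.T

C = B @ Bt
resid = np.linalg.norm(S - C @ S @ C.T - (Sigma11 + B @ Sigma22 @ B.T))
print(resid)
```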
