the solution of the equation is given by
Here tij is the (i, j)-element of T defined as
and P and R are non-singular matrices diagonalizing U and V respectively.
(b) Under the assumption (29), the solution (30) is unique.
(c) Under the assumption
the matrix series
converges and is equal to the solution (30).
(d) When m = n, U = V' and W is positive definite, X of (33) with U = V' is positive definite under the assumption (32).
Proof. (a) Under the assumption (29), X of (30) is defined. Substituting this X into the left hand side of the equation (28), we have
Here Λ = PUP-1 and M = RVR-1, and Λ is a diagonal matrix with
showing that (30) is a solution for (28).
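As a numerical sketch of (a) (not from the paper), the closed form can be checked in the simplest case where U and V are already diagonal, so that one may take P = R = I and T = W; all numbers below are illustrative assumptions.

```python
# Sketch of part (a): with U and V already diagonal we may take P = R = I,
# so T = W and the solution (30) reduces to x_ij = w_ij / (1 - lambda_i v_j).
# The matrices below are illustrative, not values from the paper.

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

lam = [0.5, -0.3]                      # characteristic roots of U
v = [0.4, 0.2]                         # characteristic roots of V
U = [[lam[0], 0.0], [0.0, lam[1]]]
V = [[v[0], 0.0], [0.0, v[1]]]
W = [[1.0, 2.0], [3.0, 4.0]]

# Solution (30) in the diagonal case.
X = [[W[i][j] / (1.0 - lam[i] * v[j]) for j in range(2)] for i in range(2)]

# Substitute X back into the left-hand side X - U X V of equation (28).
UXV = matmul(matmul(U, X), V)
residual = max(abs(X[i][j] - UXV[i][j] - W[i][j])
               for i in range(2) for j in range(2))
print(residual)  # essentially zero: X solves (28)
```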
(b) Suppose that X1 and X2 are two solutions. Then the two equations X1-UX1V = W and X2-UX2V = W hold. Subtracting one from the other, we get (X1-X2)-U(X1-X2)V = 0, or
where Z = X1-X2. And (35) can be transformed into
If we let Q = [qij] = PZR-1 or P-1QR = Z, and make use of the relations Λ = PUP-1 and M = RVR-1, then the last equality of (36) is transformed into an equation for Q,
which is rewritten as
or
This gives a solution
under the assumption (29) 1 ≠ λivj. So we have Z = P-1QR = P-1 0 R = 0.
This implies X1-X2 = 0, and therefore the uniqueness follows.
(c) If we substitute (33) into the left hand side of the equation (28), then provided that (33) converges, we have
So it is seen that (33) is a solution for (28). Making use of P and R mentioned above, (33) is transformed into
In view of (31) T = PWR-1, (37) is rewritten as
and the series (38) is rewritten as
by letting Yk = ΛkTMk.
Now, if we consider the series
then yij, the (i, j)-element of Y, is given as
which converges under the assumption (32) 1 > |λivj|, and the converging value is
Therefore the series (39) converges under the assumption (32), and the converging value is
So it is seen that the converging value of (37) is P-1Y*R and the converging value of (38) or (37) is equal to (30).
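The convergence shown in (c) can be illustrated numerically. The following sketch (with illustrative, non-diagonal U and V, not taken from the paper) sums the series term by term and checks that the limit solves the equation (28):

```python
# Sketch of part (c): sum the series X = W + UWV + U^2 W V^2 + ... and
# check that the limit satisfies X - U X V = W. The triangular matrices
# below are illustrative; their characteristic roots give
# max |lambda_i v_j| = 0.6 * 0.5 = 0.3 < 1, so assumption (32) holds.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

U = [[0.6, 0.1], [0.0, 0.4]]   # characteristic roots 0.6, 0.4
V = [[0.5, 0.0], [0.2, 0.3]]   # characteristic roots 0.5, 0.3
W = [[1.0, 0.0], [0.0, 1.0]]

X = [[0.0, 0.0], [0.0, 0.0]]
term = [row[:] for row in W]            # k = 0 term: U^0 W V^0 = W
for _ in range(200):
    X = [[X[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    term = matmul(matmul(U, term), V)   # next term U^{k+1} W V^{k+1}

UXV = matmul(matmul(U, X), V)
residual = max(abs(X[i][j] - UXV[i][j] - W[i][j])
               for i in range(2) for j in range(2))
print(residual)  # essentially zero once the series has converged
```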
(d) The result (c) holds under the assumption (32) 1 > |λivj| for the case where m = n, U = V' and vi = λi, (i = 1, ... , m). So, we have that the series (33), with U = V', converges under the assumption (32).
If we consider the quadratic form ξ'Xξ of X for a non-zero vector ξ = [ξ1, ... , ξm]', then
This series also converges under the assumption (32). Letting ξk = U'kξ, we have from (40),
Having assumed that W is positive definite, the first term of the right hand side of (41) is positive, and the other terms are at least non-negative.
So, the right hand side of (41) and ξ'Xξ are positive. ■

7 Consistent Multivariate Distribution
In this section we make use of the results in Section 6 to obtain multivariate normal distributions consistent with given conditional normal distributions.
Lemma 13 Consider three random vectors X = (p × 1), Y = (q × 1) and Z = (r × 1). Letting the distribution of X given Y = y be N(Qy, R) and the distribution of Y given Z = z be N(Sz, T), we have that the distribution of X given Z = z is N(QSz, R + QTQ').
Proof. The density functions of the distributions are
where c1 = (2π)-p/2|R|-1/2 and c2 = (2π)-q/2|T|-1/2. Then, we have
where
Integrating this with respect to y, we obtain
In view of Lemma 9 (e), we get
After rearrangement, we obtain
where
Furthermore, letting Σ = R + QTQ', Σ is also positive definite, given that R and T are positive definite. In view of Lemma 9 (b),
In view of Lemma 9 (a),
and
we have
which is the density function of N(QSz, R + QTQ' ). ■
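A quick Monte Carlo sketch of Lemma 13 in the scalar case p = q = r = 1 (all parameter values below are illustrative assumptions, not from the paper): sampling Y from N(Sz, T) and then X from N(QY, R) should give X a mean near QSz and a variance near R + Q²T.

```python
# Monte Carlo sketch of Lemma 13 for scalars (p = q = r = 1):
# X | Y=y ~ N(Q*y, R) and Y | Z=z ~ N(S*z, T)
#   =>  X | Z=z ~ N(Q*S*z, R + Q*T*Q).
# All parameter values below are illustrative.
import random

random.seed(0)
Q, R, S, T, z = 0.8, 0.5, 1.5, 0.9, 2.0
n = 200_000

xs = []
for _ in range(n):
    y = random.gauss(S * z, T ** 0.5)          # draw Y given Z = z
    xs.append(random.gauss(Q * y, R ** 0.5))   # draw X given Y = y

mean = sum(xs) / n
var = sum((x - mean) ** 2 for x in xs) / n
print(mean, var)  # near Q*S*z = 2.4 and R + Q*Q*T = 1.076
```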
Lemma 14 Letting the conditional distribution of X = (p × 1) given W = w be N(Qw, R) and the marginal distribution of W = (q × 1) be N(0, T), we have that the marginal distribution of X is N(0, R + QTQ').
Proof. If we let S = 0 in Lemma 13, then we get the result. ■
Lemma 15 For two random vectors X and W, define the kernel function H
with a positive definite matrix R = ( p × p), and the function
where the matrix T = (p × p) is defined by a matrix equation
Then φ(x) of (43) is a solution of an integral equation
Proof. Lemma 14 shows that a conditional distribution N(Qw, R) of X given W = w and a marginal distribution g(w) = N(0, T) yield a marginal distribution N(0, R + QTQ') for X. In other words, Lemma 14 implies that if we let H(x, w) = N(Qw, R), g(w) = G(w : T) = N(0, T) and φ(x) = G(x : R + QTQ') = N(0, R + QTQ'), then an integral equation
holds. Therefore, if we can find T satisfying (44), then the equation (46) is rewritten as
Letting φ(x) = G(x : R + QTQ'), (47) turns out to be (45). And it is seen that φ(x) = G(x : R + QTQ') = N(0, R + QTQ') is a solution for (45) if T satisfies (44). Constant multiples of this φ(x) can also be solutions for the integral equation, but they cannot be density functions. ■
It is shown that solving the integral equation (47) reduces to solving the matrix equation (44). The equation (44) takes the form X-UXV = W, so the results obtained about this matrix equation in Section 6 will be used in the next subsection.
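As a sketch of this connection (with illustrative Q and R, not values from the paper), a matrix equation of the form (44), T - QTQ' = R, can be solved by the iteration T ← R + QTQ', which builds up exactly the partial sums of the series of Section 6 with U = Q, V = Q' and W = R:

```python
# Sketch: solve T - Q T Q' = R, the form of equation (44), by iterating
# T <- R + Q T Q'. This accumulates the series R + Q R Q' + Q^2 R Q'^2 + ...
# from Section 6. Q and R are illustrative; the characteristic roots of Q
# are 0.5 and 0.4, so the condition 1 > |lambda_i lambda_j| is satisfied.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

Q = [[0.5, 0.2], [0.0, 0.4]]
R = [[1.0, 0.3], [0.3, 2.0]]   # positive definite

T = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(300):
    QTQt = matmul(matmul(Q, T), transpose(Q))
    T = [[R[i][j] + QTQt[i][j] for j in range(2)] for i in range(2)]

# Check that T satisfies the equation: T - Q T Q' = R.
QTQt = matmul(matmul(Q, T), transpose(Q))
residual = max(abs(T[i][j] - QTQt[i][j] - R[i][j])
               for i in range(2) for j in range(2))
print(residual)  # essentially zero; T is the fixed point
```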
7.2 Derivation of the marginal distribution
Based on the previous results, we obtain one of the main propositions.
It might be helpful to repeat that the conditional distribution of X given Y = y is (25) N(μ + B(y - ν), Σ11), and the conditional distribution of Y given X = x is (26) N(ν + B̃(x - μ), Σ22). In addition, it is assumed p ≦ q. These are our starting point.
Lemma 16 For the parameters of (25) and (26), BB̃ and B̃B are assumed to be diagonalizable, with the characteristic roots of BB̃ being λ1, ... , λp. Furthermore, it is assumed that the characteristic roots satisfy the assumption 1 > |λiλj|, (i, j = 1, ... , p), and that Σ11 and Σ22 are positive definite. Then we have the propositions (a) and (b) below.
(a) The marginal distribution of X consistent with (25) and (26) is g(x) = N(0, Σ̃11). Here Σ̃11 is a solution of the matrix equation
where P is a non-singular matrix diagonalizing BB̃.
(b) The marginal distribution of Y consistent with (25) and (26) is h(y) = N(0, Σ̃22). Here Σ̃22 is a solution of the matrix equation
so that Σ̃22 satisfies
where Q is a non-singular matrix diagonalizing B̃B.
Proof. (a) Assume that μ = 0, ν = 0. Given (25) and (26), we get from Lemma 13 that the kernel function H of the integral equation (27) is given as
where
For the integral equation (27), we have, from Lemma 15, that the function
is a solution for (27), if Σ̃11 is a solution of the matrix equation