NONDENSELY DEFINED MINIMAL SYMMETRIC OPERATORS
I. PARASSIDIS AND P. TSEKREKOS
Received 13 May 2004
Let A_0 be a closed, minimal symmetric operator from a Hilbert space H into H with domain not dense in H. Let A also be a correct selfadjoint extension of A_0. The purpose of this paper is (1) to characterize, with the help of A, all the correct selfadjoint extensions B of A_0 with domain equal to D(A), (2) to give the solution of their corresponding problems, (3) to find sufficient conditions for B to be positive (definite) when A is positive (definite).
1. Introduction
Minimal symmetric operators arise naturally in boundary value problems where they represent differential operators with all their defects, that is, their range is not the whole space and their domain is not dense in the whole space. For example, the operator A_0 defined by the problem A_0 y = iy', y(0) = y(1) = 0, is a minimal symmetric nondensely defined operator. The problem of finding all correct selfadjoint extensions of a minimal symmetric operator is neither easy nor always possible. The whole problem is facilitated when the domain of definition of the minimal symmetric operator is dense.
Correct extensions of densely defined minimal, not necessarily symmetric, operators in Hilbert and Banach spaces have been investigated by Vishik [17], Dezin [3], Otelbaev et al. [10], Oĭnarov and Parasidi [14], and many others. Correct selfadjoint extensions of a densely defined minimal symmetric operator A_0 have been studied by a number of authors, such as J. von Neumann [13], Kočubeĭ [7], Mikhaĭlets [12], and V. I. Gorbachuk and M. L. Gorbachuk [5]. They described the extensions as restrictions of some operators, usually of the adjoint operator A_0^* of A_0. In this paper, we attack the above problem, developing a method which does not depend on maximal operators, but only on the existence of some correct selfadjoint extension of A_0. The essential ingredient in our approach is the extension of the main idea in [14]. More precisely, we show (Theorem 3.2) that every correct selfadjoint extension of a minimal operator is uniquely determined by a vector and a Hermitian matrix (see the comments preceding Theorem 3.2).
In [1, 2, 9, 8], extensions of nondensely defined symmetric operators by embedding H in a space X in which the domain of A_0 is dense were studied. The class of extensions they consider is much wider than ours, but they do not consider correct selfadjoint extensions.
Copyright © 2005 Hindawi Publishing Corporation. Abstract and Applied Analysis 2005:7 (2005) 767–790. DOI: 10.1155/AAA.2005.767
Our method does not require such an embedding and applies equally well to positive correct selfadjoint extensions. Positive selfadjoint extensions of densely defined positive symmetric operators have been considered by Friedrichs [4].
As a demonstration of the theory developed in this paper, we give here all the correct selfadjoint extensions of the minimal operator A_0 in the example mentioned at the beginning of the introduction. These are the operators B : L^2(0,1) → L^2(0,1), Bu = iu' − ct∫_0^1 tu(t)dt with D(B) = {u ∈ H^1(0,1) : u(0) = −u(1)}, where c is any real number.
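The symmetry of these operators B can be checked numerically. The sketch below is our own illustration, not part of the paper; the grid size, the constant c = 1.7, and the particular test functions are arbitrary choices. It verifies (Bu, v) = (u, Bv) for two functions satisfying the boundary condition u(0) = −u(1):

```python
import numpy as np

# Uniform grid on [0, 1]; fine enough that trapezoid error is negligible.
t = np.linspace(0.0, 1.0, 200001)
dt = t[1] - t[0]
c = 1.7  # an arbitrary real constant for this check

def integrate(y):
    # composite trapezoid rule on the uniform grid
    return np.sum((y[1:] + y[:-1]) / 2.0) * dt

def inner(f, g):
    # L^2(0,1) inner product (f, g) = ∫ f conj(g)
    return integrate(f * np.conj(g))

# u, v satisfy the boundary condition u(0) = -u(1) of D(B).
u = np.cos(np.pi * t)
v = np.cos(3.0 * np.pi * t)
du = -np.pi * np.sin(np.pi * t)              # u'
dv = -3.0 * np.pi * np.sin(3.0 * np.pi * t)  # v'

def B(w, dw):
    # Bw = i w' - c t * ∫_0^1 s w(s) ds
    return 1j * dw - c * t * integrate(t * w)

sym_defect = abs(inner(B(u, du), v) - inner(u, B(v, dv)))
```

The boundary term i[uv̄]_0^1 vanishes precisely because of the condition u(0) = −u(1), which is why the symmetry defect is zero up to quadrature error.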
The paper is organized as follows. In Section 2, we recall some basic terminology and notation about operators. In Section 3, we prove the main general results. Finally, in Section 4, we discuss several examples of integrodifferential equations which show the usefulness of our results.
2. Terminology and notation
By H, we will always denote a complex Hilbert space with inner product (·,·). The (linear) operators from H into H we refer to are not everywhere defined on H. We write D(A) and R(A) for the domain and the range of the operator A, respectively. Two operators A_1 and A_2 are said to be equal if D(A_1) = D(A_2) and A_1x = A_2x for all x ∈ D(A_1). A_2 is said to be an extension of A_1, or A_1 a restriction of A_2, in symbols A_1 ⊂ A_2, if D(A_2) ⊇ D(A_1) and A_1x = A_2x for all x ∈ D(A_1). We notice that if A ⊂ B and A^{-1}, B^{-1} exist, then A^{-1} ⊂ B^{-1}. An operator A_0 : H → H is called closed if for every sequence x_n in D(A_0) converging to x_0 with A_0x_n → f_0, it follows that x_0 ∈ D(A_0) and A_0x_0 = f_0. A closed operator A_0 : H → H is called minimal if R(A_0) ≠ H and the inverse A_0^{-1} exists on R(A_0) and is continuous. A is called maximal if R(A) = H and ker A ≠ {0}. An operator A is called correct if R(A) = H and the inverse A^{-1} exists and is continuous. An operator A is called a correct extension (resp., restriction) of the minimal (resp., maximal) operator A_0 (resp., Â) if it is a correct operator and A_0 ⊂ A (resp., A ⊂ Â).
Let A be an operator with domain D(A) dense in H. The adjoint operator A^* : H → H of A with domain D(A^*) is defined by the equation (Ax, y) = (x, A^*y) for every x ∈ D(A) and every y ∈ D(A^*). The domain D(A^*) of A^* consists of all y ∈ H for which the functional x ↦ (Ax, y) is continuous on D(A). An operator A is called selfadjoint if A = A^*, and symmetric if (Ax, y) = (x, Ay) for all x, y ∈ D(A). We note that, in the case in which D(A) is dense in H, A is symmetric if and only if A ⊂ A^*. A symmetric operator A is said to be positive if (Ax, x) ≥ 0 for every x ∈ D(A), and positive definite if there exists a positive real number k such that (Ax, x) ≥ k‖x‖^2 for all x ∈ D(A).
The defect def A_0 of an operator A_0 is the dimension of the orthogonal complement R(A_0)^⊥ of its range R(A_0).
Let F = (F_1, ..., F_m) be a vector of H^m and AF = (AF_1, ..., AF_m). We write F^t and (Ax, F^t) for the column vectors col(F_1, ..., F_m) and col((Ax, F_1), ..., (Ax, F_m)), respectively. We denote by (AF^t, F) the m×m matrix whose i, j-th entry is the inner product (AF_i, F_j), and by M^t the transpose of a matrix M. We denote by I and 0 the identity and the zero matrix, respectively.
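In finite dimensions these conventions become ordinary matrix operations. The following sketch is our own illustration (numpy and the real symmetric case are assumptions, so "Hermitian" reduces to "symmetric"); it assembles the column vector (Ax, F^t) and the matrix (AF^t, F):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 2

M = rng.standard_normal((n, n))
A = (M + M.T) / 2.0                 # a symmetric matrix in the role of A
F = rng.standard_normal((n, m))     # columns F_1, ..., F_m
x = rng.standard_normal(n)

# (Ax, F^t): column vector col((Ax, F_1), ..., (Ax, F_m))
Ax_Ft = F.T @ (A @ x)

# (AF^t, F): m x m matrix with (i, j) entry (AF_i, F_j)
AFt_F = np.array([[(A @ F[:, i]) @ F[:, j] for j in range(m)]
                  for i in range(m)])

# With A symmetric, (AF_i, F_j) = (F_i, AF_j), so (AF^t, F) is symmetric.
sym_err = np.abs(AFt_F - AFt_F.T).max()
```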
3. Correct selfadjoint extensions of minimal symmetric operators
Throughout this paper, A_0 will denote a nondensely defined symmetric minimal operator and A a correct selfadjoint extension of A_0. Let E_cs(A_0, A) denote the set of all correct selfadjoint extensions of A_0 with domain D(A), and let E^m_cs(A_0, A) denote the subset of E_cs(A_0, A) consisting of all B ∈ E_cs(A_0, A) such that dim R(B − A) = m.
We begin with the following key lemma.
Lemma 3.1. For every B ∈ E^m_cs(A_0, A), there exist a vector F = (F_1, ..., F_m), where F_1, ..., F_m are linearly independent elements of D(A) ∩ R(A_0)^⊥, and a Hermitian invertible matrix T = (t_ij)_{i,j=1}^m such that

Bx = Ax − AFTW^{-1}(x, AF^t),  ∀x ∈ D(A),  (3.1)

where W = I + (AF^t, F)T, with det W ≠ 0.
Proof. Let B ∈ E^m_cs(A_0, A). Then dim R(B − A) = m. The main result of [10] implies that there exists a linear continuous operator K : H → D(A) with D(K) = H, ker K ⊇ R(A_0), and ker(A^{-1} + K) = {0} such that

B^{-1} = A^{-1} + K,  or  K = B^{-1} − A^{-1}.  (3.2)

Hence K = K^*, since B^{-1} and A^{-1} are selfadjoint operators. Since A_0 is a minimal operator, it follows that R(A_0) is a closed subspace of H, and so

H = R(A_0) ⊕ R(A_0)^⊥.  (3.3)
We will show that dim R(K) = m. Indeed, from (3.2), it follows that Kf = B^{-1}f − A^{-1}f for all f ∈ H. Let x = B^{-1}f. Then

x = A^{-1}f + Kf,  Ax = f + AKf,  (3.4)

from which it follows that (A − B)x = A(Kf) for all f ∈ H. Since dim R(A − B) = m and the operator A is invertible, we have dim R(K) = m. Therefore, the selfadjointness of K gives the decomposition

H = ker K ⊕ R(K).  (3.5)

From the decompositions (3.3), (3.5) and the inclusion ker K ⊇ R(A_0), we conclude that

R(K) ⊆ R(A_0)^⊥.  (3.6)
Fix a basis {F_1, F_2, ..., F_m} of R(K). Then, for every f in H, there are scalars α_i such that

Kf = Σ_{i=1}^m α_i F_i.  (3.7)

Let {ψ_1, ψ_2, ..., ψ_m} be the biorthogonal family of elements of H corresponding to the above basis of R(K), that is, (ψ_i, F_j) = δ_ij, i, j = 1, ..., m. From (3.7), we have (Kf, ψ_j) = (Σ_{i=1}^m α_i F_i, ψ_j) = Σ_{i=1}^m α_i (F_i, ψ_j) = α_j, j = 1, 2, ..., m. Hence,

Kf = Σ_{i=1}^m (Kf, ψ_i) F_i = Σ_{i=1}^m (f, Kψ_i) F_i,  ∀f ∈ H.  (3.8)
In particular, for f = ψ_j, we have

Kψ_j = Σ_{i=1}^m (ψ_j, Kψ_i) F_i,  or equivalently,  Kψ_i = Σ_{l=1}^m (ψ_i, Kψ_l) F_l.  (3.9)
Replacing the above expression for Kψ_i in (3.8), we obtain

Kf = Σ_{i=1}^m (f, Σ_{l=1}^m (ψ_i, Kψ_l) F_l) F_i = Σ_{i=1}^m Σ_{l=1}^m (f, F_l)(Kψ_l, ψ_i) F_i.  (3.10)

If T denotes the matrix ((Kψ_l, ψ_i))_{l,i=1}^m, then (3.10) takes the form

Kf = FT(f, F^t),  ∀f ∈ H.  (3.11)

Now the reader can easily verify that each of the matrices T and (AF^t, F) is Hermitian. We claim that T is invertible. Let K̃ = K|_{R(K)} denote the restriction of K to its range. From (3.5), it follows that ker K ∩ R(K) = {0}; therefore ker K̃ = {0}. Substituting f = F_j into (3.11), we obtain

KF_j = FT(F^t, F_j),  or  KF = FT(F^t, F).  (3.12)

The determinant det(F^t, F) is nonzero, being the determinant of the Gram matrix (F^t, F) of F. Since the vectors F_1, F_2, ..., F_m of R(K) are linearly independent and ker K̃ = {0}, it follows that det T ≠ 0, which proves our claim.
We now prove formula (3.1), which describes the action of the operator B on x. From (3.4) and (3.11), we have

Ax = f + AFT(f, F^t).  (3.13)

Then, taking the inner product with F^t, we get

(Ax, F^t) = (AFT(f, F^t), F^t) + (f, F^t) = (f, F^t) + (AF^t, F)T(f, F^t) = [I + (AF^t, F)T](f, F^t).  (3.14)
Let W denote the matrix I + (AF^t, F)T. We will show that det W ≠ 0. For if det W = 0, then det W^t = 0; hence there exists a nonzero vector a = col(a_1, ..., a_m) such that W^t a = 0. We consider the linear combination f_0 = Σ_{i=1}^m a_i AF_i. Since the vectors F_1, ..., F_m are linearly independent and ker A = {0}, their images AF_i under A are linearly independent as well. It follows that f_0 ≠ 0. Combining (3.4) and (3.11), we get x = A^{-1}f + FT(f, F^t), where x = B^{-1}f. In particular, for x_0 = B^{-1}f_0, we compute

x_0 = A^{-1}f_0 + FT(f_0, F^t)
= A^{-1} Σ_{i=1}^m a_i AF_i + FT(Σ_{i=1}^m a_i AF_i, F^t)
= Fa + FT Σ_{i=1}^m a_i (F^t, AF_i)
= Fa + FT Σ_{i=1}^m a_i (AF^t, F_i)
= F[Ia + T(AF^t, F)a]
= F[I + T(AF^t, F)]a
= FW^t a.  (3.15)

In the above chain of equalities, the last one follows from the definition of W and the fact that the matrices T and (AF^t, F) are Hermitian. But W^t a = 0, hence x_0 = B^{-1}f_0 = 0. This implies that the nonzero vector f_0 is contained in the kernel ker B^{-1} of B^{-1}, contradicting the correctness of B. So det W ≠ 0. Now (3.14) gives (f, F^t) = W^{-1}(x, AF^t), which together with (3.13) implies formula (3.1). □
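Lemma 3.1 can be sanity-checked on matrices. The sketch below is our own finite-dimensional illustration with real symmetric matrices (all particular matrices are arbitrary choices, not from the paper): B is built from B^{-1} = A^{-1} + K with K = FT(·, F^t) as in (3.2) and (3.11), and formula (3.1) reproduces it.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 5, 2

M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # symmetric, invertible "extension" A
F = rng.standard_normal((n, m))      # linearly independent F_1, ..., F_m
T = np.diag([0.1, 0.2])              # Hermitian (here symmetric) invertible T

# B^{-1} = A^{-1} + K with K f = F T (f, F^t), as in (3.2) and (3.11).
Ainv = np.linalg.inv(A)
B = np.linalg.inv(Ainv + F @ T @ F.T)

# Formula (3.1): B x = A x - A F T W^{-1} (x, AF^t),  W = I + (AF^t, F) T.
W = np.eye(m) + (F.T @ A @ F) @ T
x = rng.standard_normal(n)
Bx_formula = A @ x - A @ F @ T @ np.linalg.solve(W, F.T @ (A @ x))
err_31 = np.abs(B @ x - Bx_formula).max()
detW = np.linalg.det(W)
```

Here A is positive definite and T is positive, so W = I + (AF^t, F)T has determinant at least 1 and the formula is well defined.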
We now prove our main theorem, which describes the set E^m_cs(A_0, A) of all correct selfadjoint extensions B of an operator A_0 with D(B) = D(A) and dim R(B − A) = m, using one correct selfadjoint extension A of a minimal symmetric operator A_0 with def A_0 ≤ ∞. Every operator B is uniquely determined by a vector F with components F_i ∈ D(A) ∩ R(A_0)^⊥, i = 1, ..., m, and a Hermitian m×m matrix C with rank C = n ≤ m satisfying condition (3.16), which is the solvability condition for the problem Bx = f (whose solution is also given in the following result).
Theorem 3.2. Suppose that A_0, A are as in Lemma 3.1. Then the following hold.
(i) For every B ∈ E^m_cs(A_0, A), there exist a vector F = (F_1, ..., F_m), where F_1, ..., F_m are linearly independent elements from D(A) ∩ R(A_0)^⊥, and a Hermitian m×m matrix C with det C ≠ 0, such that

det[I − (AF^t, F)C] ≠ 0,  (3.16)
Bx = Ax − AFC(Ax, F^t) = f.  (3.17)

(ii) Conversely, for every vector F = (F_1, ..., F_m), where F_1, ..., F_m are defined as above, and every Hermitian m×m matrix C which has rank C = n ≤ m and satisfies (3.16), the operator B defined by (3.17) belongs to E^n_cs(A_0, A). The unique solution of (3.17) is given by the formula

x = B^{-1}f = A^{-1}f + FC[I − (AF^t, F)C]^{-1}(f, F^t),  ∀f ∈ H.  (3.18)

Proof. (i) Let B ∈ E^m_cs(A_0, A). Then by Lemma 3.1, there exist a Hermitian invertible m×m matrix T = (t_ij) and a vector F = (F_1, ..., F_m), where F_1, ..., F_m are linearly independent elements from D(A) ∩ R(A_0)^⊥, such that det W ≠ 0 and (3.1) holds true. From (3.1), since B = B^*, for every y ∈ D(B^*) = D(B) = D(A), we have
(Bx, y) = (Ax − AFTW^{-1}(x, AF^t), y) = (Ax, y) − (AF, y)TW^{-1}(x, AF^t)
= (x, Ay) − (x, (AF, y)TW^{-1}AF^t) = (x, Ay − (y, AF)TW^{-1}AF^t) = (x, B^*y).  (3.19)

Hence,

B^*y = Ay − (y, AF)TW^{-1}AF^t = Ay − AF(TW^{-1})^t(y, AF^t).  (3.20)

We denote by C the matrix TW^{-1}. Since B = B^*, relations (3.1) and (3.20) imply that

C = TW^{-1} = (TW^{-1})^t = C^t.  (3.21)
Hence the matrix C is Hermitian, and so (3.1) implies (3.17). The invertibility of C follows from the fact that T and W^{-1} are invertible matrices. To show (3.16), we first recall that the m×m matrix (AF^t, F) = D = (d_ij) is Hermitian. From C = TW^{-1}, we get T = CW = C(I + DT), or C = (I − CD)T. Since C and T are invertible, it follows that det(I − CD) ≠ 0, and we finally have det(I − DC) ≠ 0; that is, (3.16) is fulfilled.
(ii) We will show that B ∈ E^n_cs(A_0, A). We first show that B is a correct extension of A_0. Taking into account (3.17), we have

(F^t, f) = (F^t, Ax − AFC(Ax, F^t)) = [I − (AF^t, F)C](Ax, F^t),  (3.22)

or

[I − (AF^t, F)C](Ax, F^t) = (f, F^t).  (3.23)

From (3.16), we have

(Ax, F^t) = [I − (AF^t, F)C]^{-1}(f, F^t).  (3.24)

Since A is invertible, (3.17) implies that

x − FC(Ax, F^t) = A^{-1}f,  f = Bx,  (3.25)

and because of (3.24), we have

x = A^{-1}f + FC[I − (AF^t, F)C]^{-1}(f, F^t),  ∀f ∈ H,  (3.26)

which is (3.18).
Since A^{-1} is continuous on H, B^{-1} is continuous on H. From (3.18), it is clear that D(B) = D(A) ⊇ D(A_0). Since A_0 ⊂ A and F_i ∈ R(A_0)^⊥, i = 1, ..., m, it follows from (3.17) that Bx = Ax = A_0x for all x ∈ D(A_0). So A_0 ⊂ B, and since B^{-1} exists and is continuous on H, B is a correct extension of A_0. From (3.17), because rank C = n and AF_1, ..., AF_m are linearly independent, it follows that dim R(B − A) = n. It remains to show that B = B^*.
Taking into account (3.17), for y ∈ D(A) we have

(Bx, y) = (Ax, y) − (AFC(Ax, F^t), y) = (x, Ay) − (AF, y)C(Ax, F^t)
= (x, Ay) − (x, (AF, y)C AF^t) = (x, Ay − (y, AF)C AF^t) = (x, φ).  (3.27)

It follows that y ∈ D(B^*) and D(A) = D(B) ⊆ D(B^*). But for y ∈ D(A), we have

B^*y = φ = Ay − (y, AF)C AF^t = Ay − AFC(Ay, F^t) = By.  (3.28)

Hence B ⊂ B^*. Let now y ∈ D(B^*). From (3.17), we have

(Bx, y) = (Ax, y) − (AFC(Ax, F^t), y) = (Ax, y) − (AF, y)C(Ax, F^t)
= (Ax, y − (AF, y)C F^t) = (x, B^*y).  (3.29)

So y − (AF, y)C F^t ∈ D(A^*) = D(A) = D(B), and since F_1, ..., F_m ∈ D(A), it follows that y ∈ D(A). Hence D(B^*) = D(A) = D(B) and B = B^*. So the theorem has been proved. □
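A finite-dimensional sketch of Theorem 3.2 (our own illustration with real symmetric matrices, not the paper's Hilbert-space setting; the particular A, F, C are arbitrary choices): the operator B of (3.17) is symmetric, condition (3.16) holds for a small Hermitian C, and formula (3.18) inverts B.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 5, 2

M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # invertible symmetric A
F = rng.standard_normal((n, m))      # columns F_1, ..., F_m
C = np.diag([1e-3, -2e-3])           # Hermitian (symmetric) C with det C != 0

# (3.17): B x = A x - A F C (Ax, F^t), i.e. B = A - (AF) C (AF)^t.
AF = A @ F
B = A - AF @ C @ AF.T

G = F.T @ A @ F                      # the matrix (AF^t, F)
L = np.eye(m) - G @ C
det_L = np.linalg.det(L)             # condition (3.16): det L != 0

# (3.18): B^{-1} = A^{-1} + F C (I - (AF^t, F) C)^{-1} (., F^t).
Binv = np.linalg.inv(A) + F @ C @ np.linalg.solve(L, F.T)

sym_err = np.abs(B - B.T).max()
inv_err = np.abs(B @ Binv - np.eye(n)).max()
```

In finite dimensions (3.18) is an instance of the push-through identity (I − UV)^{-1} = I + U(I − VU)^{-1}V, which is why the small m×m matrix L controls the invertibility of the large operator B.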
In the next particular case, when F_i ∈ D(A_0) ∩ R(A_0)^⊥, i = 1, ..., m, condition (3.16) is fulfilled automatically and the solution of Bx = f is simpler.
Corollary 3.3. For every vector F = (F_1, ..., F_m), where F_1, ..., F_m are linearly independent elements from D(A_0) ∩ R(A_0)^⊥, and for every Hermitian m×m matrix C with rank C = n ≤ m, the operator B defined by (3.17) belongs to E^n_cs(A_0, A).
The unique solution of (3.17) is given by

x = B^{-1}f = A^{-1}f + FC(f, F^t),  ∀f ∈ H.  (3.30)

Proof. Indeed, if F_i ∈ D(A_0) ∩ R(A_0)^⊥, i = 1, ..., m, then (AF_i, F_j) = (A_0F_i, F_j) = 0 for all i, j = 1, ..., m, since F_j ∈ R(A_0)^⊥, j = 1, ..., m. Hence (AF^t, F) = 0. The rest easily follows from the above theorem. □
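Corollary 3.3 can be checked the same way (our sketch; the indefinite diagonal A and the particular F, C are our choices): picking F with every (AF_i, F_j) = 0 makes (AF^t, F) the zero matrix, and the inverse collapses to (3.30).

```python
import numpy as np

# A symmetric invertible (indefinite) A for which we can choose F with
# (AF_i, F_j) = 0 for all i, j, i.e. (AF^t, F) = 0.
A = np.diag([1.0, -1.0, 1.0, -1.0])
F = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0],
              [0.0, 1.0]])
C = np.array([[0.5, 0.2],
              [0.2, -0.3]])          # any Hermitian (symmetric) C

AFt_F = F.T @ A @ F                  # the zero matrix by construction
B = A - A @ F @ C @ F.T @ A          # operator (3.17)

# (3.30): B^{-1} f = A^{-1} f + F C (f, F^t).
Binv = np.linalg.inv(A) + F @ C @ F.T
inv_err = np.abs(B @ Binv - np.eye(4)).max()
zero_err = np.abs(AFt_F).max()
```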
Remark 3.4. For every B ∈ E^m_cs(A_0, A), from (3.2) and (3.6) we have

R(B^{-1} − A^{-1}) ⊆ R(A_0)^⊥,  dim R(B − A) = m ≤ def A_0.  (3.31)

Let now the minimal operator A_0 have finite defect def A_0 = dim R(A_0)^⊥ = m. Then D(A_0) can be defined as follows:

D(A_0) = {x ∈ D(A) : (Ax, F^t) = 0},  (3.32)

where F = (F_1, ..., F_m) and F_1, ..., F_m are linearly independent elements of R(A_0)^⊥ ∩ D(A). So if we have chosen the elements F_1, ..., F_m so that (3.32) holds, then every B from E^m_cs(A_0, A) is determined only by the Hermitian matrix C, and we can restate Theorem 3.2 as follows.
Theorem 3.5. (i) For every B ∈ E^m_cs(A_0, A), where A_0 satisfies (3.32), there exists a Hermitian m×m matrix C with det C ≠ 0, such that (3.16) and (3.17) are fulfilled.
(ii) Conversely, for every Hermitian m×m matrix C which satisfies (3.16) and has rank C = n, the operator B defined by (3.17) belongs to E^n_cs(A_0, A). The unique solution of (3.17) is given by (3.18).

Proof. From (3.32), we have

R(A_0) = {f ∈ H : (f, F_i) = 0, i = 1, ..., m}.  (3.33)

It is evident that dim R(A_0)^⊥ = m and {F_1, ..., F_m} is a basis of R(A_0)^⊥. Then from dim R(A_0)^⊥ = m, dim R(K) = m, and (3.6), it follows that

R(K) = R(B^{-1} − A^{-1}) = R(A_0)^⊥.  (3.34)

As a basis of R(K), we can take F_1, ..., F_m. The rest is proved similarly. □
Remark 3.6. For every B ∈ E^m_cs(A_0, A), where A_0 satisfies (3.32), we have R(B^{-1} − A^{-1}) = R(A_0)^⊥ and dim R(B − A) = def A_0.

Remark 3.7. The operators B ∈ E^m_cs(A_0, A), in both cases def A_0 = m < ∞ and def A_0 = ∞, are described by the same formulas (3.16) and (3.17).

Remark 3.8. Let A_0 be defined by (3.32) or (3.33), and F = (F_1, ..., F_m), where F_1, ..., F_m are linearly independent elements of R(A_0)^⊥ ∩ D(A). Then

(AF^t, F) = 0 ⟺ F_i ∈ D(A_0), i = 1, ..., m.  (3.35)

Let now the minimal symmetric operator A_0 be defined by

A_0 ⊂ A,  D(A_0) = {x ∈ D(A) : (Ax, F_i) = 0, F_i ∈ D(A_0), i = 1, ..., m},  (3.36)

where F_1, ..., F_m are linearly independent elements of D(A_0). Then from the above remark and Theorem 3.5 follows the next corollary, which describes the most "simple" extensions of A_0.
Corollary 3.9. (i) For every B ∈ E^m_cs(A_0, A), where A_0 satisfies (3.36), there exists a Hermitian m×m matrix C with det C ≠ 0, such that (3.17) is fulfilled.
(ii) Conversely, for every Hermitian m×m matrix C with rank C = n ≤ m, the operator B defined by (3.17) belongs to E^n_cs(A_0, A).
The unique solution of (3.17) is given by (3.30).
The next theorem is useful for applications; it gives a criterion for the correctness of the problems below and their solutions.

Theorem 3.10. Let

Bx = Ax − AFC(Ax, F^t) = f,  D(B) = D(A),  (3.37)

where A is as in Lemma 3.1, C is a Hermitian m×m matrix with rank C = n, and F_1, ..., F_m are linearly independent elements of D(A). Then B is a correct and selfadjoint operator with dim R(B − A) = n if and only if

det[I − (AF^t, F)C] ≠ 0,  (3.38)

and the unique solution of (3.37) is given by

x = B^{-1}f = A^{-1}f + FC[I − (AF^t, F)C]^{-1}(f, F^t).  (3.39)

Proof. We define the minimal operator A_0 corresponding to this problem as a restriction of A by (3.32). If n = m, then the theorem is true by Theorem 3.5. If n < m and B ∈ E^n_cs(A_0, A), then from (3.37) we have Bx = f and

(F^t, f) = (F^t, Ax) − (F^t, AF)C(Ax, F^t) = [I − (AF^t, F)C](F^t, Ax),  (3.40)

or

[I − (AF^t, F)C](Ax, F^t) = (f, F^t),  ∀f ∈ H.  (3.41)

Let L = I − (AF^t, F)C and rank L = k < m. If we suppose that the first k rows of the matrix L are linearly independent, then for f = ψ_{k+1}, where (F_i, ψ_k) = δ_{ik}, i, k = 1, ..., m, the system L(Ax, F^t) = (f, F^t) has no solution, since the rank of the augmented matrix is k + 1 ≠ k. Then Bx = ψ_{k+1} has no solution and R(B) ≠ H. Consequently, B is not a correct operator. So (3.38) holds true. Conversely, if det L ≠ 0, then by Theorem 3.5 we have that B ∈ E^n_cs(A_0, A). □
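The necessity of condition (3.38) is visible on matrices (our sketch, not from the paper; we use the finite-dimensional identity det B = det A · det[I − (AF^t, F)C], a consequence of Sylvester's determinant identity): choosing C so that det[I − (AF^t, F)C] = 0 produces a B with nontrivial kernel, hence not a correct operator.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 5, 2

M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # symmetric positive definite A
F = rng.standard_normal((n, m))

S = F.T @ A @ F                      # the matrix (AF^t, F); positive definite here
C = np.linalg.inv(S)                 # Hermitian C chosen so that I - S C = 0

AF = A @ F
B = A - AF @ C @ AF.T                # operator (3.37)

# Condition (3.38) fails: det(I - S C) = 0, and indeed B annihilates
# every column of F, so B is not invertible.
det_L = np.linalg.det(np.eye(m) - S @ C)
kernel_err = np.abs(B @ F).max()
```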
We recall that a Hermitian m×m matrix C = (c_ij) is called negative semidefinite (respectively, negative definite) if

Σ_{i=1}^m Σ_{j=1}^m ξ_i ξ̄_j c_ij ≤ 0,  ∀ξ = (ξ_1, ..., ξ_m) ∈ C^m

(respectively, Σ_{i=1}^m Σ_{j=1}^m ξ_i ξ̄_j c_ij < 0 for all ξ ∈ C^m \ {0}).  (3.42)
Theorem 3.11. If, in Theorem 3.2, A is a positive operator and C is a negative semidefinite matrix, then B, defined by (3.17), is a positive operator.

Proof. We will show that (Bx, x) ≥ 0 for all x ∈ D(B):

(Bx, x) = (Ax − AFC(Ax, F^t), x) = (Ax, x) − (AF, x)C(Ax, F^t)
= (Ax, x) − Σ_{i=1}^m Σ_{j=1}^m (Ax, F_i) (Ax, F_j)̄ c_ij ≥ 0,  (3.43)

since C is negative semidefinite. □
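A quick matrix check of Theorem 3.11 (our own sketch; the particular positive definite A and negative definite C are arbitrary choices): B = A + AF(−C)(AF)^t adds a positive semidefinite term to A, so B inherits positivity.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 5, 2

M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)              # symmetric positive definite A
F = rng.standard_normal((n, m))
C = np.array([[-1.0, -0.5],
              [-0.5, -1.0]])         # negative definite (eigenvalues -0.5, -1.5)

AF = A @ F
B = A - AF @ C @ AF.T                # operator (3.17)

# (Bx, x) = (Ax, x) + x^t (AF)(-C)(AF)^t x >= (Ax, x), since -C is
# positive definite; hence the smallest eigenvalue of B is at least
# the smallest eigenvalue of A.
min_eig_A = np.linalg.eigvalsh(A).min()
min_eig_B = np.linalg.eigvalsh(B).min()
```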
We recall that an operator A : H → H is called positive definite if there exists a positive real number k such that

(Ax, x) ≥ k‖x‖^2,  ∀x ∈ D(A).  (3.44)
Theorem 3.12. If the operator A in Theorem 3.2 is positive definite, then the operator B defined by (3.17) is positive definite whenever the matrix C is Hermitian and satisfies the inequality

k > Σ_{i=1}^m Σ_{j=1}^m ‖AF_i‖ ‖AF_j‖ |c_ij|,  (3.45)

and positive when k ≥ Σ_{i=1}^m Σ_{j=1}^m ‖AF_i‖ ‖AF_j‖ |c_ij|.

Proof. For x ∈ D(B), we have

(Bx, x) = (Ax − AFC(Ax, F^t), x) = (Ax, x) − (x, AF)C(x, AF^t)
≥ k‖x‖^2 − Σ_{i=1}^m Σ_{j=1}^m |(x, AF_i)| |(x, AF_j)| |c_ij|
≥ [k − Σ_{i=1}^m Σ_{j=1}^m ‖AF_i‖ ‖AF_j‖ |c_ij|] ‖x‖^2.  (3.46)

The theorem now easily follows. □
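The bound (3.45) can be tested numerically (our sketch; the matrix C_0 and the scaling are arbitrary choices, not from the paper): scaling C so that the double sum equals k/2 keeps B positive definite, with smallest eigenvalue at least k minus the sum, exactly as in the estimate (3.46).

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 5, 2

M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # positive definite: (Ax, x) >= k |x|^2
k = np.linalg.eigvalsh(A).min()

F = rng.standard_normal((n, m))
AF = A @ F
norms = np.linalg.norm(AF, axis=0)   # |AF_1|, ..., |AF_m|

C0 = np.array([[1.0, -0.4],
               [-0.4, 0.8]])         # an arbitrary Hermitian (symmetric) matrix
# Rescale C so that hypothesis (3.45) holds with margin:
# sum_ij |AF_i| |AF_j| |c_ij| = k / 2 < k.
scale = 0.5 * k / (np.outer(norms, norms) * np.abs(C0)).sum()
C = scale * C0
bound = (np.outer(norms, norms) * np.abs(C)).sum()

B = A - AF @ C @ AF.T                # operator (3.17)
min_eig_B = np.linalg.eigvalsh(B).min()
```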
Now we state Theorem 3.2 in the following more general form, which is useful in the solution of differential equations.

Theorem 3.13. Suppose that A_0, A are as in Theorem 3.2. Then the following hold.
(i) For every B ∈ E^m_cs(A_0, A), there exist a vector Q = (q_1, ..., q_m), where q_1, ..., q_m are linearly independent elements from D(A_0)^⊥, and a Hermitian invertible m×m matrix C, such that

det[I − (Q^t, A^{-1}Q)C] ≠ 0,  (3.47)
Bx = Ax − QC(x, Q^t) = f,  D(B) = D(A).  (3.48)

(ii) Conversely, for every vector Q = (q_1, ..., q_m) defined as above and every Hermitian m×m matrix C which has rank C = n and satisfies (3.47), the operator B defined by (3.48) belongs to E^n_cs(A_0, A).
The unique solution of (3.48) is given by the formula

x = A^{-1}f + A^{-1}QC[I − (Q^t, A^{-1}Q)C]^{-1}(f, A^{-1}Q^t)  (3.49)

for all f ∈ H.
The proof easily follows from Theorem 3.2 by substituting Q = AF, F = A^{-1}Q, where Q = (q_1, ..., q_m), q_i ∈ D(A_0)^⊥, i = 1, ..., m.
Corollary 3.14. For every vector Q = (q_1, ..., q_m), where q_1, ..., q_m are linearly independent elements of D(A_0)^⊥ ∩ R(A_0), and for every Hermitian m×m matrix C with rank C = n, the operator B defined by (3.48) belongs to E^n_cs(A_0, A).
The unique solution of (3.48) is given by the formula

x = B^{-1}f = A^{-1}f + A^{-1}QC(f, A^{-1}Q^t),  ∀f ∈ H.  (3.50)
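In matrix form (our finite-dimensional sketch with real symmetric matrices; the particular A, Q, C are arbitrary choices), (3.48) reads B = A − QCQ^t, and the inversion formula (3.49) can be verified directly:

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 5, 2

M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # invertible symmetric A
Q = rng.standard_normal((n, m))      # columns q_1, ..., q_m
C = np.diag([0.05, -0.05])           # Hermitian (symmetric) C

# (3.48): B x = A x - Q C (x, Q^t), i.e. B = A - Q C Q^t.
B = A - Q @ C @ Q.T

Ainv = np.linalg.inv(A)
Mq = Q.T @ Ainv @ Q                  # the matrix (Q^t, A^{-1} Q)
L = np.eye(m) - Mq @ C
det_L = np.linalg.det(L)             # condition (3.47): det L != 0

# (3.49): B^{-1} = A^{-1} + A^{-1} Q C L^{-1} (., A^{-1} Q^t).
Binv = Ainv + Ainv @ Q @ C @ np.linalg.solve(L, Q.T @ Ainv)
inv_err = np.abs(B @ Binv - np.eye(n)).max()
```

Note how (3.49) is obtained from (3.18) by the substitution Q = AF: the m×m matrix (AF^t, F) becomes (Q^t, A^{-1}Q).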
Let now the minimal symmetric operator A_0 have finite defect and be defined by the relations

A_0x = Ax, ∀x ∈ D(A_0),  D(A_0) = {x ∈ D(A) : (x, Q^t) = 0},  (3.51)

where Q is defined as in Theorem 3.13. Then dim D(A_0)^⊥ = m and def A_0 = dim R(A_0)^⊥ = m.
In this case, we restate Theorems 3.5, 3.11, and 3.12 in the following more general form.
Theorem 3.15. (a) For every B ∈ E^m_cs(A_0, A), where A_0 is defined by (3.51), there exists a Hermitian m×m matrix C with det C ≠ 0, such that (3.47) and (3.48) are fulfilled.
(b) Conversely, for every Hermitian m×m matrix C which satisfies (3.47) and has rank C = n, the operator B defined by (3.48) belongs to E^n_cs(A_0, A). The unique solution of (3.48) is given by (3.49).
(c) If the operator A is positive and the matrix C is negative semidefinite, then B is positive.
(d) If A is positive definite (so it satisfies a relation (3.44)) and if C is a Hermitian m×m matrix which satisfies the inequality

k > Σ_{i=1}^m Σ_{j=1}^m ‖q_i‖ ‖q_j‖ |c_ij|,  (3.52)

then B is positive definite; it is positive when k ≥ Σ_{i=1}^m Σ_{j=1}^m ‖q_i‖ ‖q_j‖ |c_ij|.

Proof. Since A is selfadjoint and R(A) = H, for every x ∈ D(A) we have (x, q_i) = (x, AA^{-1}q_i) = (Ax, F_i), where F_i = A^{-1}q_i, i = 1, 2, ..., m. It is clear that F_i ∈ D(A), i = 1, 2, ..., m, and that they are linearly independent. If we substitute Q = AF, Q^t = AF^t in (3.47), (3.48), and (3.49), then we obtain the relations (3.16), (3.17), and (3.18) of Theorem 3.2, which hold true. Because of Theorems 3.11 and 3.12 and the relations q_i = AF_i, i = 1, 2, ..., m, cases (c) and (d) of the present theorem are true. □
Remark 3.16. Suppose that A_0, A are as in Theorem 3.15 and Q = (q_1, ..., q_m), where q_1, ..., q_m are linearly independent elements of D(A_0)^⊥. Then

R(A_0) = {f ∈ H : (f, A^{-1}q_j) = 0, j = 1, ..., m},
(Q, A^{-1}Q) = 0 ⟺ q_i ∈ R(A_0), i = 1, ..., m.  (3.53)

Let now the minimal symmetric operator A_0 be defined by the relation

A_0 ⊂ A,  D(A_0) = {x ∈ D(A) : (x, Q^t) = 0},  Q ∈ (R(A_0))^m.  (3.54)