
NONDENSELY DEFINED MINIMAL SYMMETRIC OPERATORS

I. PARASSIDIS AND P. TSEKREKOS
Received 13 May 2004

Let $A_0$ be a closed, minimal symmetric operator from a Hilbert space $H$ into $H$ with domain not dense in $H$. Let $A$ also be a correct selfadjoint extension of $A_0$. The purpose of this paper is (1) to characterize, with the help of $A$, all the correct selfadjoint extensions $B$ of $A_0$ with domain equal to $D(A)$; (2) to give the solution of their corresponding problems; (3) to find sufficient conditions for $B$ to be positive (definite) when $A$ is positive (definite).

1. Introduction

Minimal symmetric operators arise naturally in boundary value problems where they represent differential operators with all their defects; that is, their range is not the whole space and their domain may fail to be dense in it. For example, the operator $A_0$ defined by the problem $A_0y = iy'$, $y(0) = y(1) = 0$, is a minimal symmetric nondensely defined operator. The problem of finding all correct selfadjoint extensions of a minimal symmetric operator is neither easy nor always solvable. The whole problem is facilitated when the domain of definition of the minimal symmetric operator is dense.

Correct extensions of densely defined, minimal, not necessarily symmetric operators in Hilbert and Banach spaces have been investigated by Vishik [17], Dezin [3], Otelbaev et al. [10], Oĭnarov and Parasidi [14], and many others. Correct selfadjoint extensions of a densely defined minimal symmetric operator $A_0$ have been studied by a number of authors, such as J. von Neumann [13], Kočubeĭ [7], Mikhaĭlets [12], and V. I. Gorbachuk and M. L. Gorbachuk [5]. They described the extensions as restrictions of some operators, usually of the adjoint operator $A_0^{*}$ of $A_0$. In this paper, we attack the above problem, developing a method which does not depend on maximal operators, but only on the existence of some correct selfadjoint extension of $A_0$. The essential ingredient in our approach is the extension of the main idea in [14]. More precisely, we show (Theorem 3.2) that every correct selfadjoint extension of a minimal operator is uniquely determined by a vector and a Hermitian matrix (see the comments preceding Theorem 3.2).

In [1, 2, 9, 8], extensions of nondensely defined symmetric operators obtained by embedding $H$ in a space $X$ in which the operator $A_0$ is densely defined were studied. The class of extensions they consider is much wider than ours, but they do not consider correct selfadjoint extensions.

Copyright © 2005 Hindawi Publishing Corporation. Abstract and Applied Analysis 2005:7 (2005) 767–790. DOI: 10.1155/AAA.2005.767

Our method does not require such an embedding and applies equally well to positive correct selfadjoint extensions. Positive selfadjoint extensions of densely defined positive symmetric operators have been considered by Friedrichs [4].

As a demonstration of the theory developed in this paper, we give here all the correct selfadjoint extensions of the minimal operator $A_0$ in the example mentioned at the beginning of the introduction. These are the operators $B : L^2(0,1) \to L^2(0,1)$, $Bu = iu' - ct\int_0^1 tu(t)\,dt$, with $D(B) = \{u \in H^1(0,1) : u(0) = -u(1)\}$, where $c$ is any real number.
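The symmetry relation $(Bu, v) = (u, Bv)$ for one such extension can be verified symbolically. The following sketch is illustrative only: the value $c = 3$ and the polynomial test functions are arbitrary choices, and the inner integration variable is renamed $s$ for clarity.

```python
import sympy as sp

t, s = sp.symbols('t s', real=True)
c = 3  # any real constant; the extension B below is symmetric only for real c

def B(u):
    # Bu = i u' - c t * integral_0^1 s u(s) ds  (the extension from the text)
    return sp.I*sp.diff(u, t) - c*t*sp.integrate(s*u.subs(t, s), (s, 0, 1))

def inner(f, g):
    # L2(0,1) inner product (f, g) = integral_0^1 f(t) conj(g(t)) dt
    return sp.integrate(f*sp.conjugate(g), (t, 0, 1))

# test functions in the domain D(B): they satisfy u(0) = -u(1)
u = 1 - 2*t
v = (1 - 2*t) + sp.I*t*(1 - t)
assert u.subs(t, 0) == -u.subs(t, 1) and v.subs(t, 0) == -v.subs(t, 1)

# (Bu, v) = (u, Bv): B is symmetric on this domain
assert sp.simplify(inner(B(u), v) - inner(u, B(v))) == 0
```

The boundary condition $u(0) = -u(1)$ is what kills the boundary term in the integration by parts of $\int i u' \bar v$, and the reality of $c$ makes the rank-one integral term symmetric.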

The paper is organized as follows. In Section 2, we recall some basic terminology and notation about operators. In Section 3, we prove the main general results. Finally, in Section 4, we discuss several examples of integrodifferential equations which show the usefulness of our results.

2. Terminology and notation

By $H$ we will always denote a complex Hilbert space with inner product $(\cdot,\cdot)$. The (linear) operators from $H$ into $H$ we refer to are not everywhere defined on $H$. We write $D(A)$ and $R(A)$ for the domain and the range of the operator $A$, respectively. Two operators $A_1$ and $A_2$ are said to be equal if $D(A_1) = D(A_2)$ and $A_1x = A_2x$ for all $x \in D(A_1)$.

$A_2$ is said to be an extension of $A_1$, or $A_1$ a restriction of $A_2$, in symbols $A_1 \subset A_2$, if $D(A_1) \subset D(A_2)$ and $A_1x = A_2x$ for all $x \in D(A_1)$. We notice that if $A \subset B$ and $A^{-1}$, $B^{-1}$ exist, then $A^{-1} \subset B^{-1}$. An operator $A : H \to H$ is called closed if, for every sequence $x_n$ in $D(A)$ converging to $x_0$ with $Ax_n \to f_0$, it follows that $x_0 \in D(A)$ and $Ax_0 = f_0$. A closed operator $A_0 : H \to H$ is called minimal if $R(A_0) \neq H$ and the inverse $A_0^{-1}$ exists on $R(A_0)$ and is continuous. An operator $A$ is called maximal if $R(A) = H$ and $\ker A \neq \{0\}$. An operator $A$ is called correct if $R(A) = H$ and the inverse $A^{-1}$ exists and is continuous. An operator $A$ is called a correct extension (resp., restriction) of the minimal (resp., maximal) operator $A_0$ (resp., $\widehat{A}$) if it is a correct operator and $A_0 \subset A$ (resp., $A \subset \widehat{A}$).

Let $A$ be an operator with domain $D(A)$ dense in $H$. The adjoint operator $A^{*} : H \to H$ of $A$ with domain $D(A^{*})$ is defined by the equation $(Ax, y) = (x, A^{*}y)$ for every $x \in D(A)$ and every $y \in D(A^{*})$. The domain $D(A^{*})$ of $A^{*}$ consists of all $y \in H$ for which the functional $x \mapsto (Ax, y)$ is continuous on $D(A)$. An operator $A$ is called selfadjoint if $A = A^{*}$ and symmetric if $(Ax, y) = (x, Ay)$ for all $x, y \in D(A)$. We note that, in the case in which $\overline{D(A)} = H$, $A$ is symmetric if $A \subset A^{*}$. A symmetric operator $A$ is said to be positive if $(Ax, x) \geq 0$ for every $x \in D(A)$ and positive definite if there exists a positive real number $k$ such that $(Ax, x) \geq k\|x\|^{2}$ for all $x \in D(A)$.

The defect $\operatorname{def}A_0$ of an operator $A_0$ is the dimension of the orthogonal complement $R(A_0)^{\perp}$ of its range $R(A_0)$.

Let $F = (F_1,\ldots,F_m)$ be a vector of $H^{m}$ and $AF = (AF_1,\ldots,AF_m)$. We write $F^{t}$ and $(Ax, F^{t})$ for the column vectors $\operatorname{col}(F_1,\ldots,F_m)$ and $\operatorname{col}((Ax,F_1),\ldots,(Ax,F_m))$, respectively. We denote by $(AF^{t}, F)$ the $m\times m$ matrix whose $i,j$th entry is the inner product $(AF_i, F_j)$, and by $M^{t}$ the transpose of a matrix $M$. We denote by $I$ and $0$ the identity and the zero matrix, respectively.
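In finite dimensions these conventions can be made concrete. A small numpy sketch, under the assumption that the inner product is $(u, v) = \sum_k u_k \bar v_k$ and with purely illustrative random data:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 2

# A Hermitian matrix standing in for a selfadjoint operator A on H = C^n
X = rng.standard_normal((n, n)) + 1j*rng.standard_normal((n, n))
A = X + X.conj().T

# F = (F1, ..., Fm): the columns of Fmat
Fmat = rng.standard_normal((n, m)) + 1j*rng.standard_normal((n, m))

# (AF^t, F): the m x m matrix with (i, j) entry (A F_i, F_j) = conj(F_j) . (A F_i)
M = np.array([[Fmat[:, j].conj() @ (A @ Fmat[:, i]) for j in range(m)]
              for i in range(m)])

# since A is selfadjoint, the matrix (AF^t, F) is Hermitian
assert np.allclose(M, M.conj().T)
```

The Hermitian-ness of $(AF^{t}, F)$ for selfadjoint $A$ is used repeatedly in Section 3.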


3. Correct selfadjoint extensions of minimal symmetric operators

Throughout this paper, $A_0$ will denote a nondensely defined symmetric minimal operator and $A$ a correct selfadjoint extension of $A_0$. Let $E_{cs}(A_0, A)$ denote the set of all correct selfadjoint extensions of $A_0$ with domain $D(A)$, and let $E^{m}_{cs}(A_0, A)$ denote the subset of $E_{cs}(A_0, A)$ consisting of all $B \in E_{cs}(A_0, A)$ such that $\dim R(B - A) = m$.

We begin with the following key lemma.

Lemma 3.1. For every $B \in E^{m}_{cs}(A_0,A)$, there exist a vector $F = (F_1,\ldots,F_m)$, where $F_1,\ldots,F_m$ are linearly independent elements of $D(A) \cap R(A_0)^{\perp}$, and a Hermitian invertible matrix $T = (t_{ij})_{i,j=1}^{m}$ such that
$$Bx = Ax - AFTW^{-1}(x, AF^{t}), \quad x \in D(A), \tag{3.1}$$
where $W = I + (AF^{t}, F)T$, with $\det W \neq 0$.

Proof. Let $B \in E^{m}_{cs}(A_0,A)$. Then $\dim R(B - A) = m$. The main result of [10] implies that there exists a linear continuous operator $K : H \to D(A)$ with $D(K) = H$, $\ker K \supset R(A_0)$, and $\ker(A^{-1} + K) = \{0\}$ such that
$$B^{-1} = A^{-1} + K \quad\text{or}\quad K = B^{-1} - A^{-1}. \tag{3.2}$$
Hence $K = K^{*}$, since $B^{-1}$ and $A^{-1}$ are selfadjoint operators. Since $A_0$ is a minimal operator, it follows that $R(A_0)$ is a closed subspace of $H$, and so
$$H = R(A_0) \oplus R(A_0)^{\perp}. \tag{3.3}$$

We will show that $\dim R(K) = m$. Indeed, from (3.2) it follows that $Kf = B^{-1}f - A^{-1}f$ for all $f \in H$. Let $x = B^{-1}f$. Then
$$x = A^{-1}f + Kf, \qquad Ax = f + AKf, \tag{3.4}$$
from which it follows that $(A - B)x = A(Kf)$ for all $f \in H$. Since $\dim R(A - B) = m$ and the operator $A$ is invertible, we have $\dim R(K) = m$. Therefore, the selfadjointness of $K$ gives the decomposition
$$H = \ker K \oplus R(K). \tag{3.5}$$
From decompositions (3.3), (3.5), and the inclusion $\ker K \supset R(A_0)$, we conclude that
$$R(K) \subset R(A_0)^{\perp}. \tag{3.6}$$

Fix a basis $\{F_1, F_2, \ldots, F_m\}$ of $R(K)$. Then, for every $f$ in $H$, there are scalars $\alpha_i$ such that
$$Kf = \sum_{i=1}^{m} \alpha_i F_i. \tag{3.7}$$


Let $\{\psi_1, \psi_2, \ldots, \psi_m\}$ be the biorthogonal family of elements of $H$ corresponding to the above basis of $R(K)$, that is, $(\psi_i, F_j) = \delta_{ij}$, $i,j = 1,\ldots,m$. From (3.7), we have $(Kf, \psi_j) = (\sum_{i=1}^{m}\alpha_i F_i, \psi_j) = \sum_{i=1}^{m}\alpha_i(F_i, \psi_j) = \alpha_j$, $j = 1,2,\ldots,m$. Hence,
$$Kf = \sum_{i=1}^{m}(Kf, \psi_i)F_i = \sum_{i=1}^{m}(f, K\psi_i)F_i, \quad f \in H. \tag{3.8}$$

In particular, for $f = \psi_j$, we have
$$K\psi_j = \sum_{i=1}^{m}(\psi_j, K\psi_i)F_i, \quad\text{or equivalently,}\quad K\psi_i = \sum_{l=1}^{m}(\psi_i, K\psi_l)F_l. \tag{3.9}$$

Replacing the above expression for $K\psi_i$ in (3.8), we obtain
$$Kf = \sum_{i=1}^{m}\Bigl(f, \sum_{l=1}^{m}(\psi_i, K\psi_l)F_l\Bigr)F_i = \sum_{i=1}^{m}\sum_{l=1}^{m}(f, F_l)(K\psi_l, \psi_i)F_i. \tag{3.10}$$

If $T$ denotes the matrix $\bigl((K\psi_l, \psi_i)\bigr)_{l,i=1}^{m}$, then (3.10) takes the form
$$Kf = FT(f, F^{t}), \quad f \in H. \tag{3.11}$$
Now the reader can easily verify that each of the matrices $T$ and $(AF^{t}, F)$ is Hermitian. We claim that $T$ is invertible. Let $\widetilde{K} = K|_{R(K)}$ denote the restriction of $K$ to its range. From (3.5), it follows that $\ker K \cap R(K) = \{0\}$; therefore, $\ker\widetilde{K} = \{0\}$. Substituting $f = F_j$ into (3.11), we obtain
$$KF_j = FT(F_j, F^{t}) \quad\text{or}\quad KF = FT(F^{t}, F). \tag{3.12}$$
The determinant $\det(F^{t}, F)$ is nonzero, being the determinant of the Gram matrix $(F^{t}, F)$ of $F$. Since the vectors $F_1, F_2, \ldots, F_m$ of $R(K)$ are linearly independent and $\ker\widetilde{K} = \{0\}$, it follows that $\det T \neq 0$, which proves our claim.

We now prove formula (3.1), which describes the action of the operator $B$ on $x$. From (3.4) and (3.11), we have
$$Ax = f + AFT(f, F^{t}). \tag{3.13}$$

Then, taking the inner product with $F^{t}$, we get
$$(Ax, F^{t}) = \bigl(AFT(f, F^{t}), F^{t}\bigr) + (f, F^{t}) = (f, F^{t}) + (AF^{t}, F)T(f, F^{t}) = \bigl[I + (AF^{t}, F)T\bigr](f, F^{t}). \tag{3.14}$$


Let $W$ denote the matrix $I + (AF^{t}, F)T$. We will show that $\det W \neq 0$. For if $\det W = 0$, then $\det W^{t} = 0$; hence, there exists a nonzero vector $a = \operatorname{col}(a_1,\ldots,a_m)$ such that $W^{t}a = 0$. We consider the linear combination $f_0 = \sum_{i=1}^{m} a_i AF_i$. Since the vectors $F_1,\ldots,F_m$ are linearly independent and $\ker A = \{0\}$, their images $AF_i$ under $A$ are linearly independent as well. It follows that $f_0 \neq 0$. Combining (3.4) and (3.11), we get $x = A^{-1}f + FT(f, F^{t})$, where $x = B^{-1}f$. In particular, for $x_0 = B^{-1}f_0$, we compute
$$\begin{aligned}
x_0 &= A^{-1}f_0 + FT(f_0, F^{t}) \\
&= A^{-1}\sum_{i=1}^{m} a_i AF_i + FT\Bigl(\sum_{i=1}^{m} a_i AF_i,\, F^{t}\Bigr) \\
&= Fa + FT\sum_{i=1}^{m} a_i (F^{t}, AF_i) \\
&= Fa + FT\sum_{i=1}^{m} a_i (AF^{t}, F_i) \\
&= FIa + FT(AF^{t}, F)a \\
&= F\bigl[I + T(AF^{t}, F)\bigr]a = FW^{t}a.
\end{aligned} \tag{3.15}$$
In the above chain of equalities, the last one follows from the definition of $W$ and the fact that the matrices $T$ and $(AF^{t}, F)$ are Hermitian. But $W^{t}a = 0$. This implies that the nonzero vector $f_0$ is contained in the kernel $\ker B^{-1}$ of $B^{-1}$, contradicting the correctness of $B$. So $\det W \neq 0$. Now (3.14) gives $(f, F^{t}) = W^{-1}(x, AF^{t})$, which with (3.13) implies formula (3.1).

We now prove our main theorem, which describes the set $E^{m}_{cs}(A_0,A)$ of all correct selfadjoint extensions $B$ of an operator $A_0$ with $D(B) = D(A)$ and $\dim R(B - A) = m$, using one correct selfadjoint extension $A$ of a minimal symmetric operator $A_0$ with $\operatorname{def}A_0 \leq \infty$. Every operator $B$ is uniquely determined by a vector $F$ with components $F_i \in D(A) \cap R(A_0)^{\perp}$, $i = 1,\ldots,m$, and a Hermitian $m\times m$ matrix $C$ with $\operatorname{rank} C = n \leq m$, satisfying condition (3.16), which is the solvability condition for the problem $Bx = f$ (whose solution is also given in the following result).

Theorem 3.2. Suppose that $A_0$, $A$ are as in Lemma 3.1. Then the following hold.

(i) For every $B \in E^{m}_{cs}(A_0,A)$, there exist a vector $F = (F_1 \cdots F_m)$, where $F_1,\ldots,F_m$ are linearly independent elements from $D(A) \cap R(A_0)^{\perp}$, and a Hermitian $m\times m$ matrix $C$ with $\det C \neq 0$, such that
$$\det\bigl[I - (AF^{t}, F)C\bigr] \neq 0, \tag{3.16}$$
$$Bx = Ax - AFC(Ax, F^{t}) = f. \tag{3.17}$$

(ii) Conversely, for every vector $F = (F_1 \cdots F_m)$, where $F_1,\ldots,F_m$ are defined as above, and every Hermitian $m\times m$ matrix $C$ which has $\operatorname{rank} C = n \leq m$ and satisfies (3.16), the operator $B$ defined by (3.17) belongs to $E^{n}_{cs}(A_0,A)$. The unique solution of (3.17) is given by the formula
$$x = B^{-1}f = A^{-1}f + FC\bigl[I - (AF^{t}, F)C\bigr]^{-1}(f, F^{t}), \quad f \in H. \tag{3.18}$$
Proof. (i) Let $B \in E^{m}_{cs}(A_0,A)$. Then, by Lemma 3.1, there exist a Hermitian invertible $m\times m$ matrix $T = (t_{ij})$ and a vector $F = (F_1,\ldots,F_m)$, where $F_1,\ldots,F_m$ are linearly independent elements from $D(A) \cap R(A_0)^{\perp}$, such that $\det W \neq 0$ and (3.1) holds true. From (3.1), since $B = B^{*}$, for every $y \in D(B^{*}) = D(B) = D(A)$, we have

$$(Bx, y) = \bigl(Ax - AFTW^{-1}(x, AF^{t}),\, y\bigr) = (Ax, y) - (AF, y)\,TW^{-1}(x, AF^{t}) = (x, Ay) - \bigl(x,\ \overline{(AF, y)TW^{-1}}\,AF^{t}\bigr) = (x, By). \tag{3.19}$$
Hence,
$$By = Ay - \overline{(AF, y)TW^{-1}}\,AF^{t} = Ay - AF\,\overline{(TW^{-1})^{t}}\,(y, AF^{t}). \tag{3.20}$$
We denote by $C$ the matrix $TW^{-1}$. Since $B = B^{*}$, relations (3.1) and (3.20) imply that
$$C = TW^{-1} = \overline{(TW^{-1})^{t}} = \overline{C^{t}}. \tag{3.21}$$
Hence the matrix $C$ is Hermitian, and so (3.1) implies (3.17). The invertibility of $C$ follows from the fact that $T$ and $W^{-1}$ are invertible matrices. To show (3.16), we first recall that the $m\times m$ matrix $(AF^{t}, F) = D = (d_{ij})$ is Hermitian. From $C = TW^{-1}$, we get $T = CW = C(I + DT)$, or $C = (I - CD)T$. Since $C$ and $T$ are invertible, it follows that $\det(I - CD) \neq 0$, and we finally have $\det(I - DC) \neq 0$; that is, (3.16) is fulfilled.

(ii) We will show that $B \in E^{n}_{cs}(A_0,A)$. We first show that $B$ is a correct extension of $A_0$. Taking into account (3.17), we have
$$(f, F^{t}) = \bigl(Ax - AFC(Ax, F^{t}),\, F^{t}\bigr) = \bigl[I - (AF^{t}, F)C\bigr](Ax, F^{t}), \tag{3.22}$$
or
$$\bigl[I - (AF^{t}, F)C\bigr](Ax, F^{t}) = (f, F^{t}). \tag{3.23}$$
From (3.16), we have
$$(Ax, F^{t}) = \bigl[I - (AF^{t}, F)C\bigr]^{-1}(f, F^{t}). \tag{3.24}$$


Since $A$ is invertible, (3.17) implies that
$$x - FC(Ax, F^{t}) = A^{-1}f, \qquad f = Bx, \tag{3.25}$$
and, because of (3.24), we have
$$x = A^{-1}f + FC\bigl[I - (AF^{t}, F)C\bigr]^{-1}(f, F^{t}), \quad f \in H, \tag{3.26}$$
which is (3.18).

Since $A^{-1}$ is continuous on $H$, $B^{-1}$ is continuous on $H$. From (3.18), it is clear that $D(B) = D(A) \supset D(A_0)$. Since $A_0 \subset A$ and $F_i \in R(A_0)^{\perp}$, $i = 1,\ldots,m$, it follows from (3.17) that $Bx = Ax = A_0x$ for all $x \in D(A_0)$.

So $A_0 \subset B$, and since $B^{-1}$ exists and is continuous on $H$, $B$ is a correct extension of $A_0$. From (3.17), because $\operatorname{rank} C = n$ and $AF_1,\ldots,AF_m$ are linearly independent, it follows that $\dim R(B - A) = n$.

It remains to show that $B = B^{*}$.

Taking into account (3.17), for $y \in D(A)$ we have
$$(Bx, y) = \bigl(Ax - AFC(Ax, F^{t}),\, y\bigr) = (Ax, y) - (AF, y)C(Ax, F^{t}) = (x, Ay) - \bigl(x,\ \overline{(AF, y)C}\,AF^{t}\bigr) = (x, \varphi). \tag{3.27}$$
It follows that $y \in D(B^{*})$ and $D(A) = D(B) \subset D(B^{*})$. But for $y \in D(A)$, we have
$$B^{*}y = \varphi = Ay - \overline{(AF, y)C}\,AF^{t} = Ay - AFC(Ay, F^{t}) = By. \tag{3.28}$$
Hence $B \subset B^{*}$. Let now $y \in D(B^{*})$. From (3.17), we have
$$(Bx, y) = \bigl(Ax - AFC(Ax, F^{t}),\, y\bigr) = (Ax, y) - (AF, y)C(Ax, F^{t}) = \bigl(Ax,\ y - \overline{(AF, y)C}\,F^{t}\bigr) = (x, B^{*}y). \tag{3.29}$$
So $y - \overline{(AF, y)C}\,F^{t} \in D(A^{*}) = D(A) = D(B)$, and since $F_1,\ldots,F_m \in D(A)$, it follows that $y \in D(A)$. Hence $D(B^{*}) = D(A) = D(B)$ and $B = B^{*}$. So the theorem has been proved.
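In the matrix case the theorem's formulas can be checked directly. A numerical sketch, under the assumed convention $(u, v) = v^{*}u$ (so that $(Ax, F^{t}) = F^{*}Ax$ and (3.17) becomes $B = A - AFCF^{*}A$); the Gram-type matrix $G = F^{*}AF$ stands in for $(AF^{t}, F)$ up to this ordering convention, and all data below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 2

# A: invertible Hermitian matrix playing the correct selfadjoint extension
X = rng.standard_normal((n, n)) + 1j*rng.standard_normal((n, n))
A = X + X.conj().T + 8*np.eye(n)
F = rng.standard_normal((n, m)) + 1j*rng.standard_normal((n, m))
C = np.array([[2.0, 1 + 1j], [1 - 1j, -1.0]])   # Hermitian, det C != 0

# (3.17): Bx = Ax - AFC(Ax, F^t), i.e. B = A - A F C F^* A
B = A - A @ F @ C @ F.conj().T @ A
G = F.conj().T @ A @ F                           # Gram-type matrix for (AF^t, F)
assert abs(np.linalg.det(np.eye(m) - G @ C)) > 1e-9   # condition (3.16)

# B is selfadjoint, as the theorem asserts
assert np.allclose(B, B.conj().T)

# (3.18): B^{-1} f = A^{-1} f + F C [I - G C]^{-1} (f, F^t)
Binv = np.linalg.inv(A) + F @ C @ np.linalg.inv(np.eye(m) - G @ C) @ F.conj().T
assert np.allclose(Binv @ B, np.eye(n))
```

The last assertion is exactly the Woodbury-type identity behind (3.18): $(A - AFCF^{*}A)^{-1} = A^{-1} + FC(I - F^{*}AFC)^{-1}F^{*}$ whenever the $m\times m$ determinant condition holds.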

In the next particular case, when $F_i \in D(A_0) \cap R(A_0)^{\perp}$, $i = 1,\ldots,m$, condition (3.16) is fulfilled automatically and the solution of $Bx = f$ is simpler.


Corollary 3.3. For every vector $F = (F_1 \cdots F_m)$, where $F_1,\ldots,F_m$ are linearly independent elements from $D(A_0) \cap R(A_0)^{\perp}$, and for every Hermitian $m\times m$ matrix $C$ with $\operatorname{rank} C = n \leq m$, the operator $B$ defined by (3.17) belongs to $E^{n}_{cs}(A_0,A)$.

The unique solution of (3.17) is given by
$$x = B^{-1}f = A^{-1}f + FC(f, F^{t}), \quad f \in H. \tag{3.30}$$
Proof. Indeed, if $F_i \in D(A_0) \cap R(A_0)^{\perp}$, $i = 1,\ldots,m$, then $(AF_i, F_j) = (A_0F_i, F_j) = 0$ for all $i,j = 1,\ldots,m$, since $F_j \in R(A_0)^{\perp}$, $j = 1,\ldots,m$. Hence $(AF^{t}, F) = 0$. The rest easily follows from the above theorem.

Remark 3.4. For every $B \in E^{m}_{cs}(A_0,A)$, from (3.2) and (3.6) we have
$$R\bigl(B^{-1} - A^{-1}\bigr) \subset R(A_0)^{\perp}, \qquad \dim R(B - A) = m \leq \operatorname{def}A_0. \tag{3.31}$$
Let now the minimal operator $A_0$ have finite defect $\operatorname{def}A_0 = \dim R(A_0)^{\perp} = m$. Then $D(A_0)$ can be defined as follows:
$$D(A_0) = \bigl\{x \in D(A) : (Ax, F^{t}) = 0\bigr\}, \tag{3.32}$$
where $F = (F_1 \cdots F_m)$ and $F_1,\ldots,F_m$ are linearly independent elements of $R(A_0)^{\perp} \cap D(A)$. So if we have chosen the elements $F_1,\ldots,F_m$ so that (3.32) holds, then every $B$ from $E^{m}_{cs}(A_0,A)$ is determined only by the Hermitian matrix $C$, and we can restate Theorem 3.2 as follows.

Theorem 3.5. (i) For every $B \in E^{m}_{cs}(A_0,A)$, where $A_0$ satisfies (3.32), there exists a Hermitian $m\times m$ matrix $C$ with $\det C \neq 0$ such that (3.16) and (3.17) are fulfilled.

(ii) Conversely, for every Hermitian $m\times m$ matrix $C$ which satisfies (3.16) and has $\operatorname{rank} C = n$, the operator $B$ defined by (3.17) belongs to $E^{n}_{cs}(A_0,A)$. The unique solution of (3.17) is given by (3.18).

Proof. From (3.32), we have
$$R(A_0) = \bigl\{f \in H : (f, F_i) = 0,\ i = 1,\ldots,m\bigr\}. \tag{3.33}$$
It is evident that $\dim R(A_0)^{\perp} = m$ and that $\{F_1,\ldots,F_m\}$ is a basis of $R(A_0)^{\perp}$. Then, from $\dim R(A_0)^{\perp} = m$, $\dim R(K) = m$, and (3.6), it follows that
$$R(K) = R\bigl(B^{-1} - A^{-1}\bigr) = R(A_0)^{\perp}. \tag{3.34}$$
As a basis of $R(K)$, we can take $F_1,\ldots,F_m$. The rest is proved similarly.

Remark 3.6. For every $B \in E^{m}_{cs}(A_0,A)$, where $A_0$ satisfies (3.32), we have $R(B^{-1} - A^{-1}) = R(A_0)^{\perp}$ and $\dim R(B - A) = \operatorname{def}A_0$.


Remark 3.7. The operators $B \in E^{m}_{cs}(A_0,A)$, in both cases $\operatorname{def}A_0 = m < \infty$ and $\operatorname{def}A_0 = \infty$, are described by the same formulas (3.16) and (3.17).

Remark 3.8. Let $A_0$ be defined by (3.32) or (3.33), and $F = (F_1,\ldots,F_m)$, where $F_1,\ldots,F_m$ are linearly independent elements of $R(A_0)^{\perp} \cap D(A)$. Then
$$(AF^{t}, F) = 0 \Longleftrightarrow F_i \in D(A_0),\ i = 1,\ldots,m. \tag{3.35}$$
Let now the minimal symmetric operator $A_0$ be defined by
$$A_0 \subset A, \qquad D(A_0) = \bigl\{x \in D(A) : (Ax, F_i) = 0,\ F_i \in D(A_0),\ i = 1,\ldots,m\bigr\}, \tag{3.36}$$
where $F_1,\ldots,F_m$ are linearly independent elements of $D(A_0)$. Then from the above remark and Theorem 3.5 follows the next corollary, which describes the most "simple" extensions of $A_0$.

Corollary 3.9. (i) For every $B \in E^{m}_{cs}(A_0,A)$, where $A_0$ satisfies (3.36), there exists a Hermitian $m\times m$ matrix $C$ with $\det C \neq 0$ such that (3.17) is fulfilled.

(ii) Conversely, for every Hermitian $m\times m$ matrix $C$ with $\operatorname{rank} C = n \leq m$, the operator $B$ defined by (3.17) belongs to $E^{n}_{cs}(A_0,A)$.

The unique solution of (3.17) is given by (3.30).

The next theorem is useful for applications; it gives a criterion for the correctness of the problems below and provides their solutions.

Theorem 3.10. Let
$$Bx = Ax - AFC(Ax, F^{t}) = f, \qquad D(B) = D(A), \tag{3.37}$$
where $A$ is as in Lemma 3.1, $C$ is a Hermitian $m\times m$ matrix with $\operatorname{rank} C = n$, and $F_1,\ldots,F_m$ are linearly independent elements of $D(A)$. Then $B$ is a correct and selfadjoint operator with $\dim R(B - A) = n$ if and only if
$$\det\bigl[I - (AF^{t}, F)C\bigr] \neq 0, \tag{3.38}$$
and the unique solution of (3.37) is given by
$$x = B^{-1}f = A^{-1}f + FC\bigl[I - (AF^{t}, F)C\bigr]^{-1}(f, F^{t}). \tag{3.39}$$
Proof. We define the minimal operator $A_0$ corresponding to this problem as the restriction of $A$ given by (3.32).

If $n = m$, then the theorem is true by Theorem 3.5.

While if $n < m$ and $B \in E^{n}_{cs}(A_0,A)$, then from (3.37) we have $Bx = f$ and
$$(f, F^{t}) = (Ax, F^{t}) - (AF^{t}, F)C(Ax, F^{t}) = \bigl[I - (AF^{t}, F)C\bigr](Ax, F^{t}) \tag{3.40}$$
or
$$\bigl[I - (AF^{t}, F)C\bigr](Ax, F^{t}) = (f, F^{t}), \quad f \in H. \tag{3.41}$$
Let $L = I - (AF^{t}, F)C$ and suppose $\operatorname{rank} L = k < m$. If we suppose that the first $k$ rows of the matrix $L$ are linearly independent, then for $f = \psi_{k+1}$, where $(F_i, \psi_k) = \delta_{ik}$, $i,k = 1,\ldots,m$, the system $L(Ax, F^{t}) = (f, F^{t})$ has no solution, since the rank of the augmented matrix is $k + 1 \neq k$. Then $Bx = \psi_{k+1}$ has no solution and $R(B) \neq H$. Consequently, $B$ is not a correct operator. So (3.38) holds true. Conversely, if $\det L \neq 0$, then by Theorem 3.5 we have $B \in E^{n}_{cs}(A_0,A)$.

We recall that a Hermitian $m\times m$ matrix $C = (c_{ij})$ is called negative semidefinite (negative definite) if
$$\sum_{i=1}^{m}\sum_{j=1}^{m} \xi_i\bar{\xi}_j c_{ij} \leq 0 \quad \Bigl(\,\sum_{i=1}^{m}\sum_{j=1}^{m} \xi_i\bar{\xi}_j c_{ij} < 0\Bigr), \qquad \xi = (\xi_1,\ldots,\xi_m) \in \mathbb{C}^{m} \quad \bigl(\xi \in \mathbb{C}^{m}\setminus\{0\}\bigr). \tag{3.42}$$

Theorem 3.11. If, in Theorem 3.2, $A$ is a positive operator and $C$ is a negative semidefinite matrix, then $B$, defined by (3.17), is a positive operator.

Proof. We will show that $(Bx, x) \geq 0$ for all $x \in D(B)$:
$$(Bx, x) = \bigl(Ax - AFC(Ax, F^{t}),\, x\bigr) = (Ax, x) - (Ax, F)C(Ax, F^{t}) = (Ax, x) - \sum_{i=1}^{m}\sum_{j=1}^{m} (Ax, F_i)\overline{(Ax, F_j)}\, c_{ij} \geq 0, \tag{3.43}$$
since $C$ is negative semidefinite.
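The statement is easy to test numerically in a finite-dimensional model (all data below are illustrative; $C = -vv^{*}$ is negative semidefinite by construction):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 6, 2

# A positive definite Hermitian, C negative semidefinite Hermitian
X = rng.standard_normal((n, n)) + 1j*rng.standard_normal((n, n))
A = X @ X.conj().T + np.eye(n)
F = rng.standard_normal((n, m)) + 1j*rng.standard_normal((n, m))
v = rng.standard_normal((m, 1)) + 1j*rng.standard_normal((m, 1))
C = -(v @ v.conj().T)

# B = A - AFCF^*A; then (Bx, x) = (Ax, x) - w^* C w >= 0 with w = F^* A x
B = A - A @ F @ C @ F.conj().T @ A
assert np.allclose(B, B.conj().T)
assert np.linalg.eigvalsh(B).min() >= -1e-8
```

With $C$ negative semidefinite, the perturbation $-AFCF^{*}A$ is positive semidefinite, so $B$ dominates $A$ in the quadratic-form sense, mirroring the proof above.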

We remind that an operator $A : H \to H$ is called positive definite if there exists a positive real number $k$ such that
$$(Ax, x) \geq k\|x\|^{2}, \quad x \in D(A). \tag{3.44}$$

Theorem 3.12. If the operator $A$ in Theorem 3.2 is positive definite, then the operator $B$ defined by relation (3.17) is positive definite whenever the matrix $C$ is Hermitian and satisfies the inequality
$$k > \sum_{i=1}^{m}\sum_{j=1}^{m} \|AF_i\|\,\|AF_j\|\,|c_{ij}|, \tag{3.45}$$
and positive when $k \geq \sum_{i=1}^{m}\sum_{j=1}^{m} \|AF_i\|\,\|AF_j\|\,|c_{ij}|$.


Proof. For $x \in D(B)$, we have
$$(Bx, x) = \bigl(Ax - AFC(Ax, F^{t}),\, x\bigr) = (Ax, x) - (x, AF)C(x, AF^{t}) \geq k\|x\|^{2} - \sum_{i=1}^{m}\sum_{j=1}^{m} \bigl|(x, AF_i)\bigr|\,\bigl|(x, AF_j)\bigr|\,|c_{ij}| \geq \Bigl(k - \sum_{i=1}^{m}\sum_{j=1}^{m} \|AF_i\|\,\|AF_j\|\,|c_{ij}|\Bigr)\|x\|^{2}. \tag{3.46}$$
The theorem now easily follows.

Now we will state Theorem 3.2 in the following more general form, which is useful in the solution of differential equations.

Theorem 3.13. Suppose that $A_0$, $A$ are as in Theorem 3.2. Then the following hold.

(i) For every $B \in E^{m}_{cs}(A_0,A)$, there exist a vector $Q = (q_1,\ldots,q_m)$, where $q_1,\ldots,q_m$ are linearly independent elements from $D(A_0)^{\perp}$, and a Hermitian invertible $m\times m$ matrix $C$, such that
$$\det\bigl[I - (Q^{t}, A^{-1}Q)C\bigr] \neq 0, \tag{3.47}$$
$$Bx = Ax - QC(x, Q^{t}) = f, \qquad D(B) = D(A). \tag{3.48}$$
(ii) Conversely, for every vector $Q = (q_1,\ldots,q_m)$ defined as above and every Hermitian $m\times m$ matrix $C$ which has $\operatorname{rank} C = n$ and satisfies (3.47), the operator $B$ defined by (3.48) belongs to $E^{n}_{cs}(A_0,A)$.

The unique solution of (3.48) is given by the formula
$$x = A^{-1}f + A^{-1}QC\bigl[I - (Q^{t}, A^{-1}Q)C\bigr]^{-1}\bigl(f, A^{-1}Q^{t}\bigr), \quad f \in H. \tag{3.49}$$
The proof easily follows from Theorem 3.2 by substituting $Q = AF$, $F = A^{-1}Q$, where $Q = (q_1,\ldots,q_m)$, $q_i \in D(A_0)^{\perp}$, $i = 1,\ldots,m$.
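In matrix form, (3.48) is a finite-rank Hermitian perturbation $B = A - QCQ^{*}$, and (3.49) is the corresponding Woodbury-type inversion. A numerical sketch with illustrative data, where $G = Q^{*}A^{-1}Q$ stands in for $(Q^{t}, A^{-1}Q)$ up to the ordering convention of the inner product:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 6, 2

X = rng.standard_normal((n, n)) + 1j*rng.standard_normal((n, n))
A = X + X.conj().T + 8*np.eye(n)               # invertible Hermitian
Q = rng.standard_normal((n, m)) + 1j*rng.standard_normal((n, m))
C = np.array([[1.0, 2j], [-2j, -1.0]])         # Hermitian, invertible

# (3.48): Bx = Ax - QC(x, Q^t), i.e. B = A - Q C Q^*
B = A - Q @ C @ Q.conj().T
Ainv = np.linalg.inv(A)
G = Q.conj().T @ Ainv @ Q
assert abs(np.linalg.det(np.eye(m) - G @ C)) > 1e-9   # condition (3.47)

# (3.49): B^{-1} f = A^{-1} f + A^{-1} Q C [I - G C]^{-1} (f, A^{-1} Q^t)
Binv = Ainv + Ainv @ Q @ C @ np.linalg.inv(np.eye(m) - G @ C) @ Q.conj().T @ Ainv
assert np.allclose(Binv @ B, np.eye(n))
```

This is the standard Woodbury identity $(A - QCQ^{*})^{-1} = A^{-1} + A^{-1}Q(C^{-1} - Q^{*}A^{-1}Q)^{-1}Q^{*}A^{-1}$ rewritten with $C(I - GC)^{-1} = (C^{-1} - G)^{-1}$.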

Corollary 3.14. For every vector $Q = (q_1,\ldots,q_m)$, where $q_1,\ldots,q_m$ are linearly independent elements of $D(A_0)^{\perp} \cap R(A_0)$, and for every Hermitian $m\times m$ matrix $C$ with $\operatorname{rank} C = n$, the operator $B$ defined by (3.48) belongs to $E^{n}_{cs}(A_0,A)$.

The unique solution of (3.48) is given by the formula
$$x = B^{-1}f = A^{-1}f + A^{-1}QC\bigl(f, A^{-1}Q^{t}\bigr), \quad f \in H. \tag{3.50}$$


Let now the minimal symmetric operator $A_0$ have finite defect and be defined by the relations
$$A_0x = Ax, \quad x \in D(A_0), \qquad D(A_0) = \bigl\{x \in D(A) : (x, Q^{t}) = 0\bigr\}, \tag{3.51}$$
where $Q$ is defined as in Theorem 3.13. Then $\dim D(A_0)^{\perp} = m$ and $\operatorname{def}A_0 = \dim R(A_0)^{\perp} = m$.

In this case, we restate Theorems 3.5, 3.11, and 3.12 in the following more general form.

Theorem 3.15. (a) For every $B \in E^{m}_{cs}(A_0,A)$, where $A_0$ is defined by (3.51), there exists a Hermitian $m\times m$ matrix $C$ with $\det C \neq 0$ such that (3.47) and (3.48) are fulfilled.

(b) Conversely, for every Hermitian $m\times m$ matrix $C$ which satisfies (3.47) and has $\operatorname{rank} C = n$, the operator $B$ defined by (3.48) belongs to $E^{n}_{cs}(A_0,A)$. The unique solution of (3.48) is given by (3.49).

(c) If the operator $A$ is positive and the matrix $C$ is negative semidefinite, then $B$ is positive.

(d) If $A$ is positive definite (so it satisfies relation (3.44)) and if $C$ is a Hermitian $m\times m$ matrix which satisfies the inequality
$$k > \sum_{i=1}^{m}\sum_{j=1}^{m} \|q_i\|\,\|q_j\|\,|c_{ij}|, \tag{3.52}$$
then $B$ is positive definite; it is positive when $k \geq \sum_{i=1}^{m}\sum_{j=1}^{m} \|q_i\|\,\|q_j\|\,|c_{ij}|$.

Proof. Since $A$ is selfadjoint and $R(A) = H$, for every $x \in D(A)$ we have $(x, q_i) = (x, AA^{-1}q_i) = (Ax, F_i)$, where $F_i = A^{-1}q_i$, $i = 1,2,\ldots,m$.

It is clear that $F_i \in D(A)$, $i = 1,2,\ldots,m$, and that they are linearly independent. If we substitute $Q = AF$, $Q^{t} = AF^{t}$ in (3.47), (3.48), and (3.49), then we obtain relations (3.16), (3.17), and (3.18) of Theorem 3.2, which hold true. Because of Theorems 3.11 and 3.12 and the relations $q_i = AF_i$, $i = 1,2,\ldots,m$, cases (c) and (d) of the present theorem are true.

Remark 3.16. Suppose that $A_0$, $A$ are as in Theorem 3.15 and $Q = (q_1,\ldots,q_m)$, where $q_1,\ldots,q_m$ are linearly independent elements of $D(A_0)^{\perp}$. Then
$$R(A_0) = \bigl\{f \in H : (f, A^{-1}q_j) = 0,\ j = 1,\ldots,m\bigr\}, \qquad (Q, A^{-1}Q) = 0 \Longleftrightarrow q_i \in R(A_0),\ i = 1,\ldots,m. \tag{3.53}$$
Let now the minimal symmetric operator $A_0$ be defined by the relation
$$A_0 \subset A, \qquad D(A_0) = \bigl\{x \in D(A) : (x, Q^{t}) = 0\bigr\}, \quad Q \in \bigl(R(A_0)\bigr)^{m}. \tag{3.54}$$
