
DETERMINATION OF A POWER DENSITY BY AN ENTROPY REGULARIZATION METHOD

OLIVIER PROT, MAÏTINE BERGOUNIOUX, AND JEAN GABRIEL TROTIGNON

Received 11 January 2004 and in revised form 18 September 2004

The determination of the directional power density distribution of an electromagnetic wave from electromagnetic field measurements can be expressed as an ill-posed inverse problem. We consider the resolution of this inverse problem via a maximum entropy regularization method. A finite-dimensional algorithm is derived from optimality conditions, and we prove its convergence. A variant of this algorithm is also studied. This second one leads to a solution which maximizes entropy in the probabilistic sense. Some numerical examples are given.

1. Introduction

In this paper, we present entropy regularization algorithms to determine electromagnetic wave propagation directions from the measurement of the six components of the electromagnetic field. Most existing methods assume that the electromagnetic wave is similar to a single plane wave. In this case, for a fixed frequency, the electromagnetic field is fully described by the wave normal direction vector $k$. Nevertheless, this assumption is generally too restrictive. For a more realistic analysis of an electromagnetic wave in a plasma, Storey and Lefeuvre have introduced the concept of the wave distribution function (WDF) [5]. This function describes the wave energy density distribution for every frequency and propagation mode. The WDF $f$ is related to the spectral matrix $V$ of the electromagnetic field components by a Fredholm integral of the first kind [5]

$$V(\omega) = \int q(k,\omega)\, f(k,\omega)\, dk, \qquad (1.1)$$

where $k$ is the wave normal vector and $\omega$ the frequency. In practice, the WDF is computed for the most significant frequency (often there is only one significant frequency) of the spectrogram. The integration kernel $q$ is a known function depending on the propagation medium. The spectral matrix $V$ is defined from the measured electromagnetic field $e: \mathbb{R}^+ \to \mathbb{R}^s$ by $V = \hat e(\omega)\,\hat e(\omega)^*$, where $\hat e$ denotes the Fourier transform of $e$ and $s \in \mathbb{N}$ is the number of field components. It is a Hermitian matrix. This definition of the spectral matrix is an approximation of the real spectral matrix. Indeed, $e$ is a random signal and we should use mean values of $e$. Here we assume that $e$ is deterministic. In what follows, we will identify the spectral matrix with a vector of $\mathbb{C}^n$, where $n = s^2$. Solutions of the inverse problem of determining $f$ from measurements of $V$ were proposed by Lefeuvre using a maximum entropy method [6]. However, for a fixed frequency, we have to solve the integral equation problem

$$V = \int q(x)\, f(x)\, dx, \qquad (1.2)$$

where the unknown function $f$ is nonnegative. This inverse problem is known to be ill-posed.

The concept of the WDF can be transposed to the case of an electromagnetic wave propagating in vacuum. Equation (1.1) remains valid if we use the vacuum kernels instead of the plasma kernels and if the electromagnetic wave has a single polarization mode [7].

The use of the WDF concept for electromagnetic waves propagating in vacuum has been studied for the interpretation of ground penetrating radar investigations, in particular the one proposed for the NetLander mission to Mars [7]. The aim of this instrument was to explore the first kilometers of the Martian subsurface to study its geological structure and look for water.

Here we use a maximum entropy regularization method to solve this problem. We minimize the quantity

$$\Big\| V - \int q(x)\, f(x)\, dx \Big\|^2_{\mathbb{C}^n} + \mu H(f) \qquad (1.3)$$

with the constraint $f \geq 0$, where $H$ is a negentropic term (to be defined later) and $\mu$ a regularization parameter. In fact, $H$ is not the negentropy in the probabilistic sense since $f$ is not a density, but its minimization leads to a smooth solution. The main disadvantage of the maximum entropy solution of [6] is that the constraints on the solution are too strong. The regularization process provides a relaxed problem, and the error we introduce allows us to search for a solution in a much wider domain. Moreover, we obtain a solution that is not far from the data and looks like a minimizer of $H$.

The maximum entropy regularization method is a useful tool to solve such inverse problems. Amato and Hughes [1] have studied the convergence of this method to show that it is a correct regularization process. This convergence is also studied in [3] by making a link with the classical Tikhonov method [11,12]. A generalization is investigated in [8].

The mathematical model of the problem is described in Section 2. We define the negentropy in Section 3. In Section 4, we set the optimization problem: the feasible domain of this problem has to be relaxed to find optimality conditions. We present two algorithms, but the solutions we obtain do not minimize negentropy. So we modify the optimization problem in Section 5. Finally, we present numerical tests in Section 6.

2. Mathematical model

In this section, we present the mathematical model. We consider a measure space $(E,\mathcal{A},\sigma)$, where $E$ is a compact subset of $\mathbb{R}^p$, $p \geq 1$, and the measure $\sigma$ verifies $\sigma(E) < +\infty$. The power density on $(E,\mathcal{A})$ can be defined as follows.

Definition 2.1. The power density is a couple $(\alpha, m)$, where $\alpha \in \mathbb{R}^+$ and $m$ is a probability measure on $(E,\mathcal{A})$. For $A \in \mathcal{A}$, the power $\pi_A$ of the subset $A$ is given by
$$\pi_A = \alpha\, m(A). \qquad (2.1)$$

The aim of this paper is to determine a power density $(\alpha, m)$ which verifies the equation
$$\alpha \int_E q\, dm = V, \qquad (2.2)$$

where $V \in \mathbb{C}^n$ is known and $q \in L^2(E,\mathbb{C}^n,\sigma)$ is the integration kernel. In plasma physics, we have to solve this kind of problem to determine the power density distribution $m$ of an electromagnetic wave and the total power $\alpha$ from the measurement of the electromagnetic field components. In this case, we typically have $n = 36$.

The set of probability measures is too large and we will only consider measures that are absolutely continuous with respect to the measure $\sigma$. We denote $H = L^2(E,\mathbb{R},\sigma)$ ($\subset L^1(E,\mathbb{R},\sigma)$ thanks to our assumptions). For all $F \in H$ with $F \geq 0$ $\sigma$-a.e., we can define a power density $(\alpha, (F/\alpha)\,d\sigma)$, where $\alpha = \|F\|_{L^1(E,\mathbb{R},\sigma)}$ and $(F/\alpha)\,d\sigma$ is the measure of density $F/\alpha$ with respect to $\sigma$. Note that $F \mapsto \alpha\int_E q\, dm$, where $\alpha = \|F\|_{L^1(E,\mathbb{R},\sigma)}$ and $m = (F/\alpha)\,d\sigma$, is a linear bounded operator from $H$ to $\mathbb{C}^n$.

More generally, we consider a linear bounded operator $\psi = (\psi_1,\ldots,\psi_n): H \to \mathbb{C}^n$, with adjoint operator $\psi^*: \mathbb{C}^n \to H$, and we assume that $R(\psi^*) \subset L^\infty(E,\mathbb{R},\sigma)$ ($R$ denotes the range). We have to solve
$$\psi[F] = V. \qquad (2.3)$$

From the Riesz theorem, we deduce that there exist $n$ functions $q_i \in L^2(E,\mathbb{C},\sigma)$, $i = 1,\ldots,n$, such that $\psi_i[F] = \langle q_i, F\rangle_H$. These functions are integration kernels, and we have

$$\forall l \in \mathbb{C}^n, \qquad \psi^*(l) = \mathrm{Re}\Big(\sum_{i=1}^{n} l_i\, \bar q_i\Big), \qquad (2.4)$$

where $\bar q_i$ denotes the complex conjugate of $q_i$. The condition $R(\psi^*) \subset L^\infty(E,\mathbb{R},\sigma)$ is equivalent to
$$\forall i = 1,\ldots,n, \qquad q_i \in L^\infty(E,\mathbb{C},\sigma). \qquad (2.5)$$

The problem of solving (2.3) is ill-posed. Indeed, $\psi$ is an operator from $H$ (an infinite-dimensional Hilbert space) to the finite-dimensional Hilbert space $\mathbb{C}^n$. So the operator $\psi$ is not injective and there is no uniqueness of the solution (if it exists). In addition, we want to determine a solution which is also stable, that is, continuous with respect to the data $V$. This will be crucial for the physical and numerical stability. To deal with this challenge, we use a maximum entropy regularization method.
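To make this ill-posedness concrete, here is a minimal numerical sketch. The uniform grid over $E$, the quadrature weights, and the synthetic kernels $q_i$ are illustrative assumptions, not data from the paper; the point is only that the discretized operator $\psi$ maps many grid unknowns to $n = 36$ moments, so $\psi[F] = V$ is massively underdetermined.

```python
# Discretized forward operator psi and its adjoint psi* (cf. (2.4)); all grid
# choices and kernels below are illustrative assumptions, not the paper's data.
import numpy as np

rng = np.random.default_rng(0)
N, n = 500, 36                       # N grid points on E, n = s**2 spectral-matrix entries
x = np.linspace(0.0, 1.0, N)         # hypothetical parametrization of E
w = np.full(N, 1.0 / N)              # quadrature weights for sigma, with sigma(E) = 1
q = rng.standard_normal((n, N)) + 1j * rng.standard_normal((n, N))   # stand-in kernels q_i

def psi(F):
    """psi[F]_i = integral of q_i * F d sigma  ~  sum_j q_i(x_j) F(x_j) w_j."""
    return q @ (F * w)

def psi_star(l):
    """psi*(l)(x) = Re(sum_i l_i conj(q_i(x))), the adjoint of psi, cf. (2.4)."""
    return np.real(np.conj(q).T @ l)

F_true = np.exp(-10.0 * (x - 0.3) ** 2)     # some nonnegative density on the grid
V = psi(F_true)                             # only 36 complex data values for 500 unknowns
print(V.shape, F_true.shape)                # (36,) (500,): the linear system is underdetermined
```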

The principle of Tikhonov’s regularization method is to minimize the quantity

$$\|\psi[F] - V\|^2_{\mathbb{C}^n} + \mu\,\Omega(F), \qquad (2.6)$$

where $\mu$ is called the regularization parameter and $\Omega$ is a suitable regularizing functional.

This method is equivalent to minimizing the functional $\Omega$ on the set $\{\|\psi[F] - V\|_{\mathbb{C}^n} \leq \delta(\mu)\}$ [1]. In this paper, we use an entropic term as the regularizing functional (see Section 3) and we restrict the domain to the Hilbert space $H$, whereas usually maximum entropy regularization is performed in $L^1$. We will see in the following section that there is no problem in defining the entropy. First we recall some definitions.

3. About entropy

Let the function $\varphi: \mathbb{R}^+ \to \mathbb{R}$ be defined by
$$\varphi(x) = \begin{cases} x\ln(x) & \text{if } x > 0, \\ 0 & \text{otherwise.} \end{cases} \qquad (3.1)$$

The notation $m_1 \ll m_2$ means that the measure $m_1$ is absolutely continuous with respect to the measure $m_2$.

Definition 3.1. Let $f, g \in L^1(E,\mathbb{R},\sigma)$ be such that $f\,d\sigma$ and $g\,d\sigma$ are two probability measures and $f\,d\sigma \ll g\,d\sigma$. We define the relative information content of $f$ with respect to $g$ by
$$I(f,g) = \int_E f \ln\frac{f}{g}\, d\sigma. \qquad (3.2)$$

If the condition $f\,d\sigma \ll g\,d\sigma$ is not verified, then $I(f,g) = +\infty$. If $g$ is the noninformative probability density (see the definition below), then $I(f,g)$ is called the information content of $f$. The negentropy (negative entropy) of $f$ is then defined by
$$H(f) = I(f,g). \qquad (3.3)$$

The noninformative probability density is a known function of the model. Physically, it is the probability density of noise measured in the system. For example, in the case of the WDF of an electromagnetic wave in vacuum, $E$ is the unit sphere. Since isotropy holds (there is no privileged direction of propagation for a plane wave in vacuum), we deduce that the noninformative probability density is constant over the unit sphere. So, we may assume that the noninformative probability density is given by
$$\forall x \in E, \qquad g(x) = \frac{1}{\sigma(E)}. \qquad (3.4)$$

For the sake of simplicity, we will suppose that $\sigma(E) = 1$, so the negentropy of $f$ is
$$H(f) = \int_E \varphi(f)\, d\sigma. \qquad (3.5)$$

The entropy of a probability density can be seen as a "distance" between $f$ and the noninformative probability density $g$. To calculate the solution of the inverse problem, we minimize a functional involving negentropy. Thus, we determine the solution that contains the least information with respect to the density $g$. From the physical point of view, this allows us to preserve the physically significant information, so it is essential to know the noninformative density $g$ quite accurately. One can refer to [10] for the axiomatic derivation of the maximum entropy principle. The next lemma gives some properties of $H$.
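As a quick illustration of (3.2) and (3.5), the following sketch evaluates the negentropy and the relative information content on a grid; the uniform weights and the chosen density are assumptions made for the example, and with $g \equiv 1$ (i.e., $\sigma(E) = 1$) the two quantities coincide.

```python
# Negentropy H(f) = int phi(f) d sigma (3.5) and information I(f, g) (3.2) on a grid.
import numpy as np

def phi(t):
    """phi(t) = t ln t for t > 0 and 0 otherwise, cf. (3.1)."""
    safe = np.where(t > 0, t, 1.0)
    return np.where(t > 0, t * np.log(safe), 0.0)

def negentropy(f, w):
    return np.sum(phi(f) * w)

def information(f, g, w):
    safe = np.where(f > 0, f, 1.0)
    return np.sum(np.where(f > 0, f * np.log(safe / g), 0.0) * w)

N = 1000
w = np.full(N, 1.0 / N)                                    # sigma(E) = 1
g = np.ones(N)                                             # noninformative density (3.4)
f = np.exp(np.linspace(-2.0, 2.0, N)); f /= np.sum(f * w)  # some normalized probability density
print(negentropy(f, w), information(f, g, w))              # equal, since g == 1 here
```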

Lemma 3.2. In the sequel, denote $K = \{f \in L^2(E,\mathbb{R},\sigma) \mid f \geq 0\ \sigma\text{-a.e.}\}$. For all $f \in K$, $-e^{-1} \leq H(f) < +\infty$. The functional $H: L^2(E,\mathbb{R},\sigma) \to \mathbb{R}$ is lower semicontinuous (l.s.c.), strictly convex on $K$, and verifies: for all $\varepsilon > 0$, for all $f \in K$ such that $f \geq \varepsilon$ $\sigma$-a.e., for all $g \in K$, for all $\lambda \in [0,1]$,
$$H\big(f + \lambda(g - f)\big) - H(f) = \lambda\int_E \big(1 + \ln f(x)\big)(g - f)(x)\, d\sigma(x) + o(\lambda). \qquad (3.6)$$

Proof. Let $f \in L^2(E,\mathbb{R},\sigma)$,
$$H(f) = \int_{\{x\in E \mid f(x) < 1\}} \varphi\big(f(x)\big)\, d\sigma(x) + \int_{\{x\in E \mid f(x) \geq 1\}} \varphi\big(f(x)\big)\, d\sigma(x); \qquad (3.7)$$
as
$$\forall x \in [0,1],\ -e^{-1} \leq x\ln x \leq 0, \qquad \forall x \geq 1,\ 0 \leq x\ln x < x^2, \qquad (3.8)$$
we deduce
$$H(f) \leq \|f\|^2_{L^2(E,\mathbb{R},\sigma)} < +\infty, \qquad H(f) \geq -e^{-1}. \qquad (3.9)$$
The proof of the lower semicontinuity of the functional $H$ can be found in [1, 8].

We now prove (3.6): let $f \in K$ be such that $f \geq \varepsilon$ and let $g \in K$. We have, for a.e. $x \in E$,
$$\varphi\big(f(x) + \lambda(g(x) - f(x))\big) - \varphi\big(f(x)\big) = \lambda\big(1 + \ln f(x)\big)\big(g(x) - f(x)\big) + o(\lambda) \qquad (3.10)$$
since $\varphi$ is differentiable on $(0,+\infty)$ and $\varphi'(x) = 1 + \ln x$.

The functional $H$ is strictly convex by the strict convexity of $\varphi$ on $\mathbb{R}^+$.

4. A penalized problem

We now define the penalized cost (or smoothing) functional $J_\mu$ we want to minimize:
$$J_\mu: H \longrightarrow \mathbb{R}, \qquad F \longmapsto \|V - \psi[F]\|^2_{\mathbb{C}^n} + \mu H(F), \qquad (4.1)$$
where $\mu > 0$ is a regularization parameter. We have the following lemma.

Lemma 4.1. The functional $J_\mu$ is l.s.c. and strictly convex on $K$. In addition, if $\varepsilon > 0$ and $f, g \in K$ with $f \geq \varepsilon$ $\sigma$-a.e., then
$$\forall \lambda \in\ ]0,1[, \qquad J_\mu\big(f + \lambda(g - f)\big) - J_\mu(f) = \lambda\big\langle DJ_\mu(f), g - f\big\rangle_H + o(\lambda), \qquad (4.2)$$
where
$$DJ_\mu(f) = \mu(1 + \ln f) - 2\psi^*\big[V - \psi[f]\big], \qquad (4.3)$$
$$J_\mu(g) \geq J_\mu(f) + \big\langle DJ_\mu(f), g - f\big\rangle_H. \qquad (4.4)$$

Proof. The functional $F \mapsto \|V - \psi[F]\|^2_{\mathbb{C}^n}$ is continuous on $H$ by continuity of $\psi$, hence it is l.s.c. We conclude that $J_\mu$ is l.s.c. on $K$ by Lemma 3.2. Furthermore, $F \mapsto \|V - \psi[F]\|^2_{\mathbb{C}^n}$ is Fréchet-differentiable and its gradient is $-2\psi^*[V - \psi[F]]$, so (4.2) is proved by Lemma 3.2 as well.

The functional $J_\mu$ is strictly convex on $K$ by strict convexity of $H$ and by convexity of the term $\|V - \psi[F]\|^2_{\mathbb{C}^n}$. So, we can write
$$J_\mu\big(f + \lambda(g - f)\big) < (1-\lambda)J_\mu(f) + \lambda J_\mu(g), \qquad \frac{J_\mu\big(f + \lambda(g - f)\big) - J_\mu(f)}{\lambda} < J_\mu(g) - J_\mu(f). \qquad (4.5)$$
Equation (4.4) follows by taking the limit of the last inequality as $\lambda \to 0$.

The penalized optimization problem stands as follows:
$$(\mathcal{P}_\mu) \qquad \min J_\mu(F), \qquad F \in K = \big\{f \in H \mid f \geq 0\ \sigma\text{-a.e.}\big\}. \qquad (4.6)$$

The existence of a solution to problem $(\mathcal{P}_\mu)$ is not obvious since the cost functional is not coercive in $H$. To illustrate this fact, we give a simple counterexample: we set $E = [0,1]$, $\sigma$ is the Lebesgue measure on $[0,1]$, and $f: E \to \mathbb{R}$, $x \mapsto x^{-1/2}$. Then $H(f) = 2 < +\infty$, and we can build a sequence $\{f_k\}_{k\in\mathbb{N}} \subset H$ such that $f_k \to f$ a.e., $\|f_k\|_H \to +\infty$, and $H(f_k) \to H(f) < +\infty$.
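A quick numerical check of this counterexample (the truncation $f_k = \min(f, k)$ and the midpoint quadrature are our own illustrative choices, not the paper's construction): the $L^2$ norms of the truncations grow without bound, roughly like $\sqrt{2\ln k}$, while their negentropies approach $H(f) = 2$.

```python
# Counterexample to coercivity: f(x) = x**(-1/2) on (0, 1], truncated at level k.
import numpy as np

N = 2_000_000
x = (np.arange(N) + 0.5) / N           # midpoint rule on (0, 1)
w = 1.0 / N
f = x ** -0.5                          # H(f) = 2 but f is not in L^2(0, 1)

for k in (10.0, 100.0, 1000.0):
    fk = np.minimum(f, k)              # truncation f_k, belongs to L^2
    l2_norm = np.sqrt(np.sum(fk ** 2) * w)
    H_fk = np.sum(fk * np.log(fk) * w)
    print(k, round(l2_norm, 2), round(H_fk, 3))   # norm grows like sqrt(2 ln k); H_fk -> 2
```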

Nevertheless, since $J_\mu$ is strictly convex, the solution to $(\mathcal{P}_\mu)$ is unique if it exists. It is a function of $H$ denoted $F_{\mu,V}$. The power density can be obtained by setting $\alpha = \|F_{\mu,V}\|_{L^1(E,\mathbb{R},\sigma)}$ and $m = (F_{\mu,V}/\alpha)\,d\sigma$.

Therefore, we do not minimize the negentropy of Definition 3.1, since $F_{\mu,V}$ is not a probability density. Rigorously, the negentropy of the solution is $H(F_{\mu,V}/\alpha)$. Moreover, the cost functional does not verify (4.2) on the whole set $K$ because $\varphi$ is not differentiable at $0$. So we have to modify this problem, taking a smaller feasible set. We study the modified problem in the next section to determine approximate first-order optimality conditions.

4.1. Relaxation of the feasible set. We just mentioned that $(\mathcal{P}_\mu)$ cannot be directly solved because $J_\mu$ is not coercive and (4.2) is not satisfied. So we choose a smaller feasible set to ensure (4.2). To deal with the lack of coercivity, we bound this new domain. For $0 < \varepsilon < T < +\infty$, we set
$$K_{\varepsilon,T} = \big\{f \in H \mid \varepsilon \leq f\ \sigma\text{-a.e.},\ \|f\|_H \leq T\big\}. \qquad (4.7)$$
It is a closed, convex subset of $H$. The "relaxed" problem reads
$$(\mathcal{P}^{\varepsilon,T}_\mu) \qquad \min J_\mu(F), \qquad F \in K_{\varepsilon,T}. \qquad (4.8)$$

Theorem 4.2. Problem (4.8) has a unique solution $F^{\varepsilon,T}_\mu \in K_{\varepsilon,T}$. A necessary and sufficient condition of optimality is
$$\forall f \in K_{\varepsilon,T}, \qquad \big\langle DJ_\mu\big(F^{\varepsilon,T}_\mu\big), f - F^{\varepsilon,T}_\mu\big\rangle_H \geq 0, \qquad (4.9)$$
where
$$DJ_\mu\big(F^{\varepsilon,T}_\mu\big) = -2\psi^*\big[V - \psi\big[F^{\varepsilon,T}_\mu\big]\big] + \mu\big(1 + \ln F^{\varepsilon,T}_\mu\big). \qquad (4.10)$$

Proof. The existence and uniqueness of the solution is standard; see [2, Corollary III.20, page 46]. We call it $F^{\varepsilon,T}_\mu$. Then
$$\forall g \in K_{\varepsilon,T},\ \forall \lambda \in [0,1], \qquad J_\mu\big(F^{\varepsilon,T}_\mu + \lambda\big(g - F^{\varepsilon,T}_\mu\big)\big) - J_\mu\big(F^{\varepsilon,T}_\mu\big) \geq 0. \qquad (4.11)$$
With Lemmas 3.2 and 4.1, this is equivalent to
$$\forall g \in K_{\varepsilon,T}, \qquad \big\langle DJ_\mu\big(F^{\varepsilon,T}_\mu\big), g - F^{\varepsilon,T}_\mu\big\rangle_H \geq 0. \qquad (4.12)$$

With the optimality condition (4.9), we may now construct the solution to $(\mathcal{P}_\mu)$.

Lemma 4.3. If there exists $F^{\varepsilon,T}_\mu \in K_{\varepsilon,T}$ such that
$$F^{\varepsilon,T}_\mu = \exp\Big(-1 + \frac{2}{\mu}\,\psi^*\big[V - \psi\big[F^{\varepsilon,T}_\mu\big]\big]\Big), \qquad (4.13)$$
that is, $F^{\varepsilon,T}_\mu$ is a fixed point of the functional
$$\Gamma_\mu: H \longrightarrow H, \qquad F \longmapsto \exp\Big(-1 + \frac{2}{\mu}\,\psi^*\big[V - \psi[F]\big]\Big), \qquad (4.14)$$
then for all $0 < \varepsilon' \leq \varepsilon$ and $T' \geq T$, $F^{\varepsilon,T}_\mu$ is the unique solution of $(\mathcal{P}^{\varepsilon',T'}_\mu)$. Furthermore, $F^{\varepsilon,T}_\mu$ is the unique solution of problem $(\mathcal{P}_\mu)$ and we denote it by $F_{\mu,V}$.

Proof. Let $F^{\varepsilon,T}_\mu \in K_{\varepsilon,T}$ be such that
$$F^{\varepsilon,T}_\mu = \exp\Big(-1 + \frac{2}{\mu}\,\psi^*\big[V - \psi\big[F^{\varepsilon,T}_\mu\big]\big]\Big) \quad \sigma\text{-a.e.} \qquad (4.15)$$
We get
$$\mu \ln F^{\varepsilon,T}_\mu = -\mu + 2\psi^*\big[V - \psi\big[F^{\varepsilon,T}_\mu\big]\big] \quad \sigma\text{-a.e.} \qquad (4.16)$$
So
$$\forall f \in K_{\varepsilon,T}, \qquad \big\langle -2\psi^*\big[V - \psi\big[F^{\varepsilon,T}_\mu\big]\big] + \mu\big(1 + \ln F^{\varepsilon,T}_\mu\big),\ f - F^{\varepsilon,T}_\mu\big\rangle_H = 0, \qquad (4.17)$$
and we see that $F^{\varepsilon,T}_\mu$ verifies (4.9). Therefore, $F^{\varepsilon,T}_\mu$ is the solution of (4.8). As $K_{\varepsilon,T} \subset K_{\varepsilon',T'}$ for all $0 < \varepsilon' \leq \varepsilon$ and $T' \geq T$, we conclude that $F^{\varepsilon,T}_\mu$ is the solution of problem $(\mathcal{P}^{\varepsilon',T'}_\mu)$. It is also the solution of problem $(\mathcal{P}_\mu)$. Indeed, suppose that there exists $F \in K$ such that $J_\mu(F) < J_\mu(F^{\varepsilon,T}_\mu)$; then the function
$$\tilde F = \frac{F^{\varepsilon,T}_\mu + F}{2} \qquad (4.18)$$
verifies $\tilde F \geq \varepsilon/2 =: \varepsilon'$ and $\|\tilde F\|_H \leq \tfrac12\big(\|F^{\varepsilon,T}_\mu\|_H + \|F\|_H\big) =: T'$, so $\tilde F \in K_{\varepsilon',T'}$. Since $F^{\varepsilon,T}_\mu$ is the solution of $(\mathcal{P}^{\varepsilon',T'}_\mu)$, we have $J_\mu(F^{\varepsilon,T}_\mu) \leq J_\mu(\tilde F)$, while the convexity of $J_\mu$ gives $J_\mu(\tilde F) \leq \tfrac12 J_\mu(F^{\varepsilon,T}_\mu) + \tfrac12 J_\mu(F) < J_\mu(F^{\varepsilon,T}_\mu)$, a contradiction. We deduce that $F^{\varepsilon,T}_\mu$ is a solution of $(\mathcal{P}_\mu)$. Moreover, it is the unique solution of $(\mathcal{P}_\mu)$ since $J_\mu$ is strictly convex.

Lemma 4.3 also shows that if the functional $\Gamma_\mu$ has a fixed point, it is unique.

We are now able to find the solution as a fixed point. In the next subsection, we study the existence of a sequence that converges to this fixed point. This will be the essential tool to set up an infinite-dimensional algorithm.

4.2. An infinite-dimensional algorithm. We define the sequence $\{F_k\}_{k\in\mathbb{N}}$ of $H$ by $F_0 \in H$,
$$F_{k+1} = \exp\Big(-1 + \frac{2}{\mu}\,\psi^*\big[V - \psi[F_k]\big]\Big). \qquad (4.19)$$

If it converges, the limit is a fixed point of the functional $\Gamma_\mu$. It is also the solution to $(\mathcal{P}_\mu)$.
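For intuition, here is a discretized sketch of the iteration (4.19); the grid, weights, kernels, and the value of $\mu$ are illustrative assumptions chosen so that the contraction conditions of the next lemmas plausibly hold, not quantities taken from the paper.

```python
# Fixed-point iteration F_{k+1} = Gamma_mu(F_k), cf. (4.14) and (4.19), on a grid.
import numpy as np

rng = np.random.default_rng(1)
N, n, mu = 400, 36, 50.0
x = np.linspace(0.0, 1.0, N)
w = np.full(N, 1.0 / N)                                    # sigma(E) = 1
q = rng.standard_normal((n, N)) + 1j * rng.standard_normal((n, N))   # stand-in kernels

psi = lambda F: q @ (F * w)                                # psi[F]_i = <q_i, F>
psi_star = lambda l: np.real(np.conj(q).T @ l)             # psi*(l) = Re(sum_i l_i conj(q_i))

V = psi(np.exp(-((x - 0.5) ** 2) / 0.01))                  # synthetic data from a known density

F = np.ones(N)                                             # F_0
for k in range(500):
    F_next = np.exp(-1.0 + (2.0 / mu) * psi_star(V - psi(F)))
    if np.linalg.norm(F_next - F) < 1e-12:
        break
    F = F_next
print(k, np.linalg.norm(psi(F) - V))                       # iterations used, residual ||psi[F]-V||
```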

Lemma 4.4. The functional $\Gamma_\mu$ is continuous from $H$ to $L^\infty(E,\mathbb{R},\sigma)$. Furthermore, the inequality
$$\big\|\Gamma_\mu(F)\big\|_{L^\infty(E,\mathbb{R},\sigma)} \leq \exp\Big(-1 + \frac{2}{\mu}\,C_{\psi^*}\big(\|V\|_{\mathbb{C}^n} + C_\psi\|F\|_H\big)\Big) \qquad (4.20)$$
is obtained.

Proof. We remark that $F \mapsto \exp(F)$ is continuous from $L^\infty(E,\mathbb{R},\sigma)$ to itself and $\psi^*$ is continuous from $\mathbb{C}^n$ to $L^\infty(E,\mathbb{R},\sigma)$.

The inequality is obtained with the continuity of the operators $\psi$ and $\psi^*$: there exist two constants $C_\psi$ and $C_{\psi^*}$ such that
$$\|\psi(F)\|_{\mathbb{C}^n} \leq C_\psi\|F\|_H, \qquad \|\psi^*(x)\|_{L^\infty(E,\mathbb{R},\sigma)} \leq C_{\psi^*}\|x\|_{\mathbb{C}^n}. \qquad (4.21)$$
Since the exponential function is nondecreasing, one obtains the result by injecting the last inequalities into the expression of $\Gamma_\mu$.

Now we show the convergence of the sequence $\{F_k\}_{k\in\mathbb{N}}$. The following lemma gives a condition on the regularization parameter $\mu$ which implies that the sequence $\{F_k\}_{k\in\mathbb{N}}$ stays in a ball of fixed radius.


Lemma 4.5. Let $F$ be such that $\|F\|_H \leq R$; if
$$\mu \geq \frac{2C_{\psi^*}\big(\|V\|_{\mathbb{C}^n} + C_\psi R\big)}{1 + \ln(R)}, \qquad (4.22)$$
then $\|\Gamma_\mu(F)\|_H \leq R$ and $\|\Gamma_\mu(F)\|_{L^\infty(E,\mathbb{R},\sigma)} \leq R$.

Proof. We have
$$\mu \geq \frac{2C_{\psi^*}\big[\|V\|_{\mathbb{C}^n} + C_\psi R\big]}{1 + \ln(R)} \iff \mu\big[1 + \ln(R)\big] \geq 2C_{\psi^*}\big(\|V\|_{\mathbb{C}^n} + C_\psi R\big), \qquad (4.23)$$
so
$$R \geq \exp\Big(-1 + \frac{2}{\mu}\,C_{\psi^*}\big(\|V\|_{\mathbb{C}^n} + C_\psi R\big)\Big), \qquad (4.24)$$
and we deduce the two inequalities.

We will use a fixed-point criterion: if the functional $\Gamma_\mu$ is a contraction, then the sequence converges. In the following lemma, we give a condition on $\mu$ for the sequence to converge.

Lemma 4.6. $\Gamma_\mu$ is Fréchet differentiable on $H$ and its derivative is
$$d\Gamma_\mu(F)\cdot h = -\frac{2}{\mu}\,\Gamma_\mu(F)\,\psi^*\psi[h]. \qquad (4.25)$$
Let $\|F_0\|_H \leq R$. If $\mu$ verifies (4.22) and if
$$\mu > 2R^2 C_\psi C_{\psi^*}, \qquad (4.26)$$
then the sequence $(F_k)_{k\in\mathbb{N}}$ converges in $H$ and in $L^\infty(E,\mathbb{R},\sigma)$ to the unique fixed point $F_{\mu,V}$ of $\Gamma_\mu$.

Proof. The functional $\Gamma_\mu$ is differentiable since $F \mapsto \exp(F)$ is differentiable from $L^\infty(E,\mathbb{R},\sigma)$ to itself, and $F \mapsto \psi^*[\psi[F]]$ is linear continuous from $H$ to $L^\infty(E,\mathbb{R},\sigma)$. By Lemma 4.5, we deduce that for all $k \in \mathbb{N}$, $0 \leq F_k \leq R$. Furthermore, (4.26) leads to
$$\sup_{\{F\in H \mid 0 \leq F \leq R\}} \big\|d\Gamma_\mu(F)\big\|_{(L^\infty, L^\infty)} < 1. \qquad (4.27)$$
We conclude using the Banach fixed point theorem on the complete set $\{F \in H \mid 0 \leq F \leq R\}$ with the distance induced by $L^\infty$. The sequence converges to the unique fixed point $F_{\mu,V} \in \{F \in H \mid 0 \leq F \leq R\}$ of $\Gamma_\mu$, strongly in $L^\infty(E,\mathbb{R},\sigma)$ and in $H$ (by compactness).

We may summarize in the following theorem.


Theorem 4.7. If $\mu$ verifies (4.22) and (4.26) (i.e., $\mu$ large enough), then problem $(\mathcal{P}_\mu)$ has a unique solution $F_{\mu,V}$, limit of the sequence $\{F_k\}_{k\in\mathbb{N}}$ defined by
$$F_0 \in \big\{F \in H \mid 0 \leq F \leq R\big\}, \qquad F_{k+1} = \exp\Big(-1 + \frac{2}{\mu}\,\psi^*\big[V - \psi[F_k]\big]\Big) = \Gamma_\mu(F_k). \qquad (4.28)$$
The convergence holds in $L^\infty$ and in $H$.

In short, we have an infinite-dimensional algorithm that converges to the solution of the maximum entropy regularization problem for a fixed parameter $\mu$ large enough. Theorem 4.7 shows that the solution $F_{\mu,V}$ of $(\mathcal{P}_\mu)$ is obtained as the limit of the sequence $\{F_k\}_{k\in\mathbb{N}}$ and belongs to $L^\infty(E,\mathbb{R},\sigma)$.

However, this algorithm is not useful from the numerical point of view. Indeed, it is infinite-dimensional, and an inappropriate discretization process may lead to slow computations. Nevertheless, we are able to derive a finite-dimensional algorithm from the optimality condition (4.9). This is the aim of the next subsection.

4.3. A finite-dimensional algorithm. Lemma 4.3 suggests looking for the solution $F_{\mu,V}$ of problem $(\mathcal{P}_\mu)$ as $F_{\mu,V} = G_{\mu,V}$, where
$$G_{\mu,V} = \exp\Big(-1 + \frac{2}{\mu}\,\psi^*[\lambda]\Big), \qquad (4.29)$$
and $\lambda \in \mathbb{C}^n$ has to be determined. The next lemma gives a sufficient optimality condition on $\lambda$ to solve $(\mathcal{P}_\mu)$. In addition, we have an analytic expression for the solution.

Lemma 4.8. Let $\lambda \in \mathbb{C}^n$ be such that
$$\lambda = V - \int_E q(\sigma)\exp\Big(-1 + \frac{2}{\mu}\,\psi^*[\lambda](\sigma)\Big)d\sigma; \qquad (4.30)$$
then the function $G_{\mu,V} \in H$, defined by (4.29), is the unique solution of $(\mathcal{P}_\mu)$.

Proof. By the definition of $G_{\mu,V}$ and with the assumption $R(\psi^*) \subset L^\infty(E,\mathbb{R},\sigma)$, there exist $\varepsilon > 0$ and $T > \varepsilon$ such that $G_{\mu,V} \in K_{\varepsilon,T}$. Writing the expression of $G_{\mu,V}$ in (4.9), we can see that it is verified. So $G_{\mu,V}$ is the unique solution of problem (4.8). With Lemma 4.3 and the strict convexity of $J_\mu$, it follows that it is the unique solution of $(\mathcal{P}_\mu)$.

Therefore, we only need to find the value of $\lambda \in \mathbb{C}^n$ to determine $G_{\mu,V}$. So the problem turns out to be a finite-dimensional one. We define the sequence $\{\lambda_k\}_{k\in\mathbb{N}}$ of $\mathbb{C}^n$ as

$$\lambda_0 \in \mathbb{C}^n, \qquad \lambda_{k+1} = \gamma_\mu(\lambda_k) \stackrel{\mathrm{def}}{=} V - \int_E q(\sigma)\exp\Big(-1 + \frac{2}{\mu}\,\psi^*[\lambda_k](\sigma)\Big)d\sigma. \qquad (4.31)$$

The function $\gamma_\mu: \mathbb{C}^n \to \mathbb{C}^n$ defined in (4.31) is differentiable and its derivative is
$$d\gamma_\mu(\lambda)\cdot h = -\frac{2}{\mu}\int_E q(\sigma)\,\bar q^{\,t}(\sigma)\exp\Big(-1 + \frac{2}{\mu}\,\psi^*[\lambda](\sigma)\Big)\cdot h\, d\sigma; \qquad (4.32)$$
thus
$$\big\|d\gamma_\mu(\lambda_k)\cdot h\big\|_{\mathbb{C}^n} \leq \frac{2}{\mu}\,\rho^{(k)}_\mu\,\|h\|_{\mathbb{C}^n}, \qquad (4.33)$$
where $\rho^{(k)}_\mu$ is the spectral radius of the matrix $M^{(k)}_\mu$ of dimension $(n,n)$ such that, for all $1 \leq i, j \leq n$ (we use the Euclidean norm on $\mathbb{C}^n$),

$$M^{(k)}_{\mu,ij} = \int_E q_i(\sigma)\,\bar q_j(\sigma)\exp\Big(-1 + \frac{2}{\mu}\,\psi^*[\lambda_k](\sigma)\Big)d\sigma. \qquad (4.34)$$

Using the Frobenius norm of the matrix $M^{(k)}_\mu$, we have the inequality
$$\rho^{(k)}_\mu \leq \big\|M^{(k)}_\mu\big\|_{\mathrm{Fr}} \leq \exp\Big(-1 + \frac{2}{\mu}\,C_{\psi^*}\|\lambda_k\|_{\mathbb{C}^n}\Big)\Bigg(\sum_{1\leq i,j\leq n}\Big|\int_E q_i(\sigma)\,\bar q_j(\sigma)\,d\sigma\Big|^2\Bigg)^{1/2}. \qquad (4.35)$$

The sequence $\{\lambda_k\}_{k\in\mathbb{N}}$ converges to the fixed point of $\gamma_\mu$ only if $\rho^{(k)}_\mu$ is small enough for every $k$, so we cannot use it in general to calculate $\lambda$. However, we are able to construct another sequence converging to $\lambda$ by noting that, for all $\tau > 0$, $\lambda = \gamma_\mu(\lambda)$ is equivalent to $\lambda = \lambda - \tau[\lambda - \gamma_\mu(\lambda)]$. So we can obtain $\lambda$ as the limit of the sequence $\{l_k\}_{k\in\mathbb{N}}$,
$$l_0 \in \mathbb{C}^n, \qquad l_{k+1} = l_k - \tau\big[l_k - \gamma_\mu(l_k)\big]. \qquad (4.36)$$

This sequence will be used to determine the solution in practice. If $\tau$ is chosen small enough and $\mu$ is large enough, it converges.

Lemma 4.9. Assume that for every $k \in \mathbb{N}$ the spectral radius of $M^{(k)}_\mu$ is less than $m_\mu \in \mathbb{R}^+$; then the sequence $\{l_k\}$ converges if $0 < \tau < 1/(1 + (2/\mu)m_\mu)$.

Proof. The matrix $M^{(k)}_\mu$ is Hermitian of nonnegative type since, for all $\sigma \in E$, $q(\sigma)\,\bar q^{\,t}(\sigma)$ is nonnegative Hermitian and $\exp(-1 + (2/\mu)\psi^*[\lambda_k](\sigma)) > 0$. Let $h_\tau: \mathbb{C}^n \to \mathbb{C}^n$ be the function $\lambda \mapsto \lambda - \tau[\lambda - \gamma_\mu(\lambda)]$. The derivative of $h_\tau$ is
$$\forall \lambda, v \in \mathbb{C}^n, \qquad dh_\tau(\lambda)\cdot v = \Big([1-\tau]I - \frac{2}{\mu}\tau M_\mu\Big)v. \qquad (4.37)$$

Since $M^{(k)}_\mu$ is Hermitian, there exists an orthogonal basis of eigenvectors and we call $B^{(k)}$ the transition matrix. So $B^{(k)*}\big([1-\tau]I - (2/\mu)\tau M^{(k)}_\mu\big)B^{(k)} = [1-\tau]I - (2/\mu)\tau\Delta^{(k)}_\mu$, where $(2/\mu)\tau\Delta^{(k)}_\mu$ is a diagonal matrix with positive elements. Thus, the spectral radius of $[1-\tau]I - (2/\mu)\tau M^{(k)}_\mu$ is strictly less than $1-\tau$, since the spectral radius of $M^{(k)}_\mu$ is less than $m_\mu$ and $0 < \tau < 1/(1 + (2/\mu)m_\mu)$. We deduce
$$\big\|l_{k+1} - l_k\big\|_{\mathbb{C}^n} \leq \Big\|\Big((1-\tau)I - \frac{2}{\mu}\tau M^{(k)}_\mu\Big)\big(l_k - l_{k-1}\big)\Big\|_{\mathbb{C}^n} \leq (1-\tau)\big\|l_k - l_{k-1}\big\|_{\mathbb{C}^n} \qquad (4.38)$$
because $\|A\|_2^2 = \rho(A^*A)$. So the sequence converges.

We have a proof of the convergence of the sequence $\{l_k\}_{k\in\mathbb{N}}$ if the spectral radius of the matrix $M^{(k)}_\mu$ is uniformly bounded with respect to $k$. In the next lemma, we give an estimate of the spectral radius of $M^{(k)}_\mu$ with the Frobenius norm.

Lemma 4.10. Let $\|l_0\|_{\mathbb{C}^n} \leq R$; if
$$\mu \geq 2C_{\psi^*}R\,\bigg[\ln\bigg(\frac{R - \|V\|_{\mathbb{C}^n}}{C'_\psi}\bigg) + 1\bigg]^{-1}, \qquad (4.39)$$
where $C'_\psi > 0$ verifies
$$\|\psi(F)\|_{\mathbb{C}^n} \leq C'_\psi\|F\|_{L^\infty(E,\mathbb{R},\sigma)}, \qquad (4.40)$$
then the whole sequence is bounded by $R$, and the spectral radius $\rho^{(k)}_\mu$ of $M^{(k)}_\mu$ satisfies

$$\rho^{(k)}_\mu \leq \exp\Big(-1 + \frac{2}{\mu}\,C_{\psi^*}R\Big)\Bigg(\sum_{1\leq i,j\leq n}\Big|\int_E q_i(\sigma)\,\bar q_j(\sigma)\,d\sigma\Big|^2\Bigg)^{1/2}. \qquad (4.41)$$

Proof. We prove the result by induction. We assume that there exists $k \in \mathbb{N}$ such that, for all $j \leq k$, $\|l_j\|_{\mathbb{C}^n} \leq R$; then
$$\big\|l_{k+1}\big\|_{\mathbb{C}^n} \leq (1-\tau)R + \tau\Big(\|V\|_{\mathbb{C}^n} + C'_\psi\exp\Big(-1 + \frac{2}{\mu}\,C_{\psi^*}R\Big)\Big), \qquad (4.42)$$
and condition (4.39) implies
$$\|V\|_{\mathbb{C}^n} + C'_\psi\exp\Big(-1 + \frac{2}{\mu}\,C_{\psi^*}R\Big) \leq R. \qquad (4.43)$$
Thus, for all $k \in \mathbb{N}$, we have $\|l_k\|_{\mathbb{C}^n} \leq R$, and (4.41) is a direct consequence of (4.35).

Proposition 4.11. If $(\mu, R)$ satisfies condition (4.39), then $\gamma_\mu$ has a unique fixed point $\lambda$ in the closed ball $B(0,R)$. Moreover, if $\|l_0\|_{\mathbb{C}^n} \leq R$ and $\tau$ is small enough, then $\lambda$ is the limit of the sequence $\{l_k\}_{k\in\mathbb{N}}$.

We now give a more precise description of the algorithm defined by (4.36).

Algorithm 4.12.
(1) Initialization. Given $V \in \mathbb{C}^n$, choose $l_0 \in \mathbb{C}^n$, $\mu > 0$, a tolerance $\epsilon > 0$, and $\tau \in\ ]0,1]$.
(2) Iteration $k$.
(a) Compute
$$\gamma_\mu(l_{k-1}) = V - \int_E q(\sigma)\exp\Big(-1 + \frac{2}{\mu}\,\psi^*[l_{k-1}](\sigma)\Big)d\sigma. \qquad (4.44)$$
(b) Compute $l_k = l_{k-1} - \tau\big[l_{k-1} - \gamma_\mu(l_{k-1})\big]$.
(3) Stopping criterion. If $|l_k - l_{k-1}| < \epsilon$, then STOP; else set $k := k+1$ and go to (2).
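Below is a Python sketch of Algorithm 4.12. The discretization of $E$, the synthetic kernels $q_i$, the test density, and the values of $\mu$, $\tau$, and the tolerance are all illustrative assumptions; only the iteration itself follows (4.29), (4.31), and (4.36).

```python
# Finite-dimensional relaxed fixed-point iteration of Algorithm 4.12 (sketch).
import numpy as np

rng = np.random.default_rng(2)
N, n = 400, 36
mu, tau, tol = 50.0, 0.5, 1e-10
x = np.linspace(0.0, 1.0, N)
w = np.full(N, 1.0 / N)                                     # quadrature weights, sigma(E) = 1
q = rng.standard_normal((n, N)) + 1j * rng.standard_normal((n, N))   # stand-in kernels q_i

psi_star = lambda l: np.real(np.conj(q).T @ l)              # psi*(l) = Re(sum_i l_i conj(q_i))

def gamma_mu(l, V):
    """gamma_mu(l) = V - int_E q exp(-1 + (2/mu) psi*[l]) d sigma, cf. (4.31) and (4.44)."""
    G = np.exp(-1.0 + (2.0 / mu) * psi_star(l))             # candidate density G_{mu,V}, cf. (4.29)
    return V - q @ (G * w)

F_true = np.exp(-((x - 0.4) ** 2) / 0.01)                   # known nonnegative test density
V = q @ (F_true * w)                                        # synthetic data

l = np.zeros(n, dtype=complex)                              # (1) initialization
for k in range(1, 10_001):                                  # (2) iteration
    l_new = l - tau * (l - gamma_mu(l, V))
    if np.linalg.norm(l_new - l) < tol:                     # (3) stopping criterion
        l = l_new
        break
    l = l_new

F_mu = np.exp(-1.0 + (2.0 / mu) * psi_star(l))              # reconstructed density, cf. (4.29)
alpha = np.sum(F_mu * w)                                     # total power alpha = ||F_mu||_{L1}
print(k, alpha, np.linalg.norm(q @ (F_mu * w) - V))          # iterations, power, data misfit
```

In this sketch, $\tau = 0.5$ is assumed to satisfy the bound $\tau < 1/(1 + (2/\mu)m_\mu)$ of Lemma 4.9 for the chosen $\mu$ and kernels; for a smaller $\mu$ the bound forces a smaller $\tau$, and the linear convergence rate $(1-\tau)$ degrades accordingly.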

The algorithm converges if the regularization parameter is large enough. The main advantage of this method is that it determines a vector of $\mathbb{C}^n$ which is the fixed point of a functional. Moreover, we have an analytic expression for this solution. The convergence of the algorithm is linear, since we have shown that $\|l_{k+1} - l_k\|_{\mathbb{C}^n} \leq (1-\tau)\|l_k - l_{k-1}\|_{\mathbb{C}^n}$ in Lemma 4.9. The number $\tau$ has to be chosen as large as possible for faster convergence.

Now we perform a sensitivity analysis of the optimal value function with respect to $\mu$ and $V$. Let $V \in \mathbb{C}^n$ be fixed. We suppose that, for the data $V$, the sequence (4.31) converges for all $\mu \geq \mu_o > 0$. We define the function $C_1$ by
$$C_1: [\mu_o, +\infty[\ \longrightarrow \mathbb{R}, \qquad \mu \longmapsto J_\mu\big(F_{\mu,V}\big) + \mu e^{-1}. \qquad (4.45)$$

Similarly, for any $\mu > 0$ fixed, we suppose that the sequence (4.31) converges for all $V \in B(0,R)$, where $R > 0$, and we define the function $C_2$ by
$$C_2: B(0,R) \subset \mathbb{C}^n \longrightarrow \mathbb{R}, \qquad V \longmapsto J_\mu\big(F_{\mu,V}\big) + \mu e^{-1}. \qquad (4.46)$$

Proposition 4.13. Let $C_1$ and $C_2$ be the functions defined by (4.45) and (4.46).
(1) $C_1$ is continuous and increasing.
(2) $C_2$ is continuous and verifies
$$C_2(V) \leq \|V\|^2_{\mathbb{C}^n} + \frac{\mu}{e}. \qquad (4.47)$$

Proof. Let $\mu_1 > \mu_2 \geq \mu_o$. For all $F \in K$, $J_{\mu_1}(F) + \mu_1 e^{-1} \geq J_{\mu_2}(F) + \mu_2 e^{-1}$ since $H(F) + e^{-1} \geq 0$; hence $C_1$ is increasing. To prove continuity, we suppose that there exist a sequence $\{\mu_k\}_{k\in\mathbb{N}}$ and a $\delta > 0$ such that $\mu_k \to \mu$ and, for all $k$, $|C_1(\mu_k) - C_1(\mu)| > \delta$. Let $\varepsilon > 0$ be small enough; then there exists $N \in \mathbb{N}$ such that $|\mu - \mu_N| < \delta(\mu - \varepsilon)/2C_1(\mu)$ and $|\mu - \mu_N| < \varepsilon$. Assume that $\mu_N < \mu$; then
$$J_\mu\big(F_{\mu_N,V}\big) - C_1\big(\mu_N\big) = \big(\mu - \mu_N\big)H\big(F_{\mu_N,V}\big), \qquad (4.48)$$
so $|J_\mu(F_{\mu_N,V}) - C_1(\mu_N)| < \delta/2$ since $H(F_{\mu_N,V}) \leq C_1(\mu)/\mu_N$. Hence, $J_\mu(F_{\mu_N,V}) < C_1(\mu)$. This contradiction proves the result. The case $\mu_N \geq \mu$ can be shown similarly.

We can show the continuity of $C_2$ in the same way. The inequality (4.47) is obtained by taking $F = 0$.

We have just proved the existence of the fixed point when $\mu$ is large enough; in fact, it is true for any $\mu > 0$. We use a scaling method to prove it.

Proposition 4.14. For every $\mu > 0$, the solution $F_\mu$ of problem $(\mathcal{P}_\mu)$ exists and verifies (4.13). Moreover, $F_\mu$ can be computed with the sequence $\{\ell_k\}_{k\in\mathbb{N}}$ defined by

$$\ell_0 \in \mathbb{C}^n, \qquad \ell_{k+1} = \frac{1-\tau_o}{\alpha}\,\ell_k + \tau_o\,\gamma_{\mu_o}\big(\ell_k\big) \qquad (4.49)$$
for some $\mu_o > \mu > 0$, $\tau_o \in\ ]0,1[$, and $\alpha = \mu_o/\mu$.

Proof. With Proposition 4.11, we know that there exist $\ell_0 \in \mathbb{C}^n$, $\mu_o > 0$, and $0 < \tau_o < 1$ such that the sequence $\{\ell_k\}_{k\in\mathbb{N}}$ defined by (4.36) converges. Then the solution $F_\mu$ of problem $(\mathcal{P}_\mu)$ exists for every $\mu \geq \mu_o$.
