Journal of Inequalities in Pure and Applied Mathematics
Volume 5, Issue 4, Article 101, 2004.

CHARACTERIZATIONS OF CONVEX VECTOR FUNCTIONS AND OPTIMIZATION

CLAUDIO CUSANO, MATTEO FINI AND DAVIDE LA TORRE

Dipartimento di Informatica, Università di Milano Bicocca, Milano, Italy.
E-mail: cusano@disco.unimib.it

Dipartimento di Economia Politica e Aziendale, Università di Milano, Milano, Italy.
E-mail: matteo.fini@unimib.it, davide.latorre@unimi.it

Received 29 March, 2004; accepted 01 November, 2004.
Communicated by: A.M. Rubinov

© 2000 Victoria University. ISSN (electronic): 1443-5756. 067-04


Abstract

In this paper we characterize nonsmooth convex vector functions by first and second order generalized derivatives. We also prove optimality conditions for convex vector problems involving nonsmooth data.

2000 Mathematics Subject Classification: 90C29, 90C30, 26A24.
Key words: Nonsmooth optimization, Vector optimization, Convexity.

Contents

1 Introduction
2 A First Order Generalized Derivative for Vector Functions
3 A Parabolic Second Order Generalized Derivative for Vector Functions
4 Characterizations of Convex Vector Functions
5 Optimality Conditions

References


1. Introduction

Let $f:\mathbb{R}^n \to \mathbb{R}^m$ be a given vector function and let $C \subset \mathbb{R}^m$ be a pointed closed convex cone. We say that $f$ is $C$-convex if
\[
t f(x) + (1-t) f(y) - f(tx + (1-t)y) \in C
\]
for all $x, y \in \mathbb{R}^n$ and $t \in (0,1)$. The notion of $C$-convexity has been studied by many authors because it plays a crucial role in vector optimization (see [4, 11, 13, 14] and the references therein). In this paper we prove first and second order characterizations of nonsmooth $C$-convex functions by means of first and second order generalized derivatives, and we use these results to obtain optimality criteria for vector problems.
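As a concrete illustration (added here, not part of the original paper), for the componentwise ordering cone $C = \mathbb{R}^m_+$ the condition above simply says that every component of $f$ is convex, and it can be probed numerically. The sketch below uses hypothetical example functions and randomly sampled points; a failed sample disproves $C$-convexity, while success only suggests it on the sampled region.

```python
import numpy as np

def is_C_convex_sample(f, n, trials=2000, tol=1e-9, seed=0):
    """Randomized probe of C-convexity for C = R^m_+ (componentwise cone).

    Tests whether t*f(x) + (1-t)*f(y) - f(t*x + (1-t)*y) >= 0 componentwise
    on randomly sampled x, y in R^n and t in (0, 1).
    """
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x, y = rng.normal(size=n), rng.normal(size=n)
        t = rng.uniform(0.0, 1.0)
        gap = t * f(x) + (1 - t) * f(y) - f(t * x + (1 - t) * y)
        if np.any(gap < -tol):
            return False
    return True

# Hypothetical examples: both components convex -> R^2_+-convex ...
f_ok = lambda x: np.array([np.sum(x**2), np.abs(x).sum()])
# ... second component concave -> not R^2_+-convex.
f_bad = lambda x: np.array([np.sum(x**2), -np.sum(x**2)])

print(is_C_convex_sample(f_ok, n=3))    # expected: True
print(is_C_convex_sample(f_bad, n=3))   # expected: False
```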

The notions of local minimum point and local weak minimum point are recalled in the following definition. Here and in the following, (VO) denotes the vector optimization problem $\min_{x \in X} f(x)$, where $X \subseteq \mathbb{R}^n$ is a given feasible set and minimality is understood with respect to the ordering cone $C$ (see Section 5).

Definition 1.1. A point $x_0 \in \mathbb{R}^n$ is called a local minimum point (respectively, local weak minimum point) of (VO) if there exists a neighbourhood $U$ of $x_0$ such that no $x \in U \cap X$ satisfies $f(x_0) - f(x) \in C\setminus\{0\}$ (respectively, $f(x_0) - f(x) \in \operatorname{int} C$).

A function $f:\mathbb{R}^n \to \mathbb{R}^m$ is said to be locally Lipschitz at $x_0 \in \mathbb{R}^n$ if there exist a constant $K_{x_0}$ and a neighbourhood $U$ of $x_0$ such that $\|f(x_1) - f(x_2)\| \le K_{x_0}\|x_1 - x_2\|$ for all $x_1, x_2 \in U$. By Rademacher's theorem, a locally Lipschitz function is differentiable almost everywhere (in the sense of Lebesgue measure). Then the generalized Jacobian of $f$ at $x_0$, denoted by $\partial f(x_0)$, exists and is given by
\[
\partial f(x_0) := \operatorname{cl\,conv}\{\lim_{k\to\infty} \nabla f(x_k) : x_k \to x_0,\ \nabla f(x_k) \text{ exists}\},
\]


where $\operatorname{cl\,conv}\{\dots\}$ stands for the closed convex hull of the set within the braces. Now assume that $f$ is a differentiable vector function from $\mathbb{R}^n$ to $\mathbb{R}^m$; if $\nabla f$ is locally Lipschitz at $x_0$, the generalized Hessian of $f$ at $x_0$, denoted by $\partial^2 f(x_0)$, is defined as
\[
\partial^2 f(x_0) := \operatorname{cl\,conv}\{\lim_{k\to\infty} \nabla^2 f(x_k) : x_k \to x_0,\ \nabla^2 f(x_k) \text{ exists}\}.
\]
Thus $\partial^2 f(x_0)$ is a subset of the finite dimensional space $L(\mathbb{R}^n; L(\mathbb{R}^n;\mathbb{R}^m))$ of linear operators from $\mathbb{R}^n$ to the space $L(\mathbb{R}^n;\mathbb{R}^m)$ of linear operators from $\mathbb{R}^n$ to $\mathbb{R}^m$. The elements of $\partial^2 f(x_0)$ can therefore be viewed as bilinear functions on $\mathbb{R}^n \times \mathbb{R}^n$ with values in $\mathbb{R}^m$. For the case $m = 1$, the terminology "generalized Hessian matrix" was used in [10] to denote the set $\partial^2 f(x_0)$. By the previous construction, the second order subdifferential enjoys all the properties of the generalized Jacobian. For instance, $\partial^2 f(x_0)$ is a nonempty convex compact subset of $L(\mathbb{R}^n; L(\mathbb{R}^n;\mathbb{R}^m))$ and the set valued map $x \mapsto \partial^2 f(x)$ is upper semicontinuous. Let $u \in \mathbb{R}^n$; in the following we will denote by $Lu$ the value of a linear operator $L:\mathbb{R}^n \to \mathbb{R}^m$ at the point $u \in \mathbb{R}^n$ and by $H(u,v)$ the value of a bilinear operator $H:\mathbb{R}^n\times\mathbb{R}^n \to \mathbb{R}^m$ at the point $(u,v) \in \mathbb{R}^n\times\mathbb{R}^n$. So we will set

\[
\partial f(x_0)(u) = \{Lu : L \in \partial f(x_0)\}
\]
and
\[
\partial^2 f(x_0)(u, v) = \{H(u, v) : H \in \partial^2 f(x_0)\}.
\]

Some important properties are listed in the following ([9]).

• Mean value theorem. Let $f$ be a locally Lipschitz function and $a, b \in \mathbb{R}^n$. Then
\[
f(b) - f(a) \in \operatorname{cl\,conv}\{\partial f(x)(b - a) : x \in [a, b]\},
\]


where $[a, b] = \operatorname{conv}\{a, b\}$.

• Taylor expansion. Let $f$ be a differentiable function. If $\nabla f$ is locally Lipschitz and $a, b \in \mathbb{R}^n$, then
\[
f(b) - f(a) \in \nabla f(a)(b - a) + \tfrac{1}{2}\operatorname{cl\,conv}\{\partial^2 f(x)(b - a, b - a) : x \in [a, b]\}.
\]


2. A First Order Generalized Derivative for Vector Functions

Let $f:\mathbb{R}^n \to \mathbb{R}$ be a given function and $x_0 \in \mathbb{R}^n$. For such a function, the Dini generalized derivative $f^D$ of $f$ at $x_0$ in the direction $u \in \mathbb{R}^n$ is defined as
\[
f^D(x_0; u) = \limsup_{s \downarrow 0} \frac{f(x_0 + su) - f(x_0)}{s}.
\]

Now let $f:\mathbb{R}^n \to \mathbb{R}^m$ be a vector function and $x_0 \in \mathbb{R}^n$. We can define a set-valued generalized derivative at $x_0 \in \mathbb{R}^n$ in the sense of Dini as follows:
\[
f'_D(x_0; u) = \left\{\, l = \lim_{k \to +\infty} \frac{f(x_0 + s_k u) - f(x_0)}{s_k},\ s_k \downarrow 0 \,\right\}.
\]
The previous set can be empty; however, if $f$ is locally Lipschitz at $x_0$ then $f'_D(x_0; u)$ is a nonempty compact subset of $\mathbb{R}^m$. The following remark and lemma state the relations between the scalar and the vector case.

Remark 2.1. If $f(x) = (f_1(x), \dots, f_m(x))$ then from the previous definition it is not difficult to prove that
\[
f'_D(x_0; u) \subset (f_1)'_D(x_0; u) \times \cdots \times (f_m)'_D(x_0; u).
\]
We now show that this inclusion may be strict. Let us consider the function $f(x) = (x\sin(x^{-1}),\ x\cos(x^{-1}))$, with $f(0) = (0, 0)$; for it we have
\[
f'_D(0; 1) \subset \{d \in \mathbb{R}^2 : \|d\| = 1\},
\]
while
\[
(f_1)'_D(0; 1) = (f_2)'_D(0; 1) = [-1, 1].
\]
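Numerically (an added illustration, not part of the paper), the difference quotients of this $f$ at $x_0 = 0$ in the direction $u = 1$ all lie on the unit circle, whereas each scalar component separately produces quotients filling $[-1, 1]$; this is exactly the strictness of the inclusion above.

```python
import numpy as np

def f(x):
    # f(0) = (0, 0), so that f is defined (and locally Lipschitz) at 0.
    if x == 0.0:
        return np.zeros(2)
    return np.array([x * np.sin(1.0 / x), x * np.cos(1.0 / x)])

# Difference quotients (f(s*u) - f(0)) / s along a sequence s_k -> 0, with u = 1.
s = np.logspace(-1, -6, 400)
q = np.array([f(sk) / sk for sk in s])

# Every vector quotient lies on the unit circle ...
print(np.allclose(np.linalg.norm(q, axis=1), 1.0))        # True

# ... while each component by itself sweeps (almost all of) [-1, 1].
print(q[:, 0].min(), q[:, 0].max())
print(q[:, 1].min(), q[:, 1].max())
```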


Lemma 2.1. Let $f:\mathbb{R}^n \to \mathbb{R}^m$ be a vector function which is locally Lipschitz at $x_0 \in \mathbb{R}^n$. Then, for all $\xi \in \mathbb{R}^m$ and all $u \in \mathbb{R}^n$, we have $(\xi f)^D(x_0; u) \in \xi f'_D(x_0; u)$.

Proof. There exists a sequence $s_k \downarrow 0$ such that
\[
(\xi f)^D(x_0; u) = \limsup_{s \downarrow 0} \frac{(\xi f)(x_0 + su) - (\xi f)(x_0)}{s} = \lim_{k \to +\infty} \frac{(\xi f)(x_0 + s_k u) - (\xi f)(x_0)}{s_k}.
\]
By trivial calculations and, if necessary, by extracting subsequences, we obtain
\[
(\xi f)^D(x_0; u) = \sum_{i=1}^m \xi_i \lim_{k \to +\infty} \frac{f_i(x_0 + s_k u) - f_i(x_0)}{s_k} = \sum_{i=1}^m \xi_i l_i = \xi l
\]
with $l \in f'_D(x_0; u)$, and then $(\xi f)^D(x_0; u) \in \xi f'_D(x_0; u)$.

Corollary 2.2. Let $f:\mathbb{R}^n \to \mathbb{R}^m$ be a function which is differentiable at $x_0 \in \mathbb{R}^n$. Then $f'_D(x_0; u) = \{\nabla f(x_0)\, u\}$ for all $u \in \mathbb{R}^n$.

We now prove a generalized mean value theorem for $f'_D$.

Lemma 2.3 ([6]). Let $f:\mathbb{R}^n \to \mathbb{R}$ be a locally Lipschitz function. Then for all $a, b \in \mathbb{R}^n$ there exists $\alpha \in [a, b]$ such that
\[
f(b) - f(a) \le f^D(\alpha; b - a).
\]

Theorem 2.4. Let $f:\mathbb{R}^n \to \mathbb{R}^m$ be a locally Lipschitz vector function and $a, b \in \mathbb{R}^n$. Then the following generalized mean value theorem holds:
\[
0 \in f(b) - f(a) - \operatorname{cl\,conv}\{f'_D(x; b - a) : x \in [a, b]\}.
\]


Proof. For each $\xi \in \mathbb{R}^m$, by Lemmas 2.1 and 2.3 applied to $\xi f$ we have
\[
(\xi f)(b) - (\xi f)(a) \le (\xi f)^D(\alpha; b - a) = \xi l_\xi, \qquad l_\xi \in f'_D(\alpha; b - a),
\]
where $\alpha \in [a, b]$, and then
\[
\xi\big(f(b) - f(a) - l_\xi\big) \le 0, \qquad l_\xi \in f'_D(\alpha; b - a),
\]
so that
\[
\xi\Big(f(b) - f(a) - \operatorname{cl\,conv}\{f'_D(x; b - a) : x \in [a, b]\}\Big) \cap \mathbb{R}_- \neq \emptyset \qquad \text{for all } \xi \in \mathbb{R}^m,
\]
and a classical separation theorem implies
\[
0 \in f(b) - f(a) - \operatorname{cl\,conv}\{f'_D(x; b - a) : x \in [a, b]\}.
\]

Theorem 2.5. Let $f:\mathbb{R}^n \to \mathbb{R}^m$ be a vector function which is locally Lipschitz at $x_0$. Then $f'_D(x_0; u) \subset \partial f(x_0)(u)$ for all $u \in \mathbb{R}^n$.

Proof. Let $l \in f'_D(x_0; u)$. Then there exists a sequence $s_k \downarrow 0$ such that
\[
l = \lim_{k \to +\infty} \frac{f(x_0 + s_k u) - f(x_0)}{s_k}.
\]
By the mean value theorem and the upper semicontinuity of $\partial f$, for every $\epsilon > 0$ there exists $k_0(\epsilon)$ such that, for all $k \ge k_0(\epsilon)$,
\[
\frac{f(x_0 + s_k u) - f(x_0)}{s_k} \in \operatorname{cl\,conv}\{\partial f(x)(u) : x \in [x_0, x_0 + s_k u]\} \subset \partial f(x_0)(u) + \epsilon B,
\]
where $B$ is the unit ball of $\mathbb{R}^m$. So $l \in \partial f(x_0)(u) + \epsilon B$. Taking the limit as $\epsilon \to 0$, we obtain $l \in \partial f(x_0)(u)$.


Example 2.1. Let $f:\mathbb{R} \to \mathbb{R}^2$, $f(x) = (x^2\sin(x^{-1}) + x^2,\ x^2)$, with $f(0) = (0, 0)$. Then $f$ is locally Lipschitz at $x_0 = 0$ and $f'_D(0; 1) = \{(0, 0)\} \subsetneq \partial f(0)(1) = [-1, 1] \times \{0\}$, so the inclusion of Theorem 2.5 can be strict.
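A small numerical check of this example (added here as an illustration): the difference quotients at $0$ collapse to $(0,0)$, while the Jacobians $\nabla f(x_k)$ sampled near $0$ have first components spread over $[-1, 1]$, consistent with $\partial f(0)(1) = [-1, 1]\times\{0\}$.

```python
import numpy as np

f1 = lambda x: x**2 * np.sin(1.0 / x) + x**2 if x != 0 else 0.0
f2 = lambda x: x**2

# Difference quotients (f(s) - f(0)) / s -> (0, 0), i.e. f'_D(0; 1) = {(0, 0)}.
s = np.logspace(-1, -7, 200)
quot = np.array([[f1(sk) / sk, f2(sk) / sk] for sk in s])
print(np.abs(quot).max())                      # small

# Derivatives at nearby points where f is differentiable (x != 0):
# f1'(x) = 2x sin(1/x) - cos(1/x) + 2x,  f2'(x) = 2x.
df1 = lambda x: 2 * x * np.sin(1.0 / x) - np.cos(1.0 / x) + 2 * x
df2 = lambda x: 2 * x
grads = np.array([[df1(xk), df2(xk)] for xk in s])
print(grads[:, 0].min(), grads[:, 0].max())    # roughly -1 and 1
print(np.abs(grads[:, 1]).max())               # close to 0
```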


3. A Parabolic Second Order Generalized Derivative for Vector Functions

In this section we introduce a second order generalized derivative for differentiable functions. We consider a rather different kind of approach, relying on the Kuratowski limit. It can be considered somehow a global one, since set-valued directional derivatives of vector-valued functions are introduced without relying on components. Unlike the first order case, there is no common agreement on the most appropriate second order incremental ratio; in this section the choice goes to the second order parabolic ratio
\[
h^2 f(x, t, w, d) = 2t^{-2}\big[f(x + td + 2^{-1}t^2 w) - f(x) - t\nabla f(x)\cdot d\big]
\]
introduced in [1]. In fact, if $f$ is twice differentiable at $x_0$, then
\[
h^2 f(x_0, t_k, w, d) \to \nabla f(x_0)\cdot w + \nabla^2 f(x_0)(d, d)
\]
for any sequence $t_k \downarrow 0$. Just supposing that $f$ is differentiable at $x_0$, we can introduce the following second order set-valued directional derivative in the same fashion as the first order one.

Definition 3.1. Let $f:\mathbb{R}^n \to \mathbb{R}^m$ be a vector function which is differentiable at $x_0 \in \mathbb{R}^n$. The second order parabolic set-valued derivative of $f$ at the point $x_0$ in the directions $d, w \in \mathbb{R}^n$ is defined as
\[
D^2 f(x_0)(d, w) = \left\{\, l : l = \lim_{k \to +\infty} 2\,\frac{f(x_0 + t_k d + 2^{-1} t_k^2 w) - f(x_0) - t_k \nabla f(x_0) d}{t_k^2},\ t_k \downarrow 0 \,\right\}.
\]
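For a twice differentiable $f$ the parabolic ratio indeed converges to $\nabla f(x_0)\cdot w + \nabla^2 f(x_0)(d, d)$, as noted above. The sketch below (an added illustration with a hypothetical smooth example) evaluates $h^2 f$ for shrinking $t$ and compares it with that limit.

```python
import numpy as np

def h2(f, grad_f, x0, t, w, d):
    """Second order parabolic incremental ratio h^2 f(x0, t, w, d)."""
    return 2.0 / t**2 * (f(x0 + t * d + 0.5 * t**2 * w) - f(x0) - t * grad_f(x0) @ d)

# Hypothetical smooth example f : R^2 -> R^2.
f = lambda x: np.array([x[0]**2 + x[0] * x[1], np.sin(x[0]) + x[1]**2])
grad_f = lambda x: np.array([[2 * x[0] + x[1], x[0]],
                             [np.cos(x[0]),    2 * x[1]]])   # rows = gradients

x0 = np.array([0.3, -0.2])
d, w = np.array([1.0, 2.0]), np.array([0.5, -1.0])

# Limit computed by hand: grad f(x0) w + (d' H_i d)_i with the two Hessians.
H1 = np.array([[2.0, 1.0], [1.0, 0.0]])              # Hessian of f_1 (constant)
H2 = np.array([[-np.sin(x0[0]), 0.0], [0.0, 2.0]])   # Hessian of f_2 at x0
limit = grad_f(x0) @ w + np.array([d @ H1 @ d, d @ H2 @ d])

for t in [1e-1, 1e-2, 1e-3, 1e-4]:
    print(t, np.linalg.norm(h2(f, grad_f, x0, t, w, d) - limit))   # error -> 0
```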


This notion generalizes to the vector case the notion of parabolic derivative introduced by Ben-Tal and Zowe in [1]. The following result states some properties of the parabolic derivative.

Proposition 3.1. Suppose $f = (\varphi_1, \varphi_2)$ with $\varphi_i : \mathbb{R}^n \to \mathbb{R}^{m_i}$, $m_1 + m_2 = m$. Then:

(i) $D^2 f(x_0)(d, w) \subseteq D^2\varphi_1(x_0)(d, w) \times D^2\varphi_2(x_0)(d, w)$;

(ii) if $\varphi_2$ is twice differentiable at $x_0$, then
\[
D^2 f(x_0)(d, w) = D^2\varphi_1(x_0)(d, w) \times \{\nabla\varphi_2(x_0)\cdot w + \nabla^2\varphi_2(x_0)(d, d)\}.
\]

Proof. Trivial.

The following example shows that the inclusion in (i) may be strict.

Example 3.1. Consider the function $f:\mathbb{R} \to \mathbb{R}^2$, $f(x) = (\varphi_1(x), \varphi_2(x))$, where
\[
\varphi_1(x) =
\begin{cases}
x^2 \sin \ln|x|, & x \neq 0,\\
0, & x = 0,
\end{cases}
\qquad
\varphi_2(x) =
\begin{cases}
-x^2 \sin^3 \ln|x|\,(2 - \cos x), & x \neq 0,\\
0, & x = 0.
\end{cases}
\]
It is easy to check that $\nabla f(0) = (0, 0)$,
\[
D^2(\varphi_1, \varphi_2)(0)(d, w) \subset \{l = (l_1, l_2) : l_1 l_2 \le 0\}, \qquad\text{so that}\qquad D^2(\varphi_1, \varphi_2)(0)(d, w) \cap (-\operatorname{int}\mathbb{R}^2_+) = \emptyset,
\]
while
\[
D^2\varphi_1(0)(d, w) = D^2\varphi_2(0)(d, w) = [-2d^2, 2d^2],
\]
and this shows that $D^2(\varphi_1, \varphi_2)(0)(d, w) \neq D^2\varphi_1(0)(d, w) \times D^2\varphi_2(0)(d, w)$.


Proposition 3.2. Suppose $f$ is differentiable in a neighbourhood of $x_0 \in \mathbb{R}^n$. Then the equality
\[
D^2 f(x_0)(d, w) = \nabla f(x_0)\cdot w + \overline{\partial}^2 f(x_0)(d, d)
\]
holds for any $d, w \in \mathbb{R}^n$, where $\overline{\partial}^2 f(x_0)(d, d)$ denotes the set of all cluster points of the sequences $\big\{2t_k^{-2}\big[f(x_0 + t_k d) - f(x_0) - t_k \nabla f(x_0)\cdot d\big]\big\}$ with $t_k \downarrow 0$.

Proof. Trivial.

Proposition 3.3. Suppose $\nabla f$ is locally Lipschitz at $x_0$. Then $D^2 f(x_0)(d, w) \subseteq \nabla f(x_0)\cdot w + \partial^2 f(x_0)(d, d)$ for all $d, w \in \mathbb{R}^n$.

Proof. Let $z \in D^2 f(x_0)(d, w)$. Then we have $h^2 f(x_0, t_k, w, d) \to z$ for some suitable $t_k \downarrow 0$. Let us introduce the two sequences
\[
a_k = 2t_k^{-2}\big[f(x_0 + t_k d + 2^{-1}t_k^2 w) - f(x_0 + t_k d)\big]
\]
and
\[
b_k = 2t_k^{-2}\big[f(x_0 + t_k d) - f(x_0) - t_k \nabla f(x_0)\cdot d\big],
\]
so that $h^2 f(x_0, t_k, w, d) = a_k + b_k$. Since $f$ is differentiable near $x_0$, $a_k$ converges to $\nabla f(x_0)\cdot w$, and thus $b_k$ converges to $z_1 = z - \nabla f(x_0)\cdot w$. Therefore, the thesis follows if $z_1 \in \partial^2 f(x_0)(d, d)$. Given any $\theta \in \mathbb{R}^m$, let us introduce the functions
\[
\varphi_1(t) = 2\big[(\theta\cdot f)(x_0 + td) - (\theta\cdot f)(x_0) - t\nabla(\theta\cdot f)(x_0)\cdot d\big], \qquad \varphi_2(t) = t^2,
\]


where $(\theta\cdot f)(x) = \theta\cdot f(x)$. Thus, by the Cauchy mean value theorem, we have
\[
\theta\cdot b_k = \frac{\varphi_1(t_k) - \varphi_1(0)}{\varphi_2(t_k) - \varphi_2(0)} = \frac{\varphi_1'(\xi_k)}{\varphi_2'(\xi_k)}
\]
for some $\xi_k \in [0, t_k]$. Since this sequence converges to $\theta\cdot z_1$, we also have, up to a subsequence,
\[
\lim_{k\to+\infty} \frac{\varphi_1'(\xi_k)}{\varphi_2'(\xi_k)} = \theta\cdot \lim_{k\to+\infty} \big\{\xi_k^{-1}\big[\nabla f(x_0 + \xi_k d) - \nabla f(x_0)\big]\cdot d\big\} = \theta\cdot z_\theta
\]
for some $z_\theta \in \partial^2 f(x_0)(d, d)$. Hence the above argument implies that, given any $\theta \in \mathbb{R}^m$, we have $\theta\cdot(z_1 - z_\theta) = 0$ for some $z_\theta \in \partial^2 f(x_0)(d, d)$. Since the generalized Hessian is a compact convex set, the strict separation theorem implies that $z_1 \in \partial^2 f(x_0)(d, d)$.

The following example shows that the above inclusion may be strict.

Example 3.2. Consider the function
\[
f(x_1, x_2) = \big([\max\{0, x_1 + x_2\}]^2,\ x_2^2\big).
\]
Then easy calculations show that $f$ is differentiable with $\nabla f_1(x_1, x_2) = (0, 0)$ whenever $x_2 = -x_1$ and $\nabla f_2(x_1, x_2) = (0, 2x_2)$. Moreover $\nabla f$ is locally Lipschitz near $x_0 = (0, 0)$ and actually $f$ is twice differentiable at any $x$ with $x_2 \neq -x_1$, with
\[
\nabla^2 f_1(x)(d, d) =
\begin{cases}
2(d_1 + d_2)^2 & \text{if } x_1 + x_2 > 0,\\
0 & \text{if } x_1 + x_2 < 0,
\end{cases}
\]


and $\nabla^2 f_2(x)(d, d) = 2d_2^2$. Therefore, we have
\[
\partial^2 f(x_0)(d, d) = \big\{\big(2\alpha (d_1 + d_2)^2,\ 2d_2^2\big) : \alpha \in [0, 1]\big\}.
\]
On the other hand, it is easy to check that $D^2 f(x_0)(d, w) = \{(2(d_1 + d_2)^2,\ 2d_2^2)\}$ whenever $d_1 + d_2 \ge 0$, so the inclusion of Proposition 3.3 is strict for $d_1 + d_2 > 0$.
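A numerical probe of this example (added here as an illustration): the parabolic ratio at $x_0 = (0,0)$ converges to the single point $(2(d_1+d_2)^2,\ 2d_2^2)$, i.e. the endpoint $\alpha = 1$ of the segment $\partial^2 f(x_0)(d, d)$, which is exactly how the inclusion becomes strict.

```python
import numpy as np

f = lambda x: np.array([max(0.0, x[0] + x[1])**2, x[1]**2])
grad_f0 = np.zeros((2, 2))              # gradient of f at x0 = (0, 0) is zero

x0 = np.zeros(2)
d = np.array([1.0, 2.0])                # d1 + d2 > 0
w = np.array([-3.0, 0.5])               # the parabolic direction is irrelevant here

def h2(t):
    return 2.0 / t**2 * (f(x0 + t * d + 0.5 * t**2 * w) - f(x0) - t * grad_f0 @ d)

for t in [1e-1, 1e-2, 1e-3]:
    print(t, h2(t))
# Tends to (2*(d1 + d2)^2, 2*d2^2) = (18, 8): a single point, while the set
# grad f(x0) w + Hessian segment is {(2*a*(d1 + d2)^2, 2*d2^2) : a in [0, 1]}.
```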


4. Characterizations of Convex Vector Functions

Theorem 4.1. If $f:\mathbb{R}^n \to \mathbb{R}^m$ is $C$-convex, then
\[
f(x) - f(x_0) \in f'_D(x_0; x - x_0) + C
\]
for all $x, x_0 \in \mathbb{R}^n$.

Proof. Since $f$ is $C$-convex, it is locally Lipschitz at $x_0$ [12], so $f'_D(x_0; x - x_0)$ is nonempty. By $C$-convexity, for all $x \in \mathbb{R}^n$ and $t \in (0, 1)$ we have
\[
t\big(f(x) - f(x_0)\big) \in f(tx + (1 - t)x_0) - f(x_0) + C.
\]
Let $l \in f'_D(x_0; x - x_0)$; then there exists $t_k \downarrow 0$ such that
\[
\frac{f(x_0 + t_k(x - x_0)) - f(x_0)}{t_k} \to l
\quad\text{and}\quad
f(x) - f(x_0) \in \frac{f(x_0 + t_k(x - x_0)) - f(x_0)}{t_k} + C.
\]
Taking the limit as $k \to +\infty$ and using the closedness of $C$, this implies $f(x) - f(x_0) \in l + C \subset f'_D(x_0; x - x_0) + C$.

Corollary 4.2. If $f:\mathbb{R}^n \to \mathbb{R}^m$ is $C$-convex and differentiable at $x_0$, then
\[
f(x) - f(x_0) \in \nabla f(x_0)(x - x_0) + C
\]
for all $x \in \mathbb{R}^n$.
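Corollary 4.2 can be probed numerically for $C = \mathbb{R}^2_+$ (a sketch added here, with a hypothetical componentwise-convex example): the vector $f(x) - f(x_0) - \nabla f(x_0)(x - x_0)$ should have nonnegative components for every $x$.

```python
import numpy as np

# Hypothetical R^2_+-convex differentiable example: both components are convex.
f = lambda x: np.array([np.sum(x**2), np.exp(x[0] - x[1])])

def jac(x):
    # Rows are the gradients of f_1 and f_2.
    return np.vstack([2 * x, np.exp(x[0] - x[1]) * np.array([1.0, -1.0, 0.0])])

rng = np.random.default_rng(1)
x0 = rng.normal(size=3)
worst = np.inf
for _ in range(5000):
    x = rng.normal(size=3)
    gap = f(x) - f(x0) - jac(x0) @ (x - x0)
    worst = min(worst, gap.min())

# The smallest component of the "convexity gap" should be (numerically) >= 0.
print(worst >= -1e-9)     # expected: True
```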

The following result gives a second order necessary condition for the $C$-convexity of $f$ in terms of $D^2 f$.


Theorem 4.3. Let $f:\mathbb{R}^n \to \mathbb{R}^m$ be a $C$-convex function which is differentiable at $x_0 \in \mathbb{R}^n$. Then we have
\[
D^2 f(x_0)(x - x_0, 0) \subset C
\]
for all $x \in \mathbb{R}^n$.

Proof. If $D^2 f(x_0)(x - x_0, 0)$ is empty, the thesis is trivial. Otherwise, let $l \in D^2 f(x_0)(x - x_0, 0)$. Then there exists $t_k \downarrow 0$ such that
\[
l = \lim_{k \to +\infty} 2\,\frac{f(x_0 + t_k(x - x_0)) - f(x_0) - t_k \nabla f(x_0)(x - x_0)}{t_k^2}.
\]
Since $f$ is a differentiable $C$-convex function, Corollary 4.2 gives $f(x_0 + t_k(x - x_0)) - f(x_0) - t_k \nabla f(x_0)(x - x_0) \in C$, and since $C$ is a closed cone this implies the thesis.


5. Optimality Conditions

We are now interested in proving optimality conditions for the problem
\[
\text{(VO)} \qquad \min_{x \in X} f(x),
\]
where $X$ is a given subset of $\mathbb{R}^n$ and minima are understood in the sense of Definition 1.1. The following definition states some notions of local approximation of $X$ at $x_0 \in \operatorname{cl} X$.

Definition 5.1.

The cone of feasible directions of $X$ at $x_0$ is the set
\[
F(X, x_0) = \{d \in \mathbb{R}^n : \exists \alpha > 0 \ \text{s.t.}\ x_0 + td \in X,\ \forall t \in (0, \alpha]\}.
\]

The cone of weak feasible directions of $X$ at $x_0$ is the set
\[
WF(X, x_0) = \{d \in \mathbb{R}^n : \exists t_k \downarrow 0 \ \text{s.t.}\ x_0 + t_k d \in X\}.
\]

The contingent cone of $X$ at $x_0$ is the set
\[
T(X, x_0) := \{w \in \mathbb{R}^n : \exists w_k \to w,\ \exists t_k \downarrow 0 \ \text{s.t.}\ x_0 + t_k w_k \in X\}.
\]

The second order contingent set of $X$ at $x_0$ in the direction $d \in \mathbb{R}^n$ is the set
\[
T^2(X, x_0, d) := \{w \in \mathbb{R}^n : \exists t_k \downarrow 0,\ \exists w_k \to w \ \text{s.t.}\ x_0 + t_k d + 2^{-1} t_k^2 w_k \in X\}.
\]


The lower second order contingent set of $X$ at $x_0 \in \operatorname{cl} X$ in the direction $d \in \mathbb{R}^n$ is the set
\[
T^{ii}(X, x_0, d) := \{w \in \mathbb{R}^n : \forall t_k \downarrow 0,\ \exists w_k \to w \ \text{s.t.}\ x_0 + t_k d + 2^{-1} t_k^2 w_k \in X\}.
\]
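To make these second order sets concrete, here is a rough numerical probe (an added illustration using the hypothetical set $X = \{(x_1, x_2) : x_2 \ge x_1^2\}$, which is not from the paper). At $x_0 = (0,0)$ with $d = (1, 0)$ one can check by hand that $T^2(X, x_0, d) = \{w : w_2 \ge 2\}$; the probe below simply tests whether the parabolic points $x_0 + t d + 2^{-1}t^2 w$ stay in $X$ for small $t$, which detects membership for candidates $w$ up to boundary effects.

```python
import numpy as np

in_X = lambda x: x[1] >= x[0]**2 - 1e-12        # X = {(x1, x2) : x2 >= x1^2}

def parabolic_probe(x0, d, w, ts=np.logspace(-1, -5, 50)):
    """Rough membership test for T^2(X, x0, d): checks feasibility of the
    parabolic points x0 + t d + t^2/2 * w for a range of small t.  A full
    check would also allow perturbed sequences w_k -> w."""
    return all(in_X(x0 + t * d + 0.5 * t**2 * w) for t in ts)

x0, d = np.zeros(2), np.array([1.0, 0.0])
print(parabolic_probe(x0, d, np.array([0.0, 2.5])))   # True:  w2 = 2.5 >= 2
print(parabolic_probe(x0, d, np.array([0.0, 2.0])))   # True:  boundary case w2 = 2
print(parabolic_probe(x0, d, np.array([0.0, 1.0])))   # False: w2 = 1 < 2
```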

Theorem 5.1. Let $x_0$ be a local weak minimum point. Then for all $d \in F(X, x_0)$ we have
\[
f'_D(x_0; d) \cap (-\operatorname{int} C) = \emptyset.
\]
If $f$ is locally Lipschitz at $x_0$ then, for all $d \in WF(X, x_0)$, we have
\[
f'_D(x_0; d) \cap (-\operatorname{int} C)^c \neq \emptyset.
\]

Proof. If $f'_D(x_0; d)$ is empty, the first assertion is trivial. Otherwise, suppose $l \in f'_D(x_0; d) \cap (-\operatorname{int} C)$; then $l = \lim_{k\to+\infty} \frac{f(x_0 + t_k d) - f(x_0)}{t_k}$ for some $t_k \downarrow 0$, and since $-\operatorname{int} C$ is an open cone, $f(x_0 + t_k d) - f(x_0) \in -\operatorname{int} C$ for all $k$ sufficiently large; as $d \in F(X, x_0)$, the points $x_0 + t_k d$ belong to $X$ for $k$ large, contradicting the weak minimality of $x_0$.

Suppose now that $f$ is locally Lipschitz. In this case $f'_D(x_0; d)$ is nonempty for all $d \in \mathbb{R}^n$. Ab absurdo, suppose $f'_D(x_0; d) \subset -\operatorname{int} C$ for some $d \in WF(X, x_0)$. Let $x_k = x_0 + t_k d$ be a sequence such that $x_k \in X$ and $t_k \downarrow 0$; by extracting a subsequence, we have
\[
l = \lim_{k \to +\infty} \frac{f(x_0 + t_k d) - f(x_0)}{t_k}
\]
and $l \in f'_D(x_0; d) \subset -\operatorname{int} C$. Since $\operatorname{int} C$ is open, for $k$ large enough we have $f(x_0 + t_k d) \in f(x_0) - \operatorname{int} C$, again contradicting the weak minimality of $x_0$.


Theorem 5.2. If $x_0 \in X$ is a local weak minimum point, then for each $d \in D(f, x_0) \cap T(X, x_0)$ the condition
\[
(5.1) \qquad D^2 f(x_0)(d, d + w) \cap (-\operatorname{int} C) = \emptyset
\]
holds for any $w \in T^{ii}(X, x_0, d)$. Furthermore, if $\nabla f$ is locally Lipschitz at $x_0$, then the condition
\[
(5.2) \qquad D^2 f(x_0)(d, d + w) \not\subseteq -\operatorname{int} C
\]
holds for any $d \in D(f, x_0) \cap T(X, x_0)$ and any $w \in T^2(X, x_0, d)$. Here $D(f, x_0) := \{d \in \mathbb{R}^n : \nabla f(x_0)\cdot d \in -C\}$.

Proof. Ab absurdo, suppose there exist suitable $d$ and $w$ such that (5.1) does not hold. Then, given any $z \in D^2 f(x_0)(d, d + w) \cap (-\operatorname{int} C)$, there exists a sequence $t_k \downarrow 0$ such that $h^2 f(x_0, t_k, d + w, d) \to z$. By the definition of the lower second order contingent set there exists also a sequence $w_k \to w$ such that $x_k = x_0 + t_k d + 2^{-1} t_k^2 w_k \in X$. Introducing also the sequence of points $\hat{x}_k = x_0 + t_k d + 2^{-1} t_k^2 (d + w)$, we have both
\[
f(x_k) - f(\hat{x}_k) = 2^{-1} t_k^2 \big[\nabla f(\hat{x}_k)\cdot(w_k - w - d) + \varepsilon^{(1)}_k\big]
\]
with $\varepsilon^{(1)}_k \to 0$, and
\[
f(\hat{x}_k) - f(x_0) = t_k \nabla f(x_0)\cdot d + 2^{-1} t_k^2 \big[z + \varepsilon^{(2)}_k\big]
\]


with $\varepsilon^{(2)}_k \to 0$. Therefore, we have
\[
f(x_k) - f(x_0) = t_k\Big\{\big(1 - 2^{-1} t_k\big)\nabla f(x_0)\cdot d + 2^{-1} t_k\big[\nabla f(\hat{x}_k)\cdot(w_k - w) + z + \varepsilon^{(1)}_k + \varepsilon^{(2)}_k\big]\Big\}.
\]
Since
\[
\lim_{k\to\infty} \big[\nabla f(\hat{x}_k)\cdot(w_k - w) + z + \varepsilon^{(1)}_k + \varepsilon^{(2)}_k\big] = z \in -\operatorname{int} C
\]
and
\[
\big(1 - 2^{-1} t_k\big)\nabla f(x_0)\cdot d \in -C,
\]
for $k$ large enough we have
\[
f(x_k) - f(x_0) \in -(C + \operatorname{int} C) = -\operatorname{int} C,
\]
in contradiction with the optimality of $x_0$.

Analogously, suppose there exist suitable $d$ and $w$ such that (5.2) does not hold. By the definition of the second order contingent set, there exist sequences $t_k \downarrow 0$ and $w_k \to w$ such that $x_0 + t_k d + 2^{-1} t_k^2 w_k \in X$. Passing to a suitable subsequence, we can suppose that $h^2 f(x_0, t_k, d + w, d) \to z$ for some $z \in \mathbb{R}^m$. Then we have $z \in D^2 f(x_0)(d, d + w) \subseteq -\operatorname{int} C$, and we reach a contradiction just as in the previous case.

The following example shows that the previous second order condition is not sufficient for the optimality of $x_0$.


Example 5.1. Suppose $C = \mathbb{R}^2_+$ and $f:\mathbb{R}^3 \to \mathbb{R}^2$ with
\[
f_1(x_1, x_2, x_3) = x_1^2 + 2x_2^3 - x_3, \qquad f_2(x_1, x_2, x_3) = x_2^3 - x_3,
\]
\[
X = \{x \in \mathbb{R}^3 : x_1^2 \le 4x_3 \le 2x_1^2,\ x_1^2 + x_2^3 \ge 0\}.
\]
Choosing the point $x_0 = (0, 0, 0)$, for any nonzero $d \in T(X, x_0) \cap D(f, x_0) = \mathbb{R}\times\mathbb{R}\times\{0\}$ one has
\[
T^2(X, x_0, d) \subseteq \mathbb{R}\times\mathbb{R}\times[2^{-1}d_1^2,\ d_1^2].
\]
Therefore
\[
D^2 f(x_0)(d, d + w) = \{(2d_1^2 - w_3,\ -w_3)\} \qquad\text{and}\qquad D^2 f(x_0)(d, d + w) \cap (-\operatorname{int}\mathbb{R}^2_+) = \emptyset
\]
for any $w \in T^2(X, x_0, d)$. However, $x_0$ is not a local weak minimum point, since both $f_1$ and $f_2$ are negative along the curve described by the feasible points $x_t = (t^3, -t^2, 2^{-1}t^6)$ for $t \neq 0$.

There are at least two good explanations for such a fact. The second order contingent sets may be empty and the corresponding optimality conditions are meaningless in such a case, since they are obviously satisfied by any objective function. Furthermore, there is no convincing reason why it should be enough to test optimality only along parabolic curves, as the above example corroborates.

The following result states a sufficient condition for the optimality of $x_0$ when $f$ is a $C$-convex function.

Definition 5.2. A subset $X \subset \mathbb{R}^n$ is said to be star-shaped at $x_0$ if $[x_0, x] \subset X$ for all $x \in X$.


Theorem 5.3. Let $X$ be star-shaped at $x_0$. If $f$ is $C$-convex and $f'_D(x_0; x - x_0) \subset (-\operatorname{int} C)^c$ for all $x \in X$, then $x_0$ is a weak minimum point.

Proof. By Theorem 4.1 we have, for all $x \in X$,
\[
f(x) - f(x_0) \in f'_D(x_0; x - x_0) + C \subset (-\operatorname{int} C)^c + C \subset (-\operatorname{int} C)^c,
\]
and this implies the thesis.


References

[1] A. BEN-TAL AND J. ZOWE, Directional derivatives in nonsmooth optimization, J. Optim. Theory Appl., 47(4) (1985), 483–490.

[2] S. BOLINTINANU, Approximate efficiency and scalar stationarity in unbounded nonsmooth convex vector optimization problems, J. Optim. Theory Appl., 106(2) (2000), 265–296.

[3] J.M. BORWEIN AND D.M. ZHUANG, Super efficiency in convex vector optimization, Z. Oper. Res., 35(3) (1991), 175–184.

[4] S. DENG, Characterizations of the nonemptiness and compactness of solution sets in convex vector optimization, J. Optim. Theory Appl., 96(1) (1998), 123–131.

[5] S. DENG, On approximate solutions in convex vector optimization, SIAM J. Control Optim., 35(6) (1997), 2128–2136.

[6] W.E. DIEWERT, Alternative characterizations of six kinds of quasiconcavity in the nondifferentiable case with applications to nonsmooth programming, in Generalized Concavity in Optimization and Economics (S. Schaible and W.T. Ziemba, eds.), Academic Press, New York, 1981, 51–95.

[7] B.M. GLOVER, Generalized convexity in nondifferentiable programming, Bull. Austral. Math. Soc., 30 (1984), 193–218.

[8] V. GOROKHOVIK, Convex and nonsmooth problems of vector optimization (Russian), Navuka i Tekhnika, Minsk, 1990.


[9] A. GUERRAGGIO AND D.T. LUC, Optimality conditions for $C^{1,1}$ vector optimization problems, J. Optim. Theory Appl., 109(3) (2001), 615–629.

[10] J.B. HIRIART-URRUTY, J.J. STRODIOT AND V.H. NGUYEN, Generalized Hessian matrix and second order optimality conditions for problems with $C^{1,1}$ data, Appl. Math. Optim., 11 (1984), 43–56.

[11] X.X. HUANG AND X.Q. YANG, Characterizations of nonemptiness and compactness of the set of weakly efficient solutions for convex vector optimization and applications, J. Math. Anal. Appl., 264(2) (2001), 270–287.

[12] D.T. LUC, N.X. TAN AND P.N. TINH, Convex vector functions and their subdifferentials, Acta Math. Vietnam., 23(1) (1998), 107–127.

[13] P.N. TINH, D.T. LUC AND N.X. TAN, Subdifferential characterization of quasiconvex and convex vector functions, Vietnam J. Math., 26(1) (1998), 53–69.

[14] K. WINKLER, Characterizations of efficient points in convex vector optimization problems, Math. Methods Oper. Res., 53(2) (2001), 205–214.
