
On Approximate Solutions for Robust Convex Optimization Problems (Nonlinear Analysis and Convex Analysis)


RIMS Kôkyûroku No. 2011 (2016), 8-16

Jae Hyoung Lee, Department of Applied Mathematics, Pukyong National University
Gue Myung Lee, Department of Applied Mathematics, Pukyong National University

Abstract. We review our results on approximate solutions for a robust convex optimization problem with a geometric constraint, that is, a convex optimization problem in the face of data uncertainty. In this review, we notice that, using the robust optimization approach (worst-case approach), we can get an optimality theorem and duality theorems for approximate solutions of the robust convex optimization problem, and that we can extend the optimality and duality results to a fractional optimization problem with uncertainty data.

Key words: robust convex optimization problem, robust fractional programming, approximate solution, robust optimization approach, approximate optimality conditions, approximate duality theorems.

AMS subject classification: 90C25, 90C32, 90C46.

1 Introduction

Robust convex optimization problems are convex optimization problems with data uncertainty (incomplete data) that are treated by the worst-case approach. Here, uncertainty means that the input parameters of these problems are not known exactly at the time when a solution has to be determined [3]. The study of convex programs affected by data uncertainty ([1, 2, 3, 4, 5, 9, 10, 12]) is becoming increasingly important in optimization. Recently, the duality theory for convex programs under uncertainty via the robust approach (worst-case approach) has been studied ([1, 10, 11, 12]); in particular, it was shown that primal worst equals dual best ([1, 10, 11]). A standard form of convex optimization problem ([6, 15]) with a geometric constraint set is as follows:

$$(\mathrm{CP})\qquad \min\ f(x)\quad \text{s.t.}\quad g_i(x)\leqq 0,\ i=1,\ldots,m,\quad x\in C,$$

where $f, g_i:\mathbb{R}^n\rightarrow\mathbb{R}$, $i=1,\ldots,m$, are convex functions and $C$ is a closed convex cone of $\mathbb{R}^n$.

The convex optimization problem (CP) in the face of data uncertainty in the constraints can be captured by the problem

$$(\mathrm{UCP})\qquad \min\ f(x)\quad \text{s.t.}\quad g_i(x,v_i)\leqq 0,\ i=1,\ldots,m,\quad x\in C,$$

where $g_i:\mathbb{R}^n\times\mathbb{R}^q\rightarrow\mathbb{R}$, $g_i(\cdot,v_i)$ is convex, and $v_i\in\mathbb{R}^q$ is an uncertain parameter which belongs to the set $\mathcal{V}_i\subset\mathbb{R}^q$, $i=1,\ldots,m$.

We study an approximate optimality theorem and approximate duality theorems for the uncertain convex optimization problem (UCP) by examining its robust (worst-case) counterpart ([3]):

$$(\mathrm{RUCP})\qquad \min\ f(x)\quad \text{s.t.}\quad g_i(x,v_i)\leqq 0,\ \forall v_i\in\mathcal{V}_i,\ i=1,\ldots,m,\quad x\in C,$$

where $g_i:\mathbb{R}^n\times\mathbb{R}^q\rightarrow\mathbb{R}$, $g_i(\cdot,v_i)$ is convex, and $v_i\in\mathbb{R}^q$ is the uncertain parameter which belongs to the set $\mathcal{V}_i\subset\mathbb{R}^q$, $i=1,\ldots,m$. Clearly, $A:=\{x\in C\mid g_i(x,v_i)\leqq 0,\ \forall v_i\in\mathcal{V}_i,\ i=1,\ldots,m\}$ is the feasible set of (RUCP).

Let $\epsilon\geqq 0$. Then $\bar{x}$ is called an approximate solution of (RUCP) if for any $x\in A$,

$$f(x)\geqq f(\bar{x})-\epsilon.$$

Recently, Jeyakumar and Li [10] showed that when $C=\mathbb{R}^n$ and $\epsilon=0$, Lagrangian strong duality holds between the robust counterpart and an optimistic counterpart of a convex optimization problem in the face of data uncertainty via robust optimization, under a new robust characteristic cone constraint qualification (RCCCQ):

$$\bigcup_{v_i\in\mathcal{V}_i,\ \lambda_i\geqq 0}\mathrm{epi}\Big(\sum_{i=1}^{m}\lambda_i g_i(\cdot,v_i)\Big)^{*}\ \text{is convex and closed.}$$

In this paper, we give an approximate optimality theorem for (RUCP) under the following constraint qualification:

$$\bigcup_{v_i\in\mathcal{V}_i,\ \lambda_i\geqq 0}\mathrm{epi}\Big(\sum_{i=1}^{m}\lambda_i g_i(\cdot,v_i)\Big)^{*}+C^{*}\times\mathbb{R}_{+}\ \text{is convex and closed.}$$

For approximate solutions of (RUCP), we formulate a Wolfe type dual problem for the primal one and give approximate weak duality and approximate strong duality theorems between the primal problem and its Wolfe type dual problem, which hold under a weakened constraint qualification. Moreover, we notice that we can extend the optimality and duality results for (RUCP) to a fractional optimization problem with uncertainty data.

2 Definitions and Notations

Let us first recall some definitions and notations which will be used throughout this paper. $\mathbb{R}^n$ denotes the Euclidean space of dimension $n$. The nonnegative orthant of $\mathbb{R}^n$ is denoted by $\mathbb{R}^n_{+}:=\{(x_1,\ldots,x_n)\in\mathbb{R}^n : x_i\geqq 0\}$. The inner product in $\mathbb{R}^n$ is defined by $\langle x,y\rangle:=x^{T}y$ for all $x,y\in\mathbb{R}^n$. We say a set $A$ is convex whenever $\mu a_1+(1-\mu)a_2\in A$ for all $\mu\in[0,1]$ and all $a_1,a_2\in A$.
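Before turning to the formal machinery, the worst-case reading of the constraints in (RUCP) and the definition of an approximate solution can be illustrated on a one-dimensional toy instance. The data below are hypothetical and not from the paper: a single constraint $g(x,v)=v-x$ with uncertainty set $\mathcal{V}=[0,1]$ and $C=[0,\infty)$, so the robust feasible set is $A=[1,\infty)$ and the exact minimum of $f(x)=x$ over $A$ is $1$.

```python
import numpy as np

# Toy illustration (hypothetical data, not from the paper) of (RUCP) and the
# approximate-solution definition: f(x) = x, one constraint g(x, v) = v - x
# with uncertainty set V = [0, 1] and C = [0, inf). Robust feasibility asks
# g(x, v) <= 0 for EVERY v in V, so A = [1, inf) and the exact minimum is 1.

f = lambda x: x
g = lambda x, v: v - x
V = np.linspace(0.0, 1.0, 101)

def robust_feasible(x):
    """Worst-case feasibility: the constraint must hold for all v in V."""
    return x >= 0 and max(g(x, v) for v in V) <= 0.0

A = [x for x in np.linspace(0.0, 5.0, 501) if robust_feasible(x)]

def is_approx_solution(xbar, eps):
    """xbar is an eps-approximate solution iff f(x) >= f(xbar) - eps on A."""
    return robust_feasible(xbar) and all(f(x) >= f(xbar) - eps for x in A)

assert abs(min(A) - 1.0) < 1e-9
assert is_approx_solution(1.2, eps=0.2)       # within eps of the optimum
assert not is_approx_solution(1.2, eps=0.1)   # not within eps = 0.1
```

The point of the sketch is only that robust feasibility is a maximization over the uncertainty set, while approximate optimality is the usual optimality condition relaxed by $\epsilon$.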

Let $f$ be a function from $\mathbb{R}^n$ to $\overline{\mathbb{R}}$, where $\overline{\mathbb{R}}=[-\infty,+\infty]$. Here, $f$ is said to be proper if for all $x\in\mathbb{R}^n$, $f(x)>-\infty$ and there exists $x_0\in\mathbb{R}^n$ such that $f(x_0)\in\mathbb{R}$. We denote the domain of $f$ by $\mathrm{dom}f$, that is, $\mathrm{dom}f=\{x\in\mathbb{R}^n\mid f(x)<+\infty\}$. The epigraph of $f$ is defined as $\mathrm{epi}f=\{(x,r)\in\mathbb{R}^n\times\mathbb{R}\mid f(x)\leqq r\}$, and $f$ is said to be convex if for all $\mu\in[0,1]$,

$$f((1-\mu)x+\mu y)\leqq(1-\mu)f(x)+\mu f(y)$$

for all $x,y\in\mathbb{R}^n$; equivalently, $\mathrm{epi}f$ is convex. The function $f$ is said to be concave whenever $-f$ is convex. The (convex) subdifferential of $f$ at $x\in\mathbb{R}^n$ is defined by

$$\partial f(x)=\begin{cases}\{x^{*}\in\mathbb{R}^n\mid\langle x^{*},y-x\rangle\leqq f(y)-f(x),\ \forall y\in\mathbb{R}^n\}, & \text{if } x\in\mathrm{dom}f,\\ \emptyset, & \text{otherwise.}\end{cases}$$

More generally, for any $\epsilon\geqq 0$, the $\epsilon$-subdifferential of $f$ at $x\in\mathbb{R}^n$ is defined by

$$\partial_{\epsilon}f(x)=\begin{cases}\{x^{*}\in\mathbb{R}^n\mid\langle x^{*},y-x\rangle\leqq f(y)-f(x)+\epsilon,\ \forall y\in\mathbb{R}^n\}, & \text{if } x\in\mathrm{dom}f,\\ \emptyset, & \text{otherwise.}\end{cases}$$

As usual, for any proper convex function $g$ on $\mathbb{R}^n$, its conjugate function $g^{*}:\mathbb{R}^n\rightarrow\mathbb{R}\cup\{+\infty\}$ is defined by

$$g^{*}(x^{*})=\sup\{\langle x^{*},x\rangle-g(x)\mid x\in\mathbb{R}^n\}$$

for any $x^{*}\in\mathbb{R}^n$. For details on conjugate functions, see [15].
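For intuition, the $\epsilon$-subdifferential can be checked numerically on a toy function (illustrative, not from the paper): for $f(x)=x^2$ at $\bar{x}=0$, the defining inequality $x^{*}y\leqq y^2+\epsilon$ for all $y$ works out to $|x^{*}|\leqq 2\sqrt{\epsilon}$, so $\partial_{\epsilon}f(0)$ strictly enlarges $\partial f(0)=\{0\}$ as soon as $\epsilon>0$.

```python
import numpy as np

# Numerical check (illustrative, not from the paper) of the eps-subdifferential
# of f(x) = x^2 at xbar = 0: x* is an eps-subgradient iff
#   x* * (y - 0) <= f(y) - f(0) + eps   for all y,
# which for this f works out to |x*| <= 2*sqrt(eps).

def is_eps_subgradient(xstar, xbar, f, eps, ys):
    """Test the defining inequality of the eps-subdifferential on a grid."""
    return all(xstar * (y - xbar) <= f(y) - f(xbar) + eps + 1e-12 for y in ys)

f = lambda x: x * x
eps = 0.25
ys = np.linspace(-5.0, 5.0, 2001)

assert is_eps_subgradient(2.0 * np.sqrt(eps), 0.0, f, eps, ys)        # = 1.0
assert not is_eps_subgradient(2.0 * np.sqrt(eps) + 0.1, 0.0, f, eps, ys)
assert not is_eps_subgradient(0.1, 0.0, f, 0.0, ys)   # plain subdiff is {0}
```

The grid check is only a sanity test of the inequality, of course, not a computation of the full set.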
Given a set $A\subset\mathbb{R}^n$, we denote the closure, the convex hull, and the conical hull generated by $A$ by $\mathrm{cl}A$, $\mathrm{co}A$, and $\mathrm{cone}A$, respectively. The normal cone $N_C(x)$ to $C$ at $x$ is defined by

$$N_C(x)=\{v\in\mathbb{R}^n\mid\langle v,y-x\rangle\leqq 0\ \text{for all}\ y\in C\},$$

and, for $\epsilon\geqq 0$, the $\epsilon$-normal set $N_C^{\epsilon}(x)$ to $C$ at $x$ is defined by

$$N_C^{\epsilon}(x)=\{v\in\mathbb{R}^n\mid\langle v,y-x\rangle\leqq\epsilon\ \text{for all}\ y\in C\}.$$

When $C$ is a closed convex cone in $\mathbb{R}^n$, we denote $N_C(0)$ by $C^{*}$ and call it the negative dual cone of $C$.

3 Approximate Optimality Theorem

Slightly extending Theorem 2.4 in [10] to a robust convex optimization problem with a geometric constraint, we can obtain the following lemma in [12], which is the robust version of the Farkas lemma for convex functions in [8]:

Lemma 3.1. [12] Let $f:\mathbb{R}^n\rightarrow\mathbb{R}$ be a convex function and let $g_i:\mathbb{R}^n\times\mathbb{R}^q\rightarrow\mathbb{R}$, $i=1,\ldots,m$, be continuous functions such that for each $v_i\in\mathbb{R}^q$, $g_i(\cdot,v_i)$ is a convex function. Let $C$ be a closed convex cone of $\mathbb{R}^n$. Let $\mathcal{V}_i\subseteq\mathbb{R}^q$, $i=1,\ldots,m$, and let $A:=\{x\in C\mid g_i(x,v_i)\leqq 0,\ \forall v_i\in\mathcal{V}_i,\ i=1,\ldots,m\}\neq\emptyset$. Then the following statements are equivalent:

(i) $\{x\in C\mid g_i(x,v_i)\leqq 0,\ \forall v_i\in\mathcal{V}_i,\ i=1,\ldots,m\}\subseteq\{x\in\mathbb{R}^n\mid f(x)\geqq 0\}$;

(ii) $(0,0)\in\mathrm{epi}f^{*}+\mathrm{cl}\,\mathrm{co}\Big(\bigcup_{v_i\in\mathcal{V}_i,\ \lambda_i\geqq 0}\mathrm{epi}\Big(\sum_{i=1}^{m}\lambda_i g_i(\cdot,v_i)\Big)^{*}+C^{*}\times\mathbb{R}_{+}\Big)$.

Using Lemma 3.1, we can obtain the following theorem in [12]:

Theorem 3.1. [12] Let $\bar{x}\in A$ and let $g_i:\mathbb{R}^n\times\mathbb{R}^q\rightarrow\mathbb{R}$, $i=1,\ldots,m$, be continuous functions such that for each $v_i\in\mathbb{R}^q$, $g_i(\cdot,v_i)$ is convex. Suppose that $\bigcup_{v_i\in\mathcal{V}_i,\ \lambda_i\geqq 0}\mathrm{epi}(\sum_{i=1}^{m}\lambda_i g_i(\cdot,v_i))^{*}+C^{*}\times\mathbb{R}_{+}$ is closed and convex. Then $\bar{x}$ is an approximate solution of (RUCP) if and only if there exist $\bar{\lambda}_i\geqq 0$ and $\bar{v}_i\in\mathcal{V}_i$, $i=1,\ldots,m$, such that for any $x\in C$,

$$f(x)+\sum_{i=1}^{m}\bar{\lambda}_i g_i(x,\bar{v}_i)\geqq f(\bar{x})-\epsilon.$$

Using Lemma 3.1, we can also obtain the following approximate optimality theorem for approximate solutions of (RUCP), which is in [12].

Theorem 3.2. [12] (Approximate Optimality Theorem) Let $\bar{x}\in A$ and let $g_i:\mathbb{R}^n\times\mathbb{R}^q\rightarrow\mathbb{R}$, $i=1,\ldots,m$, be continuous functions such that for each $v_i\in\mathbb{R}^q$, $g_i(\cdot,v_i)$ is convex. Suppose that $\bigcup_{v_i\in\mathcal{V}_i,\ \lambda_i\geqq 0}\mathrm{epi}(\sum_{i=1}^{m}\lambda_i g_i(\cdot,v_i))^{*}+C^{*}\times\mathbb{R}_{+}$ is closed and convex. Then the following statements are equivalent:

(i) $\bar{x}$ is an approximate solution of (RUCP);

(ii) $(0,\epsilon-f(\bar{x}))\in\mathrm{epi}f^{*}+\bigcup_{v_i\in\mathcal{V}_i,\ \lambda_i\geqq 0}\mathrm{epi}(\sum_{i=1}^{m}\lambda_i g_i(\cdot,v_i))^{*}+C^{*}\times\mathbb{R}_{+}$;

(iii) there exist $\bar{v}_i\in\mathcal{V}_i$, $\bar{\lambda}_i\geqq 0$, $i=1,\ldots,m$, and $\epsilon_i\geqq 0$, $i=0,1,\ldots,m+1$, such that

$$0\in\partial_{\epsilon_0}f(\bar{x})+\sum_{i=1}^{m}\partial_{\epsilon_i}(\bar{\lambda}_i g_i(\cdot,\bar{v}_i))(\bar{x})+N_C^{\epsilon_{m+1}}(\bar{x})\quad\text{and}\quad\sum_{i=0}^{m+1}\epsilon_i-\epsilon=\sum_{i=1}^{m}\bar{\lambda}_i g_i(\bar{x},\bar{v}_i).$$

As usual in convex programming, the dual problem of (RUCP) is sometimes more tractable than (RUCP) itself. So, we formulate a dual problem (RLD) for (RUCP) as follows ([12]):

$$(\mathrm{RLD})\qquad \underset{(x,v,\lambda)}{\mathrm{Maximize}}\quad f(x)+\sum_{i=1}^{m}\lambda_i g_i(x,v_i)$$
$$\text{subject to}\quad 0\in\partial_{\epsilon_0}f(x)+\sum_{i=1}^{m}\partial_{\epsilon_i}(\lambda_i g_i(\cdot,v_i))(x)+N_C^{\epsilon_{m+1}}(x),$$
$$\lambda_i\geqq 0,\ v_i\in\mathcal{V}_i,\ i=1,\ldots,m,\qquad\sum_{i=0}^{m+1}\epsilon_i\leqq\epsilon.$$

When $\epsilon=0$ and $g_i(x,v_i)=g_i(x)$, $i=1,\ldots,m$, (RUCP) becomes (CP), and (RLD) collapses to the Wolfe dual problem (D) for (CP) as follows:

$$(\mathrm{D})\qquad \underset{(x,\lambda)}{\mathrm{Maximize}}\quad f(x)+\sum_{i=1}^{m}\lambda_i g_i(x)$$
$$\text{subject to}\quad 0\in\nabla f(x)+\sum_{i=1}^{m}\lambda_i\nabla g_i(x)+N_C(x),\qquad\lambda_i\geqq 0,\ i=1,\ldots,m.$$

Now, we present approximate weak and approximate strong duality theorems which hold between (RUCP) and (RLD), which are in [12].
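The Wolfe dual (D) can be sanity-checked numerically on a smooth one-dimensional toy instance (hypothetical data, not from the paper): $f(x)=x^2$, a single certain constraint $g(x)=1-x\leqq 0$, and $C=\mathbb{R}$, so the (D) constraint reduces to the stationarity equation $2y-\lambda=0$. The sketch enumerates dual-feasible points and confirms that no dual value exceeds any primal value.

```python
import numpy as np

# Toy smooth check (hypothetical data, not from the paper) of weak duality
# for the Wolfe dual (D):
#   (CP): min f(x) = x^2  s.t.  g(x) = 1 - x <= 0,  C = R.
# With C = R, the (D) constraint is 2y + lam*(-1) = 0, i.e. lam = 2y >= 0.

f = lambda x: x * x
g = lambda x: 1.0 - x

primal_vals = [f(x) for x in np.linspace(1.0, 5.0, 500)]   # feasible: x >= 1

dual_vals = []
for y in np.linspace(0.0, 3.0, 301):     # lam = 2y >= 0 restricts y >= 0
    lam = 2.0 * y
    dual_vals.append(f(y) + lam * g(y))  # Wolfe dual objective

# Weak duality: min primal >= max dual; here both meet at 1 (x = y = 1).
assert min(primal_vals) >= max(dual_vals) - 1e-9
```

In this toy instance the primal minimum and the best dual value coincide at $1$, the zero-duality-gap situation that the strong duality theorems below generalize to $\epsilon>0$.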

Theorem 3.3. [12] (Approximate Weak Duality Theorem) For any feasible $x$ of (RUCP) and any feasible $(y,v,\lambda)$ of (RLD),

$$f(x)\geqq f(y)+\sum_{i=1}^{m}\lambda_i g_i(y,v_i)-\epsilon.$$

Theorem 3.4. [12] (Approximate Strong Duality Theorem) Let $g_i:\mathbb{R}^n\times\mathbb{R}^q\rightarrow\mathbb{R}$, $i=1,\ldots,m$, be continuous functions such that for each $v_i\in\mathbb{R}^q$, $g_i(\cdot,v_i)$ is convex. Suppose that $\bigcup_{v_i\in\mathcal{V}_i,\ \lambda_i\geqq 0}\mathrm{epi}(\sum_{i=1}^{m}\lambda_i g_i(\cdot,v_i))^{*}+C^{*}\times\mathbb{R}_{+}$ is closed. If $\bar{x}$ is an approximate solution of (RUCP), then there exist $\bar{\lambda}\in\mathbb{R}^m_{+}$ and $\bar{v}=(\bar{v}_1,\ldots,\bar{v}_m)$ with $\bar{v}_i\in\mathcal{V}_i$ such that $(\bar{x},\bar{v},\bar{\lambda})$ is a $2\epsilon$-solution of (RLD).

4 Robust Fractional Optimization Problem

In this section, we notice that we can extend the optimality and duality results for the robust convex optimization problem to a fractional optimization problem with uncertainty data. Consider the following standard form of fractional optimization problem with a geometric constraint set:

$$(\mathrm{FP})\qquad \min\ \frac{f(x)}{g(x)}\quad \text{s.t.}\quad h_i(x)\leqq 0,\ i=1,\ldots,m,\quad x\in C,$$

where $f,h_i:\mathbb{R}^n\rightarrow\mathbb{R}$, $i=1,\ldots,m$, are convex functions, $C$ is a closed convex cone of $\mathbb{R}^n$, and $g:\mathbb{R}^n\rightarrow\mathbb{R}$ is a concave function such that for any $x\in C$, $f(x)\geqq 0$ and $g(x)>0$.

The fractional optimization problem (FP) in the face of data uncertainty in the constraints can be captured by the problem:

$$(\mathrm{UFP})\qquad \min\ \max_{(u,v)\in\mathcal{U}\times\mathcal{V}}\frac{f(x,u)}{g(x,v)}\quad \text{s.t.}\quad h_i(x,w_i)\leqq 0,\ i=1,\ldots,m,\quad x\in C,$$

where $f:\mathbb{R}^n\times\mathbb{R}^p\rightarrow\mathbb{R}$ and $h_i:\mathbb{R}^n\times\mathbb{R}^q\rightarrow\mathbb{R}$ with $f(\cdot,u)$ and $h_i(\cdot,w_i)$ convex, $g:\mathbb{R}^n\times\mathbb{R}^p\rightarrow\mathbb{R}$ with $g(\cdot,v)$ concave, and $u\in\mathbb{R}^p$, $v\in\mathbb{R}^p$ and $w_i\in\mathbb{R}^q$ are uncertain parameters which belong to the convex and compact uncertainty sets $\mathcal{U}\subset\mathbb{R}^p$, $\mathcal{V}\subset\mathbb{R}^p$ and $\mathcal{W}_i\subset\mathbb{R}^q$, $i=1,\ldots,m$, respectively.

We study approximate optimality theorems and approximate duality theorems for the uncertain fractional optimization problem (UFP) by examining its robust (worst-case) counterpart ([3]):

$$(\mathrm{RFP})\qquad \min\ \max_{(u,v)\in\mathcal{U}\times\mathcal{V}}\frac{f(x,u)}{g(x,v)}\quad \text{s.t.}\quad h_i(x,w_i)\leqq 0,\ \forall w_i\in\mathcal{W}_i,\ i=1,\ldots,m,\quad x\in C.$$
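A small computational remark on the worst-case objective of (RFP): when, as in (FP), the numerator is nonnegative and the denominator positive, the maximization over $(u,v)$ separates into a maximization over $u$ and a minimization over $v$, which is precisely the split exploited by the parametric transformation below. A toy check (hypothetical data; only the algebraic separation is illustrated, not the convexity assumptions):

```python
import numpy as np

# Toy check (hypothetical data) that the worst-case ratio separates when the
# numerator is nonnegative and the denominator positive:
#   max_{(u,v) in U x V} f(x,u)/g(x,v) = (max_u f(x,u)) / (min_v g(x,v)).

U = np.linspace(-1.0, 1.0, 201)
V = np.linspace(0.0, 1.0, 201)
f = lambda x, u: (x - u) ** 2        # >= 0 for every u
g = lambda x, v: 2.0 - v * x         # > 0 on the ranges used here

x = 0.7
brute = max(f(x, u) / g(x, v) for u in U for v in V)
split = max(f(x, u) for u in U) / min(g(x, v) for v in V)
assert abs(brute - split) < 1e-12
```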

Clearly, $A:=\{x\in C\mid h_i(x,w_i)\leqq 0,\ \forall w_i\in\mathcal{W}_i,\ i=1,\ldots,m\}$ is the feasible set of (RFP). Let $\epsilon\geqq 0$. Then $\bar{x}$ is called an approximate solution of (RFP) if for any $x\in A$,

$$\max_{(u,v)\in\mathcal{U}\times\mathcal{V}}\frac{f(x,u)}{g(x,v)}\geqq\max_{(u,v)\in\mathcal{U}\times\mathcal{V}}\frac{f(\bar{x},u)}{g(\bar{x},v)}-\epsilon.$$

Using the parametric approach, we transform the problem (RFP) into the robust non-fractional convex optimization problem (RNCP) with a parameter $r\in\mathbb{R}_{+}$:

$$(\mathrm{RNCP})\qquad \min\ \max_{u\in\mathcal{U}}f(x,u)-r\min_{v\in\mathcal{V}}g(x,v)\quad \text{s.t.}\quad h_i(x,w_i)\leqq 0,\ \forall w_i\in\mathcal{W}_i,\ i=1,\ldots,m,\quad x\in C.$$

Let $\epsilon\geqq 0$. Then $\bar{x}$ is called an approximate solution of (RNCP) if for any $x\in A$,

$$\max_{u\in\mathcal{U}}f(x,u)-r\min_{v\in\mathcal{V}}g(x,v)\geqq\max_{u\in\mathcal{U}}f(\bar{x},u)-r\min_{v\in\mathcal{V}}g(\bar{x},v)-\epsilon.$$

Now we give the following relation between approximate solutions of (RFP) and (RNCP), which is in [13].

Lemma 4.1. [13] Let $\bar{x}\in A$ and let $\bar{r}=\max_{(u,v)\in\mathcal{U}\times\mathcal{V}}\frac{f(\bar{x},u)}{g(\bar{x},v)}-\epsilon$. If $\max_{(u,v)\in\mathcal{U}\times\mathcal{V}}\frac{f(\bar{x},u)}{g(\bar{x},v)}-\epsilon\geqq 0$, then the following statements are equivalent:

(i) $\bar{x}$ is an approximate solution of (RFP);

(ii) $\bar{x}$ is an $\bar{\epsilon}$-solution of (RNCP) with $r=\bar{r}$, where $\bar{\epsilon}=\epsilon\min_{v\in\mathcal{V}}g(\bar{x},v)$.

From Lemma 4.1, we can get the following theorem in [13] in a similar way to Theorem 3.1.
Theorem 4.1. [13] Let $f:\mathbb{R}^n\times\mathbb{R}^p\rightarrow\mathbb{R}$ and $h_i:\mathbb{R}^n\times\mathbb{R}^q\rightarrow\mathbb{R}$ be functions such that for any $u\in\mathcal{U}$, $f(\cdot,u)$ is convex, for each $w_i\in\mathcal{W}_i$, $h_i(\cdot,w_i)$, $i=1,\ldots,m$, are convex functions, and for any $x\in\mathbb{R}^n$, $f(x,\cdot)$ is a concave function. Let $g:\mathbb{R}^n\times\mathbb{R}^p\rightarrow\mathbb{R}$ be a function such that for any $v\in\mathcal{V}$, $g(\cdot,v)$ is a concave function, and for all $x\in\mathbb{R}^n$, $g(x,\cdot)$ is a convex function. Let $\mathcal{U}\subset\mathbb{R}^p$, $\mathcal{V}\subset\mathbb{R}^p$ and $\mathcal{W}_i\subset\mathbb{R}^q$, $i=1,\ldots,m$. Let $\bar{x}\in A$ and let $\bar{r}=\max_{(u,v)\in\mathcal{U}\times\mathcal{V}}\frac{f(\bar{x},u)}{g(\bar{x},v)}-\epsilon$. Suppose that $\bigcup_{w_i\in\mathcal{W}_i,\ \lambda_i\geqq 0}\mathrm{epi}(\sum_{i=1}^{m}\lambda_i h_i(\cdot,w_i))^{*}+C^{*}\times\mathbb{R}_{+}$ is closed and convex. Then the following statements are equivalent:

(i) $\bar{x}$ is an approximate solution of (RFP);

(ii) there exist $\bar{u}\in\mathcal{U}$, $\bar{v}\in\mathcal{V}$, $\bar{w}_i\in\mathcal{W}_i$ and $\bar{\lambda}_i\geqq 0$, $i=1,\ldots,m$, such that for any $x\in C$,

$$f(x,\bar{u})-\bar{r}g(x,\bar{v})+\sum_{i=1}^{m}\bar{\lambda}_i h_i(x,\bar{w}_i)\geqq 0.$$

Using Lemma 4.1, we can establish approximate optimality theorems ([13]) for approximate solutions of the robust fractional optimization problem in a similar way to Theorem 3.2.
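The parametric reduction in Lemma 4.1 behind these theorems can be checked numerically on a toy certain instance (all uncertainty sets taken as singletons; data hypothetical, not from the paper): minimize $(x^2+1)/(x+1)$ over $A=[0,2]$, take $\bar{x}$ a near-minimizer of the ratio, set $\bar{r}$ and $\bar{\epsilon}$ as in the lemma, and verify that $\bar{x}$ is an $\bar{\epsilon}$-solution of the corresponding non-fractional problem.

```python
import numpy as np

# Numerical check (toy certain data: all uncertainty sets singletons) of the
# parametric reduction in Lemma 4.1:
#   (RFP): min f(x)/g(x) over A, with f(x) = x^2 + 1, g(x) = x + 1, A = [0, 2].

f = lambda x: x * x + 1.0
g = lambda x: x + 1.0
A = np.linspace(0.0, 2.0, 2001)
eps = 0.05

xbar = A[np.argmin(f(A) / g(A))]      # (near-)minimizer of the ratio
rbar = f(xbar) / g(xbar) - eps        # parameter r as in Lemma 4.1
epsbar = eps * g(xbar)                # scaled tolerance eps_bar

# xbar should be an epsbar-solution of the non-fractional problem
# (RNCP): min f(x) - rbar*g(x) over A.
phi = f(A) - rbar * g(A)
assert f(xbar) - rbar * g(xbar) <= phi.min() + epsbar + 1e-9
```

Note how the tolerance is rescaled: an $\epsilon$ in the ratio corresponds to $\bar{\epsilon}=\epsilon\min_{v}g(\bar{x},v)$ in the difference form, exactly as Lemma 4.1 states.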
Theorem 4.2. [13] (Approximate Optimality Theorem) Let $f:\mathbb{R}^n\times\mathbb{R}^p\rightarrow\mathbb{R}$ and $h_i:\mathbb{R}^n\times\mathbb{R}^q\rightarrow\mathbb{R}$, $i=1,\ldots,m$, be functions such that for any $u\in\mathcal{U}$, $f(\cdot,u)$ and, for each $w_i\in\mathcal{W}_i$, $h_i(\cdot,w_i)$ are convex functions. Let $g:\mathbb{R}^n\times\mathbb{R}^p\rightarrow\mathbb{R}$ be a function such that for any $v\in\mathcal{V}$, $g(\cdot,v)$ is a concave function. Let $\mathcal{U}\subset\mathbb{R}^p$, $\mathcal{V}\subset\mathbb{R}^p$ and $\mathcal{W}_i\subset\mathbb{R}^q$, $i=1,\ldots,m$. Let $\bar{x}\in A$, let $\epsilon\geqq 0$, and let $\bar{r}=\max_{(u,v)\in\mathcal{U}\times\mathcal{V}}\frac{f(\bar{x},u)}{g(\bar{x},v)}-\epsilon$.

If $\max_{(u,v)\in\mathcal{U}\times\mathcal{V}}\frac{f(\bar{x},u)}{g(\bar{x},v)}<\epsilon$, then $\bar{x}$ is an approximate solution of (RFP). If $\max_{(u,v)\in\mathcal{U}\times\mathcal{V}}\frac{f(\bar{x},u)}{g(\bar{x},v)}\geqq\epsilon$ and $\bigcup_{w_i\in\mathcal{W}_i,\ \lambda_i\geqq 0}\mathrm{epi}(\sum_{i=1}^{m}\lambda_i h_i(\cdot,w_i))^{*}+C^{*}\times\mathbb{R}_{+}$ is closed and convex, then the following statements are equivalent:

(i) $\bar{x}$ is an approximate solution of (RFP);

(ii) there exist $\bar{w}_i\in\mathcal{W}_i$ and $\bar{\lambda}_i\geqq 0$, $i=1,\ldots,m$, $\epsilon_0^1\geqq 0$, $\epsilon_0^2\geqq 0$ and $\epsilon_i\geqq 0$, $i=1,\ldots,m+1$, such that

$$0\in\partial_{\epsilon_0^1}\Big(\max_{u\in\mathcal{U}}f(\cdot,u)\Big)(\bar{x})+\partial_{\epsilon_0^2}\Big(-\bar{r}\min_{v\in\mathcal{V}}g(\cdot,v)\Big)(\bar{x})+\sum_{i=1}^{m}\partial_{\epsilon_i}(\bar{\lambda}_i h_i(\cdot,\bar{w}_i))(\bar{x})+N_C^{\epsilon_{m+1}}(\bar{x}),$$

$$\max_{u\in\mathcal{U}}f(\bar{x},u)-\bar{r}\min_{v\in\mathcal{V}}g(\bar{x},v)=\epsilon\min_{v\in\mathcal{V}}g(\bar{x},v)$$

and

$$\epsilon_0^1+\epsilon_0^2+\sum_{i=1}^{m+1}\epsilon_i-\epsilon\min_{v\in\mathcal{V}}g(\bar{x},v)=\sum_{i=1}^{m}\bar{\lambda}_i h_i(\bar{x},\bar{w}_i).$$

If, in addition, for all $x\in\mathbb{R}^n$, $f(x,\cdot)$ is concave and $g(x,\cdot)$ is convex, then using Lemma 4.1 we can obtain the following characterization of approximate solutions of (RFP), which is in [13].

Theorem 4.3. [13] (Approximate Optimality Theorem) Let $f:\mathbb{R}^n\times\mathbb{R}^p\rightarrow\mathbb{R}$ and $h_i:\mathbb{R}^n\times\mathbb{R}^q\rightarrow\mathbb{R}$, $i=1,\ldots,m$, be functions such that for any $u\in\mathcal{U}$, $f(\cdot,u)$ and, for each $w_i\in\mathcal{W}_i$, $h_i(\cdot,w_i)$ are convex functions, and for all $x\in\mathbb{R}^n$, $f(x,\cdot)$ is a concave function. Let $g:\mathbb{R}^n\times\mathbb{R}^p\rightarrow\mathbb{R}$ be a function such that for any $v\in\mathcal{V}$, $g(\cdot,v)$ is concave, and for all $x\in\mathbb{R}^n$, $g(x,\cdot)$ is convex. Let $\mathcal{U}\subset\mathbb{R}^p$, $\mathcal{V}\subset\mathbb{R}^p$ and $\mathcal{W}_i\subset\mathbb{R}^q$, $i=1,\ldots,m$. Let $\bar{x}\in A$, let $\epsilon\geqq 0$, and let $\bar{r}=\max_{(u,v)\in\mathcal{U}\times\mathcal{V}}\frac{f(\bar{x},u)}{g(\bar{x},v)}-\epsilon$. If $\max_{(u,v)\in\mathcal{U}\times\mathcal{V}}\frac{f(\bar{x},u)}{g(\bar{x},v)}<\epsilon$, then $\bar{x}$ is an approximate solution of (RFP). If $\max_{(u,v)\in\mathcal{U}\times\mathcal{V}}\frac{f(\bar{x},u)}{g(\bar{x},v)}\geqq\epsilon$ and $\bigcup_{w_i\in\mathcal{W}_i,\ \lambda_i\geqq 0}\mathrm{epi}(\sum_{i=1}^{m}\lambda_i h_i(\cdot,w_i))^{*}+C^{*}\times\mathbb{R}_{+}$ is closed and convex, then the following statements are equivalent:

(i) $\bar{x}$ is an approximate solution of (RFP);

(ii) there exist $\bar{u}\in\mathcal{U}$, $\bar{v}\in\mathcal{V}$, $\bar{w}_i\in\mathcal{W}_i$, $\bar{\lambda}_i\geqq 0$, $i=1,\ldots,m$, $\epsilon_0^1\geqq 0$, $\epsilon_0^2\geqq 0$ and $\epsilon_i\geqq 0$, $i=1,\ldots,m+1$, such that

$$0\in\partial_{\epsilon_0^1}(f(\cdot,\bar{u}))(\bar{x})+\partial_{\epsilon_0^2}(-\bar{r}g(\cdot,\bar{v}))(\bar{x})+\sum_{i=1}^{m}\partial_{\epsilon_i}(\bar{\lambda}_i h_i(\cdot,\bar{w}_i))(\bar{x})+N_C^{\epsilon_{m+1}}(\bar{x}),$$

$$\max_{u\in\mathcal{U}}f(\bar{x},u)-\bar{r}\min_{v\in\mathcal{V}}g(\bar{x},v)=\epsilon\min_{v\in\mathcal{V}}g(\bar{x},v)$$

and

$$\epsilon_0^1+\epsilon_0^2+\sum_{i=1}^{m+1}\epsilon_i-\epsilon\min_{v\in\mathcal{V}}g(\bar{x},v)\leqq\sum_{i=1}^{m}\bar{\lambda}_i h_i(\bar{x},\bar{w}_i).$$

Following the approach in [7], we formulate a dual problem (RFD) for (RFP) as follows ([13]):

$$(\mathrm{RFD})\qquad \max\ r$$
$$\text{s.t.}\quad 0\in\partial_{\epsilon_0^1}\Big(\max_{u\in\mathcal{U}}f(\cdot,u)\Big)(x)+\partial_{\epsilon_0^2}\Big(-r\min_{v\in\mathcal{V}}g(\cdot,v)\Big)(x)+\sum_{i=1}^{m}\partial_{\epsilon_i}(\lambda_i h_i(\cdot,w_i))(x)+N_C^{\epsilon_{m+1}}(x),$$
$$\max_{u\in\mathcal{U}}f(x,u)-r\min_{v\in\mathcal{V}}g(x,v)\geqq\epsilon\min_{v\in\mathcal{V}}g(x,v),$$
$$\epsilon_0^1+\epsilon_0^2+\sum_{i=1}^{m+1}\epsilon_i-\epsilon\min_{v\in\mathcal{V}}g(x,v)\leqq\sum_{i=1}^{m}\lambda_i h_i(x,w_i),$$
$$r\geqq 0,\ w_i\in\mathcal{W}_i,\ \lambda_i\geqq 0,\ i=1,\ldots,m,\qquad\epsilon_0^1\geqq 0,\ \epsilon_0^2\geqq 0,\ \epsilon_i\geqq 0,\ i=1,\ldots,m+1.$$

Clearly, the set $F$ of all $(x,w,\lambda,r)$ satisfying the constraints above is the feasible set of (RFD). Let $\epsilon\geqq 0$. Then $(\bar{x},\bar{w},\bar{\lambda},\bar{r})$ is called an approximate solution of (RFD) if for any $(y,w,\lambda,r)\in F$, $\bar{r}\geqq r-\epsilon$.

When $\epsilon=0$, $\max_{u\in\mathcal{U}}f(x,u)=f(x)$, $\min_{v\in\mathcal{V}}g(x,v)=g(x)$ and $h_i(x,w_i)=h_i(x)$, $i=1,\ldots,m$, (RFP) becomes (FP), and (RFD) collapses to the Mond-Weir type dual problem (FD) for (FP) as follows ([14]):

$$(\mathrm{FD})\qquad \max\ r\quad \text{s.t.}\quad 0\in\partial f(x)+\partial(-rg)(x)+\sum_{i=1}^{m}\partial(\lambda_i h_i)(x)+N_C(x),$$
$$f(x)-rg(x)\geqq 0,\quad \lambda_i h_i(x)\geqq 0,\quad r\geqq 0,\ \lambda_i\geqq 0,\ i=1,\ldots,m.$$

Now, we present approximate weak and approximate strong duality theorems which hold between (RFP) and (RFD).

Theorem 4.4. [13] (Approximate Weak Duality Theorem) For any feasible $x$ of (RFP) and any feasible $(y,w,\lambda,r)$ of (RFD),

$$\max_{(u,v)\in\mathcal{U}\times\mathcal{V}}\frac{f(x,u)}{g(x,v)}\geqq r-\epsilon.$$

Theorem 4.5. [13] (Approximate Strong Duality Theorem) Suppose that $\bigcup_{w_i\in\mathcal{W}_i,\ \lambda_i\geqq 0}\mathrm{epi}(\sum_{i=1}^{m}\lambda_i h_i(\cdot,w_i))^{*}+C^{*}\times\mathbb{R}_{+}$ is closed. If $\bar{x}$ is an approximate solution of (RFP) and $\max_{(u,v)\in\mathcal{U}\times\mathcal{V}}\frac{f(\bar{x},u)}{g(\bar{x},v)}-\epsilon\geqq 0$, then there exist $\bar{w}$ with $\bar{w}_i\in\mathcal{W}_i$, $\bar{\lambda}\in\mathbb{R}^m_{+}$ and $\bar{r}\in\mathbb{R}_{+}$ such that $(\bar{x},\bar{w},\bar{\lambda},\bar{r})$ is a $2\epsilon$-solution of (RFD).

Remark 4.1. Using the optimality conditions of Theorem 4.2, the robust fractional dual problem (RFD) for a robust fractional problem (RFP) with convex constraint functions under uncertainty is formulated. However, when we formulate the dual problem using the optimality conditions in Theorem 4.3, we do not know whether an approximate weak duality theorem holds or not. This is our open question.
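The Mond-Weir type dual (FD) above can likewise be sanity-checked on a smooth toy instance (hypothetical data, not from the paper): minimize $(x^2+1)/(x+1)$ subject to $h(x)=1-x\leqq 0$ with $C=\mathbb{R}$. The grid search below enumerates points satisfying the (FD) constraints and confirms weak duality: no dual value $r$ exceeds the primal optimal ratio, which here equals $1$ at $x=1$.

```python
import numpy as np

# Toy grid check (hypothetical data, not from the paper) of weak duality for
# the Mond-Weir type dual (FD) of the fractional problem
#   (FP): min (x^2 + 1)/(x + 1)  s.t.  h(x) = 1 - x <= 0,  C = R.
# With smooth data and C = R, (FD) feasibility reads:
#   2y - r - lam = 0,  f(y) - r*g(y) >= 0,  lam*h(y) >= 0,  r >= 0, lam >= 0.

f = lambda x: x * x + 1.0
g = lambda x: x + 1.0
h = lambda x: 1.0 - x

primal_min = min(f(x) / g(x) for x in np.linspace(1.0, 10.0, 2000))

best_r = 0.0
for y in np.linspace(-2.0, 3.0, 501):
    for lam in np.linspace(0.0, 4.0, 401):
        r = 2.0 * y - lam                      # stationarity fixes r
        feasible = (r >= -1e-9 and
                    f(y) - r * g(y) >= -1e-9 and
                    lam * h(y) >= -1e-9)       # small tolerances for the grid
        if feasible:
            best_r = max(best_r, r)

# Weak duality: the best dual value does not exceed the primal minimum (= 1).
assert best_r <= primal_min + 1e-6
```

In this instance the best dual value and the primal minimum coincide at $1$ (attained at $y=1$, $\lambda=1$), consistent with the exact ($\epsilon=0$) duality picture.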

References

[1] A. Beck and A. Ben-Tal, Duality in robust optimization: primal worst equals dual best, Oper. Res. Lett., 37 (2009), 1-6.

[2] A. Ben-Tal, L. E. Ghaoui and A. Nemirovski, Robust Optimization, Princeton Series in Applied Mathematics, Princeton University Press, Princeton, NJ (2009).

[3] A. Ben-Tal and A. Nemirovski, Selected topics in robust convex optimization, Math. Program., Ser. B, 112 (2008), 125-158.

[4] A. Ben-Tal and A. Nemirovski, Robust optimization - methodology and applications, Math. Program., Ser. B, 92 (2002), 453-480.

[5] A. Ben-Tal and A. Nemirovski, Robust solutions to uncertain linear programs, Oper. Res. Lett., 25 (1999), 1-13.

[6] S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, Cambridge, 2004.

[7] P. Gupta, A. Mehra, S. Shiraishi and K. Yokoyama, ε-optimality for minimax programming problems, J. Nonlinear Convex Anal., 7(2) (2006), 277-288.

[8] V. Jeyakumar, G. M. Lee and N. Dinh, New sequential Lagrange multiplier conditions characterizing optimality without constraint qualification for convex programs, SIAM J. Optim., 14(2) (2003), 534-547.

[9] V. Jeyakumar and G. Y. Li, Robust duality for fractional programming problems with constraint-wise data uncertainty, J. Optim. Theory Appl., 151(2) (2011), 292-303.

[10] V. Jeyakumar and G. Y. Li, Strong duality in robust convex programming: complete characterizations, SIAM J. Optim., 20 (2010), 3384-3407.

[11] V. Jeyakumar, G. Y. Li and G. M. Lee, Robust duality for generalized convex programming problems under data uncertainty, Nonlinear Anal., 75(3) (2012), 1362-1373.

[12] J. H. Lee and G. M. Lee, On ε-solutions for convex optimization problems with uncertainty data, Positivity, 16(3) (2012), 509-526.

[13] J. H. Lee and G. M. Lee, On ε-solutions for robust fractional optimization problems, J. Inequal. Appl., 2014:501 (2014), doi:10.1186/1029-242X-2014-501.

[14] J. C. Liu, Y. Kimura and K. Tanaka, Three types dual model for minimax fractional programming, Comput. Math. Appl., 38(7-8) (1999), 143-155.

[15] R. T. Rockafellar, Convex Analysis, Princeton University Press, Princeton, NJ, 1970.

(Jae Hyoung Lee) Department of Applied Mathematics, Pukyong National University, Busan 48513, Korea
E-mail address: mc7558@naver.com

(Gue Myung Lee) Department of Applied Mathematics, Pukyong National University, Busan 48513, Korea
E-mail address: gmlee@pknu.ac.kr

