
Semidefinite programming reformulation for a class of robust optimization problems

and its application to robust Nash equilibrium problems

Guidance
Assistant Professor Shunsuke HAYASHI
Professor Masao FUKUSHIMA

Ryoichi NISHIMURA

Department of Applied Mathematics and Physics, Graduate School of Informatics

Kyoto University


February 2009


Abstract

In real situations, optimization problems often involve uncertain parameters. Robust optimization is a distribution-free methodology, based on worst-case analyses, for handling such problems.

In this model, we first assume that the uncertain parameters belong to some uncertainty sets. We then deal with the robust counterpart associated with the uncertain optimization problem. The robust optimization problem is in its original form a semi-infinite program. Under some assumptions, it can be reformulated as an efficiently solvable problem, such as a semidefinite program (SDP) or a second-order cone program (SOCP). During the last decade, robust optimization has not only made significant progress in theory but has also been applied to a large number of problems in various fields.

Game theory is one such field. For non-cooperative games with uncertain parameters, several researchers have proposed a model in which each player makes a decision according to the robust optimization policy. The resulting equilibrium is called a robust Nash equilibrium, and the problem of finding such an equilibrium is called the robust Nash equilibrium problem. It is known that the robust Nash equilibrium problem can be reformulated as a second-order cone complementarity problem under certain assumptions.

In this paper, we focus on a class of uncertain linear programs. We reformulate the robust counterpart as an SDP and show that the two problems are equivalent under a spherical uncertainty assumption. In the reformulation, the strong duality for nonconvex quadratic programs plays a significant role. Using the same technique, we also reformulate the robust counterpart of an uncertain SOCP as an SDP under some assumptions. Furthermore, we apply this idea to the robust Nash equilibrium problem. Under mild assumptions, we show that each player's optimization problem can be rewritten as an SDP and that the robust Nash equilibrium problem reduces to a semidefinite complementarity problem (SDCP). We finally give some numerical results to show that these SDPs and SDCPs are efficiently solvable.


Contents

1 Introduction

2 Strong duality in nonconvex quadratic optimization with two quadratic constraints

3 SDP reformulation for a class of robust linear programming problems

4 Robust second-order cone programming problems with ellipsoidal uncertainty

5 SDCP reformulation of robust Nash equilibrium problems
  5.1 Robust Nash equilibrium and its existence
  5.2 SDCP reformulation of robust Nash equilibrium problems

6 Numerical experiments
  6.1 Robust second-order cone programming problems
  6.2 Robust Nash equilibrium problems

7 Concluding remarks


1 Introduction

In constructing a mathematical model from a real-world problem, we cannot always determine the objective function or the constraint functions precisely. For example, when the parameters in these functions are obtained in a statistical or simulation-based manner, they usually involve uncertainty (e.g., statistical errors) to some extent. To deal with such situations, we need to incorporate the uncertain data into the mathematical model.

Generally, a mathematical programming problem with uncertain data is expressed as follows:
\[
\begin{array}{ll}
\displaystyle \mathop{\mathrm{minimize}}_{x} & f_0(x,u) \\
\mathrm{subject\ to} & f_i(x,u) \in K_i \quad (i=1,\dots,m),
\end{array} \tag{1.1}
\]
where $x \in \mathbb{R}^n$ is the decision variable, $u \in \mathbb{R}^d$ is the uncertain data, $f_0 : \mathbb{R}^n \times \mathbb{R}^d \to \mathbb{R}$ and $f_i : \mathbb{R}^n \times \mathbb{R}^d \to \mathbb{R}^{k_i}$ $(i=1,\dots,m)$ are given functions, and $K_i \subseteq \mathbb{R}^{k_i}$ $(i=1,\dots,m)$ are given nonempty sets. Since problem (1.1) cannot be defined uniquely due to $u$, it is difficult to handle in a straightforward manner.

Robust optimization [13] is a distribution-free methodology for handling mathematical programming problems with uncertain data. In robust optimization, the uncertain data are assumed to belong to some set $U \subseteq \mathbb{R}^d$, and the objective function is then minimized (or maximized) taking the worst possible case into consideration. That is, the following robust counterpart is solved instead of the original problem (1.1):

\[
\begin{array}{ll}
\displaystyle \mathop{\mathrm{minimize}}_{x} & \displaystyle \sup_{u \in U} f_0(x,u) \\
\mathrm{subject\ to} & f_i(x,u) \in K_i \quad \forall u \in U \quad (i=1,\dots,m).
\end{array} \tag{1.2}
\]
Recently, robust optimization has been studied by many researchers. Ben-Tal and Nemirovski [9, 10, 12], Ben-Tal, Nemirovski and Roos [14], and El Ghaoui, Oustry and Lebret [21] showed that certain classes of robust optimization problems can be reformulated as efficiently solvable problems, such as a semidefinite program (SDP) [36] or a second-order cone program (SOCP) [3], under the assumptions that the uncertainty set is ellipsoidal and the functions $f_i$ $(i=0,1,\dots,m)$ in problem (1.2) are expressed as
\[
f_i(x,u) = g_i(x) + F_i(x)\,u
\]
with $g_i : \mathbb{R}^n \to \mathbb{R}^{k_i}$ and $F_i : \mathbb{R}^n \to \mathbb{R}^{k_i \times d}$. El Ghaoui and Lebret [19] showed that the robust least-squares problem can be reformulated as an SOCP. Bertsimas and Sim [16] gave another robust formulation and some properties of its solution. Robust optimization techniques have also been applied to many practical problems such as portfolio selection [7, 20, 23, 29, 30, 39], classification [38], structural design [8] and inventory management [1, 17].
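To see concretely why ellipsoidal uncertainty leads to a tractable reformulation, consider a single uncertain linear constraint $(a + Pu)^\top x \le b$ with $\|u\| \le 1$. By Cauchy–Schwarz, the worst case over the ball has the closed form $a^\top x + \|P^\top x\|$, which is exactly a second-order cone constraint in $x$. The sketch below checks this standard fact numerically on an arbitrary instance (all data are made up for illustration):

```python
import numpy as np

# Hypothetical instance: uncertain constraint (a + P u)^T x <= b with ||u|| <= 1.
rng = np.random.default_rng(0)
n, d = 4, 3
a = rng.normal(size=n)
P = rng.normal(size=(n, d))
x = rng.normal(size=n)

# Standard fact: sup_{||u|| <= 1} (a + P u)^T x = a^T x + ||P^T x||,
# attained at u* = P^T x / ||P^T x||  (Cauchy-Schwarz).
w = P.T @ x
worst = a @ x + np.linalg.norm(w)

u_star = w / np.linalg.norm(w)
attained = (a + P @ u_star) @ x

# Monte Carlo over the unit sphere never exceeds the closed form.
U = rng.normal(size=(20000, d))
U /= np.linalg.norm(U, axis=1, keepdims=True)
mc = a @ x + (U @ w).max()
```

The bilinear uncertainty treated later in this thesis, where both $\hat\gamma$ and $(\hat A, \hat b)$ vary, does not admit such a simple closed form, which is what motivates the duality-based SDP reformulation.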

On the other hand, in game theory, there have been a large number of studies on games with uncertain data. Among them, the new concept of robust Nash equilibrium has recently attracted attention.


Hayashi, Yamashita and Fukushima [26], and Aghassi and Bertsimas [2]*1 proposed models in which each player makes a decision according to the idea of robust optimization. Aghassi et al. [2] considered the robust Nash equilibrium for $N$-person games in which each player solves a linear programming (LP) problem. Moreover, they proposed a method for solving the robust Nash equilibrium problem with convex polyhedral uncertainty sets. Hayashi et al. [26] defined the concept of robust Nash equilibria for bimatrix games. Under the assumption that the uncertainty sets are expressed by means of the Euclidean or the Frobenius norm, they showed that each player's problem reduces to an SOCP and that the robust Nash equilibrium problem can be reformulated as a second-order cone complementarity problem (SOCCP) [22, 25]. In addition, Hayashi et al. [26] studied robust Nash equilibrium problems in which the uncertainty is contained in both the opponents' strategies and each player's cost parameters, whereas Aghassi et al. [2] studied only the latter case. More recently, Nishimura, Hayashi and Fukushima [33] extended the definition of robust Nash equilibria in [2] and [26] to $N$-person non-cooperative games with nonlinear cost functions. In particular, they showed the existence of robust Nash equilibria under milder assumptions and gave some sufficient conditions for the uniqueness of the robust Nash equilibrium. In addition, they reformulated certain classes of robust Nash equilibrium problems as SOCCPs. However, in reformulating the robust Nash equilibrium problem as an SOCCP, Hayashi et al. [26] and Nishimura et al. [33] dealt only with the case where the uncertainty is contained in either the opponents' strategies or each player's cost parameters.

In this paper, we first focus on a special class of linear programs (LPs) with uncertain data. To such a problem, we apply the strong duality in nonconvex quadratic optimization problems with two quadratic constraints studied by Beck and Eldar [5], and reformulate its robust counterpart as an SDP. In particular, when the uncertainty sets are spherical, we further show that those two problems are equivalent. Also, by using the same technique, we reformulate the robust counterpart of an SOCP with uncertain data as an SDP. In this reformulation, we emphasize that the uncertainty set is different from the one considered by Ben-Tal et al. [14]. We apply these ideas to game theory as well. In particular, we show that the robust Nash equilibrium problem in which uncertainty is contained in both the opponents' strategies and each player's cost parameters can be reduced to a semidefinite complementarity problem (SDCP) [18, 37]. Finally, we give some numerical results to show that the resulting SDPs and SDCPs are efficiently solvable.

This paper is organized as follows. In Section 2, we review the strong duality in nonconvex quadratic optimization problems with two quadratic constraints, which plays a key role in the SDP reformulation of the robust counterpart. In Section 3, we reformulate the robust counterpart of an LP with uncertain data as an SDP. In Section 4, we reformulate the robust counterpart of an SOCP with uncertain data as an SDP. In Section 5, we first formulate the robust Nash equilibrium problem, and show that it reduces to an SDCP under appropriate assumptions. In Section 6, we give some numerical

*1 In [2] a robust Nash equilibrium is called a robust-optimization equilibrium.


results to show the validity of our reformulation and the behavior of obtained solutions.

Throughout the paper, we use the following notations. For a set $X$, $\mathcal{P}(X)$ denotes the set consisting of all subsets of $X$. $\mathbb{R}^n_+$ denotes the nonnegative orthant in $\mathbb{R}^n$, that is, $\mathbb{R}^n_+ := \{x \in \mathbb{R}^n \mid x_i \ge 0 \ (i=1,\dots,n)\}$. $\mathcal{S}^n$ denotes the set of $n \times n$ real symmetric matrices, and $\mathcal{S}^n_+$ denotes the cone of positive semidefinite matrices in $\mathcal{S}^n$. For a vector $x \in \mathbb{R}^n$, $\|x\|$ denotes the Euclidean norm defined by $\|x\| := \sqrt{x^\top x}$. For a matrix $M = (M_{ij}) \in \mathbb{R}^{m \times n}$, $\|M\|_F$ is the Frobenius norm defined by $\|M\|_F := \bigl(\sum_{i=1}^{m} \sum_{j=1}^{n} (M_{ij})^2\bigr)^{1/2}$, $\|M\|_2$ is the $\ell_2$-norm defined by $\|M\|_2 := \max_{x \ne 0} \|Mx\|/\|x\|$, and $\ker M$ denotes the kernel of the matrix $M$, i.e., $\ker M := \{x \in \mathbb{R}^n \mid Mx = 0\}$. $B(x,r)$ denotes the closed sphere with center $x$ and radius $r$, i.e., $B(x,r) := \{y \in \mathbb{R}^n \mid \|y - x\| \le r\}$. For a problem (P), $\mathrm{val}(P)$ denotes its optimal value.
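The matrix quantities above can be computed numerically in a few lines; the small matrix below is arbitrary and serves only to make the definitions concrete:

```python
import numpy as np

M = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

# Frobenius norm by its entrywise definition: sqrt(sum of squared entries)
fro = np.sqrt((M ** 2).sum())

# l2 (spectral) norm: max ||Mx|| / ||x||, i.e. the largest singular value of M
two = np.linalg.svd(M, compute_uv=False)[0]

# dim ker M = (number of columns) - rank(M); this M has full column rank
dim_ker = M.shape[1] - np.linalg.matrix_rank(M)
```

Note that $\|M\|_2 \le \|M\|_F$ always holds, since the Frobenius norm sums all squared singular values while the spectral norm keeps only the largest.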

2 Strong duality in nonconvex quadratic optimization with two quadratic constraints

In this section, we study the duality theory for nonconvex quadratic programming problems with two quadratic constraints. This theory plays a significant role in reformulating the robust optimization problem as an SDP. In particular, we present sufficient conditions, shown by Beck and Eldar [5], under which there is no duality gap.

We consider the following optimization problem:
\[
\mathrm{(QP)} \quad
\begin{array}{ll}
\mathrm{minimize} & f_0(x) \\
\mathrm{subject\ to} & f_1(x) \ge 0, \quad f_2(x) \ge 0,
\end{array} \tag{2.1}
\]
where $f_j$ $(j=0,1,2)$ are defined by $f_j(x) := x^\top A_j x + 2 b_j^\top x + c_j$ with symmetric matrices $A_j \in \mathbb{R}^{n \times n}$, vectors $b_j \in \mathbb{R}^n$, and scalars $c_j \in \mathbb{R}$.

We first consider the Lagrangian dual problem to QP (2.1). The Lagrangian function $L$ for QP (2.1) is defined by
\[
L(x,\alpha,\beta) =
\begin{cases}
x^\top A_0 x + 2b_0^\top x + c_0 - \alpha\,(x^\top A_1 x + 2b_1^\top x + c_1) - \beta\,(x^\top A_2 x + 2b_2^\top x + c_2), & \alpha, \beta \ge 0, \\
-\infty, & \text{otherwise},
\end{cases}
\]
with Lagrange multipliers $\alpha$ and $\beta$. By introducing an auxiliary variable $\lambda \in \mathbb{R} \cup \{-\infty\}$, we have
\[
\begin{aligned}
\sup_{\alpha,\beta \ge 0}\, \inf_{x \in \mathbb{R}^n} L(x,\alpha,\beta)
&= \sup_{\alpha,\beta \ge 0} \bigl\{ \lambda \bigm| L(x,\alpha,\beta) \ge \lambda, \ \forall x \in \mathbb{R}^n \bigr\} \\
&= \sup_{\alpha,\beta \ge 0} \left\{ \lambda \,\middle|\,
\begin{bmatrix} x \\ 1 \end{bmatrix}^\top
\left( \begin{bmatrix} A_0 & b_0 \\ b_0^\top & c_0 - \lambda \end{bmatrix}
- \alpha \begin{bmatrix} A_1 & b_1 \\ b_1^\top & c_1 \end{bmatrix}
- \beta \begin{bmatrix} A_2 & b_2 \\ b_2^\top & c_2 \end{bmatrix} \right)
\begin{bmatrix} x \\ 1 \end{bmatrix} \ge 0, \ \forall x \in \mathbb{R}^n \right\} \\
&= \sup_{\alpha,\beta \ge 0} \left\{ \lambda \,\middle|\,
\begin{bmatrix} A_0 & b_0 \\ b_0^\top & c_0 - \lambda \end{bmatrix}
- \alpha \begin{bmatrix} A_1 & b_1 \\ b_1^\top & c_1 \end{bmatrix}
- \beta \begin{bmatrix} A_2 & b_2 \\ b_2^\top & c_2 \end{bmatrix} \succeq 0 \right\}.
\end{aligned}
\]


Hence, the Lagrangian dual problem to (QP) is written as
\[
\mathrm{(D)} \quad
\begin{array}{ll}
\displaystyle \mathop{\mathrm{maximize}}_{\alpha,\beta,\lambda} & \lambda \\
\mathrm{subject\ to} &
\begin{bmatrix} A_0 & b_0 \\ b_0^\top & c_0 - \lambda \end{bmatrix}
- \alpha \begin{bmatrix} A_1 & b_1 \\ b_1^\top & c_1 \end{bmatrix}
- \beta \begin{bmatrix} A_2 & b_2 \\ b_2^\top & c_2 \end{bmatrix} \succeq 0, \\
& \alpha \ge 0, \ \beta \ge 0, \ \lambda \in \mathbb{R}.
\end{array} \tag{2.2}
\]
Since (D) is an SDP, its dual problem is
\[
\mathrm{(SDR)} \quad
\begin{array}{ll}
\mathrm{minimize} & \mathrm{tr}(M_0 X) \\
\mathrm{subject\ to} & \mathrm{tr}(M_1 X) \ge 0, \quad \mathrm{tr}(M_2 X) \ge 0, \\
& X_{n+1,\,n+1} = 1, \quad X \succeq 0,
\end{array} \tag{2.3}
\]
where
\[
M_j = \begin{bmatrix} A_j & b_j \\ b_j^\top & c_j \end{bmatrix} \quad (j=0,1,2).
\]

Now let $\chi(x)$ be the rank-one positive semidefinite symmetric matrix defined by $\chi(x) := \binom{x}{1}\binom{x}{1}^{\!\top}$. Then we have $f_j(x) = \binom{x}{1}^{\!\top} M_j \binom{x}{1} = \mathrm{tr}(M_j\,\chi(x))$ for $j=0,1,2$. Thus problem (2.1) is rewritten as
\[
\begin{array}{ll}
\mathrm{minimize} & \mathrm{tr}(M_0\,\chi(x)) \\
\mathrm{subject\ to} & \mathrm{tr}(M_1\,\chi(x)) \ge 0, \quad \mathrm{tr}(M_2\,\chi(x)) \ge 0.
\end{array} \tag{2.4}
\]
Problem (2.3) can be seen as a relaxation of problem (2.4), since the rank-one condition on $\chi(x)$ is removed. In other words, problem (2.3) is the so-called semidefinite relaxation [11] of (2.4). From the above argument, we have $\mathrm{val(QP)} \le \mathrm{val(SDR)}$. Hence, by the weak duality theorem, we have

\[
\mathrm{val(QP)} \le \mathrm{val(SDR)} \le \mathrm{val(D)}.
\]
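The homogenization identity $f_j(x) = \mathrm{tr}(M_j\,\chi(x))$ underlying the relaxation can be spot-checked numerically; the symmetric instance below is randomly generated for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.normal(size=(n, n))
A = (A + A.T) / 2                     # a symmetric A_j
b = rng.normal(size=n)
c = 0.7
x = rng.normal(size=n)

# Homogenized matrix M_j = [[A, b], [b^T, c]] and chi(x) = [x;1][x;1]^T
M = np.block([[A, b[:, None]], [b[None, :], np.array([[c]])]])
xh = np.append(x, 1.0)
chi = np.outer(xh, xh)                # rank-one PSD matrix

f_val = x @ A @ x + 2 * b @ x + c     # f_j(x) by its definition
tr_val = np.trace(M @ chi)            # tr(M_j chi(x))
```

Dropping the rank-one restriction on $\chi(x)$ while keeping $X \succeq 0$ and $X_{n+1,n+1}=1$ is exactly what turns (2.4) into the semidefinite relaxation (2.3).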

Finally, we study the strong duality. Beck and Eldar [5] considered a nonconvex quadratic optimization problem in the complex space and its dual problem, and showed that they have zero duality gap under strict feasibility and boundedness assumptions. Furthermore, they extended the idea to the nonconvex quadratic optimization problem in the real space, and provided sufficient conditions for zero duality gap among (QP), (D) and (SDR).

Theorem 2.1. [5, Theorem 3.5] Suppose that both (QP) and (D) are strictly feasible and that there exist $\hat\alpha, \hat\beta \in \mathbb{R}$ such that $\hat\alpha A_1 + \hat\beta A_2 \succ 0$. Let $(\bar\lambda, \bar\alpha, \bar\beta)$ be an optimal solution of the dual problem (D). If
\[
\dim\bigl(\ker(A_0 - \bar\alpha A_1 - \bar\beta A_2)\bigr) \ne 1,
\]
then $\mathrm{val(QP)} = \mathrm{val(D)} = \mathrm{val(SDR)}$.
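The kernel-dimension condition of Theorem 2.1 is easy to verify numerically once the dual multipliers are known, by counting the (numerically) zero eigenvalues of the symmetric matrix $A_0 - \bar\alpha A_1 - \bar\beta A_2$. The diagonal instance and multipliers below are made up purely to illustrate the check:

```python
import numpy as np

# Hypothetical diagonal instance (not from the thesis): with multipliers
# (alpha, beta) = (1, 1), A0 - alpha*A1 - beta*A2 = diag(0, 0, 1).
A0 = np.diag([2.0, 2.0, 6.0])
A1 = np.eye(3)
A2 = np.diag([1.0, 1.0, 4.0])
alpha_bar, beta_bar = 1.0, 1.0

S = A0 - alpha_bar * A1 - beta_bar * A2
# dim ker S = number of numerically zero eigenvalues of the symmetric matrix S
dim_ker = int((np.abs(np.linalg.eigvalsh(S)) < 1e-10).sum())
```

Here the kernel has dimension 2, so the condition $\dim(\ker(\cdot)) \ne 1$ holds and strong duality would follow for this instance (granted the feasibility assumptions).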


3 SDP reformulation for a class of robust linear programming problems

In this section, we focus on the following uncertain LP:
\[
\begin{array}{ll}
\displaystyle \mathop{\mathrm{minimize}}_{x} & (\hat\gamma_0)^\top (\hat A_0 x + \hat b_0) \\
\mathrm{subject\ to} & (\hat\gamma_i)^\top (\hat A_i x + \hat b_i) \le 0 \quad (i=1,\dots,K), \\
& x \in \Omega,
\end{array} \tag{3.1}
\]
where $\Omega$ is a given closed convex set with no uncertainty. Let $U_i$ and $V_i$ be the uncertainty sets for $(\hat A_i, \hat b_i) \in \mathbb{R}^{m_i \times (n+1)}$ and $\hat\gamma_i \in \mathbb{R}^{m_i}$, respectively. Then, the robust counterpart (RC) for (3.1) can be written as
\[
\begin{array}{ll}
\displaystyle \mathop{\mathrm{minimize}}_{x} & \displaystyle \sup_{(\hat A_0, \hat b_0) \in U_0,\ \hat\gamma_0 \in V_0} (\hat\gamma_0)^\top (\hat A_0 x + \hat b_0) \\
\mathrm{subject\ to} & (\hat\gamma_i)^\top (\hat A_i x + \hat b_i) \le 0 \quad \forall (\hat A_i, \hat b_i) \in U_i, \ \forall \hat\gamma_i \in V_i \quad (i=1,\dots,K), \\
& x \in \Omega.
\end{array} \tag{3.2}
\]

The main purpose of this section is to show that RC (3.2) can be reformulated as an SDP [36], which can be solved by existing algorithms such as the primal-dual interior-point method. One may think that the structures of LP (3.1) and its RC (3.2) are much more special than those of the existing robust optimization models for LP [10]. However, we note that the robust optimization technique in this section plays an important role in the robust SOCPs and the robust Nash equilibrium problems considered in the subsequent sections. We also note that the uncertain LP (3.1) is equivalent to the LP considered by Ben-Tal et al. [10, 11] when $V_i$ is the finite set given by $V_i := \{e_1^{(m_i)}, \dots, e_{m_i}^{(m_i)}\}$, where $e_k^{(m_i)}$ is the unit vector with 1 in the $k$-th element and 0 elsewhere.

We first make the following assumption on the uncertainty sets $U_i$ and $V_i$.

Assumption 1. For $i = 0, 1, \dots, K$, the uncertainty sets $U_i$ and $V_i$ are expressed as
\[
U_i := \Bigl\{ (\hat A_i, \hat b_i) \Bigm| (\hat A_i, \hat b_i) = (A_{i0}, b_{i0}) + \sum_{j=1}^{s_i} u_{ij}\,(A_{ij}, b_{ij}), \ (u^i)^\top u^i \le 1 \Bigr\},
\]
\[
V_i := \Bigl\{ \hat\gamma \Bigm| \hat\gamma = \gamma_{i0} + \sum_{j=1}^{t_i} v_{ij}\,\gamma_{ij}, \ (v^i)^\top v^i \le 1 \Bigr\},
\]
respectively, where $A_{ij} \in \mathbb{R}^{m_i \times n}$, $b_{ij} \in \mathbb{R}^{m_i}$ $(j = 0, 1, \dots, s_i)$ and $\gamma_{ij} \in \mathbb{R}^{m_i}$ $(j = 0, 1, \dots, t_i)$ are given matrices and vectors.

Moreover, we introduce the following proposition, which plays a crucial role in reformulating RC (3.2) as an SDP.


Proposition 3.1. Consider the following optimization problem:
\[
\begin{array}{ll}
\displaystyle \mathop{\mathrm{maximize}}_{u \in \mathbb{R}^s,\ v \in \mathbb{R}^t} & \xi(v)^\top M(u)\,\eta \\
\mathrm{subject\ to} & u^\top u \le 1, \quad v^\top v \le 1,
\end{array} \tag{3.3}
\]
where $\eta \in \mathbb{R}^n$ is a given constant vector, and $M : \mathbb{R}^s \to \mathbb{R}^{m \times n}$ and $\xi : \mathbb{R}^t \to \mathbb{R}^m$ are defined by
\[
M(u) = M_0 + \sum_{j=1}^{s} u_j M_j, \qquad \xi(v) = \xi_0 + \sum_{j=1}^{t} v_j \xi_j \tag{3.4}
\]
with given constants $M_j \in \mathbb{R}^{m \times n}$ $(j=0,1,\dots,s)$ and $\xi_j \in \mathbb{R}^m$ $(j=0,1,\dots,t)$. Then, the following two statements hold:

(a) The Lagrangian dual problem of (3.3) is written as
\[
\begin{array}{ll}
\displaystyle \mathop{\mathrm{minimize}}_{\alpha,\beta,\lambda} & -\lambda \\
\mathrm{subject\ to} &
\begin{bmatrix} P_0 & q \\ q^\top & r - \lambda \end{bmatrix}
\succeq \alpha \begin{bmatrix} P_1 & 0 \\ 0^\top & 1 \end{bmatrix}
+ \beta \begin{bmatrix} P_2 & 0 \\ 0^\top & 1 \end{bmatrix}, \quad
\alpha \ge 0, \ \beta \ge 0, \ \lambda \in \mathbb{R},
\end{array} \tag{3.5}
\]
with
\[
P_0 = -\frac{1}{2} \begin{bmatrix} 0 & (\Xi^\top \Phi)^\top \\ \Xi^\top \Phi & 0 \end{bmatrix}, \quad
q = -\frac{1}{2} \begin{bmatrix} \Phi^\top \xi_0 \\ \Xi^\top M_0 \eta \end{bmatrix}, \quad
r = -\xi_0^\top M_0 \eta,
\]
\[
P_1 = \begin{bmatrix} -I_s & 0 \\ 0 & 0 \end{bmatrix}, \quad
P_2 = \begin{bmatrix} 0 & 0 \\ 0 & -I_t \end{bmatrix}, \quad
\Xi = \bigl[\, \xi_1 \ \cdots \ \xi_t \,\bigr], \quad
\Phi = \bigl[\, M_1 \eta \ \cdots \ M_s \eta \,\bigr]. \tag{3.6}
\]
Moreover, it always holds that $\mathrm{val}(3.3) \le \mathrm{val}(3.5)$.

(b) If
\[
\dim\bigl(\ker(P_0 - \alpha^* P_1 - \beta^* P_2)\bigr) \ne 1 \tag{3.7}
\]
for the optimum $(\alpha^*, \beta^*, \lambda^*)$ of the dual problem (3.5), then it holds that $\mathrm{val}(3.3) = \mathrm{val}(3.5)$.

Proof. From the definitions of $M(u)$ and $\xi(v)$, the objective function of problem (3.3) can be rewritten as
\[
\begin{aligned}
\xi(v)^\top M(u)\,\eta &= (\xi_0 + \Xi v)^\top (M_0 \eta + \Phi u) \\
&= v^\top \Xi^\top \Phi u + \xi_0^\top \Phi u + (M_0 \eta)^\top \Xi v + \xi_0^\top M_0 \eta \\
&= -y^\top P_0 y - 2 q^\top y - r,
\end{aligned}
\]
where $y := \binom{u}{v}$. Hence, problem (3.3) is equivalent to the following optimization problem:
\[
\begin{array}{ll}
\displaystyle \mathop{\mathrm{maximize}}_{y \in \mathbb{R}^{s+t}} & -y^\top P_0 y - 2 q^\top y - r \\
\mathrm{subject\ to} & y^\top P_1 y + 1 \ge 0, \quad y^\top P_2 y + 1 \ge 0.
\end{array} \tag{3.8}
\]


Now, notice that problem (3.8) is a nonconvex quadratic optimization problem with two quadratic constraints, since $P_0$ is indefinite in general. Hence, from the results stated in Section 2, problem (3.5) serves as the Lagrangian dual problem of (3.3).

Next we show (b). From Theorem 2.1, it suffices to show that the following three statements hold:

(i) Both problems (3.3) and (3.5) are strictly feasible.

(ii) There exist $\hat\alpha \in \mathbb{R}$ and $\hat\beta \in \mathbb{R}$ such that $\hat\alpha P_1 + \hat\beta P_2 \succ 0$.

(iii) $\dim(\ker(P_0 - \alpha^* P_1 - \beta^* P_2)) \ne 1$ for the optimum $(\alpha^*, \beta^*, \lambda^*)$ of problem (3.5).

Problem (3.3) is obviously strictly feasible, since $(u,v) = (0,0)$ is an interior point of the feasible region. Also, problem (3.5) is strictly feasible, since the inequalities in the constraints hold strictly when we choose sufficiently large $\alpha, \beta$ and a sufficiently small $\lambda$. Thus, we have (i). We can readily see (ii), since $\hat\alpha P_1 + \hat\beta P_2 \succ 0$ for any $\hat\alpha, \hat\beta$ such that $\hat\alpha, \hat\beta < 0$. We also have (iii) from the assumption of the theorem. Hence, the optimal values of (3.3) and (3.5) are equal.
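The algebraic identity at the heart of the proof, $\xi(v)^\top M(u)\eta = -y^\top P_0 y - 2q^\top y - r$ with $y = (u^\top, v^\top)^\top$, can be spot-checked numerically on random data. The dimensions below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, s, t = 3, 2, 4, 2
Ms = [rng.normal(size=(m, n)) for _ in range(s + 1)]   # M_0, M_1, ..., M_s
xis = [rng.normal(size=m) for _ in range(t + 1)]       # xi_0, xi_1, ..., xi_t
eta = rng.normal(size=n)

Xi = np.column_stack(xis[1:])                          # Xi  = [xi_1 ... xi_t]
Phi = np.column_stack([Mj @ eta for Mj in Ms[1:]])     # Phi = [M_1 eta ... M_s eta]

# P_0, q, r as in (3.6)
P0 = -0.5 * np.block([[np.zeros((s, s)), (Xi.T @ Phi).T],
                      [Xi.T @ Phi,       np.zeros((t, t))]])
q = -0.5 * np.concatenate([Phi.T @ xis[0], Xi.T @ (Ms[0] @ eta)])
r = -xis[0] @ (Ms[0] @ eta)

# Evaluate both sides of the identity at a random (u, v)
u, v = rng.normal(size=s), rng.normal(size=t)
y = np.concatenate([u, v])
lhs = (xis[0] + Xi @ v) @ ((Ms[0] + sum(uj * Mj for uj, Mj in zip(u, Ms[1:]))) @ eta)
rhs = -y @ P0 @ y - 2 * q @ y - r
```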

Next, by using the above proposition, we reformulate RC (3.2) as an SDP. Note that RC (3.2) is rewritten as the following optimization problem:
\[
\begin{array}{ll}
\displaystyle \mathop{\mathrm{minimize}}_{x} & f_0(x) := \max\bigl\{ (\hat\gamma_0)^\top (\hat A_0 x + \hat b_0) \bigm| (\hat A_0, \hat b_0) \in U_0, \ \hat\gamma_0 \in V_0 \bigr\} \\
\mathrm{subject\ to} & f_i(x) := \max\bigl\{ (\hat\gamma_i)^\top (\hat A_i x + \hat b_i) \bigm| (\hat A_i, \hat b_i) \in U_i, \ \hat\gamma_i \in V_i \bigr\} \le 0 \quad (i=1,\dots,K), \\
& x \in \Omega.
\end{array} \tag{3.9}
\]

Now, for any fixed $x \in \mathbb{R}^n$, we evaluate $\max\{ (\hat\gamma_i)^\top(\hat A_i x + \hat b_i) \mid (\hat A_i,\hat b_i) \in U_i, \ \hat\gamma_i \in V_i \}$ for $i = 0,1,\dots,K$. By letting $\eta := \binom{x}{1}$, $M_j := (A_{ij}, b_{ij})$ and $\xi_j := \gamma_{ij}$ in Proposition 3.1, we have the following inequality for each $i = 0,1,\dots,K$:
\[
\max\bigl\{ (\hat\gamma_i)^\top (\hat A_i x + \hat b_i) \bigm| (\hat A_i, \hat b_i) \in U_i, \ \hat\gamma_i \in V_i \bigr\}
\le \min\left\{ -\lambda_i \,\middle|\,
\begin{bmatrix} P_{0i}(x) & q_i(x) \\ q_i(x)^\top & r_i(x) - \lambda_i \end{bmatrix}
\succeq \alpha_i \begin{bmatrix} P_{1i} & 0 \\ 0^\top & 1 \end{bmatrix}
+ \beta_i \begin{bmatrix} P_{2i} & 0 \\ 0^\top & 1 \end{bmatrix}, \
\alpha_i \ge 0, \ \beta_i \ge 0, \ \lambda_i \in \mathbb{R} \right\}, \tag{3.10}
\]
where $P_{0i}(x)$, $q_i(x)$ and $r_i(x)$ are defined by
\[
P_{0i}(x) = -\frac{1}{2} \begin{bmatrix} 0 & (\Gamma_i^\top \Phi_i(x))^\top \\ \Gamma_i^\top \Phi_i(x) & 0 \end{bmatrix}, \quad
q_i(x) = -\frac{1}{2} \begin{bmatrix} \Phi_i(x)^\top \gamma_{i0} \\ \Gamma_i^\top (A_{i0} x + b_{i0}) \end{bmatrix}, \quad
r_i(x) = -\gamma_{i0}^\top (A_{i0} x + b_{i0}),
\]
\[
P_{1i} = \begin{bmatrix} -I_{s_i} & 0 \\ 0 & 0 \end{bmatrix}, \quad
P_{2i} = \begin{bmatrix} 0 & 0 \\ 0 & -I_{t_i} \end{bmatrix}, \quad
\Gamma_i = \bigl[\, \gamma_{i1} \ \cdots \ \gamma_{i t_i} \,\bigr], \quad
\Phi_i(x) = \bigl[\, A_{i1} x + b_{i1} \ \cdots \ A_{i s_i} x + b_{i s_i} \,\bigr]. \tag{3.11}
\]

Moreover, we consider the following problem, in which $\max\{ (\hat\gamma_i)^\top(\hat A_i x + \hat b_i) \mid (\hat A_i,\hat b_i) \in U_i, \ \hat\gamma_i \in V_i \}$ in (3.9) is replaced by the right-hand side of (3.10):


\[
\begin{array}{ll}
\displaystyle \mathop{\mathrm{minimize}}_{x} & g_0(x) := \min\left\{ -\lambda_0 \,\middle|\,
\begin{bmatrix} P_{00}(x) & q_0(x) \\ q_0(x)^\top & r_0(x) - \lambda_0 \end{bmatrix}
\succeq \alpha_0 \begin{bmatrix} P_{10} & 0 \\ 0^\top & 1 \end{bmatrix}
+ \beta_0 \begin{bmatrix} P_{20} & 0 \\ 0^\top & 1 \end{bmatrix}, \
\alpha_0 \ge 0, \ \beta_0 \ge 0, \ \lambda_0 \in \mathbb{R} \right\} \\
\mathrm{subject\ to} & g_i(x) := \min\left\{ -\lambda_i \,\middle|\,
\begin{bmatrix} P_{0i}(x) & q_i(x) \\ q_i(x)^\top & r_i(x) - \lambda_i \end{bmatrix}
\succeq \alpha_i \begin{bmatrix} P_{1i} & 0 \\ 0^\top & 1 \end{bmatrix}
+ \beta_i \begin{bmatrix} P_{2i} & 0 \\ 0^\top & 1 \end{bmatrix}, \
\alpha_i \ge 0, \ \beta_i \ge 0, \ \lambda_i \in \mathbb{R} \right\} \le 0 \quad (i=1,\dots,K), \\
& x \in \Omega,
\end{array} \tag{3.12}
\]
which is equivalent to the following SDP:
\[
\begin{array}{ll}
\displaystyle \mathop{\mathrm{minimize}}_{x,\alpha,\beta,\lambda} & -\lambda_0 \\
\mathrm{subject\ to} &
\begin{bmatrix} P_{0i}(x) & q_i(x) \\ q_i(x)^\top & r_i(x) - \lambda_i \end{bmatrix}
\succeq \alpha_i \begin{bmatrix} P_{1i} & 0 \\ 0^\top & 1 \end{bmatrix}
+ \beta_i \begin{bmatrix} P_{2i} & 0 \\ 0^\top & 1 \end{bmatrix} \quad (i=0,1,\dots,K), \\
& \alpha = (\alpha_0, \alpha_1, \dots, \alpha_K) \in \mathbb{R}^{K+1}_+, \quad
\beta = (\beta_0, \beta_1, \dots, \beta_K) \in \mathbb{R}^{K+1}_+, \quad
\lambda = (\lambda_0, \lambda_1, \dots, \lambda_K) \in \mathbb{R} \times \mathbb{R}^{K}_+, \\
& x \in \Omega.
\end{array} \tag{3.13}
\]

Here, notice that, if the matrix inequalities in (3.13) hold with some λ i ≥ 0 ( i = 1 , . . . , K ) , then they also hold for λ i = 0. Hence, we can set λ i = 0 ( i = 1 , . . . , K ) without changing the optimal value of (3.13). That is, SDP (3.13) is equivalent to the following SDP:

\[
\begin{array}{ll}
\displaystyle \mathop{\mathrm{minimize}}_{x,\alpha,\beta,\lambda_0} & -\lambda_0 \\
\mathrm{subject\ to} &
\begin{bmatrix} P_{00}(x) & q_0(x) \\ q_0(x)^\top & r_0(x) - \lambda_0 \end{bmatrix}
\succeq \alpha_0 \begin{bmatrix} P_{10} & 0 \\ 0^\top & 1 \end{bmatrix}
+ \beta_0 \begin{bmatrix} P_{20} & 0 \\ 0^\top & 1 \end{bmatrix}, \\
& \begin{bmatrix} P_{0i}(x) & q_i(x) \\ q_i(x)^\top & r_i(x) \end{bmatrix}
\succeq \alpha_i \begin{bmatrix} P_{1i} & 0 \\ 0^\top & 1 \end{bmatrix}
+ \beta_i \begin{bmatrix} P_{2i} & 0 \\ 0^\top & 1 \end{bmatrix} \quad (i=1,\dots,K), \\
& \alpha = (\alpha_0, \alpha_1, \dots, \alpha_K) \in \mathbb{R}^{K+1}_+, \quad
\beta = (\beta_0, \beta_1, \dots, \beta_K) \in \mathbb{R}^{K+1}_+, \quad
\lambda_0 \in \mathbb{R}, \quad x \in \Omega.
\end{array} \tag{3.14}
\]
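The argument that $\lambda_i$ $(i=1,\dots,K)$ can be fixed at 0 rests on a simple linear-algebra fact: adding a nonnegative number to a diagonal entry of a positive semidefinite matrix keeps it positive semidefinite (the added matrix $\lambda_i e e^\top$ is itself PSD). A quick numerical check on an arbitrary PSD matrix standing in for the LMI slack:

```python
import numpy as np

rng = np.random.default_rng(6)
k = 4
B = rng.normal(size=(k, k))
D = B @ B.T                      # an arbitrary PSD matrix (stands in for the LMI slack)
lam = 0.7                        # any lambda_i >= 0

E = np.zeros((k, k))
E[-1, -1] = lam                  # increase only the last diagonal entry

min_eig_before = np.linalg.eigvalsh(D)[0]
min_eig_after = np.linalg.eigvalsh(D + E)[0]
```

By Weyl's inequality, the smallest eigenvalue can only increase, so feasibility of the matrix inequality is preserved when $\lambda_i$ is replaced by 0.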

Consequently, we have $\mathrm{val}(3.9) \le \mathrm{val}(3.12) = \mathrm{val}(3.13) = \mathrm{val}(3.14)$, where the inequality is due to $f_i(x) \le g_i(x)$ for any $x \in \mathbb{R}^n$ and $i = 0,1,\dots,K$. Moreover, we can show $\mathrm{val}(3.9) = \mathrm{val}(3.12)$ under the following assumption.

Assumption 2. Let $z^* := (x^*, \alpha^*, \beta^*, \lambda_0^*)$ be an optimum of SDP (3.14). Then, there exists $\varepsilon > 0$ such that
\[
\dim\bigl(\ker(P_{0i}(x) - \alpha_i P_{1i} - \beta_i P_{2i})\bigr) \ne 1 \quad (i=0,1,\dots,K)
\]
for all $(x, \alpha, \beta, \lambda_0) \in B(z^*, \varepsilon)$.


Theorem 3.2. Suppose that Assumption 1 holds, and let $(x^*, \alpha^*, \beta^*, \lambda_0^*)$ be an optimum of SDP (3.14). Then $x^*$ is feasible for RC (3.2), and $\mathrm{val}(3.14)$ is an upper bound of $\mathrm{val}(3.2)$. Moreover, $x^*$ solves RC (3.2) if Assumption 2 also holds.

Proof. Since the first part is immediate from $f_i(x) \le g_i(x)$ for any $x \in \mathbb{R}^n$ and $i=0,1,\dots,K$, we only show the last part.

Define $X, Y \subseteq \mathbb{R}^n$ and $\theta, \omega : \mathbb{R}^n \to (-\infty, \infty]$ by
\[
\begin{aligned}
X &= \{x \in \mathbb{R}^n \mid f_i(x) \le 0 \ (i=1,\dots,K)\} \cap \Omega, \\
Y &= \{x \in \mathbb{R}^n \mid g_i(x) \le 0 \ (i=1,\dots,K)\} \cap \Omega, \\
\theta(x) &= f_0(x) + \delta_X(x), \\
\omega(x) &= g_0(x) + \delta_Y(x),
\end{aligned}
\]
where $\delta_X$ and $\delta_Y$ denote the indicator functions [34] of $X$ and $Y$, respectively. Then, we can see that RC (3.2) and SDP (3.14) are equivalent to the unconstrained minimization problems with objective functions $\theta$ and $\omega$, respectively. In addition, since the functions $f_i, g_i$ $(i=0,1,\dots,K)$ are proper and convex [15, Proposition 1.2.4(c)], $\theta$ and $\omega$ are proper and convex, too.

Let $(x^*, \alpha^*, \beta^*, \lambda_0^*)$ be an arbitrary solution of SDP (3.14). Then, it is obvious that $x^*$ minimizes $\omega$. Moreover, from Proposition 3.1(b) and Assumption 2, there exists a closed neighborhood $B(x^*, \varepsilon)$ of $x^*$ such that $\theta(x) = \omega(x)$ for all $x \in B(x^*, \varepsilon)$. Hence, we have
\[
\theta(x) = \omega(x) \ge \omega(x^*) = \theta(x^*), \quad \forall x \in B(x^*, \varepsilon). \tag{3.15}
\]
Now, for contradiction, assume that $x^*$ is not a solution of RC (3.2). Then, there must exist $\bar x \in \mathbb{R}^n$ such that $\theta(\bar x) < \theta(x^*)$. Moreover, we have $\bar x \notin B(x^*, \varepsilon)$ from (3.15). Set $\alpha := \varepsilon/\|\bar x - x^*\|$ and $\tilde x := (1-\alpha) x^* + \alpha \bar x$. Then, $\alpha \in (0,1)$, since $\bar x \notin B(x^*, \varepsilon)$ implies $\|\bar x - x^*\| > \varepsilon$. Thus, we have
\[
\begin{aligned}
\theta(\tilde x) &= \theta\bigl((1-\alpha) x^* + \alpha \bar x\bigr) \\
&\le (1-\alpha)\,\theta(x^*) + \alpha\,\theta(\bar x) \\
&< (1-\alpha)\,\theta(x^*) + \alpha\,\theta(x^*) = \theta(x^*),
\end{aligned}
\]
where the first inequality follows from the convexity of $\theta$, and the last inequality follows from $\theta(\bar x) < \theta(x^*)$ and $\alpha > 0$. However, since $\|\tilde x - x^*\| = \alpha \|\bar x - x^*\| = \varepsilon$, we have $\tilde x \in B(x^*, \varepsilon)$, which implies $\theta(x^*) \le \theta(\tilde x)$ from (3.15). This is a contradiction. Hence, $x^*$ is an optimum of RC (3.2).

In order to see whether Assumption 2 holds or not, we generally have to check the function values at every point in a neighborhood of the optimum. However, in some situations, it can be guaranteed more easily. For example, suppose that at the optimum $z^* = (x^*, \alpha^*, \beta^*, \lambda_0^*)$,
\[
\dim\bigl(\ker(P_{0i}(x^*) - \alpha_i^* P_{1i} - \beta_i^* P_{2i})\bigr) = 0 \quad (i=0,1,\dots,K),
\]

or equivalently, $P_{0i}(x^*) - \alpha_i^* P_{1i} - \beta_i^* P_{2i} \succ 0$.*2 Then, by the continuity of $P_{0i}(x) - \alpha_i P_{1i} - \beta_i P_{2i}$, it also follows that $P_{0i}(x) - \alpha_i P_{1i} - \beta_i P_{2i} \succ 0$ for any $z = (x, \alpha, \beta, \lambda_0)$ sufficiently close to $z^*$.

Moreover, when the uncertainty sets $U_i$ and $V_i$ are spherical, Assumption 2 holds automatically. We will show this fact in the remainder of this section.

Assumption 3. Suppose that Assumption 1 holds. Moreover, for each $i = 0,1,\dots,K$, the matrices $(A_{ij}, b_{ij})$ $(j = 1,\dots,m_i(n+1))$ and the vectors $\gamma_{ij}$ $(j=1,\dots,t_i)$ $(t_i \ge 2)$ satisfy the following.

• For $(k,l) \in \{1,\dots,m_i\} \times \{1,\dots,n+1\}$,
\[
(A_{ij}, b_{ij}) = \rho_i\, e_k^{(m_i)} \bigl(e_l^{(n+1)}\bigr)^{\!\top} \quad \text{with } j := m_i(l-1) + k,
\]
where $\rho_i$ is a given nonnegative constant, and $e_r^{(p)}$ is the unit vector with 1 in the $r$-th element and 0 elsewhere.

• For any $(k,l) \in \{1,\dots,t_i\} \times \{1,\dots,t_i\}$,
\[
(\gamma_{ik})^\top \gamma_{il} = \sigma_i^2\, \delta_{kl},
\]
where $\sigma_i$ is a given nonnegative constant, and $\delta_{kl}$ denotes Kronecker's delta, i.e., $\delta_{kl} = 0$ for $k \ne l$ and $\delta_{kl} = 1$ for $k = l$.

Assumption 3 means that $U_i$ is an $m_i(n+1)$-dimensional sphere with radius $\rho_i$ in the $m_i(n+1)$-dimensional space and $V_i$ is a $t_i$-dimensional sphere with radius $\sigma_i$ in the $m_i$-dimensional space, i.e.,
\[
\begin{aligned}
U_i &= \bigl\{ (\hat A_i, \hat b_i) \bigm| (\hat A_i, \hat b_i) = (A_{i0}, b_{i0}) + (\delta A_i, \delta b_i), \ \|(\delta A_i, \delta b_i)\|_F \le \rho_i \bigr\} \subset \mathbb{R}^{m_i \times (n+1)}, \\
V_i &= \bigl\{ \hat\gamma_i \bigm| \hat\gamma_i = \gamma_{i0} + \delta\gamma_i, \ \|\delta\gamma_i\| \le \sigma_i, \ \delta\gamma_i \in \mathrm{span}\{\gamma_{ij}\}_{j=1}^{t_i} \bigr\} \subset \mathbb{R}^{m_i}.
\end{aligned}
\]

The following proposition provides sufficient conditions under which condition (3.7) in Proposition 3.1 holds. It also plays an important role in showing that Assumption 3 implies Assumption 2.

Proposition 3.3. Consider the optimization problem (3.3) with a given constant $\eta \in \mathbb{R}^n$ and functions $M : \mathbb{R}^s \to \mathbb{R}^{m \times n}$ and $\xi : \mathbb{R}^t \to \mathbb{R}^m$ defined by (3.4). Moreover, suppose that there exist nonnegative constants $\rho$ and $\sigma$ such that the following statements hold.

• $t, n \ge 2$.

• $s = mn$. Moreover, $M_j$ $(j=1,\dots,s)$ are given by
\[
M_j = \rho\, e_k^{(m)} \bigl(e_l^{(n)}\bigr)^{\!\top} \quad \text{with } j := m(l-1) + k,
\]
for each $k = 1,\dots,m$ and $l = 1,\dots,n$.

• For any $(k,l) \in \{1,\dots,t\} \times \{1,\dots,t\}$, $(\xi_k)^\top \xi_l = \sigma^2 \delta_{kl}$.

*2 By the constraints of SDP (3.14), $P_{0i}(x^*) - \alpha_i^* P_{1i} - \beta_i^* P_{2i} \succeq 0$ always holds at the optimum $(x^*, \alpha^*, \beta^*, \lambda_0^*)$.


Then, for $P_0$, $P_1$ and $P_2$ defined by (3.6), it holds that
\[
\dim\bigl(\ker(P_0 - \alpha P_1 - \beta P_2)\bigr) \ne 1
\]
for any $(\alpha, \beta) \in \mathbb{R} \times \mathbb{R}$, and hence $\mathrm{val}(3.3) = \mathrm{val}(3.5)$.

Proof. Let $P(\alpha,\beta) := P_0 - \alpha P_1 - \beta P_2$. Since $P(\alpha,\beta)$ is symmetric, it suffices to show that the multiplicity of the zero eigenvalue of $P(\alpha,\beta)$ can never be 1.

We first consider the matrices $\Xi$ and $\Phi$ defined by (3.6). By the assumptions, we have the following equalities:
\[
\Xi^\top \Xi = \bigl[\, \xi_1 \ \cdots \ \xi_t \,\bigr]^\top \bigl[\, \xi_1 \ \cdots \ \xi_t \,\bigr] = \sigma^2 I_t,
\]
\[
\begin{aligned}
\Phi &= \bigl[\, M_1 \eta \ \cdots \ M_s \eta \,\bigr]
= \rho \bigl[\, e_1^{(m)} (e_1^{(n)})^\top \eta \ \ e_2^{(m)} (e_1^{(n)})^\top \eta \ \cdots \ e_m^{(m)} (e_n^{(n)})^\top \eta \,\bigr] \\
&= \rho \Bigl[\bigl[\, \eta_1 e_1^{(m)} \ \eta_1 e_2^{(m)} \ \cdots \ \eta_1 e_m^{(m)} \,\bigr] \ \cdots \ \bigl[\, \eta_n e_1^{(m)} \ \eta_n e_2^{(m)} \ \cdots \ \eta_n e_m^{(m)} \,\bigr]\Bigr]
= \rho \bigl[\, \eta_1 I_m \ \cdots \ \eta_n I_m \,\bigr].
\end{aligned}
\]
Therefore,
\[
\Xi^\top \Phi \Phi^\top \Xi = \Xi^\top \bigl(\rho^2 \|\eta\|^2 I_m\bigr) \Xi = \rho^2 \sigma^2 \|\eta\|^2 I_t.
\]
Now we consider the eigenvalue equation $\det(P(\alpha,\beta) - \zeta I) = 0$. If $\zeta \ne \alpha$, then we have
\[
\begin{aligned}
\det(P(\alpha,\beta) - \zeta I)
&= \det \begin{bmatrix} (\alpha - \zeta) I_{mn} & -\tfrac{1}{2} (\Xi^\top \Phi)^\top \\ -\tfrac{1}{2} \Xi^\top \Phi & (\beta - \zeta) I_t \end{bmatrix} \\
&= \det\bigl[(\alpha - \zeta) I_{mn}\bigr] \cdot \det\Bigl[(\beta - \zeta) I_t - \tfrac{1}{4} (\alpha - \zeta)^{-1}\, \Xi^\top \Phi \Phi^\top \Xi\Bigr] \\
&= (\alpha - \zeta)^{mn - t}\, \det\Bigl[\Bigl((\alpha - \zeta)(\beta - \zeta) - \tfrac{1}{4} \rho^2 \sigma^2 \|\eta\|^2\Bigr) I_t\Bigr] \\
&= (\alpha - \zeta)^{mn - t} \Bigl((\alpha - \zeta)(\beta - \zeta) - \tfrac{1}{4} \rho^2 \sigma^2 \|\eta\|^2\Bigr)^{t},
\end{aligned} \tag{3.16}
\]
where the second equality follows from the Schur complement [24, Theorem 13.3.8]. Moreover, since $\det(P(\alpha,\beta) - \zeta I)$ is continuous in $(\alpha, \beta, \zeta)$, equality (3.16) is also valid at $\zeta = \alpha$.*3 Since we have $mn - t \ge 2$ from $t, n \ge 2$ and $t \le m$, (3.16) indicates that the multiplicity of every eigenvalue of $P(\alpha,\beta)$ is at least 2. Hence, even if $P(\alpha,\beta)$ has a zero eigenvalue, its multiplicity cannot be 1.
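Both the key identity $\Xi^\top\Phi\Phi^\top\Xi = \rho^2\sigma^2\|\eta\|^2 I_t$ and the resulting eigenvalue multiplicities of $P(\alpha,\beta)$ can be verified numerically under the structure of Proposition 3.3; the dimensions, $\eta$, and multipliers below are arbitrary:

```python
import numpy as np

m, n, t = 3, 2, 2
rho, sigma = 0.8, 1.3
s = m * n
eta = np.array([0.6, -1.1])

# Xi with orthogonal columns of length sigma, so Xi^T Xi = sigma^2 I_t
Q, _ = np.linalg.qr(np.random.default_rng(4).normal(size=(m, t)))
Xi = sigma * Q
# Phi = rho [eta_1 I_m  ...  eta_n I_m]
Phi = rho * np.hstack([e * np.eye(m) for e in eta])

alpha, beta = 0.4, -0.9                 # arbitrary multipliers
P = np.block([[alpha * np.eye(s),    -0.5 * (Xi.T @ Phi).T],
              [-0.5 * (Xi.T @ Phi),  beta * np.eye(t)]])

# Key identity used in the proof
ident_ok = np.allclose(Xi.T @ Phi @ Phi.T @ Xi,
                       rho**2 * sigma**2 * (eta @ eta) * np.eye(t))

# Group the sorted eigenvalues of P and record each multiplicity
w = np.sort(np.linalg.eigvalsh(P))
mults, i = [], 0
while i < len(w):
    j = i
    while j + 1 < len(w) and abs(w[j + 1] - w[i]) < 1e-8:
        j += 1
    mults.append(j - i + 1)
    i = j + 1
min_mult = min(mults)
```

Consistent with (3.16), the spectrum consists of $\alpha$ with multiplicity $mn - t$ and the two roots of $(\alpha-\zeta)(\beta-\zeta) = \tfrac{1}{4}\rho^2\sigma^2\|\eta\|^2$, each with multiplicity $t$.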

By the above proposition, we obtain the following theorem.

Theorem 3.4. Suppose that Assumption 3 holds. Then $x^*$ solves RC (3.2) if and only if there exists $(\alpha^*, \beta^*, \lambda_0^*)$ such that $(x^*, \alpha^*, \beta^*, \lambda_0^*)$ is an optimal solution of SDP (3.14).

*3 By continuity, we have $\lim_{\tilde\zeta \to \alpha,\ \tilde\zeta \ne \alpha} \det(P(\alpha,\beta) - \tilde\zeta I) = \det(P(\alpha,\beta) - \alpha I)$.


Proof. In a way similar to the proof of Theorem 3.2, we evaluate $\max\{ (\hat\gamma_i)^\top(\hat A_i x + \hat b_i) \mid (\hat A_i,\hat b_i) \in U_i, \ \hat\gamma_i \in V_i \}$ for each $i = 0,1,\dots,K$ in (3.9). In Proposition 3.3, let $\eta := \binom{x}{1}$, $M_j := (A_{ij}, b_{ij})$ and $\xi_j := \gamma_{ij}$. Then, for all $x \in \mathbb{R}^n$ and $\alpha, \beta \in \mathbb{R}$, we can see that
\[
\dim\bigl(\ker(P_{0i}(x) - \alpha_i P_{1i} - \beta_i P_{2i})\bigr) \ne 1 \quad (i=0,1,\dots,K).
\]
From Proposition 3.1, we have $f_i(x) = g_i(x)$ for all $x \in \mathbb{R}^n$. Hence, problem (3.9) is identical to (3.12). This completes the proof.

In Theorem 3.2, optimality in SDP (3.14) is nothing more than a sufficient condition for optimality in RC (3.2) under the appropriate assumptions. Theorem 3.4, however, shows not only the sufficiency but also the necessity. This is due to the fact that Assumption 3 guarantees $f_i(x) = g_i(x)$ for all $x \in \mathbb{R}^n$, whereas Assumption 2 guarantees it only in a neighborhood of the SDP solution.

4 Robust second-order cone programming problems with ellipsoidal uncertainty

The second-order cone programming problem (SOCP) is expressed as follows:
\[
\begin{array}{ll}
\displaystyle \mathop{\mathrm{minimize}}_{x} & f^\top x \\
\mathrm{subject\ to} & M_i x + q_i \in \mathcal{K}^{n_i} \quad (i=1,\dots,K), \\
& x \in \Omega,
\end{array} \tag{4.1}
\]
where $\mathcal{K}^{n_i}$ denotes the $n_i$-dimensional second-order cone defined by $\mathcal{K}^{n_i} := \{(x_0, \bar x) \in \mathbb{R} \times \mathbb{R}^{n_i - 1} \mid x_0 \ge \|\bar x\|\}$, and $\Omega$ is a given closed convex set. SOCP is applicable to many practical problems such as antenna array weight design problems and truss design problems [3, 31]. We note that the second-order cone constraints $M_i x + q_i \in \mathcal{K}^{n_i}$ $(i=1,\dots,K)$ in (4.1) can be rewritten as
\[
\|A_i x + b_i\| \le (c_i)^\top x + d_i
\]
with $M_i = \begin{bmatrix} (c_i)^\top \\ A_i \end{bmatrix}$ and $q_i = \begin{bmatrix} d_i \\ b_i \end{bmatrix}$. In this section, we consider the following uncertain SOCP:
\[
\begin{array}{ll}
\displaystyle \mathop{\mathrm{minimize}}_{x} & f^\top x \\
\mathrm{subject\ to} & \|\hat A_i x + \hat b_i\| \le (\hat c_i)^\top x + \hat d_i \quad (i=1,\dots,K), \\
& x \in \Omega,
\end{array} \tag{4.2}
\]
where $\hat A_i \in \mathbb{R}^{m_i \times n}$, $\hat b_i \in \mathbb{R}^{m_i}$, $\hat c_i \in \mathbb{R}^n$ and $\hat d_i \in \mathbb{R}$ are uncertain data with uncertainty set $U_i$. Then, the robust counterpart (RC) for (4.2) can be written as
\[
\begin{array}{ll}
\displaystyle \mathop{\mathrm{minimize}}_{x} & f^\top x \\
\mathrm{subject\ to} & \|\hat A_i x + \hat b_i\| \le (\hat c_i)^\top x + \hat d_i \quad \forall (\hat A_i, \hat b_i, \hat c_i, \hat d_i) \in U_i \quad (i=1,\dots,K), \\
& x \in \Omega.
\end{array} \tag{4.3}
\]


Throughout this section, we assume $m_i \ge 2$ for all $i = 1,\dots,K$.*4

Ben-Tal and Nemirovski [9] showed that RC (4.3) can be reformulated as an SDP in the case where the uncertainty sets for $(\hat A_i, \hat b_i)$ and $(\hat c_i, \hat d_i)$ are independent and are represented by the two ellipsoids
\[
\begin{aligned}
U_i^L &= \Bigl\{ (\hat A_i, \hat b_i) \Bigm| (\hat A_i, \hat b_i) = (A_{i0}, b_{i0}) + \sum_{j=1}^{l} u_{ij}\,(A_{ij}, b_{ij}), \ (u^i)^\top u^i \le 1 \Bigr\}, \\
U_i^R &= \Bigl\{ (\hat c_i, \hat d_i) \Bigm| (\hat c_i, \hat d_i) = (c_{i0}, d_{i0}) + \sum_{j=1}^{r} v_{ij}\,(c_{ij}, d_{ij}), \ (v^i)^\top v^i \le 1 \Bigr\},
\end{aligned}
\]
with given constants $A_{ij}$, $b_{ij}$, $c_{ij}$ and $d_{ij}$. However, according to Ben-Tal and Nemirovski [13], it had been an open problem until quite recently whether or not RC (4.3) can be reformulated as an SDP under the following assumption (the one-ellipsoid case).

Assumption 4. The uncertainty sets $U_i$ $(i=1,\dots,K)$ in RC (4.3) are given by
\[
U_i = \left\{
\begin{bmatrix} \hat A_i & \hat b_i \\ (\hat c_i)^\top & \hat d_i \end{bmatrix}
\,\middle|\,
\begin{bmatrix} \hat A_i & \hat b_i \\ (\hat c_i)^\top & \hat d_i \end{bmatrix}
= \begin{bmatrix} A_{i0} & b_{i0} \\ (c_{i0})^\top & d_{i0} \end{bmatrix}
+ \sum_{j=1}^{s_i} u_{ij} \begin{bmatrix} A_{ij} & b_{ij} \\ (c_{ij})^\top & d_{ij} \end{bmatrix}, \
(u^i)^\top u^i \le 1 \right\},
\]
where $A_{ij}$, $b_{ij}$, $c_{ij}$ and $d_{ij}$ $(i=1,\dots,K,\ j=0,1,\dots,s_i)$ are given constants.

In this section, we show that the robust counterpart can be reformulated as an explicit SDP under this assumption, using the results in the previous section *5 .

We first rewrite RC (4.2) in the form RC (3.1). To this end, we introduce the following result in semi-infinite programming [32, Section 4].

Proposition 4.1. Let $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^{m}$, $c \in \mathbb{R}^{n}$ and $d \in \mathbb{R}$ be given. Then $x \in \mathbb{R}^{n}$ satisfies the inequality $\|Ax + b\| \le c^\top x + d$ if and only if $x$ satisfies $\tilde{\gamma}^\top (Ax + b) \le c^\top x + d$ for all $\tilde{\gamma} \in \mathbb{R}^{m}$ such that $\|\tilde{\gamma}\| \le 1$.
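Proposition 4.1 is essentially the Cauchy–Schwarz inequality: $\sup_{\|\tilde{\gamma}\| \le 1} \tilde{\gamma}^\top (Ax+b) = \|Ax+b\|$, attained at $\tilde{\gamma} = (Ax+b)/\|Ax+b\|$. A quick numerical sanity check with hypothetical data:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 4, 3
A, b = rng.standard_normal((m, n)), rng.standard_normal(m)
x = rng.standard_normal(n)

r = A @ x + b
norm_r = np.linalg.norm(r)

# The maximizer of gamma^T r over the unit ball is r / ||r||.
gamma_star = r / norm_r
assert abs(gamma_star @ r - norm_r) < 1e-9

# Any other unit vector achieves no more than ||Ax + b|| (Cauchy-Schwarz).
for gamma in rng.standard_normal((200, m)):
    gamma /= np.linalg.norm(gamma)
    assert gamma @ r <= norm_r + 1e-12
print("Proposition 4.1 check passed")
```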

*4 If $m_i = 1$ for some $i$, then the constraint can be rewritten as the two linear inequalities $-\bigl((\hat{c}_i)^\top x + \hat{d}_i\bigr) \le \hat{A}_i x + \hat{b}_i \le (\hat{c}_i)^\top x + \hat{d}_i$, so existing frameworks can be applied (see Ben-Tal and Nemirovski [10]).

*5 Fairly recently, it has been shown that another SDP reformulation is possible by means of Hildebrand's Lorentz-positivity results [27, 28]. However, our approach has an advantage in terms of computational complexity. We state the details at the end of this section.

By this proposition, RC (4.3) can be rewritten as follows:
\[
\begin{array}{ll}
\displaystyle\mathop{\mathrm{minimize}}_{x} & f^\top x \\
\text{subject to} & (\hat{\gamma}_i)^\top \left( \begin{bmatrix} \hat{A}_i \\ -(\hat{c}_i)^\top \end{bmatrix} x + \begin{bmatrix} \hat{b}_i \\ -\hat{d}_i \end{bmatrix} \right) \le 0, \\[2mm]
& \quad \forall (\hat{A}_i, \hat{b}_i, \hat{c}_i, \hat{d}_i) \in U_i, \ \forall \hat{\gamma}_i \in V_i := \bigl\{ \bigl((\tilde{\gamma}_i)^\top, 1\bigr)^\top \ \big|\ \|\tilde{\gamma}_i\| \le 1 \bigr\} \ \ (i = 1, \dots, K).
\end{array}
\tag{4.4}
\]

Clearly, problem (4.4) belongs to the class of problems of the form RC (3.2). In addition, when Assumption 4 holds, Assumption 1 also holds by setting $V_i := \{ \hat{\gamma}_i \mid \hat{\gamma}_i = \gamma_{i0} + \sum_{j=1}^{m_i} v_{ij} \gamma_{ij},\ (v_i)^\top v_i \le 1 \}$ with $\gamma_{i0} = e^{(m_i+1)}_{m_i+1}$ and $\gamma_{ij} = e^{(m_i+1)}_{j}$ ($j = 1, \dots, m_i$), where $e^{(m)}_{j}$ denotes the $j$th unit vector of $\mathbb{R}^{m}$. Thus, we have the following theorem, whose proof is omitted since it readily follows from Theorem 3.2.
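The unit-vector choice above simply embeds the ball $\{\tilde{\gamma}_i : \|\tilde{\gamma}_i\| \le 1\}$ into $\mathbb{R}^{m_i+1}$ with the last component pinned to 1. The following sketch verifies this for a hypothetical $m_i = 5$ (the variable names are mine, not the thesis's):

```python
import numpy as np

m = 5  # hypothetical m_i
gamma0 = np.eye(m + 1)[m]  # gamma_{i0}: last unit vector of R^{m+1}
G = np.eye(m + 1)[:m]      # rows: gamma_{ij} = e_j, j = 1..m

rng = np.random.default_rng(2)
v = rng.standard_normal(m)
v /= max(1.0, np.linalg.norm(v))  # project into the unit ball if necessary

gamma_hat = gamma0 + G.T @ v      # gamma_{i0} + sum_j v_j gamma_{ij}

assert gamma_hat[-1] == 1.0                             # last component fixed to 1
assert np.linalg.norm(gamma_hat[:m]) <= 1.0 + 1e-12     # gamma-tilde stays in the unit ball
assert np.allclose(gamma_hat[:m], v)                    # and coincides with v
print("V_i representation check passed")
```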

Theorem 4.2. Suppose that Assumption 4 holds. Let $(x^*, \alpha^*, \beta^*)$ be an optimal solution of the following SDP:
\[
\begin{array}{ll}
\displaystyle\mathop{\mathrm{minimize}}_{x, \alpha, \beta} & f^\top x \\
\text{subject to} & \begin{bmatrix} P^i_0(x) & q^i(x) \\ q^i(x)^\top & r^i(x) \end{bmatrix} \succeq \alpha_i \begin{bmatrix} P^i_1 & 0 \\ 0 & 1 \end{bmatrix} + \beta_i \begin{bmatrix} P^i_2 & 0 \\ 0 & 1 \end{bmatrix} \quad (i = 1, \dots, K), \\[2mm]
& \alpha = (\alpha_1, \dots, \alpha_K)^\top \in \mathbb{R}^K_+, \quad \beta = (\beta_1, \dots, \beta_K)^\top \in \mathbb{R}^K_+,
\end{array}
\tag{4.5}
\]

where
\[
\begin{aligned}
P^i_0(x) &= -\frac{1}{2} \begin{bmatrix} 0 & \Psi^i(x)^\top \\ \Psi^i(x) & 0 \end{bmatrix}, \quad
q^i(x) = -\frac{1}{2} \begin{bmatrix} -\psi^i(x)^\top \\ A_{i0} x + b_{i0} \end{bmatrix}, \quad
r^i(x) = (c_{i0})^\top x + d_{i0}, \\[1mm]
P^i_1 &= \begin{bmatrix} -I_{s_i} & 0 \\ 0 & 0 \end{bmatrix}, \quad
P^i_2 = \begin{bmatrix} 0 & 0 \\ 0 & -I_{m_i} \end{bmatrix}, \\[1mm]
\psi^i(x) &= \begin{bmatrix} (c_{i1})^\top x + d_{i1} & \cdots & (c_{i s_i})^\top x + d_{i s_i} \end{bmatrix}, \quad
\Psi^i(x) = \begin{bmatrix} A_{i1} x + b_{i1} & \cdots & A_{i s_i} x + b_{i s_i} \end{bmatrix}.
\end{aligned}
\tag{4.6}
\]

Then, $x^*$ solves RC (4.3) if
\[
\dim\bigl(\ker(P^i_0(x) - \alpha_i P^i_1 - \beta_i P^i_2)\bigr) \ne 1 \quad (i = 1, \dots, K) \tag{4.7}
\]
for all $(x, \alpha, \beta)$ in a neighborhood of $(x^*, \alpha^*, \beta^*)$.

We can easily see that condition (4.7) is guaranteed to hold if
\[
P^i_0(x) - \alpha_i P^i_1 - \beta_i P^i_2 \succ 0, \tag{4.8}
\]
by arguments similar to those given just after Theorem 3.2. Also, when the uncertainty sets are spherical, condition (4.7) is satisfied and hence the following theorem holds.
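In practice, the sufficient condition $P^i_0(x) - \alpha_i P^i_1 - \beta_i P^i_2 \succ 0$ can be checked numerically at a computed solution, e.g. by a Cholesky attempt. The sketch below is illustrative only; the matrices are hypothetical stand-ins, not data produced by the reformulation.

```python
import numpy as np

def is_positive_definite(M, tol=1e-10):
    """Check M - tol*I ≻ 0 via a Cholesky attempt (raises iff not positive definite)."""
    M = 0.5 * (M + M.T)  # symmetrize against round-off
    try:
        np.linalg.cholesky(M - tol * np.eye(M.shape[0]))
        return True
    except np.linalg.LinAlgError:
        return False

# Hypothetical stand-ins for P_0^i(x*) - alpha_i^* P_1^i - beta_i^* P_2^i.
rng = np.random.default_rng(3)
B = rng.standard_normal((4, 4))
M_pd = B @ B.T + np.eye(4)   # positive definite: kernel is {0}, so dim(ker) = 0 != 1
M_zero = np.zeros((4, 4))    # not positive definite: kernel is all of R^4

assert is_positive_definite(M_pd)
assert not is_positive_definite(M_zero)
print("positive-definiteness check demo passed")
```

The tolerance `tol` guards against declaring a numerically singular matrix positive definite; its value should be matched to the accuracy of the SDP solver used.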

[Preview snippets of later results: Table 1 shows that, in the spherical case, the proposed SDP reformulation approach finds the original RC solution for all instances. Table 4 shows that the proposed SDP reformulation approach solves all test problems within a reasonable time, whereas the Hildebrand-based approach is much more expensive. Fig. 1: trajectory of player 1's strategy at the robust Nash equilibria with respect to σ. Fig. 2: trajectory of player 2's strategy at the robust Nash equilibria with respect to σ.]
