(1)

RIMS Summer School (COSS 2018), Kyoto, July 2018

Discrete Convex Analysis III:

Algorithms for Discrete Convex Functions

Kazuo Murota

(Tokyo Metropolitan University)

180726rimsCOSS3

(2)

Contents of Part III

Algorithms for Discrete Convex Functions

A1. Minimization (General)

A2. M-convex Minimization

A3. L-convex Minimization

A4. M-convex Intersection

(3)

A1.

Minimization (General)

(4)

Descent Method

[Figure: graph of f(x) with a neighborhood W of the current point]

S0: Initial solution x
S1: Minimize f in a neighborhood of x to obtain x′
S2: If f(x) ≤ f(x′), return x (local opt)
S3: Update x := x′; go to S1

What is the neighborhood?
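A minimal Python sketch of this generic descent loop, assuming f is defined on integer tuples and the neighborhood is supplied explicitly as a finite list of displacement vectors (the names descent and nbhd are illustrative, not from the slides):

```python
def descent(f, x, neighborhood):
    """Generic descent: move to a best neighbor until none improves f."""
    while True:
        # S1: minimize f over the neighborhood of x
        best = min((tuple(xi + di for xi, di in zip(x, d)) for d in neighborhood),
                   key=f)
        # S2: stop if no neighbor is strictly better
        if f(x) <= f(best):
            return x  # local optimum w.r.t. this neighborhood
        x = best      # S3

# Example: separable convex f with neighborhood {±e_1, ..., ±e_n}
n = 3
f = lambda x: sum((xi - 2) ** 2 for xi in x)
nbhd = [tuple(s if j == i else 0 for j in range(n)) for i in range(n) for s in (1, -1)]
print(descent(f, (0, 0, 0), nbhd))  # -> (2, 2, 2)
```

Which neighborhood makes the returned local optimum a global one is exactly the question the next slide answers.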

(5)

Neighborhood for Local Optimality

Function class       #neighbors      neighborhood directions
separable convex     2n + 1          {0, ±e_1, . . . , ±e_n}
integrally convex    3^n             {χ_X − χ_Y}
L♮-convex            2^{n+1} − 1     {±χ_X}
M♮-convex            n(n+1) + 1      {e_i − e_j}

(6)

Local Optimality

Function class        #neighbors    local opt in poly time?
submodular (set fn)   2^n           Y
separable-conv        2n            Y
integrally-conv       3^n           N
L-conv (Z^n)          2^n           Y
M-conv (Z^n)          n^2           Y

(7)

Scaling and Proximity

[Figure: a function f1(x) and a scaled function f2(x/2), plotted for x = −2, . . . , 4]

Proximity theorem:
a true minimum exists in a neighborhood of a scaled local minimum
⇒ efficient algorithm

Facts in DCA:
Scaling preserves L-convexity
Scaling does NOT preserve M-convexity
Proximity thms known for L-conv and M-conv
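An illustrative one-dimensional sketch of the scaling idea, assuming a univariate discrete convex f; the halving step relies on proximity only in the trivial 1-D sense and is a simplification, not the general proximity theorems cited above:

```python
def scaled_descent_1d(f, x0, s0):
    """Scaling algorithm for a 1-D discrete convex f: descend with large
    steps first, halving the step until it reaches 1."""
    x, s = x0, s0
    while s >= 1:
        # descent at scale s: move while a step of +-s improves f
        while f(x + s) < f(x):
            x += s
        while f(x - s) < f(x):
            x -= s
        s //= 2  # refine; proximity keeps the true minimum nearby
    return x

print(scaled_descent_1d(lambda x: (x - 37) ** 2, 0, 32))  # -> 37
```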

(8)

Minimization

Function class        #neighbors    local opt in poly time?   global opt in poly time?
submodular (set fn)   2^n           Y                         Y
separable-conv        2n            Y                         Y
integrally-conv       3^n           N                         N
L-conv (Z^n)          2^n           Y                         Y
M-conv (Z^n)          n^2           Y                         Y

(9)

A2.

M-convex Minimization

(10)

Local vs Global Opt (M-conv)

Thm (Murota 96): f : Z^n → R M-convex,
x* : global opt ⇐⇒ local opt: f(x*) ≤ f(x* − e_i + e_j) (∀ i, j)

Ex: compare x* with x* + (0, −1, 0, 0, +1, 0, 0, 0)

Can check with n^2 fn evals.

For M♮-convex fn: also f(x*) ≤ f(x* ± e_i).
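A small checker for this optimality criterion, assuming f is given on integer tuples and returns float('inf') outside dom f (the function name is illustrative):

```python
def is_m_local_opt(f, x):
    """Check f(x) <= f(x - e_i + e_j) for all i != j -- n^2 evaluations.
    By the theorem above, for M-convex f this certifies global optimality."""
    n, fx = len(x), f(x)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            y = list(x)
            y[i] -= 1
            y[j] += 1
            if f(tuple(y)) < fx:
                return False
    return True
```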

(11)

Steepest Descent for M-convex Fn

S0: Find a vector x ∈ dom f
S1: Find (i, j) that minimizes f(x − e_i + e_j)
S2: If f(x) ≤ f(x − e_i + e_j), stop (x : minimizer)
S3: Set x := x − e_i + e_j and go to S1

[Figure: exchange step x → x − e_i + e_j]

Minimizer Cut Thm (Shioura 98):
∃ minimizer x* with x*(i) ≤ x(i) − 1, x*(j) ≥ x(j) + 1

Murota 03, Shioura 98, 03, Tamura 05
Dress–Wenzel's alg for valuated matroids
Kalaba's alg for min spanning tree
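A direct sketch of this loop, reusing the convention above that f is defined on integer tuples and returns float('inf') outside dom f:

```python
def m_steepest_descent(f, x):
    """Steepest descent for an M-convex f: apply the best exchange step
    x -> x - e_i + e_j until no exchange improves f."""
    n = len(x)
    while True:
        best_val, best_x = f(x), None
        for i in range(n):            # S1: search all exchange pairs (i, j)
            for j in range(n):
                if i == j:
                    continue
                y = list(x)
                y[i] -= 1
                y[j] += 1
                y = tuple(y)
                if f(y) < best_val:
                    best_val, best_x = f(y), y
        if best_x is None:            # S2: locally, hence globally, optimal
            return x
        x = best_x                    # S3
```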

(12)

Min Spanning Tree Problem

[Figure: spanning tree T, tree edge e, and non-tree edge e′]

edge length d : E → R; total length of T: d̃(T) = Σ_{e ∈ T} d(e)

Thm: T : MST ⇐⇒ d̃(T) ≤ d̃(T − e + e′)
⇐⇒ d(e) ≤ d(e′) whenever T − e + e′ is a tree

Algorithm: Kruskal's, Kalaba's

DCA view:
linear optimization on an M-convex set
M-optimality: f(x) ≤ f(x − e_i + e_j)

(13)

Tree: Exchange Property

[Figure: a given pair of trees T, T′; exchanging e ∈ T \ T′ with e′ ∈ T′ \ T yields the new pair T − e + e′, T′ + e − e′]

Exchange property: For any T, T′ ∈ 𝒯 and e ∈ T \ T′,
there exists e′ ∈ T′ \ T s.t.
T − e + e′ ∈ 𝒯 and T′ + e − e′ ∈ 𝒯
(𝒯 : family of spanning trees)

(14)

Kruskal’s Greedy Algorithm for MST

Kruskal (1956)

S0: Order edges by length: d(e_1) ≤ d(e_2) ≤ · · ·
S1: T = ∅; i = 1
S2: Pick edge e_i
S3: If T + e_i contains a cycle, discard e_i
S4: Otherwise update T = T + e_i; i = i + 1; go to S2
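A compact runnable sketch of Kruskal's algorithm with a union-find structure; the input encoding (edges as (length, u, v) triples over vertices 0..n−1) is an assumption for illustration:

```python
def kruskal(n, edges):
    """Return the edge set of a minimum spanning tree."""
    parent = list(range(n))

    def find(v):  # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    tree = []
    for d, u, v in sorted(edges):     # S0: order edges by length
        ru, rv = find(u), find(v)
        if ru != rv:                  # S3: skip edges closing a cycle
            parent[ru] = rv           # S4: add e_i to T
            tree.append((u, v))
    return tree

print(kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)]))
# -> [(0, 1), (1, 2), (2, 3)]
```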

(15)

Kalaba’s Algorithm for MST

Kalaba (1960), Dijkstra (1960)

S0: T = any spanning tree
S1: Order the edges e ∉ T by length: d(e_1) ≤ d(e_2) ≤ · · · ; k = 1
S2: e_k′ = longest edge s.t. T − e_k′ + e_k is a tree
S3: T = T − e_k′ + e_k; k = k + 1; go to S2

[Figure: exchanging tree edge e_k′ for non-tree edge e_k]
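A sketch of this exchange algorithm, under illustrative assumptions: the tree is a set of frozenset edges, cycles are found by brute-force DFS, and an exchange is performed only when it strictly shortens the tree (a safe variant of S2–S3, not the efficient original):

```python
def kalaba(n, edges, tree):
    """Scan non-tree edges in increasing length; swap each with the
    longest edge on the tree cycle it closes, if that helps."""
    length = {frozenset((u, v)): d for d, u, v in edges}

    def tree_path(a, b):
        """Vertex sequence of the unique tree path from a to b (DFS)."""
        adj = {i: [] for i in range(n)}
        for t in tree:
            u, v = tuple(t)
            adj[u].append(v)
            adj[v].append(u)
        stack = [(a, [a])]
        while stack:
            v, p = stack.pop()
            if v == b:
                return p
            stack.extend((w, p + [w]) for w in adj[v] if w not in p)

    for d, u, v in sorted(edges):
        e = frozenset((u, v))
        if e in tree:
            continue
        p = tree_path(u, v)  # the cycle is this path plus the edge e_k
        cycle = [frozenset((p[i], p[i + 1])) for i in range(len(p) - 1)]
        worst = max(cycle, key=lambda t: length[t])  # candidate e_k'
        if length[worst] > d:   # exchange strictly improves the tree
            tree.remove(worst)
            tree.add(e)
    return tree

edges = [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)]
print(kalaba(4, edges, {frozenset((0, 1)), frozenset((0, 2)), frozenset((2, 3))}))
```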

(16)

A3.

L-convex Minimization

(17)

Local vs Global Opt (L♮-conv)

Thm (Murota 98, 03): g : Z^n → R L♮-convex,
p* : global opt ⇐⇒ local opt: g(p*) ≤ g(p* ± q) (∀ q ∈ {0, 1}^n)

Ex: compare p* with p* + (0, 1, 0, 1, 1, 1, 0, 0)

⇐⇒ ρ^±(X) = g(p* ± χ_X) − g(p*) takes its min at X = ∅

Can check with n^5 (or fewer) fn evals using submodular fn minimization algorithms
(Iwata–Fleischer–Fujishige, Schrijver, Orlin, Lee–Sidford–Wong)
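For intuition only, a brute-force checker of this local optimality condition; it makes about 2^{n+1} evaluations, whereas the polynomial approach on the slide instead minimizes the submodular set functions ρ^± (names are illustrative):

```python
from itertools import product

def is_l_natural_local_opt(g, p):
    """Brute-force check of g(p) <= g(p +- chi_X) over all X subsets."""
    gp = g(p)
    for q in product((0, 1), repeat=len(p)):
        if g(tuple(a + b for a, b in zip(p, q))) < gp:
            return False
        if g(tuple(a - b for a, b in zip(p, q))) < gp:
            return False
    return True
```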

(18)

Steepest Descent for L♮-convex Fn

(Murota 00, 03, Kolmogorov-Shioura 09, Murota-Shioura 14)

S0: Find a vector p° ∈ dom g and set p := p°
S1: Find ε = ±1 and X that minimize g(p + ε χ_X)
S2: If g(p) ≤ g(p + ε χ_X), stop (p : minimizer)
S3: Set p := p + ε χ_X and go to S1

Thm (Murota-Shioura 14):
Termination in exactly μ(p°) + 1 iterations, where
μ(p°) = min { ‖p° − p*‖_∞^+ + ‖p° − p*‖_∞^− : p* ∈ arg min g },
‖q‖_∞^+ = max_i max(0, q(i)),  ‖q‖_∞^− = max_i max(0, −q(i))
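A brute-force sketch of this loop under the same tuple-valued-g assumption as above; the step search enumerates all 2^{n+1} candidates, where a real implementation would find X by submodular minimization:

```python
from itertools import product

def l_natural_steepest_descent(g, p):
    """Steepest descent for an L-natural-convex g: take the best step
    +-chi_X until no such step improves g."""
    n = len(p)
    while True:
        best_val, best_p = g(p), None
        for eps in (1, -1):
            for q in product((0, 1), repeat=n):      # S1: all (eps, X)
                cand = tuple(a + eps * b for a, b in zip(p, q))
                if g(cand) < best_val:
                    best_val, best_p = g(cand), cand
        if best_p is None:
            return p   # S2: local, hence global, minimizer
        p = best_p     # S3
```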

(19)

Monotone Steepest Descent for L♮-convex Fn

S0: Find a vector p° ∈ dom g s.t. { q | q ≥ p° } ∩ arg min g ≠ ∅, and set p := p°
S1: Find X that minimizes g(p + χ_X)
S2: If g(p) ≤ g(p + χ_X), stop (p : minimizer)
S3: Set p := p + χ_X and go to S1

Thm (Murota-Shioura 14):
Termination in exactly μ̂(p°) + 1 iterations, where
μ̂(p°) = min { ‖p° − p*‖_∞ : p* ∈ arg min g, p* ≥ p° }

Application to ascending auctions
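In the brute-force sketch after slide (18), this monotone variant amounts to fixing eps = 1 in the step search: when the start p° lies below some minimizer, upward steps p := p + χ_X alone reach arg min g.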

(20)

Steepest Descent Path for L♮-convex Fn

[Figure: two grids of g-values with steepest descent paths from p°; in both examples
μ(p°) = μ̂(p°) = 2, with |arg min g| = 1 and |arg min g| = 2 respectively]

(21)

Shortest Path Problem

(one-to-all)

from one vertex (s) to all vertices; lengths ℓ ≥ 0, integer

Dual LP:
Maximize Σ_v p(v)
subject to p(v) − p(u) ≤ ℓ(u, v) (∀ (u, v)), p(s) = 0

Algorithm: Dijkstra's

DCA view:
linear optimization on an L-convex set (in polyhedral description)
Dijkstra's algorithm (Murota-Shioura 12)
= steepest ascent for L-concave maximization with uniform linear objective (1, 1, . . . , 1)
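A standard Dijkstra sketch for reference (adjacency-dict input is an assumed encoding); the Murota-Shioura observation is that the resulting labels p(v) form an optimal solution of the dual LP above:

```python
import heapq

def dijkstra(adj, s):
    """adj: {u: [(v, length), ...]} with nonnegative integer lengths.
    Returns shortest-path distances p(v) from s."""
    p = {s: 0}
    heap = [(0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > p.get(u, float("inf")):
            continue  # stale queue entry
        for v, l in adj[u]:
            if d + l < p.get(v, float("inf")):
                p[v] = d + l
                heapq.heappush(heap, (d + l, v))
    return p

print(dijkstra({0: [(1, 2), (2, 5)], 1: [(2, 1)], 2: []}, 0))  # {0: 0, 1: 2, 2: 3}
```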

(22)

Optimality & Proximity Theorems

Func class: Optimality; Proximity

L-convex:
  f(x) ≤ f(x + χ_S) (∀ S), f(x + 1) = f(x) (M. 01)
  ‖x − x^α‖ ≤ (n − 1)(α − 1) (Iwata-Shigeno 03)

M-convex:
  f(x) ≤ f(x − χ_u + χ_v) (∀ u, v ∈ V) (M. 96)
  ‖x − x^α‖ ≤ (n − 1)(α − 1) (Moriguchi-M.-Shioura 02)

L2-convex (L □ L convol.):
  f(x) ≤ f(x + χ_S) (∀ S), f(x + 1) = f(x)
  ‖x − x^α‖ ≤ 2(n − 1)(α − 1) (M.-Tamura 04)

M2-convex (M + M):
  f(x) ≤ f(x − χ_U + χ_W) (∀ U, W; |U| = |W|) (M. 01)
  ‖x − x^α‖ ≤ n^2 (α − 1) (M.-Tamura 04)

integrally convex:
  f(x) ≤ f(x − χ_U + χ_W) (∀ U, W) (Favati-Tardella 90)
  ‖x − x^α‖ ≤ (n + 1)!/2^{n−1} · (α − 1) (Moriguchi-M.-Tamura-Tardella 16)

(∥·∥ = ∥·∥_∞)

(23)

A4.

M-convex Intersection

(Fenchel Duality)

(24)

Intersection Problem (f1 + f2)

Recall: L + L ⇒ L, but M + M ̸⇒ M

M-convex Intersection Algorithm:
minimizes f1 + f2 for M-convex f1, f2
maximizes f1 + f2 for M-concave f1, f2 (submodular function maximization)

Fenchel duality (min = max)
Valuated matroid intersection (Murota 96)
Weighted matroid intersection
(Edmonds, Lawler, Iri-Tomizawa 76, Frank 81)

(25)

M-convex Intersection: Min [M + M]

M + M is NOT M

f1, f2 : M-convex (Z^n → R), x* ∈ dom f1 ∩ dom f2

(1) x* minimizes f1 + f2 (Murota 96)
⇐⇒ ∃ p* (certificate of optimality) s.t.
  x* minimizes f1(x) − ⟨p*, x⟩ (M-opt thm)
  x* minimizes f2(x) + ⟨p*, x⟩ (M-opt thm)

(2) arg min (f1 + f2) = arg min (f1 − p*) ∩ arg min (f2 + p*)

(3) f1, f2 integer-valued ⇒ integral p*
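A small self-contained checker for such a certificate: by the M-optimality theorem, exchange-local optimality of f1 − ⟨p, ·⟩ and f2 + ⟨p, ·⟩ at x certifies that x minimizes f1 + f2 (the function names are illustrative):

```python
def dot(p, x):
    return sum(a * b for a, b in zip(p, x))

def exchange_local_opt(f, x):
    """f(x) <= f(x - e_i + e_j) for all i != j (M-optimality criterion)."""
    n, fx = len(x), f(x)
    return all(f(tuple(xk - (k == i) + (k == j) for k, xk in enumerate(x))) >= fx
               for i in range(n) for j in range(n) if i != j)

def certifies(f1, f2, x, p):
    """True iff p is a valid optimality certificate for min (f1 + f2) at x."""
    return (exchange_local_opt(lambda y: f1(y) - dot(p, y), x) and
            exchange_local_opt(lambda y: f2(y) + dot(p, y), x))
```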

(26)

M-convex Intersection Algorithms

Natural extensions of weighted (poly)matroid intersection algorithms

Exchange arcs are weighted:
[Figure: auxiliary exchange graph with arcs (i, j), (i, k) weighted by
f1(x − e_i + e_j) − f1(x) and f2(x − e_i + e_k) − f2(x)]

"upper-bound lemma", "unique-max lemma"

cycle-canceling (Murota 96, 99)
successive shortest path (Murota-Tamura 03)
scaling (Iwata-Shigeno 03, Iwata-Moriguchi-Murota 05)

(27)

Convolution

Convolutions of M-convex functions:

(f1 □ f2)(x) = min_y ( f1(y) + f2(x − y) )

(f1 □ f2 □ f3)(x), (f1 □ f2 □ · · · □ fk)(x)

can be computed by M-convex intersection algorithms
cf. aggregated utility function

[Figure: f1, f2, and their convolution f1 □ f2]
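For intuition, a brute-force sketch of infimal convolution, searching y over a hypothetical bounded box; the slide's point is that M-convex intersection algorithms avoid this exhaustive search:

```python
from itertools import product

def convolve(f1, f2, x, window):
    """(f1 [] f2)(x) = min_y f1(y) + f2(x - y), y over a finite box."""
    return min(f1(y) + f2(tuple(a - b for a, b in zip(x, y)))
               for y in product(range(-window, window + 1), repeat=len(x)))
```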

(28)

E N D
