
The simplex algorithm

Lars Hesselholt

1. The optimization problem

We consider the following optimization problem. We wish to find the maximum value of the linear function in n variables

(1.1) f(x_1, . . . , x_n) = c_1 x_1 + · · · + c_n x_n

under the assumption that the variables x_1, . . . , x_n be non-negative and satisfy the m linear inequalities

(1.2)
a_{1,1} x_1 + a_{1,2} x_2 + · · · + a_{1,n} x_n ≤ b_1
a_{2,1} x_1 + a_{2,2} x_2 + · · · + a_{2,n} x_n ≤ b_2
⋮
a_{m,1} x_1 + a_{m,2} x_2 + · · · + a_{m,n} x_n ≤ b_m

Let H_i ⊂ R^n be the subset of solutions to the ith linear inequality in (1.2), and let H_{j0} ⊂ R^n be the set of solutions to the linear inequality x_j ≥ 0. Then H_i and H_{j0} are closed halfspaces of R^n. We wish to find the maximum value of the function (1.1) on the feasible region P ⊂ R^n defined by the intersection

(1.3) P = ⋂_{1≤j≤n} H_{j0} ∩ ⋂_{1≤i≤m} H_i.

The structure of the feasible region P is given by the main theorem of polytopes which we now recall. We refer to [1, Thm. 1.1] for the proof.

Theorem 1.4. Let P ⊂ R^n be the intersection of a finite number of closed halfspaces, and suppose that P ⊂ R^n is bounded. Then P is equal to the convex hull of a finite subset V ⊂ R^n.

We recall that the convex hull of the finite subset V = {v_1, . . . , v_N} ⊂ R^n is defined to be the subset P ⊂ R^n of all linear combinations a_1 v_1 + · · · + a_N v_N such that all a_i ≥ 0 and a_1 + · · · + a_N = 1. The convex hull P ⊂ R^n of a finite subset V ⊂ R^n is called a convex polytope, and if V is minimal with this property, then V is called the set of vertices of the convex polytope P.

Corollary 1.5. Let P ⊂ R^n be the convex hull of the finite set V ⊂ P, and let f : R^n → R be a linear function. Then

max{f(x) | x ∈ P} = max{f(v) | v ∈ V}.

Proof. Since V ⊂ P, we clearly have

max{f(x) | x ∈ P} ≥ max{f(v) | v ∈ V}.

Conversely, every element x ∈ P can be written x = Σ_{v∈V} a_v v with a_v ≥ 0 and Σ_{v∈V} a_v = 1. Since f is linear, we find

f(x) = Σ_{v∈V} a_v f(v) ≤ Σ_{v∈V} a_v max{f(v) | v ∈ V} = max{f(v) | v ∈ V}.

This completes the proof.
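The inequality in the proof is easy to illustrate numerically: by linearity, f(x) is the same convex combination of the values f(v), and hence at most their maximum. A minimal sketch, with arbitrary illustrative points and weights:

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.normal(size=(5, 3))        # five points v_1, ..., v_5 in R^3
c = np.array([1.0, 2.0, 3.0])      # a linear function f(x) = c . x
a = rng.random(5)
a /= a.sum()                       # convex weights: a_v >= 0, sum = 1
x = a @ V                          # a point of the convex hull of V

# f(x) equals the convex combination of the values f(v) ...
assert np.isclose(c @ x, a @ (V @ c))
# ... and is therefore bounded by the largest value at a point of V.
assert c @ x <= max(V @ c) + 1e-12
print("f(x) =", c @ x, "max over V =", max(V @ c))
```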

We return to the optimization problem at hand. Suppose that the feasible region P is bounded. Then Cor. 1.5 shows that to find the maximum of the function f on P, we need only compare the values of f at the finitely many vertices of P. However, in real-world examples, the number of vertices will be too large for this to be practical.
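For small problems, the vertex comparison of Cor. 1.5 can nevertheless be carried out by brute force: each vertex of P lies on n of the m + n bounding hyperplanes, so one can try all combinations. A sketch (exponential in n, which is exactly the impracticality noted above; the constraint data is that of the example worked out in Section 4, and the helper name is ours):

```python
import itertools
import numpy as np

def enumerate_vertices(A, b, tol=1e-9):
    """Enumerate vertices of P = {x >= 0 : A x <= b} by intersecting
    n of the m + n bounding hyperplanes (brute force)."""
    m, n = A.shape
    # All constraints as G x <= h: the rows of A, then -x_j <= 0.
    G = np.vstack([A, -np.eye(n)])
    h = np.concatenate([b, np.zeros(n)])
    vertices = []
    for rows in itertools.combinations(range(m + n), n):
        Gs, hs = G[list(rows)], h[list(rows)]
        if abs(np.linalg.det(Gs)) < tol:
            continue  # the chosen hyperplanes do not meet in one point
        x = np.linalg.solve(Gs, hs)
        if np.all(G @ x <= h + tol):  # keep only feasible intersections
            vertices.append(x)
    return vertices

# Constraints of the example in Section 4.
A = np.array([[7.0, 0, 1], [1, 2, 0], [0, 3, 4]])
b = np.array([6.0, 20, 30])
c = np.array([1.0, 2, 3])
best = max(enumerate_vertices(A, b), key=lambda x: c @ x)
print(best, c @ best)  # the maximum 22 is attained at the vertex (0, 2, 6)
```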

2. A reformulation

Two vertices in the feasible region P are called neighbors if the line segment between them lies in the boundary of P. The simplex algorithm produces a sequence

(2.1) v_0, v_1, . . . , v_s, . . .

of vertices of the convex polytope P such that v_s and v_{s−1} are neighbors and f(v_{s−1}) ≤ f(v_s). The increasing sequence of values

(2.2) f(v_0), f(v_1), . . . , f(v_s), . . .

becomes constant for s large, and this constant value is the desired maximum value of the optimization problem. In short, the algorithm is as follows. We first choose some vertex v_0, say, the origin. Then, given v_{s−1}, we choose v_s to be a neighboring vertex of v_{s−1} for which the increase in the value of f is maximal.

It is possible to construct bad examples where the simplex algorithm visits all vertices of P before arriving at the optimal vertex. In practice, however, the algorithm is very effective. Moreover, it is easy to implement, as we will now see.

We first rewrite the linear inequalities (1.2) and the requirement that the variables x_1, . . . , x_n be non-negative in the following equivalent form. We introduce m additional variables x_{n+1}, . . . , x_{m+n} and ask that the linear equations

(2.3)
a_{1,1} x_1 + a_{1,2} x_2 + · · · + a_{1,n} x_n + x_{n+1} = b_1
a_{2,1} x_1 + a_{2,2} x_2 + · · · + a_{2,n} x_n + x_{n+2} = b_2
⋮
a_{m,1} x_1 + a_{m,2} x_2 + · · · + a_{m,n} x_n + x_{m+n} = b_m

be satisfied and that all variables x_1, . . . , x_n, x_{n+1}, . . . , x_{m+n} be non-negative. The new variables x_{n+1}, . . . , x_{m+n} are called the slack variables and measure how close the original inequalities are to being equalities. Written in matrix form, this system of linear equations becomes

(2.4)
\begin{pmatrix}
a_{1,1} & a_{1,2} & \cdots & a_{1,n} & 1 & 0 & \cdots & 0 \\
a_{2,1} & a_{2,2} & \cdots & a_{2,n} & 0 & 1 & \cdots & 0 \\
\vdots  & \vdots  &        & \vdots  & \vdots & \vdots & \ddots & \vdots \\
a_{m,1} & a_{m,2} & \cdots & a_{m,n} & 0 & 0 & \cdots & 1
\end{pmatrix}
\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \\ x_{n+1} \\ \vdots \\ x_{m+n} \end{pmatrix}
=
\begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_m \end{pmatrix}

We say that a solution (x_1, . . . , x_{m+n}) to the system of linear equations (2.4) is a feasible solution if all x_1, . . . , x_{m+n} are non-negative, and we say that a feasible solution (x_1, . . . , x_{m+n}) is a basic feasible solution if exactly n of the variables x_1, . . . , x_{m+n} are equal to zero. The map

(x_1, . . . , x_n, x_{n+1}, . . . , x_{m+n}) ↦ (x_1, . . . , x_n)

that forgets the slack variables gives a one-to-one correspondence between the set of feasible solutions (resp. basic feasible solutions) of (2.4) and the set of points (resp. vertices) of the feasible region (1.3). Moreover, two vertices in the feasible region (1.3) are neighbors if and only if the corresponding basic feasible solutions of (2.4) share exactly n − 1 zeros.
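The passage to the standard form (2.3)/(2.4) amounts to appending an identity block to the constraint matrix. A minimal sketch (the function name is ours; the data is that of the example in Section 4):

```python
import numpy as np

def to_standard_form(A, b):
    """Append m slack variables: A x <= b becomes [A | I] (x, s) = b
    with all m + n variables non-negative, as in (2.3)/(2.4)."""
    m, n = A.shape
    return np.hstack([A, np.eye(m)]), b

A = np.array([[7.0, 0, 1], [1, 2, 0], [0, 3, 4]])
b = np.array([6.0, 20, 30])
A_std, b_std = to_standard_form(A, b)

# At the feasible point x = (0, 2, 6), the slacks s = b - A x measure
# how far each inequality is from being an equality:
x = np.array([0.0, 2, 6])
s = b - A @ x
print(s)  # -> [0, 16, 0]: the first and third inequalities are tight
```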

3. The algorithm

We now explain Dantzig's algorithm for solving the optimization problem. It is customary to write the system of linear equations (2.4) in the following form, which is called a tableau. We also include, as the bottom row, the negatives of the coefficients of the function f(x_1, . . . , x_n) we wish to maximize.

x_1      x_2      · · ·  x_n      x_{n+1}  x_{n+2}  · · ·  x_{m+n}
a_{1,1}  a_{1,2}  · · ·  a_{1,n}  1        0        · · ·  0        | b_1
a_{2,1}  a_{2,2}  · · ·  a_{2,n}  0        1        · · ·  0        | b_2
⋮        ⋮               ⋮        ⋮        ⋮               ⋮        | ⋮
a_{m,1}  a_{m,2}  · · ·  a_{m,n}  0        0        · · ·  1        | b_m
−c_1     −c_2     · · ·  −c_n     0        0        · · ·  0        |

The tableau above is called the initial tableau. The algorithm also uses a partition of the variables x_1, . . . , x_n, x_{n+1}, . . . , x_{m+n} into a group of m variables called the basic variables and a group of n variables called the non-basic variables. In the initial partition, the m slack variables x_{n+1}, . . . , x_{m+n} are the basic variables and the original variables x_1, . . . , x_n are the non-basic variables. Each iteration of the algorithm produces a new tableau and a new partition of the variables.

Step 1: Are all the entries in the bottom row of the tableau non-negative? If yes, stop; the current tableau is the final tableau. If no, go to Step 2.

Step 2: Choose a variable x_s such that the corresponding entry in the bottom row has as large a negative value as possible. We call x_s the entering basic variable and we call the sth column the pivot column.


Step 3: Does there exist a positive entry in the pivot column between the two horizontal lines? If yes, go to Step 4. If no, stop; the function f is unbounded on the feasible region P (which is unbounded).

Step 4: Let α_{i,s} be the ith entry in the pivot column, and let β_i be the ith entry in the column on the right-hand side of the vertical line. Then, among the positive entries in the pivot column, we choose an entry α_{r,s} such that β_r/α_{r,s} is as small as possible. The entry α_{r,s} is called the pivot. There is a unique basic variable x_t such that, in the tth column, the rth entry α_{r,t} is non-zero. We call x_t the leaving basic variable.

Step 5: Change the partition of the variables such that the entering basic variable x_s becomes a basic variable and the leaving basic variable x_t becomes a non-basic variable. Change the tableau by applying elementary row operations as follows. Let R_i be the ith row. (We include the bottom row.) Then, for i ≠ r, we replace the ith row R_i by α_{r,s} R_i − α_{i,s} R_r.

Given the final tableau, we find the final basic feasible solution as follows. We set all non-basic variables in the final partition equal to zero and solve for the basic variables (using the final tableau). The final basic feasible solution is the solution to the optimization problem.
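Steps 1–5 fit in a few lines of code. The sketch below (the function name is ours) follows the tableau scheme above, except that it normalizes the pivot row, dividing by α_{r,s} instead of cross-multiplying with it; the two variants produce proportional rows and the same sequence of basic feasible solutions:

```python
import numpy as np

def simplex(A, b, c, tol=1e-9):
    """Maximize c . x subject to A x <= b, x >= 0, assuming b >= 0
    (so that the origin is a vertex, as in Sections 2-3)."""
    m, n = A.shape
    # Initial tableau [A | I | b] with the objective row -c at the bottom.
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n], T[:m, n:n + m], T[:m, -1] = A, np.eye(m), b
    T[m, :n] = -c
    basis = list(range(n, n + m))          # the slack variables are basic
    while True:
        s = int(np.argmin(T[m, :-1]))      # Step 2: most negative entry
        if T[m, s] >= -tol:                # Step 1: final tableau reached
            break
        ratios = [T[i, -1] / T[i, s] if T[i, s] > tol else np.inf
                  for i in range(m)]       # Step 4: minimum-ratio test
        r = int(np.argmin(ratios))
        if ratios[r] == np.inf:            # Step 3: f unbounded on P
            raise ValueError("objective is unbounded on the feasible region")
        T[r] /= T[r, s]                    # normalize the pivot row
        for i in range(m + 1):             # Step 5: clear the pivot column
            if i != r:
                T[i] -= T[i, s] * T[r]
        basis[r] = s                       # entering variable becomes basic
    x = np.zeros(n + m)
    x[basis] = T[:m, -1]                   # basic variables from the tableau
    return x[:n], T[m, -1]                 # solution and optimal value

# The example of Section 4:
A = np.array([[7.0, 0, 1], [1, 2, 0], [0, 3, 4]])
b = np.array([6.0, 20, 30])
c = np.array([1.0, 2, 3])
x, val = simplex(A, b, c)
print(x, val)  # -> (0, 2, 6) and 22
```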

4. An example

Let us work out a simple example. We wish to maximize the function

f(x_1, x_2, x_3) = x_1 + 2x_2 + 3x_3

subject to the constraints that x_1, x_2, and x_3 be non-negative and satisfy the linear inequalities

7x_1 + x_3 ≤ 6
x_1 + 2x_2 ≤ 20
3x_2 + 4x_3 ≤ 30

Since we have 3 linear inequalities, we introduce 3 slack variables x_4, x_5, and x_6. The initial tableau takes the form

x_1  x_2  x_3  x_4  x_5  x_6
 7    0    1    1    0    0  |  6
 1    2    0    0    1    0  | 20
 0    3    4    0    0    1  | 30
−1   −2   −3    0    0    0  |

and the initial basic variables are x_4, x_5, and x_6. We apply the simplex algorithm.

Iteration 1:

Step 1: Not all entries in the bottom row are non-negative.

Step 2: The entering basic variable is x_3 and the third column is the pivot column.

Step 3: There are positive entries in the pivot column.


Step 4: Since 30/4 > 6/1, the pivot is the top entry 1 of the pivot column. The leaving basic variable is x_4.

Step 5: The new basic variables are x_3, x_5, and x_6. To find the new tableau, we replace R_3 by R_3 − 4R_1 and R_4 by R_4 + 3R_1 (the row R_2 is unchanged, since its entry in the pivot column is zero). It takes the form:

x_1  x_2  x_3  x_4  x_5  x_6
  7    0    1    1    0    0  |  6
  1    2    0    0    1    0  | 20
−28    3    0   −4    0    1  |  6
 20   −2    0    3    0    0  |
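The row operations of this iteration are easy to check numerically. A minimal sketch, with R_1, . . . , R_4 the rows of the initial tableau (coefficients followed by the right-hand side):

```python
import numpy as np

# Rows of the initial tableau (six coefficients | right-hand side).
R1 = np.array([7.0, 0, 1, 1, 0, 0, 6])
R2 = np.array([1.0, 2, 0, 0, 1, 0, 20])
R3 = np.array([0.0, 3, 4, 0, 0, 1, 30])
R4 = np.array([-1.0, -2, -3, 0, 0, 0, 0])  # objective row

# The pivot is the entry 1 in row 1, column 3; clear the rest of column 3.
R3_new = R3 - 4 * R1   # -> [-28, 3, 0, -4, 0, 1, 6]
R4_new = R4 + 3 * R1   # -> [20, -2, 0, 3, 0, 0, 18]; last entry tracks f
print(R3_new)
print(R4_new)
```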

Iteration 2:

Step 1: Not all entries in the bottom row are non-negative.

Step 2: The entering basic variable is x_2 and the second column is the pivot column.

Step 3: There are positive entries in the pivot column.

Step 4: Since 20/2 > 6/3, the pivot is 3. The leaving basic variable is x_6.

Step 5: The new basic variables are x_2, x_3, and x_5. To find the new tableau, we replace R_2 by 3R_2 − 2R_3 and R_4 by 3R_4 + 2R_3. It takes the form

x_1  x_2  x_3  x_4  x_5  x_6
  7    0    1    1    0    0  |  6
 59    0    0    8    3   −2  | 48
−28    3    0   −4    0    1  |  6
  4    0    0    1    0    2  |

Iteration 3:

Step 1: All entries in the bottom row are non-negative.

We calculate the final basic feasible solution. We set the non-basic variables x_1, x_4, and x_6 equal to zero, and find x_2 = 2, x_3 = 6, and x_5 = 16. Hence, the desired maximal value of the function f is f(0, 2, 6) = 22.
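The result can be cross-checked against an off-the-shelf LP solver, for instance SciPy's `linprog` (assuming SciPy is available; `linprog` minimizes, so we negate the objective):

```python
import numpy as np
from scipy.optimize import linprog

# Maximize f = x1 + 2*x2 + 3*x3 by minimizing -f over the same region.
res = linprog(c=[-1, -2, -3],
              A_ub=[[7, 0, 1], [1, 2, 0], [0, 3, 4]],
              b_ub=[6, 20, 30],
              bounds=[(0, None)] * 3)
print(res.x, -res.fun)  # -> (0, 2, 6) and 22, as found above
```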

References

[1] G. M. Ziegler, Lectures on polytopes, Graduate Texts in Mathematics, vol. 152, Springer-Verlag, New York, 1995.

Nagoya University, Nagoya, Japan
E-mail address: larsh@math.nagoya-u.ac.jp
