An approximation algorithm for the covering 0-1 integer program

Yotaro Takazawa* Shinji Mizuno†

Abstract
The covering 0-1 integer program is a generalization of fundamental combinatorial optimization problems such as the vertex cover problem, the set cover problem, and the minimum knapsack problem. In this article, extending a 2-approximation algorithm for the minimum knapsack problem by Carnes and Shmoys (2015), we propose a \Delta_2-approximation algorithm, where \Delta_2 is the second largest number of non-zero coefficients in the constraints.
1 Introduction
For a given minimization problem having an optimal solution, an algorithm is called an \alpha-approximation algorithm if it runs in polynomial time and produces a feasible solution whose objective value is less than or equal to \alpha times the optimal value. We study the covering 0-1 integer program (CIP), which is formulated as follows:

CIP: \min \sum_{j\in N} c_j x_j
s.t. \sum_{j\in N} a_{ij} x_j \geq b_i, \forall i\in M=\{1,\ldots,m\},   (1)
x_j \in \{0,1\}, \forall j\in N=\{1,\ldots,n\},

where b_i, a_{ij}, and c_j (i\in M, j\in N) are nonnegative. Assume that \sum_{j\in N} a_{ij} \geq b_i for any i\in M, so that the problem is feasible. Let \Delta_i be the number of non-zero coefficients in the i-th constraint \sum_{j\in N} a_{ij} x_j \geq b_i. Without loss of generality, we assume that \Delta_1 \geq \Delta_2 \geq \cdots \geq \Delta_m and \Delta_2 \geq 2.
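As an illustration (the instance below is our own, not from the text), the vertex cover problem fits this format directly: one 0-1 variable per vertex, and for each edge \{u,v\} a constraint x_u + x_v \geq 1, so every constraint has exactly two non-zero coefficients and \Delta_1 = \Delta_2 = 2. A minimal sketch in Python:

```python
# Encode a small vertex cover instance as a CIP (illustrative example;
# the triangle graph below is an arbitrary choice, not from the paper).
# CIP data: minimize sum_j c[j]*x[j]  s.t.  sum_j a[i][j]*x[j] >= b[i], x[j] in {0,1}.

edges = [(0, 1), (1, 2), (0, 2)]   # a triangle on vertices {0, 1, 2}
n = 3                               # number of vertices / variables
c = [1, 1, 1]                       # unit costs: minimize the cover size

# One covering constraint per edge: x_u + x_v >= 1.
a = [[1 if j in e else 0 for j in range(n)] for e in edges]
b = [1] * len(edges)

# Delta_i = number of non-zero coefficients in constraint i (always 2 here).
deltas = sorted((sum(1 for v in row if v > 0) for row in a), reverse=True)
print(deltas)  # -> [2, 2, 2], so Delta_1 = Delta_2 = 2
```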
CIP is a generalization of fundamental combinatorial optimization problems such as the vertex cover problem, the set cover problem, and the minimum knapsack problem. There are some \Delta_1-approximation algorithms for CIP; see Koufogiannakis and Young [4] and references therein.
* Department of Industrial Engineering and Management, Tokyo Institute of Technology
In this article, we propose a \Delta_2-approximation algorithm for CIP. Our algorithm is an extension of a 2-approximation algorithm by Carnes and Shmoys [1] for the minimum knapsack problem, which is a special case of CIP where m=1. Part of this article is included in Takazawa and Mizuno [5].
2 An algorithm and its analysis
Carnes and Shmoys [1] used an LP relaxation of the minimum knapsack problem, which was presented by Carr et al. [2]. We also use the following LP relaxation of (1):

\min \sum_{j\in N} c_j x_j
s.t. \sum_{j\in N\backslash A} a_{ij}(A) x_j \geq b_i(A), \forall A\subseteq N, \forall i\in M,   (2)
x_j \geq 0, \forall j\in N,

where

b_i(A) = \max\{0,\, b_i - \sum_{j\in A} a_{ij}\}, \forall i\in M, \forall A\subseteq N,   (3)
a_{ij}(A) = \min\{a_{ij},\, b_i(A)\}, \forall i\in M, \forall A\subseteq N, \forall j\in N\backslash A.
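The quantities in (3) are straightforward to compute; a minimal sketch in Python (the helper names are our own):

```python
# Residual demand b_i(A) and truncated coefficients a_{ij}(A) from (3),
# for one constraint row a_i with right-hand side b_i and a set A of
# variables already fixed to 1. Function names are illustrative.

def b_residual(a_i, b_i, A):
    """b_i(A) = max{0, b_i - sum_{j in A} a_{ij}}."""
    return max(0, b_i - sum(a_i[j] for j in A))

def a_truncated(a_i, b_i, A, j):
    """a_{ij}(A) = min{a_{ij}, b_i(A)} for j not in A."""
    return min(a_i[j], b_residual(a_i, b_i, A))

a_i, b_i = [3, 5, 2], 6
print(b_residual(a_i, b_i, {0}))       # 6 - 3 = 3
print(a_truncated(a_i, b_i, {0}, 1))   # min(5, 3) = 3
```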
Carr et al. [2] show that any feasible 0-1 solution of (2) is feasible for (1). The dual problem of (2) can be stated as

\max \sum_{i\in M}\sum_{A\subseteq N} b_i(A) y_i(A)
s.t. \sum_{i\in M}\sum_{A\subseteq N: j\not\in A} a_{ij}(A) y_i(A) \leq c_j, \forall j\in N,   (4)
y_i(A) \geq 0, \forall A\subseteq N, \forall i\in M.
Now we introduce a well-known result for a primal-dual pair of linear programming problems [3].

Lemma 1. Let \overline{x} and \overline{y} be feasible solutions for the following primal and dual linear programming problems:

\min\{c^T x \mid Ax \geq b,\, x \geq 0\} and \max\{b^T y \mid A^T y \leq c,\, y \geq 0\}.

If the conditions

(a): \forall j\in\{1,\ldots,n\},\ \overline{x}_j > 0 \Rightarrow \sum_{i=1}^{m} a_{ij}\overline{y}_i = c_j,
(b): \forall i\in\{1,\ldots,m\},\ \overline{y}_i > 0 \Rightarrow \sum_{j=1}^{n} a_{ij}\overline{x}_j \leq \alpha b_i

hold, then \overline{x} is a solution within a factor of \alpha of the optimal solution, that is, the primal objective value c^T\overline{x} is less than or equal to \alpha times the optimal value. (Note that the primal problem has an optimal solution because both the primal and dual problems are feasible.)
By applying Lemma 1 to the LP problems (2) and (4), we have the following result.

Lemma 2. Let x and y be feasible solutions for (2) and (4), respectively. If these solutions satisfy

(a): \forall j\in N,\ x_j > 0 \Rightarrow \sum_{i\in M}\sum_{A\subseteq N: j\not\in A} a_{ij}(A) y_i(A) = c_j,   (5)
(b): \forall i\in M,\ \forall A\subseteq N,\ y_i(A) > 0 \Rightarrow \sum_{j\in N\backslash A} a_{ij}(A) x_j \leq \Delta_2 b_i(A),

then x is a solution within a factor of \Delta_2 of the optimal solution of (1).
Corollary 1. Let x be a feasible 0-1 solution of (2) and y be a feasible solution of (4). If these solutions satisfy (5), then x is a solution within a factor of \Delta_2 of the optimal solution of (1).
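Conditions (a) and (b) of (5) can be checked mechanically for candidate x and y. A minimal sketch in Python, under our own representation choices (y is stored sparsely, only on pairs (i, A) where it is positive, and \Delta_2 is passed in as a parameter):

```python
# Verify conditions (5) for a candidate primal x in {0,1}^n and a sparse
# dual y given as {(i, frozenset(A)): value}. Uses definitions (3).
# All function and parameter names here are our own.

def check_conditions(a, b, c, x, y, delta2, tol=1e-9):
    n = len(c)
    def bA(i, A):                       # b_i(A) from (3)
        return max(0, b[i] - sum(a[i][j] for j in A))
    def aA(i, A, j):                    # a_{ij}(A) from (3)
        return min(a[i][j], bA(i, A))
    # (a): x_j > 0 implies the j-th dual constraint is tight.
    for j in range(n):
        if x[j] > 0:
            total = sum(v * aA(i, A, j) for (i, A), v in y.items() if j not in A)
            if abs(total - c[j]) > tol:
                return False
    # (b): y_i(A) > 0 implies sum_{j not in A} a_{ij}(A) x_j <= Delta_2 * b_i(A).
    for (i, A), v in y.items():
        if v > 0:
            lhs = sum(aA(i, A, j) * x[j] for j in range(n) if j not in A)
            if lhs > delta2 * bA(i, A) + tol:
                return False
    return True

# Tiny single-constraint example: 2*x_0 + 2*x_1 >= 2, costs (1, 1).
a, b, c = [[2, 2]], [2], [1, 1]
x = [1, 0]
y = {(0, frozenset()): 0.5}   # makes the constraint for x_0 tight: 0.5 * 2 = 1
print(check_conditions(a, b, c, x, y, delta2=2))  # -> True
```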
Our algorithm is presented in Algorithm 1 below. The goal is to find x and y which satisfy the conditions in Corollary 1. The algorithm generates a sequence of points x and y which always satisfy the following conditions:

\bullet x \in \{0,1\}^n,
\bullet y is feasible for (4),
\bullet x and y satisfy (5).

In Algorithm 1, we use the symbols S=\{j\in N \mid x_j=1\}, b_i(S)=\max\{0,\, b_i-\sum_{j\in S} a_{ij}\} for i\in M, and \overline{c}_j = c_j - \sum_{i\in M}\sum_{A\subseteq N: j\not\in A} a_{ij}(A) y_i(A) for j\in N.
Algorithm 1
Input: M, N, a_{ij}, b_i, and c_j (i\in M, j\in N).
Output: \tilde{x} and \tilde{y}.
Step 0: Set x=0, y=0, and S=\emptyset. Let N_i'=\{j\in N \mid a_{ij}>0\} for i\in M, \overline{c}_j=c_j for j\in N, and i=m.
Step 1: If i=0, then output \tilde{x}=x and \tilde{y}=y and stop. Otherwise set b_i(S)=\max\{0,\, b_i-\sum_{j\in S} a_{ij}\} and go to Step 2.
Step 2: If b_i(S)=0, then update i=i-1 and go to Step 1. Otherwise calculate a_{ij}(S) for any j\in N_i'\backslash S by (3). Increase y_i(S) while maintaining dual feasibility until at least one constraint s\in N_i'\backslash S is tight, namely set y_i(S)=\overline{c}_s / a_{is}(S) for s=\arg\min_{j\in N_i'\backslash S}\{\overline{c}_j / a_{ij}(S)\}. Update \overline{c}_j = \overline{c}_j - a_{ij}(S) y_i(S) for j\in N_i'\backslash S, x_s=1, S=S\cup\{s\}, and b_i(S)=\max\{0,\, b_i-\sum_{j\in S} a_{ij}\}, then repeat Step 2.
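Algorithm 1 translates directly into code. A minimal sketch in Python under our own data-structure choices (the dual y is kept sparse, indexed by (i, S); indices are 0-based, so the paper's i = m, ..., 1 becomes i = m-1, ..., 0; ties in the arg min are broken by smallest index):

```python
# A direct transcription of Algorithm 1 (primal-dual). The dual variables
# y_i(S) are stored sparsely; names mirror the paper's notation. Assumes
# the instance is feasible, i.e. sum_j a[i][j] >= b[i] for every i.

def algorithm1(a, b, c, m, n):
    x = [0] * n
    y = {}                       # y[(i, frozenset(S))] = y_i(S)
    S = set()
    cbar = list(c)               # reduced costs c-bar_j
    Nprime = [{j for j in range(n) if a[i][j] > 0} for i in range(m)]
    i = m - 1
    while i >= 0:                # Step 1: stop after all constraints processed
        biS = max(0, b[i] - sum(a[i][j] for j in S))
        if biS == 0:             # Step 2: constraint i already satisfied
            i -= 1
            continue
        cand = Nprime[i] - S
        aS = {j: min(a[i][j], biS) for j in cand}      # a_{ij}(S) from (3)
        s = min(sorted(cand), key=lambda j: cbar[j] / aS[j])
        yiS = cbar[s] / aS[s]                          # makes constraint s tight
        y[(i, frozenset(S))] = y.get((i, frozenset(S)), 0) + yiS
        for j in cand:
            cbar[j] -= aS[j] * yiS                     # update reduced costs
        x[s] = 1
        S = S | {s}
    return x, y

# Vertex cover on a triangle: one constraint x_u + x_v >= 1 per edge.
a = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]
b, c = [1, 1, 1], [1, 1, 1]
x, y = algorithm1(a, b, c, m=3, n=3)
print(x, sum(cj * xj for cj, xj in zip(c, x)))  # -> [1, 0, 1] 2
```

On this instance the algorithm returns a cover of cost 2, which is optimal and within the guaranteed factor \Delta_2 = 2.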
For the outputs \tilde{x} and \tilde{y} of Algorithm 1, we have the following results.

Lemma 3. \tilde{x} is a feasible 0-1 solution of (2) and \tilde{y} is a feasible solution of (4).
Proof. By the assumption that (1) is feasible, x=(1,\ldots,1) is feasible for the LP relaxation problem (2). Algorithm 1 starts from x=0 and updates a variable x_j from 0 to 1 at each iteration until each constraint in (2) is satisfied. Hence \tilde{x} is a feasible 0-1 solution of (2). Algorithm 1 starts from the dual feasible solution y=0 and maintains dual feasibility throughout the algorithm. Hence \tilde{y} is feasible for (4). \square

Lemma 4. \tilde{x} and \tilde{y} satisfy (5).
Proof. All the conditions in (a) of (5) are naturally satisfied by the way the algorithm updates the primal variables. It suffices to show that all the conditions in (b) are satisfied. For any i\in\{2,\ldots,m\} and any subset A\subseteq N such that \tilde{y}_i(A)>0, we obtain that

\sum_{j\in N\backslash A} a_{ij}(A)\tilde{x}_j \leq \Delta_i b_i(A) \leq \Delta_2 b_i(A),

since a_{ij}(A) \leq b_i(A) by the definition (3) and the i-th constraint has \Delta_i non-zero coefficients. Then, we consider the case of i=1. Define \tilde{S}=\{j\in N \mid \tilde{x}_j=1\}. Let \tilde{x}_\ell be the variable which becomes 1 from 0 at the last iteration of Step 2. From Step 2, \tilde{y}_1(A)>0 implies

A \subseteq \tilde{S}\backslash\{\ell\}.   (6)

Since the algorithm does not stop just before setting \tilde{x}_\ell=1, we have

\sum_{j\in\tilde{S}\backslash\{\ell\}} a_{1j} < b_1.   (7)

By (6) and (7), we observe that for any subset A\subseteq N such that \tilde{y}_1(A)>0,

\sum_{j\in(\tilde{S}\backslash\{\ell\})\backslash A} a_{1j}(A) \leq \sum_{j\in(\tilde{S}\backslash\{\ell\})\backslash A} a_{1j} = \sum_{j\in\tilde{S}\backslash\{\ell\}} a_{1j} - \sum_{j\in A} a_{1j} < b_1 - \sum_{j\in A} a_{1j} \leq b_1(A),

where the first and last inequalities follow from the definitions (3) of a_{1j}(A) and b_1(A). Thus, we have that for any subset A\subseteq N such that \tilde{y}_1(A)>0,

\sum_{j\in N\backslash A} a_{1j}(A)\tilde{x}_j = \sum_{j\in\tilde{S}\backslash A} a_{1j}(A) = \sum_{j\in(\tilde{S}\backslash\{\ell\})\backslash A} a_{1j}(A) + a_{1\ell}(A) \leq \Delta_2 b_1(A),

where the last inequality follows from a_{1\ell}(A) \leq b_1(A) and \Delta_2 \geq 2. \square
Lemma 5. The running time of Algorithm 1 is O(\Delta_1(m+n)).

Proof. The running time of one iteration of Step 1 is O(\Delta_1) and the number of iterations in Step 1 is at most m. On the other hand, the running time of one iteration of Step 2 is O(\Delta_1) and the number of iterations in Step 2 is at most m+n. Therefore the total running time of the algorithm is O(\Delta_1 m) + O(\Delta_1(m+n)) = O(\Delta_1(m+n)). \square

From the results above, we can obtain the next theorem.

Theorem 1. Algorithm 1 is a \Delta_2-approximation algorithm for CIP.

3 Conclusion
The covering 0-1 integer program (CIP) is a generalization of fundamental combinatorial optimization problems. There are some \Delta_1-approximation algorithms for CIP, where \Delta_1 is the largest number of non-zero coefficients in the constraints. In this article, we extend a 2-approximation algorithm for the minimum knapsack problem by Carnes and Shmoys [1] to CIP and propose a \Delta_2-approximation algorithm, where \Delta_2 is the second largest number of non-zero coefficients in the constraints.
Acknowledgment

This research is supported in part by Grant-in-Aid for Scientific Research (A) 26242027 of the Japan Society for the Promotion of Science.

References
[1] T. Carnes and D. Shmoys: Primal-dual schema for capacitated covering problems, Mathematical Programming, 153 (2015), 289-308.
[2] R. D. Carr, L. Fleischer, V. J. Leung and C. A. Phillips: Strengthening integrality gaps for capacitated network design and covering problems, Proceedings of the 11th Annual ACM-SIAM Symposium on Discrete Algorithms (2000), 106-115.
[3] D. Du, K. Ko and X. Hu: Design and Analysis of Approximation Algorithms (Springer Optimization and Its Applications, 2011), 297-303.
[4] C. Koufogiannakis and N. E. Young: Greedy \delta-approximation algorithm for covering with arbitrary constraints and submodular cost, Algorithmica, 66 (2013), 113-152.
[5] Y. Takazawa and S. Mizuno: A 2-approximation algorithm for the minimum knapsack problem with a forcing graph, to appear in Journal of the Operations Research Society of Japan (2017).
Yotaro Takazawa
Department of Industrial Engineering and Management
Tokyo Institute of Technology
2-12-1