Randomized Algorithms (Spring 2010)/Randomized approximation algorithms

From TCS Wiki
== Approximation Algorithms ==
An optimization problem consists of:
* A set of valid '''instances''' (inputs).
* Each instance <math>I</math> defines a set of '''feasible solutions''' <math>\mathcal{F}_I</math>.
* Each instance <math>I</math> also defines an '''objective function''' <math>\mathrm{obj}_I:\mathcal{F}_I\rightarrow \mathbb{R}</math>.
* The problem is specified to be either a '''minimization''' problem or a '''maximization''' problem. Given an instance <math>I</math>, our goal is to find a feasible solution <math>x\in\mathcal{F}_I</math> with the minimum (or maximum) value of <math>\mathrm{obj}_I(x)</math>.
{{Theorem|Example: the <math>s</math>-<math>t</math> shortest path problem|
* Each instance is a nonnegatively weighted directed graph <math>G(V,E,w)</math> along with two distinct vertices <math>s,t\in V</math>.
* Given an instance, the feasible solutions are the simple paths from <math>s</math> to <math>t</math> in <math>G</math>.
* The objective function is the length of the path, defined by the total weight of the path.
* The problem is a minimization problem.
}}
An optimization problem can be transformed to a decision problem by "thresholding". For example, the optimization problem: "Given an instance <math>I</math>, find the <math>x\in\mathcal{F}_I</math> with minimum <math>\mathrm{obj}_I(x)</math>." can be transformed to a (parameterized) decision problem: "Given an instance <math>(I,T)</math>, determine whether there exists an <math>x\in\mathcal{F}_I</math> with <math>\mathrm{obj}_I(x)<T</math>."
Given an optimization problem, we can certainly solve its decision version by solving the optimization problem itself. The converse is also true: we can find the optimum by making queries to the decision version. Through binary search, this can be done efficiently.
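The binary-search reduction from the optimization version to the decision version can be sketched as follows. This is a minimal sketch assuming integer objective values and a monotone decision oracle; the names (`optimize_via_decision`, `decide`) and the toy objective are hypothetical, and for convenience the oracle answers the non-strict question "is there a feasible solution with objective at most <math>T</math>?".

```python
def optimize_via_decision(decide, lo, hi):
    """Find the minimum integer T in [lo, hi] with decide(T) == True,
    where decide(T) answers "is there a feasible solution with
    objective value at most T?".  Assumes decide is monotone and
    decide(hi) is True, so binary search needs O(log(hi - lo)) queries."""
    while lo < hi:
        mid = (lo + hi) // 2
        if decide(mid):
            hi = mid        # a solution with objective <= mid exists
        else:
            lo = mid + 1    # every feasible solution has objective > mid
    return lo

# Hypothetical toy objective: minimize x*x - 6*x + 10 over integers 0..10.
decide = lambda T: any(x * x - 6 * x + 10 <= T for x in range(11))
print(optimize_via_decision(decide, -100, 100))  # 1 (attained at x = 3)
```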


=== Coping with the NP-hardness ===
An optimization problem is an '''NP-hard''' optimization problem if its decision version is '''NP-hard'''. Many interesting optimization problems are '''NP-hard'''. There are basically two ways to deal with the hardness.
;Heuristics
:We apply heuristic ideas to search for the optimal solution. This may work quite well on some natural instances. On the other hand, it is usually very hard to give a theoretical explanation of why a heuristic performs well on a given input instance.
;Approximation algorithms
:We design an algorithm with '''''guaranteed'''''  worst-case running time, which approximates the optimum within a '''''guaranteed''''' approximation ratio.
Let <math>\mathrm{OPT}_I</math> denote the optimum of the instance <math>I</math>.
For a minimization problem, for any instance <math>I</math>, the algorithm returns a feasible solution <math>x\in\mathcal{F}_I</math>, such that <math>\mathrm{obj}_I(x)\le \alpha\cdot\mathrm{OPT}_I</math>, where <math>\alpha>1</math>.
For a maximization problem, for any instance <math>I</math>, the algorithm returns a feasible solution <math>x\in\mathcal{F}_I</math>, such that <math>\mathrm{obj}_I(x)\ge \alpha\cdot\mathrm{OPT}_I</math>, where <math>0<\alpha<1</math>.
Such an algorithm is called an '''<math>\alpha</math>-approximation algorithm''' for the optimization problem. The ratio <math>\alpha</math> is called the '''approximation ratio'''.
We are particularly interested in the approximation algorithms running in poly-time of the input size, for the '''NP-hard''' optimization problems.


=== Combinatorial approximation algorithms ===
Classic approximation algorithms are mostly combinatorial algorithms, which use standard algorithm design techniques (e.g., greedy, dynamic programming). In addition to the time complexity, we have to bound the approximation ratio.
We will introduce two examples, the minimum makespan scheduling, and the knapsack problem.
==== Scheduling ====
Scheduling is a class of problems. We consider a central problem in scheduling theory: the '''minimum makespan scheduling'''.
{{Theorem
|Problem (Minimum makespan scheduling)|
:Given processing times for <math>n</math> jobs, <math>p_1,p_2,\ldots,p_n</math>, and an integer <math>m</math>, find an assignment of the jobs to <math>m</math> identical machines so that the completion time, also called the '''makespan''', is minimum.
:Formally, given as input a sequence <math>p_1,p_2,\ldots,p_n</math> and an integer <math>m</math>, find an <math>h:[n]\rightarrow[m]</math> such that <math>\max_{i\in[m]}\sum_{j\in h^{-1}(i)}p_j</math> is minimized.
}}
This problem is known to be '''NP-hard'''. We consider a very simple and natural algorithm:
*Schedule the jobs one by one, in an arbitrary order, each job being assigned to a machine with least amount of work so far.
This algorithm is due to Graham (1966), who also showed that it approximates the optimal makespan within a factor of 2.
Let OPT be the optimal makespan. We have two observations:
* <math>\mathrm{OPT}\ge\frac{1}{m}\sum_{j=1}^n p_j</math>, i.e. the optimal makespan is no smaller than the average processing time.
* <math>\mathrm{OPT}\ge \max_{1\le j\le n}p_j</math>, i.e. the optimal makespan is no smaller than the largest processing time.
'''Proof of the approximation factor''': Assume that the last job scheduled is <math>p_n</math>, assigned to machine <math>i</math>, and that this job determines the makespan <math>M</math> produced by the algorithm (otherwise, consider the last job scheduled on a machine achieving the makespan). Since the algorithm assigns each job to a least loaded machine, right before <math>p_n</math> is scheduled, machine <math>i</math>, whose load is then <math>M-p_n</math>, must be a least loaded machine. Thus, <math>M-p_n</math> is no greater than the average processing time. Then,
:<math>M-p_n\le \frac{1}{m}\sum_{j=1}^n p_j\le \mathrm{OPT}</math>.
And since <math>p_n\le\max_{1\le j\le n}p_j\le \mathrm{OPT}</math>, it holds that <math>M=M-p_n+p_n\le 2\cdot\mathrm{OPT}</math>.
<math>\square</math>
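Graham's greedy rule above can be sketched in a few lines. This is a minimal sketch (the function name and the toy instance are for illustration only), using a heap to locate a least-loaded machine:

```python
import heapq

def graham_list_schedule(p, m):
    """Graham's list scheduling: go through the jobs in the given order,
    assigning each to a currently least-loaded machine.  Returns the
    makespan, which by the proof above is at most 2 * OPT."""
    loads = [(0, i) for i in range(m)]      # (current load, machine id)
    heapq.heapify(loads)
    for pj in p:
        load, i = heapq.heappop(loads)      # a least-loaded machine
        heapq.heappush(loads, (load + pj, i))
    return max(load for load, _ in loads)

# Toy instance: 6 jobs on 2 machines; total work is 19, so OPT >= 10 here.
print(graham_list_schedule([2, 3, 4, 6, 2, 2], 2))  # 10
```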
There are poly-time algorithms for this problem that use different ideas and achieve much better approximation ratios.
==== Knapsack ====
{{Theorem
|Problem (Knapsack)|
:Given a set <math>S=\{a_1,a_2,\ldots,a_n\}</math> of objects, each with a specified weight <math>w_i\in\mathbb{Z}^+</math> and profit <math>p_i\in\mathbb{Z}^+</math>, and a knapsack capacity <math>B</math>, find a subset of objects whose total weight is bounded by <math>B</math> and whose total profit is maximized.
}}
A heuristic for the problem is to sort the objects by decreasing ratio of profit to weight, and then greedily pick objects in this order. However, the ratio between the solution of this heuristic and the OPT can be made arbitrarily bad. This is left as an exercise.
; A pseudo-polynomial time algorithm by dynamic programming
Let <math>P=\sum_{i=1}^n p_i</math> be the largest possible total profit. We first minimize the total weight for each possible total profit value <math>p</math>.
For each <math>i\in\{1,2,\ldots,n\}</math> and <math>p\in\{1,2,\ldots,P\}</math>, let <math>S_{i,p}\subseteq\{a_1,a_2,\ldots,a_i\}</math> denote a subset of the first <math>i</math> objects whose total profit is exactly <math>p</math> and whose total weight is minimized.
Let <math>A(i,p)</math> be the total weight of the set <math>S_{i,p}</math> (<math>A(i,p)=\infty</math> if no such set exists). Clearly <math>A(1,p)</math> is known for every <math>p</math>. The following recurrence holds:
:<math>
\begin{align}
A(1,p) &=
\begin{cases}
w_1 & \mbox{if }p_1=p,\\
\infty & \mbox{otherwise};
\end{cases}\\
A(i+1,p) &=
\begin{cases}
\min\{A(i,p),w_{i+1}+A(i,p-p_{i+1})\} & \mbox{if }p_{i+1}\le p,\\
A(i,p) & \mbox{otherwise}.
\end{cases}
\end{align}
</math>
By [http://en.wikipedia.org/wiki/Dynamic_programming '''dynamic programming'''], we can compute all values <math>A(i,p)</math> by constructing the <math>n\times P</math> matrix <math>A(\cdot,\cdot)</math> in an appropriate order. This takes totally <math>O(n P)</math> time.
The optimal profit is given by <math>\max\{p\mid A(n,p)\le B\}</math>.
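The profit-indexed dynamic program can be sketched as follows. This is a minimal sketch which, for simplicity, uses a single rolling array over profits (scanned in decreasing order so each object is used at most once) instead of the full n-by-P matrix, and also allows the empty set (profit 0); the function name and the toy instance are illustrative.

```python
def knapsack_optimal_profit(w, p, B):
    """Profit-indexed knapsack DP.  A[q] is the minimum total weight of a
    subset of the objects processed so far whose total profit is exactly
    q (infinity if no such subset exists).  Runs in O(n * P) time for
    P = sum(p), i.e. pseudo-polynomial time."""
    INF = float('inf')
    P = sum(p)
    A = [INF] * (P + 1)
    A[0] = 0                                   # the empty set
    for wi, pi in zip(w, p):
        # scan profits downwards so each object is used at most once
        for q in range(P, pi - 1, -1):
            A[q] = min(A[q], A[q - pi] + wi)
    # the optimal profit: the largest q achievable within capacity B
    return max(q for q in range(P + 1) if A[q] <= B)

print(knapsack_optimal_profit([3, 4, 5], [4, 5, 6], 7))  # 9 (objects 1 and 2)
```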
Did we just solve an '''NP-hard''' problem in polynomial time, and prove that '''P'''='''NP'''?
Actually, no, because of the factor <math>P</math> in the time complexity. A poly-time algorithm, by definition, is one whose worst-case running time is bounded by a polynomial in the size of the input, where the input is represented in binary.
The binary representation of the input of the knapsack problem takes only about <math>O(\sum_{i=1}^n\log p_i)</math> bits, which means that the above algorithm is not poly-time. However, if we represent the input in '''unary''' form, it takes <math>\Omega(P+n)</math> unary bits to represent the input. With unary representation of the input, our algorithm is poly-time. An algorithm with this property is called a '''[http://en.wikipedia.org/wiki/Pseudo-polynomial_time pseudo-polynomial time]''' algorithm.
; An FPTAS by "rounding and scaling"
We observe that the above dynamic programming algorithm runs in poly-time if the total profit <math>p</math> of a set of objects can take only <math>\mathrm{poly}(n)</math> many possible values, since then the matrix <math>A</math> filled in by the dynamic programming is only polynomially large.
This leads us to an approximation algorithm by approximating all the profits to nearby lattice points.
The idea is informally called the "''rounding and scaling''" technique:
{{Theorem
|Algorithm|
# Given <math>\epsilon>0</math>, let <math>K=\frac{\epsilon p_\max}{n}</math>, where <math>p_{\max}=\max_{1\le i\le n}p_i</math>.
# For each object <math>a_i</math>, let <math>p_i'=\left\lfloor\frac{p_i}{K}\right\rfloor</math>.
# With these new profits, use the dynamic programming algorithm to find the most profitable set, say <math>S</math>.
# Return <math>S</math>.
}}
The algorithm takes as input an instance of the knapsack problem and a parameter <math>0<\epsilon<1</math>, and returns a knapsack solution with approximation ratio <math>(1-\epsilon)</math> (it is a maximization problem, so the approximation ratio is less than 1). The following lemma bounds the approximation ratio.
{{Theorem
|Lemma|
:Let <math>S</math> be the output of the algorithm, and <math>p(S)=\sum_{i\in S}p_i</math> be the total profit of the objects in <math>S</math>. Then
::<math>p(S)\ge (1-\epsilon)\cdot\mathrm{OPT}</math>.
}}
{{Proof| Let <math>O</math> be the optimal set. For any object <math>a_i</math>, because of rounding down as <math>p'_i=\left\lfloor\frac{p_i}{K}\right\rfloor</math>, it holds that
:<math>\frac{p_i}{K}-1\le p'_i\le\frac{p_i}{K}</math>, and equivalently <math>p_i-K\le K\cdot p'_i\le p_i</math>.
Summing the inequality <math>p_i-K\cdot p'_i\le K</math> over the at most <math>n</math> objects in <math>O</math> gives
:<math>p(O)-K\cdot p'(O)\le n K</math>.
The dynamic programming algorithm returns the optimal solution for the new profits <math>p'</math>. Therefore,
:<math>\begin{align}
p(S)
&\ge K\cdot p'(S) &&\quad (K\cdot p'_i\le p_i)\\
&\ge K\cdot p'(O) &&\quad (S\mbox{ is optimal for the profits }p')\\
&\ge p(O)-nK \\
&=\mathrm{OPT}-\epsilon p_\max &&\quad (K=\frac{\epsilon p_\max}{n})\\
&\ge (1-\epsilon)\cdot\mathrm{OPT}. &&\quad (\mathrm{OPT}\ge p_\max)
\end{align}
</math>
}}
The running time of the approximation algorithm depends on the size of the matrix <math>A</math> of dynamic programming, which is
:<math>O\left(n\sum_{i=1}^n\left\lfloor\frac{p_i}{K}\right\rfloor\right)
=O\left(n^2\left\lfloor\frac{p_\max}{K}\right\rfloor\right)
=O\left(n^2\left\lfloor\frac{n}{\epsilon}\right\rfloor\right)=O(n^3\epsilon^{-1})</math>.
Therefore, the algorithm finds a solution with total profit at least <math>(1-\epsilon)\mathrm{OPT}</math> in time polynomial in <math>n</math> and <math>\frac{1}{\epsilon}</math>, for any <math>0<\epsilon<1</math>. Such an algorithm is called a [http://en.wikipedia.org/wiki/Fptas '''fully polynomial-time approximation scheme (FPTAS)''']. This is the best we can expect from a poly-time algorithm, assuming that '''P'''<math>\neq</math>'''NP'''.
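The rounding-and-scaling scheme can be sketched end to end. This is a minimal sketch (function names and the toy instance are illustrative) that tracks the chosen subset inside the DP table, so the returned set can be evaluated with the original profits:

```python
from math import floor

def knapsack_fptas(w, p, B, eps):
    """Rounding-and-scaling FPTAS sketch: scale every profit down by
    K = eps * max(p) / n, round down, run the exact profit-indexed DP
    on the scaled profits, and return the chosen object indices
    (0-based).  The set has original profit at least (1 - eps) * OPT."""
    n = len(p)
    K = eps * max(p) / n
    ps = [floor(pi / K) for pi in p]           # rounded, scaled profits
    P = sum(ps)
    INF = float('inf')
    # A[q] = (min weight, chosen index set) for scaled profit exactly q
    A = [(INF, ())] * (P + 1)
    A[0] = (0.0, ())
    for i in range(n):
        for q in range(P, ps[i] - 1, -1):      # each object used once
            wt, sel = A[q - ps[i]]
            if wt + w[i] < A[q][0]:
                A[q] = (wt + w[i], sel + (i,))
    best_q = max(q for q in range(P + 1) if A[q][0] <= B)
    return list(A[best_q][1])

S = knapsack_fptas([3, 4, 5], [4, 5, 6], 7, 0.5)
print(S, sum([4, 5, 6][i] for i in S))  # [0, 1] 9 -- the true optimum here
```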


=== LP-based approximation algorithms ===
A more advanced technique for designing approximation algorithms is linear programming. Many combinatorial optimization problems can be stated as integer programs (IP). The integer programs can be '''relaxed''' to linear programming problems. A feasible solution to the '''LP-relaxation''' is a '''fractional solution''' to the original combinatorial problem.
However, in the case of an '''NP-hard''' problem, we cannot expect the LP-relaxation to have the same optimal solution as the original integer program (otherwise a poly-time algorithm would solve an '''NP-hard''' problem). Thus, our task is not to look for an optimal solution to the LP-relaxation, but rather for an integral solution which approximates the optimum within some bounded ratio.
There are two fundamental techniques for designing approximation algorithms using linear programming:
* The first is to solve the LP-relaxation and convert the fractional optimal solution to an integral solution, with bounded approximation ratio to the optimal integral solution. This technique is called '''rounding'''.
* The second is less obvious but with deeper insight, which uses the idea of '''duality'''. This technique is called the '''primal-dual schema'''.
We will focus on the rounding technique and demonstrate the power of randomized rounding. For other uses of LP for designing approximation algorithms, please refer to the classic textbook "''Approximation Algorithms''" by Vazirani.


== Randomized Rounding ==


=== The integrality gap ===
Given an '''LP-relaxation''' for a minimization problem, let <math>\mathrm{OPT}(I)</math> be the cost of an optimal solution to instance <math>I</math>, and
let <math>\mathrm{OPT}_f(I)</math> denote the cost of an optimal '''fractional''' solution to instance <math>I</math>, i.e., the value of the objective function of an optimal solution to the LP-relaxation. The '''integrality gap''' is defined to be
:<math>
\sup_{I}\frac{\mathrm{OPT}(I)}{\mathrm{OPT}_f(I)}.
</math>


The integrality gap for a maximization problem can be defined similarly.


=== Max-SAT ===
We consider the Max-SAT problem:
{{Theorem
|Problem (Max-SAT)|
:Given a conjunctive normal form (CNF) formula of <math>m</math> clauses defined on <math>n</math> boolean variables <math>x_1,x_2,\ldots,x_n</math>, find a truth assignment to the boolean variables that maximizes the number of satisfied clauses.
}}
and obtain two approximation algorithms by two different techniques.
==== The probabilistic method ====
A straightforward way to solve Max-SAT is to assign each variable a uniformly random truth value, independently of the others. We have learned from the lecture about the probabilistic method that this "algorithm" performs well in expectation. Specifically, a clause <math>C_j</math> of <math>k</math> distinct variables is unsatisfied only if every one of the <math>k</math> variables receives the wrong "guess". The probability of this bad event is <math>2^{-k}</math>, thus the clause <math>C_j</math> is satisfied with probability <math>1-2^{-k}\ge \frac{1}{2}</math>. By the linearity of expectation, the expected number of satisfied clauses is at least <math>\frac{m}{2}\ge \frac{1}{2}\cdot\mathrm{OPT}</math>.
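The naive randomized "algorithm" can be sketched and checked empirically. This is a minimal sketch in which the clause encoding and all names are illustrative: a clause is a list of signed variable indices, with <math>+i</math> standing for <math>x_i</math> and <math>-i</math> for its negation.

```python
import random

def random_truth_assignment_count(clauses, n, rng):
    """Assign each of x_1..x_n an independent uniform random truth value
    and return the number of satisfied clauses.  A clause is a list of
    signed variable indices: +i stands for x_i, -i for NOT x_i."""
    x = [None] + [rng.random() < 0.5 for _ in range(n)]
    return sum(
        1 for clause in clauses
        if any(x[v] if v > 0 else not x[-v] for v in clause)
    )

# (x1 v x2) ^ (!x1 v x3) ^ (x2 v !x3): m = 3 clauses, each with k = 2
# literals, so each is satisfied with probability 3/4 and the expected
# number of satisfied clauses is 2.25 >= m/2.
clauses = [[1, 2], [-1, 3], [2, -3]]
rng = random.Random(0)   # fixed seed, for reproducibility
avg = sum(random_truth_assignment_count(clauses, 3, rng)
          for _ in range(10000)) / 10000
print(avg)  # close to 2.25
```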
We will see that by randomized rounding an LP, we can have a better approximation ratio.
==== LP relaxation + randomized rounding ====
For a clause <math>C_j</math>, let <math>C_j^+</math> be the set of indices of the variables that appear in uncomplemented form in clause <math>C_j</math>, and let <math>C_j^-</math> be the set of indices of the variables that appear in complemented form in clause <math>C_j</math>. The Max-SAT problem can be formulated as the following integer linear program:
:<math>
\begin{align}
\mbox{maximize} &\quad \sum_{j=1}^m z_j\\
\mbox{subject to} &\quad \sum_{i\in C_j^+}y_i+\sum_{i\in C_j^-}(1-y_i) \ge z_j, &&\forall 1\le j\le m\\
&\qquad\qquad y_i \in\{0,1\}, &&\forall 1\le i\le n \\
&\qquad\qquad z_j \in\{0,1\}, &&\forall 1\le j\le m
\end{align}
</math>
Each <math>y_i</math> in the program indicates the truth assignment to the variable <math>x_i</math>, and each <math>z_j</math> indicates whether the clause <math>C_j</math> is satisfied. The inequalities ensure that a clause is deemed satisfied only if at least one of its literals is assigned the value 1.
The integer linear program is relaxed to the following linear program:
:<math>
\begin{align}
\mbox{maximize} &\quad \sum_{j=1}^m z_j\\
\mbox{subject to} &\quad \sum_{i\in C_j^+}y_i+\sum_{i\in C_j^-}(1-y_i) \ge z_j, &&\forall 1\le j\le m\\
&\qquad\qquad 0\le y_i\le 1, &&\forall 1\le i\le n \\
&\qquad\qquad 0\le z_j\le 1, &&\forall 1\le j\le m
\end{align}
</math>
Let <math>\hat{y}_i</math> and <math>\hat{z}_j</math> be the fractional optimal solutions to the above linear programming. Clearly, <math>\sum_{j=1}^m\hat{z}_j</math> is an upper bound on the optimal number of satisfied clauses.
We apply a very natural randomized rounding scheme. For each <math>1\le i\le n</math>, independently set
:<math>y_i
=\begin{cases}
1 & \mbox{with probability }\hat{y}_i.\\
0 & \mbox{with probability }1-\hat{y}_i.
\end{cases}
</math>
Correspondingly, each <math>x_i</math> is assigned to <tt>TRUE</tt> independently with probability <math>\hat{y}_i</math>.
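The rounding step can be sketched as follows. This is a minimal sketch in which the fractional values <math>\hat{y}_i</math> are hypothetical placeholders standing in for the output of an LP solver applied to the relaxation above; the function name is illustrative.

```python
import random

def randomized_rounding(y_hat, rng):
    """Independently set x_i to TRUE with probability y_hat[i], the
    value of y_i in a fractional optimum of the LP relaxation."""
    return [rng.random() < yi for yi in y_hat]

# Hypothetical fractional optimum for a 3-variable instance (placeholder
# values, not computed from any particular formula).
y_hat = [0.5, 1.0, 0.25]
rng = random.Random(0)
assignment = randomized_rounding(y_hat, rng)
# x_2 is always TRUE (y = 1.0); x_1 is TRUE half the time; x_3 a quarter.
```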
{{Theorem
|Lemma|
: Let <math>C_j</math> be a clause with <math>k</math> literals. The probability that it is satisfied by randomized rounding is at least
::<math>(1-(1-1/k)^k)\hat{z}_j</math>.
}}
{{Proof| Without loss of generality, we assume that all <math>k</math> variables appear in <math>C_j</math> in the uncomplemented form, and we assume that
:<math>C_j=x_1\vee x_2\vee\cdots\vee x_k</math>.
The complemented cases are symmetric.
Clause <math>C_j</math> remains unsatisfied by randomized rounding only if every one of <math>x_i</math>, <math>1\le i\le k</math>, is assigned to <tt>FALSE</tt>, which corresponds to that every one of <math>y_i</math>, <math>1\le i\le k</math>, is rounded to 0. This event occurs with probability <math>\prod_{i=1}^k(1-\hat{y}_i)</math>. Therefore, the clause <math>C_j</math> is satisfied by the randomized rounding with probability
:<math>1-\prod_{i=1}^k(1-\hat{y}_i)</math>.
By the linear programming constraints,
:<math>\hat{y}_1+\hat{y}_2+\cdots+\hat{y}_k\ge\hat{z}_j</math>.
By the AM–GM inequality, the value of <math>1-\prod_{i=1}^k(1-\hat{y}_i)</math> subject to this constraint is minimized when all <math>\hat{y}_i</math> are equal, with <math>\hat{y}_i=\hat{z}_j/k</math>. Thus, the probability that <math>C_j</math> is satisfied is
:<math>1-\prod_{i=1}^k(1-\hat{y}_i)\ge 1-(1-\hat{z}_j/k)^k\ge (1-(1-1/k)^k)\hat{z}_j</math>,
where the last inequality holds because the function <math>1-(1-\hat{z}_j/k)^k</math> is concave in <math>\hat{z}_j</math>, and therefore lies above the chord through its values at <math>\hat{z}_j=0</math> and <math>\hat{z}_j=1</math>.
}}
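The chord argument at the end of the proof can be checked numerically; a minimal sketch (the function name is illustrative):

```python
def chord_bound_holds(k, steps=1000):
    """Check that 1 - (1 - z/k)^k >= (1 - (1 - 1/k)^k) * z on [0, 1].
    The left side is concave in z, equals 0 at z = 0 and 1 - (1 - 1/k)^k
    at z = 1, so it lies above the chord through those two endpoints."""
    c = 1 - (1 - 1 / k) ** k
    return all(
        1 - (1 - (s / steps) / k) ** k >= c * (s / steps) - 1e-12
        for s in range(steps + 1)
    )

print(all(chord_bound_holds(k) for k in range(1, 21)))  # True
```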
For any <math>k\ge 1</math>, it holds that <math>1-(1-1/k)^k>1-1/e</math>. Therefore, by the linearity of expectation, the expected number of satisfied clauses by the randomized rounding, is at least
:<math>(1-1/e)\sum_{j=1}^m\hat{z}_j\ge (1-1/e)\cdot\mathrm{OPT}</math>.
The inequality is due to the fact that the <math>\hat{z}_j</math> form an optimal fractional solution to the relaxed LP, which is at least as good as any integral solution; hence <math>\sum_{j=1}^m\hat{z}_j\ge\mathrm{OPT}</math>.
==== Let the two algorithms compete with each other ====
For any instance of the Max-SAT, let <math>m_1</math> be the expected number of satisfied clauses when each variable is independently set to <tt>TRUE</tt> with probability <math>\frac{1}{2}</math>; and let <math>m_2</math> be the expected number of satisfied clauses when we use the linear programming followed by randomized rounding.


We will show that on any instance  of the Max-SAT, one of the two algorithms is a <math>\frac{3}{4}</math>-approximation algorithm.
{{Theorem
|Theorem|
:<math>\max\{m_1,m_2\}\ge\frac{3}{4}\cdot\mathrm{OPT}.</math>
}}
{{Proof|It suffices to show that <math>\frac{m_1+m_2}{2}\ge\frac{3}{4}\sum_{j=1}^m\hat{z}_j</math>, since <math>\max\{m_1,m_2\}\ge\frac{m_1+m_2}{2}</math> and <math>\sum_{j=1}^m\hat{z}_j\ge\mathrm{OPT}</math>. Letting <math>S_k</math> denote the set of clauses that contain <math>k</math> literals, we know that
:<math>m_1=\sum_{k=1}^n\sum_{C_j\in S_k}(1-2^{-k})\ge\sum_{k=1}^n\sum_{C_j\in S_k}(1-2^{-k})\hat{z}_j,</math>
where the inequality holds because <math>\hat{z}_j\le 1</math>.
By the analysis of randomized rounding,
:<math>m_2\ge\sum_{k=1}^n\sum_{C_j\in S_k}(1-(1-1/k)^k)\hat{z}_j.</math>
Thus
:<math>\frac{(m_1+m_2)}{2}\ge \sum_{k=1}^n\sum_{C_j\in S_k}
\frac{1-2^{-k}+1-(1-1/k)^k}{2}\hat{z}_j.</math>
An easy calculation shows that <math>\frac{1-2^{-k}+1-(1-1/k)^k}{2}\ge\frac{3}{4}</math> for any positive integer <math>k</math> (with equality at <math>k=1</math> and <math>k=2</math>), so that we have
:<math>\frac{(m_1+m_2)}{2}\ge \frac{3}{4}\sum_{k=1}^n\sum_{C_j\in S_k}\hat{z}_j=\frac{3}{4}\sum_{j=1}^m\hat{z}_j\ge \frac{3}{4}\cdot\mathrm{OPT}.</math>
}}
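The "easy calculation" in the proof can be verified numerically; a minimal sketch (the function name is illustrative):

```python
def averaged_coefficient(k):
    """The per-clause coefficient ((1 - 2^{-k}) + (1 - (1 - 1/k)^k)) / 2
    appearing in the bound on (m_1 + m_2) / 2."""
    return ((1 - 2.0 ** (-k)) + (1 - (1 - 1 / k) ** k)) / 2

for k in range(1, 5):
    print(k, averaged_coefficient(k))
# k = 1 and k = 2 give exactly 0.75; every larger k gives strictly more,
# approaching (1 + 1 - 1/e) / 2, roughly 0.816, as k grows.
```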

Latest revision as of 05:14, 11 June 2010

Approximation Algorithms

An optimization problem consists of:

  • A set of valid instances (inputs).
  • Each instance [math]\displaystyle{ I }[/math] defines a set of feasible solutions [math]\displaystyle{ \mathcal{F}_I }[/math].
  • Each instance [math]\displaystyle{ I }[/math] also defines an objective function [math]\displaystyle{ \mathrm{obj}_I:\mathcal{F}_I\rightarrow \mathbb{R} }[/math].
  • The problem is specified to be either a minimization problem or a maximization problem. Given an instance [math]\displaystyle{ I }[/math], our goal is to find a feasible solution [math]\displaystyle{ x\in\mathcal{F}_I }[/math]with the minimum (or maximum) value of [math]\displaystyle{ \mathrm{obj}_I(x) }[/math].
Example: the [math]\displaystyle{ s }[/math]-[math]\displaystyle{ t }[/math] shortest path problem
  • Each instance is a (nonnegatively)weighted directed graph [math]\displaystyle{ G(V,E,w) }[/math] along with two distinct vertices [math]\displaystyle{ s,t\in V }[/math].
  • Given an instance, the feasible solutions are the simple paths from [math]\displaystyle{ s }[/math] to [math]\displaystyle{ t }[/math] in [math]\displaystyle{ G }[/math].
  • The objective function is the length of the path, defined by the total weight of the path.
  • The problem is a minimization problem.

An optimization problem can be transformed to a decision problem by "thresholding". For example, the optimization problem: "Given an instance [math]\displaystyle{ I }[/math], find the [math]\displaystyle{ x\in\mathcal{F}_I }[/math] with minimum [math]\displaystyle{ \mathrm{obj}_I(x) }[/math]." can be transformed to a (parameterized) decision problem: "Given an instance [math]\displaystyle{ (I,T) }[/math], determine whether there exists an [math]\displaystyle{ x\in\mathcal{F}_I }[/math] with [math]\displaystyle{ \mathrm{obj}_I(x)\lt T }[/math]."

Given an optimization problem, we can certainly solve its decision version by solving the optimization problem itself. The converse is also true: we can find the optimum by making queries to the decision version. Through binary search, this can be done efficiently.

Coping with the NP-hardness

An optimization problem is an NP-hard optimization problem if its decision version is NP-hard. Many interesting optimization problems are NP-hard. There are basically two ways to deal with the hardness.

Heuristics
We just apply some heuristic ideas to search for the optimal solution. This might work pretty good on some natural instances. On the other hand, giving a theoretical explanation why the heuristic method performs well on your input instance is usually very hard.
Approximation algorithms
We design an algorithm with guaranteed worst-case running time, which approximates the optimum within a guaranteed approximation ratio.

Let [math]\displaystyle{ \mathrm{OPT}_I }[/math] denote the optimum of the instance [math]\displaystyle{ I }[/math].

For a minimization problem, for any instance [math]\displaystyle{ I }[/math], the algorithm returns a feasible solution [math]\displaystyle{ x\in\mathcal{F}_I }[/math], such that [math]\displaystyle{ \mathrm{obj}_I(x)\le \alpha\cdot\mathrm{OPT}_I }[/math], where [math]\displaystyle{ \alpha\gt 1 }[/math].

For a maximization problem, for any instance [math]\displaystyle{ I }[/math], the algorithm returns a feasible solution [math]\displaystyle{ x\in\mathcal{F}_I }[/math], such that [math]\displaystyle{ \mathrm{obj}_I(x)\ge \alpha\cdot\mathrm{OPT}_I }[/math], where [math]\displaystyle{ 0\lt \alpha\lt 1 }[/math].

Such an algorithm is called an [math]\displaystyle{ \alpha }[/math]-approximation algorithm for the optimization problem. The ratio [math]\displaystyle{ \alpha }[/math] is called the approximation ratio.

We are particularly interested in the approximation algorithms running in poly-time of the input size, for the NP-hard optimization problems.

Combinatorial approximation algorithms

Classic approximation algorithms are mostly combinatorial algorithms, which uses standard algorithm design techniques (e.g., greedy, dynamic programming, etc.). In addition to the time complexity, we have to bound the approximation ratio.

We will introduce two examples, the minimum makespan scheduling, and the knapsack problem.

Scheduling

Scheduling is a class of problems. We consider a central problem in scheduling theory: the minimum makespan scheduling.

Problem (Minimum makespan scheduling)
Given processing times for [math]\displaystyle{ n }[/math] jobs, [math]\displaystyle{ p_1,p_2,\ldots,p_n }[/math], and an integer [math]\displaystyle{ m }[/math], find an assignment of the jobs to [math]\displaystyle{ m }[/math] identical machines so that the completion time, also called the makespan, is minimum.
Formally, given as input a sequence [math]\displaystyle{ p_1,p_2,\ldots,p_n }[/math] and an integer [math]\displaystyle{ m }[/math], find an [math]\displaystyle{ h:[n]\rightarrow[m] }[/math] such that [math]\displaystyle{ \max_{i\in[m]}\sum_{j\in h^{-1}(i)}p_j }[/math] is minimized.

This problem is known to be NP-hard. We consider a very simple and natural algorithm:

  • Schedule the jobs one by one, in an arbitrary order, each job being assigned to a machine with least amount of work so far.

This algorithms is due to Graham (1966), who also shows that the algorithm approximates the optimal makespan within a factor of 2.

Let OPT be the optimal makespan. We have two observations:

  • [math]\displaystyle{ \mathrm{OPT}\ge\frac{1}{m}\sum_{j=1}^n p_j }[/math], i.e. the optimal makespan is no smaller than the average processing time.
  • [math]\displaystyle{ \mathrm{OPT}\ge \max_{1\le j\le n}p_j }[/math], i.e. the optimal makespan is no smaller than the largest processing time.

Proof of the approximation factor: We assume that the last job scheduled is [math]\displaystyle{ p_n }[/math], assigned to machine [math]\displaystyle{ i }[/math], and the makespan produced by the algorithm is [math]\displaystyle{ M }[/math]. Since the algorithm assigns a job to the least loaded machine, right before [math]\displaystyle{ p_n }[/math] is scheduled, machine [math]\displaystyle{ i }[/math] whose current load is [math]\displaystyle{ M-p_n }[/math], must be the least loaded. Thus, [math]\displaystyle{ M-p_n }[/math] is no greater than the average processing time. Then,

[math]\displaystyle{ M-p_n\le \frac{1}{m}\sum_{j=1}^n p_j\le \mathrm{OPT} }[/math].

And since [math]\displaystyle{ p_n\le\max_{1\le j\le n}p_j\le \mathrm{OPT} }[/math], it holds that [math]\displaystyle{ M=M-p_n+p_n\le 2\cdot\mathrm{OPT} }[/math].

[math]\displaystyle{ \square }[/math]

There are poly-time algorithm using different ideas with much better approximation ratio for this problem.

Knapsack

Problem (Knapsack)
Given a set [math]\displaystyle{ S=\{a_1,a_2,\ldots,a_n\} }[/math] of objects, each with specified weight [math]\displaystyle{ w_i\in\mathbb{Z}^+ }[/math] and profit [math]\displaystyle{ p_i\in\mathbb{Z}^+ }[/math], and a knapsack capacity [math]\displaystyle{ B }[/math], find a subset of objects whose total weight is bounded by [math]\displaystyle{ B }[/math] and total profits is maximized.

A heuristic for the problem is to sort the objects by decreasing ratio of profit to weight, and then greedily pick objects in this order. However, the ratio between the solution of this heuristic and the OPT can be made arbitrarily bad. This is left as an exercise.

A pseudo-polynomial time algorithm by dynamic programming

Let [math]\displaystyle{ P=\sum_{i=1}^n p_i }[/math] be the largest possible total profit. We first minimize the total weight for each possible total profit value [math]\displaystyle{ p }[/math].

For each [math]\displaystyle{ i\in\{1,2,\ldots,n\} }[/math] and [math]\displaystyle{ p\in\{1,2,\ldots,P\} }[/math], let [math]\displaystyle{ S_{i,p}\subseteq\{a_1,a_2,\ldots,a_i\} }[/math] denote a subset of the first [math]\displaystyle{ i }[/math] objects whose total profit is exactly [math]\displaystyle{ p }[/math] and whose total weight is minimized.

Let [math]\displaystyle{ A(i,p) }[/math] be the total weight of the set [math]\displaystyle{ S_i,p }[/math] ([math]\displaystyle{ A(i,p)=\infty }[/math] if no such set exists). Clearly [math]\displaystyle{ A(1,p) }[/math] is known for every [math]\displaystyle{ p }[/math]. The following recurrence holds:

[math]\displaystyle{ \begin{align} A(1,p) &= \min\{w_i\mid p_i=p\};\\ A(i+1,p) &= \begin{cases} \min\{A(i,p),w_{i+1}+A(i,p-p_{i+1})\} & \mbox{if }p_{i+1}\le p,\\ A(i,p) & \mbox{otherwise}. \end{cases} \end{align} }[/math]

By dynamic programming, we can compute all values [math]\displaystyle{ A(i,p) }[/math] by constructing the [math]\displaystyle{ n\times P }[/math] matrix [math]\displaystyle{ A(\cdot,\cdot) }[/math] in an appropriate order. This takes totally [math]\displaystyle{ O(n P) }[/math] time.

The optimal profit is given by [math]\displaystyle{ \max\{p\mid A(i,p)\le B\} }[/math].

Did we just solve an NP-hard problem in polynomial time, and prove that P=NP?

Actually, no, because of the factor [math]\displaystyle{ P }[/math] in the time complexity. A poly-time algorithm, by definition, is that the worst-case running time is bounded with a polynomial of the size of the input, where the input is represented in binary.

The binary representation of the input of the knapsack problem only takes about [math]\displaystyle{ O(\sum_{i=1}^n\log p_i) }[/math] size, which means that the above algorithm is not poly-time. However, if we represent the input in the unary (一进制) form, it takes [math]\displaystyle{ \Omega(P+n) }[/math] unary bits to represent the inputs. With unary representation of the input, our algorithm is poly-time. An algorithm with this property is called a pseudo-polynomial time algorithm.

An FPTAS by by "rounding and scaling"

We notice that the above dynamic programming algorithm is poly-time if the total profit [math]\displaystyle{ p }[/math] of sets of objects only has [math]\displaystyle{ \mathrm{poly}(n) }[/math] many possible values, and thus the matrix [math]\displaystyle{ A }[/math] filled by the dynamic programming, is only polynomially large.

This leads us to an approximation algorithm by approximating all the profits to nearby lattice points. The idea is informally called the "rounding and scaling" technique:

Algorithm
  1. Given [math]\displaystyle{ \epsilon\gt 0 }[/math], let [math]\displaystyle{ K=\frac{\epsilon p_\max}{n} }[/math], where [math]\displaystyle{ p_{\max}=\max_{1\le i\le n}p_i }[/math].
  2. For each object [math]\displaystyle{ a_i }[/math], let [math]\displaystyle{ p_i'=\left\lfloor\frac{p_i}{K}\right\rfloor }[/math].
  3. With these new profits of objects, using the dynamic programming algorithm, find the most profit set, say [math]\displaystyle{ S }[/math].
  4. Return [math]\displaystyle{ S }[/math].

The following lemma bounds the approximation ratio of the algorithm. The algorithm takes as input an instance of the knapsack problem and a parameter [math]\displaystyle{ 0\lt \epsilon\lt 1 }[/math], and returns a knapset solution with approximation ratio of [math]\displaystyle{ (1-\epsilon) }[/math] (it is a maximization problem, so the approximation ratio is less than 1).

Lemma
Let [math]\displaystyle{ S }[/math] be the output of the algorithm, and [math]\displaystyle{ p(S)=\sum_{i\in S}p_i }[/math] be the total profit of the objects in [math]\displaystyle{ S }[/math]. Then
[math]\displaystyle{ p(S)\ge (1-\epsilon)\cdot\mathrm{OPT} }[/math].
Proof.
Let [math]\displaystyle{ O }[/math] be the optimal set. For any object [math]\displaystyle{ a_i }[/math], because of rounding down as [math]\displaystyle{ p'_i=\left\lfloor\frac{p_i}{K}\right\rfloor }[/math], it holds that
[math]\displaystyle{ \frac{p_i}{K}-1\le p'_i\le\frac{p_i}{K} }[/math], and equivalently [math]\displaystyle{ p_i-K\le K\cdot p'_i\le p_i }[/math].

Summing over the at most [math]\displaystyle{ n }[/math] objects in the optimal set [math]\displaystyle{ O }[/math],

[math]\displaystyle{ p(O)-K\cdot p'(O)\le n K }[/math].

The dynamic programming algorithm returns the optimal solution for the new profits [math]\displaystyle{ p' }[/math]. Therefore,

[math]\displaystyle{ \begin{align} p(S) &\ge K\cdot p'(S) \\ &\ge K\cdot p'(O) \\ &\ge p(O)-nK \\ &=\mathrm{OPT}-\epsilon p_\max &&\quad (K=\frac{\epsilon p_\max}{n})\\ &\ge (1-\epsilon)\cdot\mathrm{OPT}. &&\quad (\mathrm{OPT}\ge p_\max) \end{align} }[/math]
[math]\displaystyle{ \square }[/math]

The running time of the approximation algorithm depends on the size of the matrix [math]\displaystyle{ A }[/math] of dynamic programming, which is

[math]\displaystyle{ O\left(n\sum_{i=1}^n\left\lfloor\frac{p_i}{K}\right\rfloor\right) =O\left(n^2\left\lfloor\frac{p_\max}{K}\right\rfloor\right) =O\left(n^2\left\lfloor\frac{n}{\epsilon}\right\rfloor\right)=O(n^3\epsilon^{-1}) }[/math].

Therefore, the algorithm finds a solution with total profit at least [math]\displaystyle{ (1-\epsilon)\mathrm{OPT} }[/math] in time polynomial in [math]\displaystyle{ n }[/math] and [math]\displaystyle{ \frac{1}{\epsilon} }[/math], for any [math]\displaystyle{ 0\lt \epsilon\lt 1 }[/math]. Such an algorithm is called a fully polynomial-time approximation scheme (FPTAS). This is the best we can expect from a poly-time algorithm, assuming that P[math]\displaystyle{ \neq }[/math]NP.

LP-based approximation algorithms

A more advanced technique for designing approximation algorithms is linear programming. Many combinatorial optimization problems can be stated as integer programs (IPs). These integer programs can be relaxed to linear programming problems, and a feasible solution to the LP-relaxation is a fractional solution to the original combinatorial problem.

However, in the case of an NP-hard problem, we cannot expect the LP-relaxation to have the same optimal solution as the original integer program (otherwise, since linear programs are solvable in polynomial time, a poly-time algorithm would solve an NP-hard problem). Thus, our task is not to look for an optimal solution to the LP-relaxation, but rather an integral solution which approximates the optimum within some bounded ratio.

There are two fundamental techniques for designing approximation algorithms using linear programming:

  • The first is to solve the LP-relaxation and convert the fractional optimal solution to an integral solution, with bounded approximation ratio to the optimal integral solution. This technique is called rounding.
  • The second is less obvious but with deeper insight, which uses the idea of duality. This technique is called the primal-dual schema.

We will focus on the rounding technique and demonstrate the power of randomized rounding. For other uses of LP for designing approximation algorithms, please refer to the classic textbook "Approximation Algorithms" by Vazirani.

Randomized Rounding

The integrality gap

Given an LP-relaxation for a minimization problem, let [math]\displaystyle{ \mathrm{OPT}(I) }[/math] be the cost of an optimal solution to instance [math]\displaystyle{ I }[/math], and let [math]\displaystyle{ \mathrm{OPT}_f(I) }[/math] denote the cost of an optimal fractional solution to instance [math]\displaystyle{ I }[/math], i.e., the value of the objective function of an optimal solution to the LP-relaxation. The integrality gap is defined to be

[math]\displaystyle{ \sup_{I}\frac{\mathrm{OPT}(I)}{\mathrm{OPT}_f(I)}. }[/math]

The integrality gap for a maximization problem can be defined similarly.
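As a quick illustration (an example with the standard vertex cover LP, not taken from the lecture): for the vertex cover problem on the triangle [math]\displaystyle{ K_3 }[/math], the LP-relaxation

[math]\displaystyle{ \min\sum_{v\in V}x_v \quad\text{subject to}\quad x_u+x_v\ge 1\ \ \forall uv\in E,\qquad 0\le x_v\le 1 }[/math]

admits the fractional solution [math]\displaystyle{ x_v=\frac{1}{2} }[/math] for every vertex, of cost [math]\displaystyle{ \frac{3}{2} }[/math], while every integral vertex cover uses at least 2 vertices. Hence the integrality gap of this LP is at least [math]\displaystyle{ \frac{4}{3} }[/math]; the same argument on the complete graph [math]\displaystyle{ K_n }[/math] gives a ratio of [math]\displaystyle{ \frac{n-1}{n/2} }[/math], which approaches 2.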

Max-SAT

We consider the Max-SAT problem:

Problem (Max-SAT)
Given a conjunctive normal form (CNF) formula of [math]\displaystyle{ m }[/math] clauses defined on [math]\displaystyle{ n }[/math] boolean variables [math]\displaystyle{ x_1,x_2,\ldots,x_n }[/math], find a truth assignment to the boolean variables that maximizes the number of satisfied clauses.

and obtain two approximation algorithms by two different techniques.

The probabilistic method

A straightforward way to solve Max-SAT is to assign each variable a uniform and independent random truth value. We have learned in the lecture on the probabilistic method that this "algorithm" has a good expected performance. Specifically, a clause [math]\displaystyle{ C_j }[/math] of [math]\displaystyle{ k }[/math] distinct variables is unsatisfied only if every one of the [math]\displaystyle{ k }[/math] variables gets the wrong "guess". The probability of this bad event is [math]\displaystyle{ 2^{-k} }[/math], thus the clause [math]\displaystyle{ C_j }[/math] is satisfied with probability [math]\displaystyle{ 1-2^{-k}\ge \frac{1}{2} }[/math]. By the linearity of expectation, at least [math]\displaystyle{ \frac{m}{2}\ge \frac{1}{2}\cdot\mathrm{OPT} }[/math] clauses are satisfied in expectation.
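By linearity of expectation, the expected performance of the uniform random assignment can be computed exactly, clause by clause. In this sketch the clause encoding is an assumed convention (DIMACS style: integer [math]\displaystyle{ i }[/math] for [math]\displaystyle{ x_i }[/math], [math]\displaystyle{ -i }[/math] for [math]\displaystyle{ \neg x_i }[/math]), and, as in the analysis above, no variable is assumed to appear in both forms in one clause:

```python
def expected_satisfied_uniform(clauses):
    """Expected number of clauses satisfied by a uniform random assignment.

    Each clause is a list of literals in DIMACS style: i means x_i and
    -i means NOT x_i (assuming no variable occurs in both forms in the
    same clause).  A clause over k distinct variables is unsatisfied
    with probability 2^{-k}, so by linearity of expectation the answer
    is the sum over clauses of (1 - 2^{-k}).
    """
    return sum(1 - 2 ** -len({abs(lit) for lit in c}) for c in clauses)
```

Since each term [math]\displaystyle{ 1-2^{-k} }[/math] is at least [math]\displaystyle{ \frac{1}{2} }[/math], the returned value is always at least [math]\displaystyle{ \frac{m}{2} }[/math] for [math]\displaystyle{ m }[/math] clauses.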

We will see that by randomized rounding an LP, we can have a better approximation ratio.

LP relaxation + randomized rounding

For a clause [math]\displaystyle{ C_j }[/math], let [math]\displaystyle{ C_j^+ }[/math] be the set of indices of the variables that appear in uncomplemented form in clause [math]\displaystyle{ C_j }[/math], and let [math]\displaystyle{ C_j^- }[/math] be the set of indices of the variables that appear in complemented form in clause [math]\displaystyle{ C_j }[/math]. The Max-SAT problem can be formulated as the following integer linear program.

[math]\displaystyle{ \begin{align} \mbox{maximize} &\quad \sum_{j=1}^m z_j\\ \mbox{subject to} &\quad \sum_{i\in C_j^+}y_i+\sum_{i\in C_j^-}(1-y_i) \ge z_j, &&\forall 1\le j\le m\\ &\qquad\qquad y_i \in\{0,1\}, &&\forall 1\le i\le n \\ &\qquad\qquad z_j \in\{0,1\}, &&\forall 1\le j\le m \end{align} }[/math]

Each [math]\displaystyle{ y_i }[/math] in the program indicates the truth assignment to the variable [math]\displaystyle{ x_i }[/math], and each [math]\displaystyle{ z_j }[/math] indicates whether the clause [math]\displaystyle{ C_j }[/math] is satisfied. The inequalities ensure that [math]\displaystyle{ z_j }[/math] can be 1 only if at least one of the literals in [math]\displaystyle{ C_j }[/math] is assigned the value 1.

The integer linear programming is relaxed to the following linear programming:

[math]\displaystyle{ \begin{align} \mbox{maximize} &\quad \sum_{j=1}^m z_j\\ \mbox{subject to} &\quad \sum_{i\in C_j^+}y_i+\sum_{i\in C_j^-}(1-y_i) \ge z_j, &&\forall 1\le j\le m\\ &\qquad\qquad 0\le y_i\le 1, &&\forall 1\le i\le n \\ &\qquad\qquad 0\le z_j\le 1, &&\forall 1\le j\le m \end{align} }[/math]

Let [math]\displaystyle{ \hat{y}_i }[/math] and [math]\displaystyle{ \hat{z}_j }[/math] be the fractional optimal solutions to the above linear programming. Clearly, [math]\displaystyle{ \sum_{j=1}^m\hat{z}_j }[/math] is an upper bound on the optimal number of satisfied clauses.

We apply a very natural randomized rounding scheme. For each [math]\displaystyle{ 1\le i\le n }[/math], independently let

[math]\displaystyle{ y_i =\begin{cases} 1 & \mbox{with probability }\hat{y}_i.\\ 0 & \mbox{with probability }1-\hat{y}_i. \end{cases} }[/math]

Correspondingly, each [math]\displaystyle{ x_i }[/math] is assigned to TRUE independently with probability [math]\displaystyle{ \hat{y}_i }[/math].
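Assuming the fractional optimum [math]\displaystyle{ \hat{y} }[/math] has already been computed by some LP solver (not shown here), the rounding step and the clause-by-clause expected value of its outcome can be sketched as follows; the DIMACS-style clause encoding (integer [math]\displaystyle{ i }[/math] for [math]\displaystyle{ x_i }[/math], [math]\displaystyle{ -i }[/math] for [math]\displaystyle{ \neg x_i }[/math], 1-indexed) is an assumed convention:

```python
import random

def round_assignment(y_hat, rng=random.random):
    """Set x_i = TRUE independently with probability y_hat[i]
    (0-indexed list of fractional LP values in [0, 1])."""
    return [rng() < y for y in y_hat]

def expected_satisfied_rounded(clauses, y_hat):
    """Expected number of clauses satisfied by the rounding above.

    Clause C_j is unsatisfied iff every uncomplemented variable is
    rounded to 0 and every complemented one is rounded to 1, which
    happens with probability
        prod_{i in C_j^+} (1 - y_i) * prod_{i in C_j^-} y_i.
    Literals are DIMACS style: i means x_i, -i means NOT x_i.
    """
    total = 0.0
    for clause in clauses:
        p_unsat = 1.0
        for lit in clause:
            y = y_hat[abs(lit) - 1]
            p_unsat *= (1 - y) if lit > 0 else y
        total += 1 - p_unsat
    return total
```

The lemma below lower-bounds each per-clause term [math]\displaystyle{ 1-\prod_{i=1}^k(1-\hat{y}_i) }[/math] computed here in terms of [math]\displaystyle{ \hat{z}_j }[/math].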

Lemma
Let [math]\displaystyle{ C_j }[/math] be a clause with [math]\displaystyle{ k }[/math] literals. The probability that it is satisfied by randomized rounding is at least
[math]\displaystyle{ (1-(1-1/k)^k)\hat{z}_j }[/math].
Proof.
Without loss of generality, we assume that all [math]\displaystyle{ k }[/math] variables appear in [math]\displaystyle{ C_j }[/math] in the uncomplemented form, and we assume that
[math]\displaystyle{ C_j=x_1\vee x_2\vee\cdots\vee x_k }[/math].

The complemented cases are symmetric.

Clause [math]\displaystyle{ C_j }[/math] remains unsatisfied by randomized rounding only if every one of [math]\displaystyle{ x_i }[/math], [math]\displaystyle{ 1\le i\le k }[/math], is assigned to FALSE, which corresponds to that every one of [math]\displaystyle{ y_i }[/math], [math]\displaystyle{ 1\le i\le k }[/math], is rounded to 0. This event occurs with probability [math]\displaystyle{ \prod_{i=1}^k(1-\hat{y}_i) }[/math]. Therefore, the clause [math]\displaystyle{ C_j }[/math] is satisfied by the randomized rounding with probability

[math]\displaystyle{ 1-\prod_{i=1}^k(1-\hat{y}_i) }[/math].

By the linear programming constraints,

[math]\displaystyle{ \hat{y}_1+\hat{y}_2+\cdots+\hat{y}_k\ge\hat{z}_j }[/math].

Subject to this constraint, the value of [math]\displaystyle{ 1-\prod_{i=1}^k(1-\hat{y}_i) }[/math] is minimized when all [math]\displaystyle{ \hat{y}_i }[/math] are equal, i.e., [math]\displaystyle{ \hat{y}_i=\hat{z}_j/k }[/math] (by the inequality of arithmetic and geometric means). Thus, the probability that [math]\displaystyle{ C_j }[/math] is satisfied is

[math]\displaystyle{ 1-\prod_{i=1}^k(1-\hat{y}_i)\ge 1-(1-\hat{z}_j/k)^k\ge (1-(1-1/k)^k)\hat{z}_j }[/math],

where the last inequality is due to the concavity of the function [math]\displaystyle{ 1-(1-\hat{z}_j/k)^k }[/math] in the variable [math]\displaystyle{ \hat{z}_j }[/math]: a concave function lies above the line segment connecting its values at [math]\displaystyle{ \hat{z}_j=0 }[/math] and [math]\displaystyle{ \hat{z}_j=1 }[/math].

[math]\displaystyle{ \square }[/math]

For any [math]\displaystyle{ k\ge 1 }[/math], it holds that [math]\displaystyle{ 1-(1-1/k)^k\gt 1-1/e }[/math]. Therefore, by the linearity of expectation, the expected number of satisfied clauses by the randomized rounding, is at least

[math]\displaystyle{ (1-1/e)\sum_{j=1}^m\hat{z}_j\ge (1-1/e)\cdot\mathrm{OPT} }[/math].

The inequality holds because the [math]\displaystyle{ \hat{z}_j }[/math] are the optimal fractional solutions to the relaxed LP, whose objective value is no worse than that of the optimal integral solution.

Let the two algorithms compete with each other

For any instance of the Max-SAT, let [math]\displaystyle{ m_1 }[/math] be the expected number of satisfied clauses when each variable is independently set to TRUE with probability [math]\displaystyle{ \frac{1}{2} }[/math]; and let [math]\displaystyle{ m_2 }[/math] be the expected number of satisfied clauses when we use the linear programming followed by randomized rounding.

We will show that on any instance of Max-SAT, the better of the two algorithms achieves a [math]\displaystyle{ \frac{3}{4} }[/math]-approximation.

Theorem
[math]\displaystyle{ \max\{m_1,m_2\}\ge\frac{3}{4}\cdot\mathrm{OPT}. }[/math]
Proof.
It suffices to show that [math]\displaystyle{ \frac{(m_1+m_2)}{2}\ge\frac{3}{4}\sum_{j=1}^m\hat{z}_j }[/math]. Letting [math]\displaystyle{ S_k }[/math] denote the set of clauses that contain [math]\displaystyle{ k }[/math] literals, we know that
[math]\displaystyle{ m_1=\sum_{k=1}^n\sum_{C_j\in S_k}(1-2^{-k})\ge\sum_{k=1}^n\sum_{C_j\in S_k}(1-2^{-k})\hat{z}_j, }[/math]
where the inequality uses that every [math]\displaystyle{ \hat{z}_j\le 1 }[/math].

By the analysis of randomized rounding,

[math]\displaystyle{ m_2\ge\sum_{k=1}^n\sum_{C_j\in S_k}(1-(1-1/k)^k)\hat{z}_j. }[/math]

Thus

[math]\displaystyle{ \frac{(m_1+m_2)}{2}\ge \sum_{k=1}^n\sum_{C_j\in S_k} \frac{1-2^{-k}+1-(1-1/k)^k}{2}\hat{z}_j. }[/math]

An easy calculation shows that [math]\displaystyle{ \frac{1-2^{-k}+1-(1-1/k)^k}{2}\ge\frac{3}{4} }[/math] for any [math]\displaystyle{ k }[/math], so that we have

[math]\displaystyle{ \frac{(m_1+m_2)}{2}\ge \frac{3}{4}\sum_{k=1}^n\sum_{C_j\in S_k}\hat{z}_j=\frac{3}{4}\sum_{j=1}^m\hat{z}_j\ge \frac{3}{4}\cdot\mathrm{OPT}. }[/math]
[math]\displaystyle{ \square }[/math]
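The "easy calculation" in the proof above, as well as the [math]\displaystyle{ 1-1/e }[/math] bound used earlier for randomized rounding alone, can be checked numerically; this is a sanity-check sketch, not part of the proof:

```python
import math

def combined_ratio(k):
    """Average of the two algorithms' per-clause guarantees for a
    clause with k literals: (1 - 2^{-k} + 1 - (1 - 1/k)^k) / 2."""
    return (1 - 2 ** -k + 1 - (1 - 1 / k) ** k) / 2

# the per-clause bound behind the 3/4-approximation; tight at k = 1, 2
assert all(combined_ratio(k) >= 0.75 for k in range(1, 1000))

# the bound used by randomized rounding alone: (1 - 1/k)^k < 1/e
assert all(1 - (1 - 1 / k) ** k > 1 - 1 / math.e for k in range(1, 1000))
```

The minimum value [math]\displaystyle{ \frac{3}{4} }[/math] is attained exactly at [math]\displaystyle{ k=1 }[/math] and [math]\displaystyle{ k=2 }[/math]; for larger [math]\displaystyle{ k }[/math] the average is strictly larger.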