Randomized Algorithms (Fall 2011)/Max-SAT


Latest revision as of 14:02, 6 November 2011

MAX-SAT

Suppose that we have a number of boolean variables [math]\displaystyle{ x_1,x_2,\ldots,\in\{\mathrm{true},\mathrm{false}\} }[/math]. A literal is either a variable [math]\displaystyle{ x_i }[/math] itself or its negation [math]\displaystyle{ \neg x_i }[/math]. A logic expression is in conjunctive normal form (CNF) if it is written as the conjunction (AND) of a set of clauses, where each clause is a disjunction (OR) of literals. For example:

[math]\displaystyle{ (x_1\vee \neg x_2 \vee \neg x_3)\wedge (\neg x_1\vee \neg x_3)\wedge (x_1\vee x_2\vee x_4)\wedge (x_4\vee \neg x_3)\wedge (x_4\vee \neg x_1). }[/math]

The satisfiability (SAT) problem asks, given a CNF formula as input, to decide whether the CNF is satisfiable, i.e. whether there exists an assignment of the variables to the values true and false so that all clauses are true. SAT is the first problem known to be NP-complete (the Cook-Levin theorem).

We consider the optimization version of SAT, which asks for an assignment that maximizes the number of satisfied clauses.

Problem (MAX-SAT)
Given a conjunctive normal form (CNF) formula of [math]\displaystyle{ m }[/math] clauses defined on [math]\displaystyle{ n }[/math] boolean variables [math]\displaystyle{ x_1,x_2,\ldots,x_n }[/math], find a truth assignment to the boolean variables that maximizes the number of satisfied clauses.
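As a concrete illustration, the example formula above is small enough to solve by brute force over all [math]\displaystyle{ 2^n }[/math] assignments. The signed-integer clause encoding below (DIMACS-style: [math]\displaystyle{ +i }[/math] for [math]\displaystyle{ x_i }[/math], [math]\displaystyle{ -i }[/math] for [math]\displaystyle{ \neg x_i }[/math]) is an assumed convention for the sketch, not part of the notes:

```python
from itertools import product

# The example formula, one list per clause, with signed variable indices:
clauses = [
    [1, -2, -3],   # (x1 ∨ ¬x2 ∨ ¬x3)
    [-1, -3],      # (¬x1 ∨ ¬x3)
    [1, 2, 4],     # (x1 ∨ x2 ∨ x4)
    [4, -3],       # (x4 ∨ ¬x3)
    [4, -1],       # (x4 ∨ ¬x1)
]
n = 4

def satisfied(assignment, clauses):
    """Count clauses with at least one true literal; `assignment` is a
    tuple of booleans, assignment[i-1] being the value of x_i."""
    return sum(
        any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
        for clause in clauses
    )

# Brute force over all 2^n assignments to find the MAX-SAT optimum.
opt = max(satisfied(a, clauses) for a in product([False, True], repeat=n))
print(opt)  # → 5: this particular formula is satisfiable
```

For instance, the assignment [math]\displaystyle{ x_1=\mathrm{false},x_2=\mathrm{true},x_3=\mathrm{false},x_4=\mathrm{true} }[/math] satisfies all five clauses.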

The Probabilistic Method

A straightforward way to solve Max-SAT is to assign each variable a uniformly random truth value, independently of the others. The following theorem is proved by the probabilistic method.

Theorem
For any set of [math]\displaystyle{ m }[/math] clauses, there is a truth assignment that satisfies at least [math]\displaystyle{ \frac{m}{2} }[/math] clauses.
Proof.
For each variable, independently assign a random value in [math]\displaystyle{ \{\mathrm{true},\mathrm{false}\} }[/math] with equal probability. For the [math]\displaystyle{ i }[/math]th clause, let [math]\displaystyle{ X_i }[/math] be the random variable which indicates whether the [math]\displaystyle{ i }[/math]th clause is satisfied. Suppose that there are [math]\displaystyle{ k }[/math] literals in the clause. The clause is unsatisfied only if all [math]\displaystyle{ k }[/math] of its literals are assigned the wrong value, so the probability that the clause is satisfied is
[math]\displaystyle{ \Pr[X_i=1]=1-2^{-k}\ge\frac{1}{2} }[/math].

Let [math]\displaystyle{ X=\sum_{i=1}^m X_i }[/math] be the number of satisfied clauses. By the linearity of expectation,

[math]\displaystyle{ \mathbf{E}[X]=\sum_{i=1}^{m}\mathbf{E}[X_i]\ge \frac{m}{2}. }[/math]

Therefore, there exists an assignment such that at least [math]\displaystyle{ \frac{m}{2} }[/math] clauses are satisfied.

[math]\displaystyle{ \square }[/math]

Note that this gives a randomized algorithm which returns a truth assignment satisfying at least [math]\displaystyle{ \frac{m}{2} }[/math] clauses in expectation. There are [math]\displaystyle{ m }[/math] clauses in total, thus the optimal solution is at most [math]\displaystyle{ m }[/math], which means that this simple randomized algorithm is a [math]\displaystyle{ \frac{1}{2} }[/math]-approximation algorithm for the MAX-SAT problem.
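The naive algorithm can be sketched in a few lines. The signed-index clause encoding and the small test formula are assumptions for illustration; the empirical average over many runs should land at or above the guaranteed [math]\displaystyle{ \frac{m}{2} }[/math]:

```python
import random

def random_assignment(n, rng):
    """Flip a fair coin independently for each of the n variables."""
    return [rng.random() < 0.5 for _ in range(n)]

def count_satisfied(assignment, clauses):
    """A clause (list of signed indices) is satisfied iff some literal is true."""
    return sum(
        any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
        for clause in clauses
    )

# A made-up 4-clause formula on 3 variables, just for the experiment.
clauses = [[1, -2], [2, 3], [-1, -3], [1, 2, 3]]
n, m = 3, len(clauses)
rng = random.Random(0)

# Average over many independent runs; the theorem guarantees E[X] >= m/2.
trials = 20000
avg = sum(count_satisfied(random_assignment(n, rng), clauses)
          for _ in range(trials)) / trials
print(avg >= m / 2)  # → True
```

Here the exact expectation is [math]\displaystyle{ \frac{3}{4}+\frac{3}{4}+\frac{3}{4}+\frac{7}{8}=3.125 }[/math], comfortably above [math]\displaystyle{ \frac{m}{2}=2 }[/math].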

LP Relaxation + Randomized Rounding

For a clause [math]\displaystyle{ C_j }[/math], let [math]\displaystyle{ C_j^+ }[/math] be the set of indices of the variables that appear in the uncomplemented form in clause [math]\displaystyle{ C_j }[/math], and let [math]\displaystyle{ C_j^- }[/math] be the set of indices of the variables that appear in the complemented form in clause [math]\displaystyle{ C_j }[/math]. The Max-SAT problem can be formulated as the following integer linear program.

[math]\displaystyle{ \begin{align} \mbox{maximize} &\quad \sum_{j=1}^m z_j\\ \mbox{subject to} &\quad \sum_{i\in C_j^+}y_i+\sum_{i\in C_j^-}(1-y_i) \ge z_j, &&\forall 1\le j\le m\\ &\qquad\qquad y_i \in\{0,1\}, &&\forall 1\le i\le n \\ &\qquad\qquad z_j \in\{0,1\}, &&\forall 1\le j\le m \end{align} }[/math]

Each [math]\displaystyle{ y_i }[/math] in the program indicates the truth assignment to the variable [math]\displaystyle{ x_i }[/math], and each [math]\displaystyle{ z_j }[/math] indicates whether the clause [math]\displaystyle{ C_j }[/math] is satisfied. The inequalities ensure that a clause is deemed to be satisfied only if at least one of the literals in the clause is assigned the value 1.

The integer linear program is relaxed to the following linear program:

[math]\displaystyle{ \begin{align} \mbox{maximize} &\quad \sum_{j=1}^m z_j\\ \mbox{subject to} &\quad \sum_{i\in C_j^+}y_i+\sum_{i\in C_j^-}(1-y_i) \ge z_j, &&\forall 1\le j\le m\\ &\qquad\qquad 0\le y_i\le 1, &&\forall 1\le i\le n \\ &\qquad\qquad 0\le z_j\le 1, &&\forall 1\le j\le m \end{align} }[/math]

Let [math]\displaystyle{ y_i^* }[/math] and [math]\displaystyle{ z_j^* }[/math] be an optimal fractional solution to the above linear program. Clearly, [math]\displaystyle{ \sum_{j=1}^mz_j^* }[/math] is an upper bound on the optimal number of satisfied clauses, i.e. we have

[math]\displaystyle{ \mathrm{OPT}\le\sum_{j=1}^mz_j^* }[/math].

Apply a very natural randomized rounding scheme. For each [math]\displaystyle{ 1\le i\le n }[/math], independently

[math]\displaystyle{ y_i =\begin{cases} 1 & \mbox{with probability }y_i^*.\\ 0 & \mbox{with probability }1-y_i^*. \end{cases} }[/math]

Correspondingly, each [math]\displaystyle{ x_i }[/math] is assigned to TRUE independently with probability [math]\displaystyle{ y_i^* }[/math].
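The rounding step itself is one line per variable. Below is a minimal sketch assuming the fractional optimum [math]\displaystyle{ y^* }[/math] has already been computed by an LP solver; the numerical values are made up for illustration, not an actual LP optimum:

```python
import random

# Hypothetical fractional LP values y_i^* for four variables.
y_star = [0.9, 0.25, 0.5, 0.7]

def round_solution(y_star, rng):
    """Set x_i = TRUE independently with probability y_i^*."""
    return [rng.random() < y for y in y_star]

rng = random.Random(1)
assignment = round_solution(y_star, rng)

# For a clause x_1 ∨ x_2 ∨ x_3 (all uncomplemented), the clause fails
# only if all three variables are rounded to FALSE, so its exact
# satisfaction probability under this rounding is 1 - ∏ (1 - y_i^*).
p_fail = 1.0
for y in y_star[:3]:
    p_fail *= (1 - y)
print(round(1 - p_fail, 4))  # → 0.9625
```

The quantity computed at the end is exactly the clause-satisfaction probability analyzed in the lemma below.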

Lemma
Let [math]\displaystyle{ C_j }[/math] be a clause with [math]\displaystyle{ k }[/math] literals. The probability that it is satisfied by randomized rounding is at least
[math]\displaystyle{ (1-(1-1/k)^k)z_j^* }[/math].
Proof.
Without loss of generality, we assume that all [math]\displaystyle{ k }[/math] variables appear in [math]\displaystyle{ C_j }[/math] in the uncomplemented form, and we assume that
[math]\displaystyle{ C_j=x_1\vee x_2\vee\cdots\vee x_k }[/math].

The complemented cases are symmetric.

Clause [math]\displaystyle{ C_j }[/math] remains unsatisfied by randomized rounding only if every one of [math]\displaystyle{ x_i }[/math], [math]\displaystyle{ 1\le i\le k }[/math], is assigned FALSE, which happens exactly when every one of [math]\displaystyle{ y_i }[/math], [math]\displaystyle{ 1\le i\le k }[/math], is rounded to 0. This event occurs with probability [math]\displaystyle{ \prod_{i=1}^k(1-y_i^*) }[/math]. Therefore, the clause [math]\displaystyle{ C_j }[/math] is satisfied by the randomized rounding with probability

[math]\displaystyle{ 1-\prod_{i=1}^k(1-y_i^*) }[/math].

By the linear programming constraints,

[math]\displaystyle{ y_1^*+y_2^*+\cdots+y_k^*\ge z_j^* }[/math].

By the AM-GM inequality, [math]\displaystyle{ \prod_{i=1}^k(1-y_i^*)\le\left(1-\frac{1}{k}\sum_{i=1}^k y_i^*\right)^k\le\left(1-\frac{z_j^*}{k}\right)^k }[/math], where the second step uses the constraint above; that is, the value of [math]\displaystyle{ 1-\prod_{i=1}^k(1-y_i^*) }[/math] is minimized when all the [math]\displaystyle{ y_i^* }[/math] are equal and [math]\displaystyle{ y_i^*=\frac{z_j^*}{k} }[/math]. Thus, the probability that [math]\displaystyle{ C_j }[/math] is satisfied is

[math]\displaystyle{ 1-\prod_{i=1}^k(1-y_i^*)\ge 1-(1-z_j^*/k)^k\ge (1-(1-1/k)^k)z_j^* }[/math],

where the last inequality is due to the concavity of the function [math]\displaystyle{ g(z_j^*)=1-(1-z_j^*/k)^k }[/math] of variable [math]\displaystyle{ z_j^* }[/math]: since [math]\displaystyle{ g(0)=0 }[/math], concavity gives [math]\displaystyle{ g(z_j^*)\ge z_j^*\cdot g(1)=(1-(1-1/k)^k)z_j^* }[/math] for all [math]\displaystyle{ z_j^*\in[0,1] }[/math].

[math]\displaystyle{ \square }[/math]

For any [math]\displaystyle{ k\ge 1 }[/math], it holds that [math]\displaystyle{ 1-(1-1/k)^k\gt 1-1/e }[/math]. Therefore, by the linearity of expectation, the expected number of satisfied clauses by the randomized rounding, is at least

[math]\displaystyle{ (1-1/e)\sum_{j=1}^m z_j^*\ge (1-1/e)\cdot\mathrm{OPT} }[/math].

The inequality is due to the fact that the [math]\displaystyle{ z_j^* }[/math] are an optimal fractional solution to the relaxed LP, whose value [math]\displaystyle{ \sum_{j=1}^m z_j^* }[/math] is no worse than that of the optimal integral solution, which equals [math]\displaystyle{ \mathrm{OPT} }[/math].
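A quick numerical sanity check (an addition, not from the notes) of the claim that [math]\displaystyle{ 1-(1-1/k)^k }[/math] stays strictly above [math]\displaystyle{ 1-1/e }[/math] for every [math]\displaystyle{ k\ge 1 }[/math]:

```python
import math

# The per-clause guarantee 1-(1-1/k)^k decreases toward 1 - 1/e as k
# grows, but never reaches it, because (1-1/k)^k increases to 1/e.
bound = 1 - 1 / math.e
factors = [1 - (1 - 1 / k) ** k for k in range(1, 101)]
print(all(f > bound for f in factors))             # → True
print(round(factors[0], 4), round(factors[1], 4))  # → 1.0 0.75
```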

Choose a better solution

For any instance of Max-SAT, let [math]\displaystyle{ m_1 }[/math] be the expected number of satisfied clauses when each variable is independently set to TRUE with probability [math]\displaystyle{ \frac{1}{2} }[/math]; and let [math]\displaystyle{ m_2 }[/math] be the expected number of satisfied clauses when we solve the linear programming relaxation and then apply randomized rounding.

We will show that on any instance of the Max-SAT, one of the two algorithms is a [math]\displaystyle{ \frac{3}{4} }[/math]-approximation algorithm.

Theorem
[math]\displaystyle{ \max\{m_1,m_2\}\ge\frac{3}{4}\cdot\mathrm{OPT}. }[/math]
Proof.
It suffices to show that [math]\displaystyle{ \frac{m_1+m_2}{2}\ge\frac{3}{4}\sum_{j=1}^m z_j^* }[/math], since [math]\displaystyle{ \max\{m_1,m_2\}\ge\frac{m_1+m_2}{2} }[/math] and [math]\displaystyle{ \sum_{j=1}^m z_j^*\ge\mathrm{OPT} }[/math]. Letting [math]\displaystyle{ S_k }[/math] denote the set of clauses that contain [math]\displaystyle{ k }[/math] literals, we know that
[math]\displaystyle{ m_1=\sum_{k=1}^n\sum_{C_j\in S_k}(1-2^{-k})\ge\sum_{k=1}^n\sum_{C_j\in S_k}(1-2^{-k}) z_j^*. }[/math]

By the analysis of randomized rounding,

[math]\displaystyle{ m_2\ge\sum_{k=1}^n\sum_{C_j\in S_k}(1-(1-1/k)^k) z_j^*. }[/math]

Thus

[math]\displaystyle{ \frac{(m_1+m_2)}{2}\ge \sum_{k=1}^n\sum_{C_j\in S_k} \frac{1-2^{-k}+1-(1-1/k)^k}{2} z_j^*. }[/math]

An easy calculation shows that [math]\displaystyle{ \frac{1-2^{-k}+1-(1-1/k)^k}{2}\ge\frac{3}{4} }[/math] for any [math]\displaystyle{ k }[/math], so that we have

[math]\displaystyle{ \frac{(m_1+m_2)}{2}\ge \frac{3}{4}\sum_{k=1}^n\sum_{C_j\in S_k}z_j^*=\frac{3}{4}\sum_{j=1}^m z_j^*\ge \frac{3}{4}\cdot\mathrm{OPT}. }[/math]
[math]\displaystyle{ \square }[/math]
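As a sanity check on the "easy calculation", the averaged guarantee [math]\displaystyle{ \frac{1-2^{-k}+1-(1-1/k)^k}{2} }[/math] can be evaluated numerically. This snippet is an addition for illustration; it confirms the bound is exactly [math]\displaystyle{ \frac{3}{4} }[/math] at [math]\displaystyle{ k=1,2 }[/math] and strictly larger for [math]\displaystyle{ k\ge 3 }[/math]:

```python
# Average of the coin-flipping guarantee (1 - 2^-k) and the rounding
# guarantee (1 - (1-1/k)^k) for a clause of k literals.
def avg_guarantee(k):
    return (1 - 2 ** (-k) + 1 - (1 - 1 / k) ** k) / 2

vals = {k: avg_guarantee(k) for k in range(1, 21)}
print(all(v >= 0.75 - 1e-12 for v in vals.values()))  # → True
print(vals[1], vals[2])                               # → 0.75 0.75
```

The minimum [math]\displaystyle{ \frac{3}{4} }[/math] is attained at [math]\displaystyle{ k=1 }[/math] and [math]\displaystyle{ k=2 }[/math]; as [math]\displaystyle{ k\to\infty }[/math] the average tends to [math]\displaystyle{ \frac{2-1/e}{2}\approx 0.816 }[/math].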