Randomized Algorithms (Fall 2015): Randomized rounding / Lovász Local Lemma
From TCS Wiki
= MAX-SAT =
Suppose that we have a number of boolean variables <math>x_1,x_2,\ldots\in\{\mathrm{true},\mathrm{false}\}</math>. A '''literal''' is either a variable <math>x_i</math> itself or its negation <math>\neg x_i</math>. A logic expression is in '''conjunctive normal form (CNF)''' if it is written as the conjunction (AND) of a set of '''clauses''', where each clause is a disjunction (OR) of literals. For example:
:<math>
(x_1\vee \neg x_2 \vee \neg x_3)\wedge (\neg x_1\vee \neg x_3)\wedge (x_1\vee x_2\vee x_4)\wedge (x_4\vee \neg x_3)\wedge (x_4\vee \neg x_1).
</math>


The '''satisfiability (SAT)''' problem asks, given a CNF formula as input, to decide whether the CNF is satisfiable, i.e. whether there exists an assignment of the variables to true and false such that all clauses are true. SAT is the first problem known to be '''NP-complete''' (the Cook-Levin theorem).


We consider the optimization version of SAT, which asks for an assignment maximizing the number of satisfied clauses.
{{Theorem
|Problem (MAX-SAT)|
:Given a conjunctive normal form (CNF) formula of <math>m</math> clauses defined on <math>n</math> boolean variables <math>x_1,x_2,\ldots,x_n</math>, find a truth assignment to the boolean variables that maximizes the number of satisfied clauses.
}}


==The Probabilistic Method ==
A straightforward way to solve MAX-SAT is to independently assign each variable a uniformly random truth value. The following theorem is proved by the probabilistic method.
{{Theorem
|Theorem|
:For any set of <math>m</math> clauses, there is a truth assignment that satisfies at least <math>\frac{m}{2}</math> clauses.
}}
{{Proof| For each variable, independently assign a random value in <math>\{\mathrm{true},\mathrm{false}\}</math> with equal probability. For the <math>i</math>th clause, let <math>X_i</math> be the random variable which indicates whether the <math>i</math>th clause is satisfied. Suppose that there are <math>k</math> literals in the clause. The probability that the clause is satisfied is
:<math>\Pr[X_i=1]=1-2^{-k}\ge\frac{1}{2}</math>.


Let <math>X=\sum_{i=1}^m X_i</math> be the number of satisfied clauses. By the linearity of expectation,
:<math>
\mathbf{E}[X]=\sum_{i=1}^{m}\mathbf{E}[X_i]\ge \frac{m}{2}.
</math>
Therefore, there exists an assignment such that at least <math>\frac{m}{2}</math> clauses are satisfied.
}}


Note that this gives a randomized algorithm which returns a truth assignment satisfying at least <math>\frac{m}{2}</math> clauses in expectation. There are <math>m</math> clauses in total, thus the optimal solution is at most <math>m</math>, which means that this simple randomized algorithm is a <math>\frac{1}{2}</math>-approximation algorithm for the MAX-SAT problem.
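As a concrete illustration, here is a minimal Python sketch of this algorithm. The clause encoding (a clause as a list of signed variable indices) and the function names are our own assumptions, not part of the original notes:
<pre>
import random

def random_assignment(n):
    """Assign each of the n variables true/false uniformly and independently."""
    return {i: random.random() < 0.5 for i in range(1, n + 1)}

def num_satisfied(clauses, assign):
    """Count clauses with at least one true literal; literal v means x_v, -v means NOT x_v."""
    return sum(any(assign[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses)

# Example CNF from above: each clause is a list of signed variable indices.
clauses = [[1, -2, -3], [-1, -3], [1, 2, 4], [4, -3], [4, -1]]
assignment = random_assignment(4)
print(num_satisfied(clauses, assignment))  # at least m/2 = 2.5 in expectation
</pre>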
== LP Relaxation + Randomized Rounding ==
For a clause <math>C_j</math>, let <math>C_j^+</math> be the set of indices of the variables that appear in uncomplemented form in clause <math>C_j</math>, and let <math>C_j^-</math> be the set of indices of the variables that appear in complemented form in <math>C_j</math>. The MAX-SAT problem can be formulated as the following integer linear program:


:<math>
\begin{align}
\mbox{maximize} &\quad \sum_{j=1}^m z_j\\
\mbox{subject to} &\quad \sum_{i\in C_j^+}y_i+\sum_{i\in C_j^-}(1-y_i) \ge z_j, &&\forall 1\le j\le m\\
&\qquad\qquad y_i \in\{0,1\}, &&\forall 1\le i\le n \\
&\qquad\qquad z_j \in\{0,1\}, &&\forall 1\le j\le m
\end{align}
</math>
Each <math>y_i</math> in the program indicates the truth assignment to the variable <math>x_i</math>, and each <math>z_j</math> indicates whether the clause <math>C_j</math> is satisfied. The inequalities ensure that a clause is deemed true only if at least one of the literals in the clause is assigned the value 1.


The integer linear program is relaxed to the following linear program:
:<math>
\begin{align}
\mbox{maximize} &\quad \sum_{j=1}^m z_j\\
\mbox{subject to} &\quad \sum_{i\in C_j^+}y_i+\sum_{i\in C_j^-}(1-y_i) \ge z_j, &&\forall 1\le j\le m\\
&\qquad\qquad 0\le y_i\le 1, &&\forall 1\le i\le n \\
&\qquad\qquad 0\le z_j\le 1, &&\forall 1\le j\le m
\end{align}
</math>


Let <math>y_i^*</math> and <math>z_j^*</math> be the optimal fractional solutions to the above linear program. Clearly, <math>\sum_{j=1}^mz_j^*</math> is an upper bound on the optimal number of satisfied clauses, i.e. we have
:<math>\mathrm{OPT}\le\sum_{j=1}^mz_j^*</math>.


We apply a very natural randomized rounding scheme: for each <math>1\le i\le n</math>, independently let
:<math>y_i
=\begin{cases}
1 & \mbox{with probability }y_i^*,\\
0 & \mbox{with probability }1-y_i^*.
\end{cases}
</math>
Correspondingly, each <math>x_i</math> is assigned to <tt>TRUE</tt> independently with probability <math>y_i^*</math>.
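Putting the two steps together, here is a minimal Python sketch of LP relaxation followed by randomized rounding, using <tt>scipy.optimize.linprog</tt>. The clause encoding and function names are our own assumptions; a production solver setup may differ:
<pre>
import random
from scipy.optimize import linprog

def lp_round(clauses, n):
    """Solve the LP relaxation of MAX-SAT, then round each x_i to TRUE w.p. y_i*."""
    m = len(clauses)
    # Variable vector: [y_1..y_n, z_1..z_m]; maximize sum z_j == minimize -sum z_j.
    c = [0.0] * n + [-1.0] * m
    A_ub, b_ub = [], []
    for j, clause in enumerate(clauses):
        # Constraint: z_j - sum_{i in C_j^+} y_i + sum_{i in C_j^-} y_i <= |C_j^-|.
        row = [0.0] * (n + m)
        row[n + j] = 1.0
        neg = 0
        for lit in clause:
            if lit > 0:
                row[lit - 1] -= 1.0
            else:
                row[-lit - 1] += 1.0
                neg += 1
        A_ub.append(row)
        b_ub.append(float(neg))
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * (n + m))
    y_star = res.x[:n]
    # Randomized rounding: x_i = TRUE independently with probability y_i*.
    return {i + 1: random.random() < y_star[i] for i in range(n)}
</pre>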
 


{{Theorem
|Lemma|
: Let <math>C_j</math> be a clause with <math>k</math> literals. The probability that it is satisfied by the randomized rounding is at least
::<math>(1-(1-1/k)^k)z_j^*</math>.
}}
{{Proof| Without loss of generality, we assume that all <math>k</math> variables appear in <math>C_j</math> in the uncomplemented form, and we assume that
:<math>C_j=x_1\vee x_2\vee\cdots\vee x_k</math>.
The complemented cases are symmetric.


Clause <math>C_j</math> remains unsatisfied by the randomized rounding only if every one of <math>x_i</math>, <math>1\le i\le k</math>, is assigned <tt>FALSE</tt>, which corresponds to every one of <math>y_i</math>, <math>1\le i\le k</math>, being rounded to 0. This event occurs with probability <math>\prod_{i=1}^k(1-y_i^*)</math>. Therefore, the clause <math>C_j</math> is satisfied by the randomized rounding with probability
:<math>1-\prod_{i=1}^k(1-y_i^*)</math>.
 


By the linear programming constraints,
:<math>y_1^*+y_2^*+\cdots+y_k^*\ge z_j^*</math>.
Then the value of <math>1-\prod_{i=1}^k(1-y_i^*)</math> is minimized when all <math>y_i^*</math> are equal and <math>y_i^*=\frac{z_j^*}{k}</math>. Thus, the probability that <math>C_j</math> is satisfied is
:<math>1-\prod_{i=1}^k(1-y_i^*)\ge 1-(1-z_j^*/k)^k\ge (1-(1-1/k)^k)z_j^*</math>,
where the last inequality holds because the function <math>1-(1-z_j^*/k)^k</math> is concave in <math>z_j^*</math>, and a concave function on <math>[0,1]</math> lies above the chord through its values at <math>z_j^*=0</math> and <math>z_j^*=1</math>.
 
}}
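The lemma's bound is easy to sanity-check numerically. The following quick check (ours, not part of the original proof) verifies the chord inequality <math>1-(1-z/k)^k\ge(1-(1-1/k)^k)z</math> on a grid:
<pre>
# Verify 1 - (1 - z/k)^k >= (1 - (1 - 1/k)^k) * z for z in [0,1] and small k.
for k in range(1, 8):
    for i in range(101):
        z = i / 100
        assert 1 - (1 - z / k) ** k >= (1 - (1 - 1 / k) ** k) * z - 1e-12
print("chord bound verified for k = 1..7")
</pre>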


For any <math>k\ge 1</math>, it holds that <math>1-(1-1/k)^k>1-1/e</math>. Therefore, by the linearity of expectation, the expected number of clauses satisfied by the randomized rounding is at least
:<math>(1-1/e)\sum_{j=1}^m z_j^*\ge (1-1/e)\cdot\mathrm{OPT}</math>.
The last inequality is due to the fact that the <math>z_j^*</math> are the optimal fractional solutions to the relaxed LP, whose optimum is no worse than that of the optimal integral solution, i.e. <math>\sum_{j=1}^m z_j^*\ge\mathrm{OPT}</math>.


== Choose a better solution ==
For any instance of MAX-SAT, let <math>m_1</math> be the expected number of satisfied clauses when each variable is independently set to <tt>TRUE</tt> with probability <math>\frac{1}{2}</math>; and let <math>m_2</math> be the expected number of satisfied clauses when we use the linear programming followed by randomized rounding.


We will show that on any instance of MAX-SAT, the better of the two algorithms is a <math>\frac{3}{4}</math>-approximation algorithm.
{{Theorem
|Theorem|
:<math>\max\{m_1,m_2\}\ge\frac{3}{4}\cdot\mathrm{OPT}.</math>
}}
{{Proof|It suffices to show that <math>\frac{(m_1+m_2)}{2}\ge\frac{3}{4}\sum_{j=1}^m z_j^*</math>. Letting <math>S_k</math> denote the set of clauses that contain <math>k</math> literals, we know that
:<math>m_1=\sum_{k=1}^n\sum_{C_j\in S_k}(1-2^{-k})\ge\sum_{k=1}^n\sum_{C_j\in S_k}(1-2^{-k}) z_j^*.</math>
By the analysis of randomized rounding,
:<math>m_2\ge\sum_{k=1}^n\sum_{C_j\in S_k}(1-(1-1/k)^k) z_j^*.</math>
Thus
:<math>\frac{(m_1+m_2)}{2}\ge \sum_{k=1}^n\sum_{C_j\in S_k}
\frac{1-2^{-k}+1-(1-1/k)^k}{2} z_j^*.</math>
An easy calculation shows that <math>\frac{1-2^{-k}+1-(1-1/k)^k}{2}\ge\frac{3}{4}</math> for any <math>k</math>, so that we have
:<math>\frac{(m_1+m_2)}{2}\ge \frac{3}{4}\sum_{k=1}^n\sum_{C_j\in S_k}z_j^*=\frac{3}{4}\sum_{j=1}^m z_j^*\ge \frac{3}{4}\cdot\mathrm{OPT}.</math>
}}
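In code, choosing the better solution is a one-liner on top of the two sketches above (again with our own names <tt>random_assignment</tt>, <tt>num_satisfied</tt> and <tt>lp_round</tt> as defined earlier):
<pre>
def best_of_two(clauses, n):
    """Run both algorithms and keep whichever assignment satisfies more clauses."""
    a1 = random_assignment(n)   # uniform random assignment
    a2 = lp_round(clauses, n)   # LP relaxation + randomized rounding
    return max(a1, a2, key=lambda a: num_satisfied(clauses, a))
</pre>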


= Lovász Local Lemma =

Suppose that we are given a set of "bad" events <math>A_1,A_2,\ldots,A_n</math>. We want to know whether it is possible that none of them occurs, that is:
:<math>
\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]>0.
</math>

Obviously, a ''necessary'' condition for this is that the occurrence of none of the bad events is certain, i.e. <math>\Pr[A_i]<1</math> for all <math>i</math>. We are interested in ''sufficient'' conditions for the above. There are two easy cases:

;Case 1<nowiki>: mutual independence.</nowiki>
If all the bad events <math>A_1,A_2,\ldots,A_n</math> are mutually independent, then
:<math>
\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]=\prod_{i=1}^n(1-\Pr[A_i])
</math>
and hence this probability is positive if <math>\Pr[A_i]<1</math> for all <math>i</math>.

;Case 2<nowiki>: arbitrary dependency.</nowiki>
On the other extreme, if we know nothing about the dependencies between these bad events, the best we can do is to apply the union bound:
:<math>
\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]\ge 1-\sum_{i=1}^n\Pr\left[A_i\right],
</math>
which is positive if <math>\sum_{i=1}^n\Pr\left[A_i\right]<1</math>. This is a very loose bound; however, it cannot be improved if nothing further is assumed about the dependencies between the events.


----
In most situations, the dependencies between events are somewhere between these two extremal cases: the events are not independent of each other, but on the other hand the dependencies between them are not totally out of control. For these more general cases, we would like to exploit the tradeoff between the probabilities of bad events and the dependencies between them.

The Lovász local lemma is a powerful tool for showing the possibility of rare events under ''limited dependencies''. The structure of dependencies between a set of events is described by a '''dependency graph'''.

{{Theorem
|Definition|
:Let <math>A_1,A_2,\ldots,A_n</math> be a set of events. A graph <math>D=(V,E)</math> with vertex set <math>V=\{A_1,A_2,\ldots,A_n\}</math> is called a '''dependency graph''' for the events <math>A_1,\ldots,A_n</math> if for each <math>i</math>, the event <math>A_i</math> is mutually independent of all the events in <math>\{A_j\mid (A_i,A_j)\not\in E\}</math>.
}}
The maximum degree <math>d</math> of the dependency graph <math>D</math> is very useful information, as it gives an upper bound on the number of other events that each event <math>A_i</math> depends on.

;Remark on the mutual independence
:In probability theory, an event <math>A</math> is said to be mutually independent of events <math>B_1,B_2,\ldots,B_k</math> if for any disjoint <math>I^+,I^-\subseteq\{1,2,\ldots,k\}</math>, we have
:::<math>\Pr\left[A\mid \bigwedge_{i\in I^+}B_i,\bigwedge_{i\in I^-}\overline{B}_i \right]=\Pr[A]</math>,
:that is, occurrences of events among <math>B_1,B_2,\ldots,B_k</math> have no influence on the occurrence of <math>A</math>.
;Example
:Let <math>X_1,X_2,\ldots,X_n</math> be a set of ''mutually independent'' random variables. Each event <math>A_i</math> is a predicate defined on a number of variables among <math>X_1,X_2,\ldots,X_n</math>. Let <math>\mathsf{vbl}(A_i)</math> be the unique smallest set of variables which determine <math>A_i</math>. The dependency graph <math>D=(V,E)</math> is defined such that any two events <math>A_i,A_j</math> are adjacent in <math>D</math> if and only if they share variables, i.e. <math>\mathsf{vbl}(A_i)\cap\mathsf{vbl}(A_j)\neq\emptyset</math>.
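For this variable-sharing model, the dependency graph is mechanical to construct. Here is a minimal Python sketch (our own encoding: events indexed by ids, each mapped to its set of variables):
<pre>
def dependency_graph(vbl):
    """vbl[i] = set of variables event A_i depends on; join events sharing variables."""
    ids = list(vbl)
    return {(i, j) for a, i in enumerate(ids) for j in ids[a + 1:]
            if vbl[i] & vbl[j]}

# Three events on variable sets {1,2}, {2,3}, {4}: only the first two are adjacent.
print(dependency_graph({0: {1, 2}, 1: {2, 3}, 2: {4}}))  # {(0, 1)}
</pre>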

The following theorem was proved by Erdős and Lovász in 1975 and later improved by Lovász in 1977. It is now commonly referred to as the '''Lovász local lemma'''. It is a very powerful tool, especially when used with the probabilistic method, as it supplies a way of dealing with rare events.

{{Theorem
|Lovász Local Lemma (symmetric case)|
:Let <math>A_1,A_2,\ldots,A_n</math> be a set of events, and assume that the following hold:
:#for all <math>1\le i\le n</math>, <math>\Pr[A_i]\le p</math>;
:#every event <math>A_i</math> is mutually independent of all other events except at most <math>d</math> of them, and
:::<math>\mathrm{e}p(d+1)\le 1</math>.
:Then
::<math>\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]>0</math>.
}}
Here <math>d</math> is the maximum degree of the dependency graph <math>D</math> for the events <math>A_1,\ldots,A_n</math>.

Intuitively, the Lovász Local Lemma says that if a rare (but hopefully possible) event is formulated as avoiding a series of bad events simultaneously, then the rare event is indeed possible if:
* none of these bad events is too probable;
* none of these bad events is dependent on too many other bad events.
The tradeoff between "too probable" and "too many" is precisely captured by the <math>\mathrm{e}p(d+1)\le 1</math> condition.
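As a small illustration (our own snippet, not from the original notes), the condition is mechanical to check; e.g. for bad events that are violations of clauses over <math>k</math> boolean variables, <math>p=2^{-k}</math>:
<pre>
import math

def lll_condition(p, d):
    """Symmetric local lemma: e*p*(d+1) <= 1 guarantees Pr[no bad event] > 0."""
    return math.e * p * (d + 1) <= 1

# For k = 10 (p = 2^-10), up to d = 375 dependent events per event is fine:
print(lll_condition(2 ** -10, 375))  # True
print(lll_condition(2 ** -10, 376))  # False
</pre>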

== Non-constructive Proof of the LLL ==

We will prove a general version of the local lemma, where the events <math>A_i</math> are not symmetric. This generalization is due to Spencer.

{{Theorem
|Lovász Local Lemma (general case)|
:Let <math>D=(V,E)</math> be the dependency graph of events <math>A_1,A_2,\ldots,A_n</math>. Suppose there exist real numbers <math>x_1,x_2,\ldots, x_n</math> such that <math>0\le x_i<1</math> and for all <math>1\le i\le n</math>,
::<math>\Pr[A_i]\le x_i\prod_{(i,j)\in E}(1-x_j)</math>.
:Then
::<math>\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]\ge\prod_{i=1}^n(1-x_i)</math>.
}}

This generalized version of the local lemma immediately implies the symmetric version: namely, <math>\Pr\left[\bigwedge_{i}\overline{A_i}\right]>0</math> if <math>\Pr[A_i]\le p</math> for all <math>A_i</math> and <math>\mathrm{e}p(d+1)\le 1</math>, where <math>d</math> is the maximum degree of the dependency graph. To see this, let <math>x_i=\frac{1}{d+1}</math> for all <math>i=1,2,\ldots,n</math>. Note that <math>\left(1-\frac{1}{d+1}\right)^d>\frac{1}{\mathrm{e}}</math>.

If the following conditions are satisfied:
:#for all <math>1\le i\le n</math>, <math>\Pr[A_i]\le p</math>;
:#<math>\mathrm{e}p(d+1)\le 1</math>;
then for all <math>1\le i\le n</math>,
:<math>\Pr[A_i]\le p\le\frac{1}{\mathrm{e}(d+1)}<\frac{1}{d+1}\left(1-\frac{1}{d+1}\right)^d\le x_i\prod_{(i,j)\in E}(1-x_j)</math>.
Due to the local lemma for the general case, this implies that
:<math>\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]\ge\prod_{i=1}^n(1-x_i)=\left(1-\frac{1}{d+1}\right)^n>0</math>.
This proves the symmetric version of the local lemma.
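As a quick sanity check (ours, not part of the original argument), the key step of the inequality chain can be verified numerically for small <math>d</math>:
<pre>
import math

# Spot-check 1/(e(d+1)) < (1/(d+1)) * (1 - 1/(d+1))^d for small d,
# which is the step justifying the choice x_i = 1/(d+1).
for d in range(1, 50):
    x = 1.0 / (d + 1)
    assert 1.0 / (math.e * (d + 1)) < x * (1 - x) ** d
print("inequality holds for d = 1..49")
</pre>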

We now give the proof of the generalized Lovász Local Lemma. The proof is non-constructive and proceeds by induction.

{{Proof|
We can use the following probability identity to compute the probability of the intersection of events:
{{Theorem|Lemma 1|
:<math>\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]=\prod_{i=1}^n\Pr\left[\overline{A_i}\mid \bigwedge_{j=1}^{i-1}\overline{A_{j}}\right]</math>.
}}
{{Proof|
By definition of conditional probability,
:<math>
\Pr\left[\overline{A_n}\mid\bigwedge_{i=1}^{n-1}\overline{A_{i}}\right]
=\frac{\Pr\left[\bigwedge_{i=1}^n\overline{A_{i}}\right]}
{\Pr\left[\bigwedge_{i=1}^{n-1}\overline{A_{i}}\right]}</math>,
so we have
:<math>\Pr\left[\bigwedge_{i=1}^n\overline{A_{i}}\right]=\Pr\left[\bigwedge_{i=1}^{n-1}\overline{A_{i}}\right]\Pr\left[\overline{A_n}\mid\bigwedge_{i=1}^{n-1}\overline{A_{i}}\right]</math>.
The lemma is proved by recursively applying this equation.
}}

Next we prove by induction on <math>m</math> that for any set of <math>m</math> events <math>A_{i_1},\ldots,A_{i_m}</math>,
:<math>\Pr\left[A_{i_1}\mid \bigwedge_{j=2}^m\overline{A_{i_j}}\right]\le x_{i_1}</math>.

The local lemma is a direct consequence of this by applying Lemma 1.

For <math>m=1</math>, this is obvious. For general <math>m</math>, let <math>i_2,\ldots,i_k</math> be the set of vertices adjacent to <math>i_1</math> in the dependency graph. Clearly <math>k-1\le d</math>. And it holds that
:<math>
\Pr\left[A_{i_1}\mid \bigwedge_{j=2}^m\overline{A_{i_j}}\right]
=\frac{\Pr\left[ A_{i_1}\wedge \bigwedge_{j=2}^k\overline{A_{i_j}}\mid \bigwedge_{j=k+1}^m\overline{A_{i_j}}\right]}
{\Pr\left[\bigwedge_{j=2}^k\overline{A_{i_j}}\mid \bigwedge_{j=k+1}^m\overline{A_{i_j}}\right]}</math>,
which is due to the basic conditional probability identity
:<math>\Pr[A\mid BC]=\frac{\Pr[AB\mid C]}{\Pr[B\mid C]}</math>.
We bound the numerator:
:<math>
\begin{align}
\Pr\left[ A_{i_1}\wedge \bigwedge_{j=2}^k\overline{A_{i_j}}\mid \bigwedge_{j=k+1}^m\overline{A_{i_j}}\right]
&\le\Pr\left[ A_{i_1}\mid \bigwedge_{j=k+1}^m\overline{A_{i_j}}\right]\\
&=\Pr[A_{i_1}]\\
&\le x_{i_1}\prod_{(i_1,j)\in E}(1-x_j).
\end{align}
</math>
The equation is due to the independence between <math>A_{i_1}</math> and <math>A_{i_{k+1}},\ldots,A_{i_m}</math>.

The denominator can be expanded using Lemma 1 as
:<math>
\Pr\left[\bigwedge_{j=2}^k\overline{A_{i_j}}\mid \bigwedge_{j=k+1}^m\overline{A_{i_j}}\right]
=\prod_{j=2}^k\Pr\left[\overline{A_{i_j}}\mid \bigwedge_{\ell=j+1}^m\overline{A_{i_\ell}}\right],
</math>
which, by the induction hypothesis, is at least
:<math>
\prod_{j=2}^k(1-x_{i_j})=\prod_{\{i_1,i_j\}\in E}(1-x_{i_j}),
</math>
where <math>E</math> is the edge set of the dependency graph.

Therefore,
:<math>
\Pr\left[A_{i_1}\mid \bigwedge_{j=2}^m\overline{A_{i_j}}\right]
\le\frac{x_{i_1}\prod_{(i_1,j)\in E}(1-x_j)}{\prod_{\{i_1,i_j\}\in E}(1-x_{i_j})}\le x_{i_1}.
</math>

Applying Lemma 1,
:<math>
\begin{align}
\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]
&=\prod_{i=1}^n\Pr\left[\overline{A_i}\mid \bigwedge_{j=1}^{i-1}\overline{A_{j}}\right]\\
&=\prod_{i=1}^n\left(1-\Pr\left[A_i\mid \bigwedge_{j=1}^{i-1}\overline{A_{j}}\right]\right)\\
&\ge\prod_{i=1}^n\left(1-x_i\right).
\end{align}
</math>
}}

= Algorithmic Lovász Local Lemma =

We consider a restrictive case.

Let <math>X_1,X_2,\ldots,X_m\in\{\mathrm{true},\mathrm{false}\}</math> be a set of ''mutually independent'' random variables which assume boolean values. Each event <math>A_i</math> is an AND of at most <math>k</math> literals (<math>X_j</math> or <math>\neg X_j</math>). Let <math>v(A_i)</math> be the set of the at most <math>k</math> variables that <math>A_i</math> depends on. The probability that none of the bad events occurs is
:<math>
\Pr\left[\bigwedge_{i=1}^n \overline{A_i}\right].
</math>
In this particular model, the dependency graph <math>D=(V,E)</math> is defined such that <math>(i,j)\in E</math> iff <math>v(A_i)\cap v(A_j)\neq \emptyset</math>.

Observe that <math>\overline{A_i}</math> is a clause (OR of literals). Thus, <math>\bigwedge_{i=1}^n \overline{A_i}</math> is a '''<math>k</math>-CNF''', a CNF in which each clause depends on <math>k</math> variables. The probability
:<math>
\Pr\left[\bigwedge_{i=1}^n \overline{A_i}\right]>0
</math>
means that the <math>k</math>-CNF <math>\bigwedge_{i=1}^n \overline{A_i}</math> is satisfiable.

The satisfiability of <math>k</math>-CNF is a hard problem. In particular, 3SAT (the satisfiability of 3-CNF) is the first '''NP-complete''' problem (the Cook-Levin theorem). Given the current belief about '''NP''' vs '''P''', we do not expect to solve this problem in general.

However, the condition of the Lovász local lemma has an extra assumption on the degree of the dependency graph. In our model, this means that each clause shares variables with at most <math>d</math> other clauses. We call a <math>k</math>-CNF with this property a <math>k</math>-CNF with bounded degree <math>d</math>.

Therefore, proving the Lovász local lemma for the restricted form of events described above can be reduced to the following problem:
;Problem
:Find a condition on <math>k</math> and <math>d</math> such that any <math>k</math>-CNF with bounded degree <math>d</math> is satisfiable.

In 2009, Moser came up with the following procedure solving the problem. He later generalized the procedure to general forms of events. This not only gives a beautiful constructive proof of the Lovász local lemma, but also provides an efficient randomized algorithm for finding a satisfying assignment for a number of events with bounded dependencies.

Let <math>\phi</math> be a <math>k</math>-CNF of <math>n</math> clauses with bounded degree <math>d</math>, defined on variables <math>X_1,\ldots,X_m</math>. The following procedure finds a satisfying assignment for <math>\phi</math>.

{{Theorem
|Solve(<math>\phi</math>)|
:Pick a random assignment of <math>X_1,\ldots,X_m</math>.
:While there is an unsatisfied clause <math>C</math> in <math>\phi</math>
:: '''Fix'''(<math>C</math>).
}}

The sub-routine '''Fix''' is defined as follows:
{{Theorem
|Fix(<math>C</math>)|
:Replace the variables in <math>v(C)</math> with new random values.
:While there is an unsatisfied clause <math>D</math> with <math>v(C)\cap v(D)\neq \emptyset</math>
:: '''Fix'''(<math>D</math>).
}}

The procedure looks very simple. It just recursively fixes the unsatisfied clauses by randomly replacing the assignment to the variables.
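The procedure translates almost line for line into code. Below is a minimal Python sketch of '''Solve''' and '''Fix'''; the clause encoding (a clause as a list of signed variable indices) and the function names are our own assumptions, and the recursion is purely illustrative:
<pre>
import random

def vbl(clause):
    """v(C): the variables that the clause depends on."""
    return {abs(lit) for lit in clause}

def satisfied(clause, assign):
    """A clause (OR of literals) is satisfied if some literal is true."""
    return any(assign[abs(lit)] == (lit > 0) for lit in clause)

def fix(clause, phi, assign):
    """Resample v(C), then recursively fix unsatisfied clauses sharing variables with C."""
    for v in vbl(clause):
        assign[v] = random.random() < 0.5
    while True:
        bad = next((D for D in phi
                    if vbl(D) & vbl(clause) and not satisfied(D, assign)), None)
        if bad is None:
            return
        fix(bad, phi, assign)

def solve(phi, m):
    """Moser's procedure: random start, then fix unsatisfied clauses."""
    assign = {v: random.random() < 0.5 for v in range(1, m + 1)}
    while True:
        bad = next((C for C in phi if not satisfied(C, assign)), None)
        if bad is None:
            return assign
        fix(bad, phi, assign)
</pre>
For instance, on a small satisfiable instance such as <tt>solve([[1, -2, -3], [-1, -3], [1, 2, 4], [4, -3], [4, -1]], 4)</tt>, the procedure quickly returns a satisfying assignment.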

We then prove it works.

=== Number of top-level calls of Fix ===

In '''Solve'''(<math>\phi</math>), the subroutine '''Fix'''(<math>C</math>) is called. We now upper bound the number of times it is called (not including the recursive calls).

Assume that '''Fix'''(<math>C</math>) always terminates.
:;Observation
::Every clause that was satisfied before '''Fix'''(<math>C</math>) was called will remain satisfied, and <math>C</math> will also be satisfied, after '''Fix'''(<math>C</math>) returns.

The observation can be proved by induction on the structure of the recursion. Since there are <math>n</math> clauses, '''Solve'''(<math>\phi</math>) makes at most <math>n</math> calls to '''Fix'''.

We then prove that '''Fix'''(<math>C</math>) terminates.

=== Termination of Fix ===

The idea of the proof is to reconstruct a random string.

Suppose that during the running of '''Solve'''(<math>\phi</math>), the '''Fix''' subroutine is called <math>t</math> times in total (including all the recursive calls).

Let <math>s</math> be the sequence of random bits used by '''Solve'''(<math>\phi</math>). It is easy to see that the length of <math>s</math> is <math>|s|=m+tk</math>, because the initial random assignment of <math>m</math> variables takes <math>m</math> bits, and each call of '''Fix''' takes <math>k</math> bits.

We then reconstruct <math>s</math> in an alternative way.

Recall that '''Solve'''(<math>\phi</math>) calls '''Fix'''(<math>C</math>) at the top level at most <math>n</math> times. Each call of '''Fix'''(<math>C</math>) defines a recursion tree, rooted at clause <math>C</math>, in which each node corresponds to a clause (not necessarily distinct, since a clause might be fixed several times). Therefore, the entire running history of '''Solve'''(<math>\phi</math>) can be described by at most <math>n</math> recursion trees.

:;Observation 1
::Fix a <math>\phi</math>. The <math>n</math> recursion trees which capture the total running history of '''Solve'''(<math>\phi</math>) can be encoded in <math>n\log n+t(\log d+O(1))</math> bits.

Each root node corresponds to a clause. There are <math>n</math> clauses in <math>\phi</math>, so the <math>n</math> root nodes can be represented in <math>n\log n</math> bits.

The clever part is how to encode the branches of the tree. Note that '''Fix'''(<math>C</math>) calls '''Fix'''(<math>D</math>) only for <math>D</math> that shares variables with <math>C</math>. For a <math>k</math>-CNF with bounded degree <math>d</math>, each clause <math>C</math> can share variables with at most <math>d</math> other clauses. Thus, each branch in the recursion tree can be represented in <math>\log d</math> bits. An extra <math>O(1)</math> bits are needed to denote whether the recursion ends. So in total <math>n\log n+t(\log d+O(1))</math> bits are sufficient to encode all <math>n</math> recursion trees.

:;Observation 2
::The random sequence <math>s</math> can be encoded in <math>m+n\log n+t(\log d+O(1))</math> bits.

With <math>n\log n+t(\log d+O(1))</math> bits, the structure of all the recursion trees can be encoded. With <math>m</math> extra bits, the final assignment of the <math>m</math> variables is stored.

We then observe that with this information, the sequence of random bits <math>s</math> can be reconstructed backwards from the final assignment.

The key step is that a clause <math>C</math> is fixed only when it is unsatisfied (obvious), and an unsatisfied clause <math>C</math> admits exactly one assignment of its variables (a clause is an OR of literals, thus has exactly one unsatisfying assignment). Thus, each node in the recursion tree reveals the <math>k</math> random bits of the random sequence <math>s</math> used in the corresponding call of '''Fix'''. Therefore, <math>s</math> can be reconstructed from the final assignment plus the at most <math>n</math> recursion trees, which together can be encoded in at most <math>m+n\log n+t(\log d+O(1))</math> bits.
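To make the key step concrete, here is a tiny Python illustration (ours) of how an unsatisfied clause pins down the bits assigned to its variables:
<pre>
def forced_assignment(clause):
    """An unsatisfied clause forces its variables: every literal must be false."""
    return {abs(lit): (lit < 0) for lit in clause}

# E.g. the clause (x1 OR NOT x2 OR x3) is falsified only by x1=F, x2=T, x3=F:
print(forced_assignment([1, -2, 3]))  # {1: False, 2: True, 3: False}
</pre>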

The following theorem lies at the heart of '''Kolmogorov complexity'''. The theorem states that a random sequence is '''incompressible'''.
{{Theorem
|Theorem (Kolmogorov)|
:For any encoding scheme, with high probability, a random sequence <math>s</math> is encoded in at least <math>|s|</math> bits.
}}

Applying the theorem, we have that with high probability,
:<math>m+n\log n+t(\log d+O(1))\ge |s|=m+tk</math>.
Therefore,
:<math>
t(k-O(1)-\log d)\le n\log n.
</math>
In order to bound <math>t</math>, we need
:<math>k-O(1)-\log d>0</math>,
which holds when <math>d< 2^{k-\alpha}</math> for some constant <math>\alpha>0</math>. In this case <math>t=O(n\log n)</math>, and the running time of the procedure is bounded by a polynomial!

=== Back to the local lemma ===

We showed that for <math>d<2^{k-O(1)}</math>, any <math>k</math>-CNF with bounded degree <math>d</math> is satisfiable, and a satisfying assignment can be found in polynomial time with high probability. Now we interpret this in the language of the local lemma.

Recall the symmetric version of the local lemma:
{{Theorem
|Theorem (The local lemma: symmetric case)|
:Let <math>A_1,A_2,\ldots,A_n</math> be a set of events, and assume that the following hold:
:#for all <math>1\le i\le n</math>, <math>\Pr[A_i]\le p</math>;
:#the maximum degree of the dependency graph for the events <math>A_1,A_2,\ldots,A_n</math> is <math>d</math>, and
:::<math>\mathrm{e}p(d+1)\le 1</math>.
:Then
::<math>\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]>0</math>.
}}

Suppose the underlying probability space is a number of mutually independent uniform random boolean variables, and the events <math>\overline{A_i}</math> are clauses defined on <math>k</math> variables. Then
:<math>
p=2^{-k},
</math>
and thus the condition <math>\mathrm{e}p(d+1)\le 1</math> means that
:<math>
d<2^{k}/\mathrm{e},
</math>
which means that Moser's procedure is asymptotically optimal with respect to the degree of dependency.