高级算法 (Fall 2016)/Lovász Local Lemma

''Under construction.''
= Lovász Local Lemma =

Assume that <math>A_1,A_2,\ldots,A_n</math> are "bad" events. We are looking at the "rare event" that none of these bad events occurs, formally, the event <math>\bigwedge_{i=1}^n\overline{A_i}</math>. How can we guarantee this rare event occurs with positive probability? The Lovász Local Lemma provides an answer to this fundamental question by using the information about the dependencies between the bad events.

==The dependency graph ==

The notion of mutual independence between an event and a set of events is formally defined as follows.

{{Theorem|Definition (mutual independence)|
:An event <math>A</math> is said to be '''mutually independent''' of events <math>B_1,B_2,\ldots, B_k</math>, if for any disjoint <math>I^+,I^-\subseteq\{1,2,\ldots,k\}</math>, it holds that
::<math>\Pr\left[A \mid \left(\bigwedge_{i\in I^+}B_i\right) \wedge \left(\bigwedge_{i\in I^-}\overline{B_i}\right)\right]=\Pr[A]</math>.
}}

Given a sequence of events <math>A_1,A_2,\ldots,A_n</math>, we use the '''dependency graph''' to describe the dependencies between these events.

{{Theorem|Definition (dependency graph)|
:Let <math>A_1,A_2,\ldots,A_n</math> be a sequence of events. A graph <math>D=(V,E)</math> on the set of vertices <math>V=\{1,2,\ldots,n\}</math> is called a '''dependency graph''' for the events <math>A_1,\ldots,A_n</math> if for each <math>i</math>, <math>1\le i\le n</math>, the event <math>A_i</math> is mutually independent of all the events <math>\{A_j\mid (i,j)\not\in E\}</math>.
:Furthermore, for each event <math>A_i</math>:
:* define <math>\Gamma(A_i)=\{A_j\mid (i,j)\in E\}</math> as the '''neighborhood''' of event <math>A_i</math> in the dependency graph;
:* define <math>\Gamma^+(A_i)=\Gamma(A_i)\cup\{A_i\}</math> as the '''inclusive neighborhood''' of <math>A_i</math>, i.e. the set of events adjacent to <math>A_i</math> in the dependency graph, including <math>A_i</math> itself.
}}
;Example
:Let <math>X_1,X_2,\ldots,X_m</math> be a set of mutually independent random variables. Each event <math>A_i</math> is a predicate defined on a number of variables among <math>X_1,X_2,\ldots,X_m</math>. Let <math>v(A_i)</math> be the unique smallest set of variables which determine <math>A_i</math>. The dependency graph <math>D=(V,E)</math> is defined by
::<math>(i,j)\in E</math> iff <math>v(A_i)\cap v(A_j)\neq \emptyset</math>.
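To make this canonical construction concrete, here is a minimal Python sketch (the function name and data representation are our own illustrative choices, not from any library) that builds the edge set of this dependency graph from the variable sets <math>v(A_i)</math>:
<pre>
from itertools import combinations

def dependency_graph(vbl_sets):
    """Given v(A_1),...,v(A_n) as a list of variable sets (events 1-indexed),
    return the edge set {(i,j) : v(A_i) and v(A_j) share a variable}."""
    n = len(vbl_sets)
    return {(i + 1, j + 1)
            for i, j in combinations(range(n), 2)
            if vbl_sets[i] & vbl_sets[j]}

# Example: A_1 depends on {X_1,X_2}, A_2 on {X_2,X_3}, A_3 on {X_4}.
assert dependency_graph([{1, 2}, {2, 3}, {4}]) == {(1, 2)}
</pre>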

== The local lemma ==

The following lemma, known as the Lovász local lemma, first proved by Erdős and Lovász in 1975, is an extremely powerful tool, as it supplies a way for dealing with rare events.

{{Theorem
|Lovász Local Lemma (symmetric case)|
:Let <math>A_1,A_2,\ldots,A_n</math> be a set of events, and assume that there is a <math>p\in[0,1)</math> such that the following are satisfied:
:#for all <math>1\le i\le n</math>, <math>\Pr[A_i]\le p</math>;
:#the maximum degree of the dependency graph for the events <math>A_1,A_2,\ldots,A_n</math> is <math>d</math>, and
:::<math>\mathrm{e}p\cdot (d+1)\le 1</math>.
:Then
::<math>\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]>0</math>.
}}

The following is a general asymmetric version of the local lemma. This generalization is due to Spencer.

{{Theorem
|Lovász Local Lemma (general case)|
:Let <math>A_1,A_2,\ldots,A_n</math> be a sequence of events. Suppose there exist real numbers <math>x_1,x_2,\ldots, x_n</math> such that <math>0\le x_i<1</math> and for all <math>1\le i\le n</math>,
::<math>\Pr[A_i]\le x_i\prod_{A_j\in \Gamma(A_i)}(1-x_j)</math>.
:Then
::<math>\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]\ge\prod_{i=1}^n(1-x_i)</math>.
}}

To see that the general LLL implies the symmetric LLL, we set <math>x_i=\frac{1}{d+1}</math> for all <math>i=1,2,\ldots,n</math>. Then we have <math>\left(1-\frac{1}{d+1}\right)^d>\frac{1}{\mathrm{e}}</math>.

Assume the condition in the symmetric LLL:
:#for all <math>1\le i\le n</math>, <math>\Pr[A_i]\le p</math>;
:#<math>\mathrm{e}p\cdot(d+1)\le 1</math>;
then it is easy to verify that for all <math>1\le i\le n</math>,
:<math>\Pr[A_i]\le p\le\frac{1}{\mathrm{e}(d+1)}<\frac{1}{d+1}\left(1-\frac{1}{d+1}\right)^d\le x_i\prod_{A_j\in\Gamma(A_i)}(1-x_j)</math>.
Due to the general LLL, we have
:<math>\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]\ge\prod_{i=1}^n(1-x_i)=\left(1-\frac{1}{d+1}\right)^n>0</math>.
This proves the symmetric LLL.
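As a quick numerical sanity check (illustrative only, not part of the proof) of the inequality <math>\left(1-\frac{1}{d+1}\right)^d>\frac{1}{\mathrm{e}}</math> used above:
<pre>
import math

for d in [1, 2, 5, 10, 100, 1000]:
    lhs = (1 - 1 / (d + 1)) ** d
    # (1 - 1/(d+1))^d decreases toward 1/e as d grows, but stays strictly above it.
    assert lhs > 1 / math.e
    print(d, round(lhs, 6), round(1 / math.e, 6))
</pre>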

Alternatively, by setting <math>x_i=\frac{1}{d}</math> and assuming without loss of generality that the maximum degree <math>d</math> of the dependency graph satisfies <math>d\ge 2</math>, we can obtain another symmetric version of the local lemma.

{{Theorem
|Lovász Local Lemma (symmetric case, alternative form)|
:Let <math>A_1,A_2,\ldots,A_n</math> be a set of events, and assume that there is a <math>p\in[0,1)</math> such that the following are satisfied:
:#for all <math>1\le i\le n</math>, <math>\Pr[A_i]\le p</math>;
:#the maximum degree of the dependency graph for the events <math>A_1,A_2,\ldots,A_n</math> is <math>d</math>, and
:::<math>4pd\le 1</math>.
:Then
::<math>\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]>0</math>.
}}

The original proof of the Lovász Local Lemma is by induction. See [[高级算法 (Fall 2016)/Nonconstructive Proof of Lovász Local Lemma|this note]] for the original non-constructive proof of the Lovász Local Lemma.

=Random Search for <math>k</math>-SAT=

We start by giving the definition of <math>k</math>-CNF and <math>k</math>-SAT.

{{Theorem|Definition (exact-<math>k</math>-CNF)|
:A logic expression <math>\phi</math> defined on <math>n</math> Boolean variables <math>x_1,x_2,\ldots,x_n\in\{\mathrm{true},\mathrm{false}\}</math> is said to be a '''conjunctive normal form (CNF)''' if:
:* <math>\phi</math> can be written as a conjunction (AND) of '''clauses''' as <math>\phi=C_1\wedge C_2\wedge\cdots\wedge C_m</math>;
:* each clause <math>C_i=\ell_{i_1}\vee \ell_{i_2}\vee\cdots\vee\ell_{i_k}</math> is a disjunction (OR) of '''literals''';
:* each literal <math>\ell_j</math> is either a variable <math>x_i</math> or the negation <math>\neg x_i</math> of a variable.
:We call a CNF formula a '''<math>k</math>-CNF''', or more precisely an '''exact-<math>k</math>-CNF''', if every clause consists of ''exactly'' <math>k</math> ''distinct'' literals.
}}

For example:
:<math>
(x_1\vee \neg x_2 \vee \neg x_3)\wedge (\neg x_1\vee \neg x_3\vee x_4)\wedge (x_1\vee x_2\vee x_4)\wedge (x_2\vee x_3\vee \neg x_4)
</math>
is a <math>3</math>-CNF formula by the above definition.
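For later use in the code sketches below, this formula can be written down with the common convention that a clause is a list of signed integers, <math>+i</math> for <math>x_i</math> and <math>-i</math> for <math>\neg x_i</math> (this encoding is our own bookkeeping choice, not part of the definition):
<pre>
# (x1 OR NOT x2 OR NOT x3) AND (NOT x1 OR NOT x3 OR x4) AND (x1 OR x2 OR x4) AND (x2 OR x3 OR NOT x4)
phi = [[1, -2, -3], [-1, -3, 4], [1, 2, 4], [2, 3, -4]]
</pre>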

;Remark
:The notion of <math>k</math>-CNF defined here is slightly more restrictive than the standard definition of <math>k</math>-CNF, where each clause consists of ''at most'' <math>k</math> variables. See [https://en.wikipedia.org/wiki/Boolean_satisfiability_problem#3-satisfiability here] for a discussion of the subtle differences between the two definitions.

A logic expression <math>\phi</math> is said to be '''satisfiable''' if there is an assignment of values of true or false to the variables <math>\boldsymbol{x}=(x_1,x_2,\ldots,x_n)</math> so that <math>\phi(\boldsymbol{x})</math> is true. For a CNF <math>\phi</math>, this means that there is an assignment that satisfies all clauses in <math>\phi</math> simultaneously.

The '''<math>k</math>-satisfiability (<math>k</math>-SAT)''' problem asks, given as input a <math>k</math>-CNF formula <math>\phi</math>, to decide whether <math>\phi</math> is satisfiable.

{{Theorem|<math>k</math>-SAT|
:'''Input:''' a <math>k</math>-CNF formula <math>\phi</math>.
:'''Output:''' decide whether <math>\phi</math> is satisfiable.
}}
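For reference, the problem can always be decided by exhaustive search over all <math>2^n</math> assignments, which takes exponential time; a minimal sketch using the signed-integer clause encoding above:
<pre>
from itertools import product

def satisfiable(clauses, n):
    """Brute-force k-SAT: try every assignment of the variables 1..n."""
    for bits in product([False, True], repeat=n):
        assignment = {i + 1: bits[i] for i in range(n)}
        # A CNF is satisfied iff every clause has some true literal.
        if all(any(assignment[abs(l)] == (l > 0) for l in c) for c in clauses):
            return True
    return False

phi = [[1, -2, -3], [-1, -3, 4], [1, 2, 4], [2, 3, -4]]
assert satisfiable(phi, 4)   # the example 3-CNF above is satisfiable
</pre>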

It is well known that <math>k</math>-SAT is '''NP-complete''' for any <math>k\ge 3</math>.

== Satisfiability of <math>k</math>-CNF==

As in the Lovász local lemma, we consider the ''dependencies'' between clauses in a CNF formula.

We say that a CNF formula <math>\phi</math> has '''maximum degree''' at most <math>d</math> if every clause in <math>\phi</math> shares variables with at most <math>d</math> other clauses in <math>\phi</math>.

By the Lovász local lemma, we almost immediately have the following theorem for the satisfiability of <math>k</math>-CNF with bounded degree.

{{Theorem|Theorem|
:Let <math>\phi</math> be a <math>k</math>-CNF formula with maximum degree at most <math>d</math>. If <math>d\le 2^{k-2}</math> then <math>\phi</math> is always satisfiable.
}}
{{Proof|
Let <math>X_1,X_2,\ldots,X_n</math> be Boolean random variables sampled uniformly and independently from <math>\{\text{true},\text{false}\}</math>. We are going to show that <math>\phi</math> is satisfied by this random assignment with positive probability. By the probabilistic method, this will prove the existence of a satisfying assignment for <math>\phi</math>.

Suppose there are <math>m</math> clauses <math>C_1,C_2,\ldots,C_m</math> in <math>\phi</math>. Let <math>A_i</math> denote the bad event that <math>C_i</math> is not satisfied by the random assignment <math>X_1,X_2,\ldots,X_n</math>. Clearly, each <math>A_i</math> is dependent on at most <math>d</math> other <math>A_j</math>'s, which means the maximum degree of the dependency graph for <math>A_1, A_2, \ldots, A_m</math> is at most <math>d</math>.

Recall that in a <math>k</math>-CNF <math>\phi</math>, every clause <math>C_i</math> consists of precisely <math>k</math> variables, and <math>C_i</math> is violated by exactly one assignment among all <math>2^k</math> assignments of the <math>k</math> variables in <math>C_i</math>. Therefore, the probability of <math>C_i</math> being violated is <math>p=\Pr[A_i]=2^{-k}</math>.

If <math>d\le 2^{k-2}</math>, that is, <math>4pd\le 1</math>, then due to the Lovász local lemma (symmetric case, alternative form), it holds that
:<math>\Pr\left[\bigwedge_{i=1}^m\overline{A_i}\right]>0</math>.
The existence of a satisfying assignment follows by the probabilistic method.
}}
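The condition of the theorem is mechanical to check. Below is a small sketch (illustrative code, same clause encoding as above) that computes the maximum degree <math>d</math> of a CNF and tests <math>d\le 2^{k-2}</math>; note that the condition is sufficient but not necessary, so a failed test says nothing about unsatisfiability:
<pre>
def max_degree(clauses):
    """Maximum, over clauses C, of the number of other clauses sharing a variable with C."""
    vbls = [{abs(l) for l in c} for c in clauses]
    return max(sum(1 for j in range(len(clauses)) if j != i and vbls[i] & vbls[j])
               for i in range(len(clauses)))

def lll_guarantees_sat(clauses, k):
    """True if d <= 2^(k-2), in which case the theorem guarantees satisfiability."""
    return max_degree(clauses) <= 2 ** (k - 2)

phi = [[1, -2, -3], [-1, -3, 4], [1, 2, 4], [2, 3, -4]]
print(max_degree(phi))             # 3: every clause shares a variable with the other three
print(lll_guarantees_sat(phi, 3))  # False: 3 > 2^(3-2), yet phi happens to be satisfiable
</pre>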

== Moser's recursive fix algorithm ==

The above theorem basically says that for a CNF, if every individual clause is easy to satisfy and depends on few other clauses, then the CNF is always satisfiable. However, the theorem only states the existence of a satisfying solution, but does not give a way to find such a solution.

In 2009, Moser gave a very simple randomized algorithm which efficiently finds a satisfying assignment with high probability under the condition <math>d\le 2^{k-5}</math>.

We need the following notation. Given as input a CNF formula <math>\phi</math>:
* Let <math>\mathcal{X}=\{x_1,x_2,\ldots,x_n\}</math> be the set of Boolean variables on which <math>\phi</math> is defined.
* For each clause <math>C</math> in <math>\phi</math>, we denote by <math>\mathsf{vbl}(C)\subseteq\mathcal{X}</math> the set of variables on which <math>C</math> is defined.
* We also abuse the notation and denote by <math>\Gamma(C)=\{\text{clause }D\text{ in }\phi\mid D\neq C, \mathsf{vbl}(C)\cap\mathsf{vbl}(D)\neq\emptyset\}</math> the '''neighborhood''' of <math>C</math>, i.e. the set of ''other'' clauses in <math>\phi</math> that share variables with <math>C</math>, and by <math>\Gamma^+(C)=\Gamma(C)\cup\{C\}</math> the '''inclusive neighborhood''' of <math>C</math>, i.e. the set of all clauses, including <math>C</math> itself, that share variables with <math>C</math>.

The algorithm consists of two functions: the main function ''Solve''() and a recursive sub-routine ''Fix''().

{{Theorem
|Solve(CNF <math>\phi</math>)|
:Pick values of <math>x_1,x_2,\ldots,x_n</math> uniformly and independently at random;
:while there is an ''unsatisfied'' clause <math>C</math> in <math>\phi</math>
:: '''Fix'''(<math>C</math>);
}}

The sub-routine ''Fix''() is a recursive procedure:

{{Theorem
|Fix(Clause <math>C</math>)|
:Replace the values of variables in <math>\mathsf{vbl}(C)</math> with uniform and independent random values;
:while there is an ''unsatisfied'' clause <math>D\in\Gamma^+(C)</math>
:: '''Fix'''(<math>D</math>);
}}
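The two procedures translate almost line for line into Python. The following is a minimal sketch (illustrative, not Moser's own code; same clause encoding as above); for large instances the recursion depth may exceed Python's default limit:
<pre>
import random

def solve(clauses, n):
    """Moser's recursive fix algorithm on a CNF over the variables 1..n."""
    vbls = [{abs(l) for l in c} for c in clauses]

    def sat(c):
        return any(assignment[abs(l)] == (l > 0) for l in clauses[c])

    def fix(c):
        # Replace the values of vbl(C) with fresh uniform random values.
        for x in vbls[c]:
            assignment[x] = random.random() < 0.5
        # While some clause in the inclusive neighborhood of C is unsatisfied, fix it.
        while True:
            bad = next((d for d in range(len(clauses))
                        if vbls[d] & vbls[c] and not sat(d)), None)
            if bad is None:
                return
            fix(bad)

    assignment = {x: random.random() < 0.5 for x in range(1, n + 1)}
    while True:
        unsat = next((c for c in range(len(clauses)) if not sat(c)), None)
        if unsat is None:
            return assignment
        fix(unsat)

phi = [[1, -2, -3], [-1, -3, 4], [1, 2, 4], [2, 3, -4]]
print(solve(phi, 4))   # a satisfying assignment, e.g. {1: True, 2: True, 3: False, 4: True}
</pre>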

It is quite amazing to see that this simple algorithm works very well.

{{Theorem|Theorem|
:Let <math>\phi</math> be a <math>k</math>-CNF formula with maximum degree at most <math>d</math>.
:There is a universal constant <math>c>0</math>, such that if <math>d< 2^{k-c}</math> then the algorithm ''Solve''(<math>\phi</math>) finds a satisfying assignment for <math>\phi</math> in time <math>O(n+km\log m)</math> with high probability.
}}

== Analysis of Moser's algorithm by entropy compression==

= Constructive Proof of General ''LLL'' =

* <math>\mathcal{X}</math> is a set of mutually independent random variables.
* <math>\mathcal{A}</math> is a set of events defined on variables in <math>\mathcal{X}</math>, where each <math>A\in\mathcal{A}</math> is a predicate defined on a subset of random variables in <math>\mathcal{X}</math>.
* For each <math>A\in\mathcal{A}</math>, denote by <math>\mathsf{vbl}(A)</math> the set of variables on which <math>A</math> is defined.
* For each <math>A\in\mathcal{A}</math>, the '''neighborhood''' of <math>A</math>, denoted by <math>\Gamma(A)</math>, is defined as
::<math>\Gamma(A)=\{B\in\mathcal{A}\mid B\neq A, \mathsf{vbl}(A)\cap\mathsf{vbl}(B)\neq\emptyset\}</math>;
*:and let <math>\Gamma^+(A)=\Gamma(A)\cup \{A\}</math> be the '''inclusive neighborhood''' of <math>A</math>.

We still interpret the events in <math>\mathcal{A}</math> as a series of bad events, and we want to make sure it is possible to avoid the occurrences of all bad events in <math>\mathcal{A}</math> simultaneously. Furthermore, we want to give an algorithm which can efficiently find an evaluation of the random variables in <math>\mathcal{X}</math> to make none of the bad events in <math>\mathcal{A}</math> occur.

== The Moser-Tardos random solver ==

We make the following assumptions:
* It is efficient to draw independent samples for every random variable <math>X\in\mathcal{X}</math> according to its distribution.
* It is efficient to evaluate whether <math>A</math> occurs on an evaluation of random variables in <math>\mathsf{vbl}(A)</math>.
{{Theorem
|Moser-Tardos Algorithm|
:Sample all <math>X\in\mathcal{X}</math> independently;
:while there is a bad event <math>A\in\mathcal{A}</math> that occurs
::resample all <math>X\in\mathsf{vbl}(A)</math>;
}}
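A minimal Python sketch of this solver under the two assumptions above (the data representation, samplers as zero-argument functions and events as (vbl, occurs) pairs, is our own illustrative choice):
<pre>
import random

def moser_tardos(samplers, events):
    """samplers: dict mapping each variable to a zero-argument sampling function.
    events: list of (vbl, occurs) pairs; vbl is the set of variables the bad event
    depends on, and occurs(assignment) reports whether the event happens."""
    assignment = {x: draw() for x, draw in samplers.items()}       # sample all X independently
    while True:
        bad = next((e for e in events if e[1](assignment)), None)  # a bad event that occurs?
        if bad is None:
            return assignment                                      # none occurs: done
        for x in bad[0]:                                           # resample all X in vbl(A)
            assignment[x] = samplers[x]()

# Example: the 3-CNF from earlier, with one bad event per clause ("clause violated").
phi = [[1, -2, -3], [-1, -3, 4], [1, 2, 4], [2, 3, -4]]
samplers = {x: (lambda: random.random() < 0.5) for x in range(1, 5)}
events = [({abs(l) for l in c}, lambda a, c=c: all(a[abs(l)] != (l > 0) for l in c))
          for c in phi]
print(moser_tardos(samplers, events))
</pre>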
{{Theorem|Theorem (Moser-Tardos 2010)|
:If there is an <math>\alpha:\mathcal{A}\to[0,1)</math> such that for every <math>A\in\mathcal{A}</math>
::<math>\Pr[A]\le \alpha(A)\prod_{B\in\Gamma(A)}(1-\alpha(B))</math>,
:then the Moser-Tardos algorithm returns an evaluation of random variables in <math>\mathcal{X}</math> satisfying <math>\bigwedge_{A\in\mathcal{A}}\overline{A}</math> using at most <math>\sum_{A\in\mathcal{A}}\frac{\alpha(A)}{1-\alpha(A)}</math> resamples in expectation.
}}
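As a quick instantiation (a sanity check we add here, not part of the original statement): for a <math>k</math>-CNF with <math>m</math> clauses and maximum degree <math>d</math>, take one bad event per clause and <math>\alpha(A)=\frac{1}{d+1}</math> for every <math>A</math>. Since each <math>A</math> has at most <math>d</math> neighbors and <math>\left(1-\frac{1}{d+1}\right)^d>\frac{1}{\mathrm{e}}</math>, the condition is implied by <math>2^{-k}\le\frac{1}{\mathrm{e}(d+1)}</math>, and the expected number of resamples is at most <math>m\cdot\frac{1/(d+1)}{1-1/(d+1)}=\frac{m}{d}</math>.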

== Analysis of the Moser-Tardos algorithm by the witness tree==