Advanced Algorithms (Fall 2016) / Lovász Local Lemma
Under construction.
Lovász Local Lemma
Assume that <math>A_1,A_2,\ldots,A_n</math> are a collection of "bad" events. We want to show that it is possible to avoid all of them simultaneously, that is, <math>\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]>0</math>.
The dependency graph
The notion of mutual independence between an event and a set of events is formally defined as follows.
Definition (mutual independence)
- An event <math>A</math> is said to be mutually independent of events <math>B_1,B_2,\ldots,B_k</math>, if for any disjoint <math>I^+,I^-\subseteq\{1,2,\ldots,k\}</math>, it holds that
  <math>\Pr\left[A\,\Big|\,\bigwedge_{i\in I^+}B_i\wedge\bigwedge_{j\in I^-}\overline{B_j}\right]=\Pr[A]</math>.
Given a sequence of events <math>A_1,A_2,\ldots,A_n</math>, we use a dependency graph to describe the dependencies between these events.
Definition (dependency graph)
- Let <math>A_1,A_2,\ldots,A_n</math> be a sequence of events. A graph <math>D=(V,E)</math> on the set of vertices <math>V=\{1,2,\ldots,n\}</math> is called a dependency graph for the events <math>A_1,A_2,\ldots,A_n</math> if for each <math>i</math>, <math>1\le i\le n</math>, the event <math>A_i</math> is mutually independent of all the events <math>\{A_j\mid (i,j)\notin E\}</math>.
- Furthermore, for each event <math>A_i</math>:
  - define <math>\Gamma(A_i)=\{A_j\mid (i,j)\in E\}</math> as the neighborhood of event <math>A_i</math> in the dependency graph;
  - define <math>\Gamma^+(A_i)=\Gamma(A_i)\cup\{A_i\}</math> as the inclusive neighborhood of <math>A_i</math>, i.e. the set of events adjacent to <math>A_i</math> in the dependency graph, including <math>A_i</math> itself.
Example
- Let <math>X_1,X_2,\ldots,X_m</math> be a set of mutually independent random variables. Each event <math>A_i</math> is a predicate defined on a number of variables among <math>X_1,X_2,\ldots,X_m</math>. Let <math>\mathsf{vbl}(A_i)</math> be the unique smallest set of variables which determine <math>A_i</math>. The dependency graph <math>D=(V,E)</math> is defined by <math>(i,j)\in E</math> iff <math>\mathsf{vbl}(A_i)\cap\mathsf{vbl}(A_j)\neq\emptyset</math>.
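A minimal Python sketch of this canonical dependency graph construction; representing each event simply by its variable set and the function name dependency_graph are assumptions made for illustration only.

<pre>
# Events are given by their variable sets vbl(A_i); (i, j) is an edge of the
# canonical dependency graph iff vbl(A_i) and vbl(A_j) intersect.
def dependency_graph(vbl_sets):
    """Return the edge set {(i, j) : i < j and vbl(A_i) intersects vbl(A_j)}."""
    edges = set()
    for i in range(len(vbl_sets)):
        for j in range(i + 1, len(vbl_sets)):
            if vbl_sets[i] & vbl_sets[j]:
                edges.add((i, j))
    return edges

# Three events on variables X_1..X_4: A_0 on {1,2}, A_1 on {2,3}, A_2 on {4}.
print(dependency_graph([{1, 2}, {2, 3}, {4}]))   # {(0, 1)}: A_2 is isolated
</pre>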
The local lemma
The following lemma, known as the Lovász local lemma, first proved by Erdős and Lovász in 1975, is an extremely powerful tool, as it supplies a way for dealing with rare events.
Lovász Local Lemma (symmetric case)
- Let <math>A_1,A_2,\ldots,A_n</math> be a set of events, and assume that there is a <math>p\in[0,1)</math> such that the following conditions are satisfied:
  1. for all <math>1\le i\le n</math>, <math>\Pr[A_i]\le p</math>;
  2. the maximum degree of the dependency graph for the events <math>A_1,A_2,\ldots,A_n</math> is <math>d</math>, and
     <math>\mathrm{e}p\cdot(d+1)\le 1</math>.
- Then
  <math>\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]>0</math>.
The following is a general asymmetric version of the local lemma. This generalization is due to Spencer.
Lovász Local Lemma (general case)
- Let <math>A_1,A_2,\ldots,A_n</math> be a sequence of events. Suppose there exist real numbers <math>x_1,x_2,\ldots,x_n</math> such that <math>0\le x_i<1</math> and for all <math>1\le i\le n</math>,
  <math>\Pr[A_i]\le x_i\prod_{A_j\in\Gamma(A_i)}(1-x_j)</math>.
- Then
  <math>\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]\ge\prod_{i=1}^n(1-x_i)</math>.
To see that the general LLL implies the symmetric LLL, we set <math>x_i=\frac{1}{d+1}</math> for all <math>1\le i\le n</math>.
Assume the condition in the symmetric LLL:
1. for all <math>1\le i\le n</math>, <math>\Pr[A_i]\le p</math>;
2. <math>\mathrm{e}p\cdot(d+1)\le 1</math>;
then it is easy to verify that for all <math>1\le i\le n</math>,
<math>\Pr[A_i]\le p\le\frac{1}{\mathrm{e}(d+1)}<\frac{1}{d+1}\left(1-\frac{1}{d+1}\right)^d\le x_i\prod_{A_j\in\Gamma(A_i)}(1-x_j)</math>.
Due to the general LLL, we have
<math>\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]\ge\prod_{i=1}^n(1-x_i)=\left(1-\frac{1}{d+1}\right)^n>0</math>.
This proves the symmetric LLL.
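The strict inequality in the chain above uses the elementary fact that <math>\left(1-\frac{1}{d+1}\right)^d>\frac{1}{\mathrm{e}}</math> for every <math>d\ge 1</math>, which follows from <math>\left(1+\frac{1}{d}\right)^d<\mathrm{e}</math>:
<math>\left(1-\frac{1}{d+1}\right)^d=\left(\frac{d}{d+1}\right)^d=\left(1+\frac{1}{d}\right)^{-d}>\mathrm{e}^{-1}</math>.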
Alternatively, by setting <math>x_i=\frac{1}{d}</math> and assuming without loss of generality that the maximum degree <math>d</math> of the dependency graph satisfies <math>d\ge 2</math>, we can have another symmetric version of the local lemma.
Lovász Local Lemma (symmetric case, alternative form)
- Let <math>A_1,A_2,\ldots,A_n</math> be a set of events, and assume that there is a <math>p\in[0,1)</math> such that the following conditions are satisfied:
  1. for all <math>1\le i\le n</math>, <math>\Pr[A_i]\le p</math>;
  2. the maximum degree of the dependency graph for the events <math>A_1,A_2,\ldots,A_n</math> is <math>d</math>, and
     <math>4pd\le 1</math>.
- Then
  <math>\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]>0</math>.
The original proof of the Lovász Local Lemma is by induction. See this note for the original non-constructive proof of Lovász Local Lemma.
Random Search for <math>k</math>-SAT
We start by giving the definition of <math>k</math>-CNF.
Definition (exact-<math>k</math>-CNF)
- A logic expression <math>\phi</math> defined on <math>n</math> Boolean variables <math>x_1,x_2,\ldots,x_n</math> is said to be a conjunctive normal form (CNF) if:
  - <math>\phi</math> can be written as a conjunction (AND) of clauses as <math>\phi=C_1\wedge C_2\wedge\cdots\wedge C_m</math>;
  - each clause <math>C_i</math> is a disjunction (OR) of literals;
  - each literal is either a variable <math>x_i</math> or the negation <math>\neg x_i</math> of a variable.
- We call a CNF formula a <math>k</math>-CNF, or more precisely an exact-<math>k</math>-CNF, if every clause consists of exactly <math>k</math> distinct literals.
For example: <math>(x_1\vee\neg x_2\vee x_3)\wedge(\neg x_1\vee x_2\vee x_4)\wedge(x_1\vee\neg x_3\vee\neg x_4)</math> is a <math>3</math>-CNF (in fact, an exact-<math>3</math>-CNF) formula.
Remark
- The notion of <math>k</math>-CNF defined here is slightly more restrictive than the standard definition of <math>k</math>-CNF, where each clause consists of at most <math>k</math> variables. See here for a discussion of the subtle differences between these two definitions.
A logic expression <math>\phi</math> is said to be satisfiable if there is an assignment of values of true or false to the variables <math>x_1,x_2,\ldots,x_n</math> so that <math>\phi</math> is true.
The problem of deciding the satisfiability of a given <math>k</math>-CNF formula is called the <math>k</math>-SAT problem.
<math>k</math>-SAT
- Input: a <math>k</math>-CNF formula <math>\phi</math>.
- Determine whether <math>\phi</math> is satisfiable.
It is well known that <math>k</math>-SAT is NP-complete for any <math>k\ge 3</math>.
Satisfiability of <math>k</math>-CNF
As in the Lovász local lemma, we consider the dependencies between clauses in a CNF formula.
We say that a CNF formula <math>\phi</math> has maximum degree at most <math>d</math> if every clause in <math>\phi</math> shares variables with at most <math>d</math> other clauses in <math>\phi</math>.
By the Lovász local lemma, we almost immediately have the following theorem for the satisfiability of <math>k</math>-CNF formulas with bounded degree.
Theorem
- Let <math>\phi</math> be a <math>k</math>-CNF formula with maximum degree at most <math>d</math>. If <math>d\le 2^{k-2}</math> then <math>\phi</math> is always satisfiable.
Proof. Let <math>X_1,X_2,\ldots,X_n</math> be Boolean random variables sampled uniformly and independently from <math>\{\text{true},\text{false}\}</math>. We are going to show that <math>\phi</math> is satisfied by this random assignment with positive probability. Due to the probabilistic method, this will prove the existence of a satisfying assignment for <math>\phi</math>.
Suppose there are <math>m</math> clauses <math>C_1,C_2,\ldots,C_m</math> in <math>\phi</math>. Let <math>A_i</math> denote the bad event that <math>C_i</math> is not satisfied by the random assignment <math>X_1,X_2,\ldots,X_n</math>. Clearly, each <math>A_i</math> is dependent with at most <math>d</math> other <math>A_j</math>'s, which means the maximum degree of the dependency graph for <math>A_1,A_2,\ldots,A_m</math> is at most <math>d</math>.
Recall that in a <math>k</math>-CNF <math>\phi</math>, every clause <math>C_i</math> consists of precisely <math>k</math> variables, and <math>C_i</math> is violated by exactly one assignment among all <math>2^k</math> assignments of the <math>k</math> variables in <math>C_i</math>. Therefore, the probability of <math>C_i</math> being violated is <math>p=\Pr[A_i]=2^{-k}</math>.
If <math>d\le 2^{k-2}</math>, that is, <math>4pd\le 1</math>, then due to the Lovász local lemma (symmetric case, alternative form), it holds that
<math>\Pr\left[\bigwedge_{i=1}^m\overline{A_i}\right]>0</math>.
The existence of a satisfying assignment then follows by the probabilistic method.
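A minimal Python sketch of the degree condition in the theorem above, assuming a DIMACS-style clause representation (a clause is a list of signed variable indices); the helper names max_degree and lll_guarantees_satisfiable are hypothetical, chosen only for illustration.

<pre>
from itertools import combinations

# Clauses are lists of nonzero ints (DIMACS style): v means x_v, -v means NOT x_v.
def max_degree(phi):
    """Maximum number of other clauses any clause shares a variable with."""
    var_sets = [{abs(lit) for lit in C} for C in phi]
    deg = [0] * len(phi)
    for i, j in combinations(range(len(phi)), 2):
        if var_sets[i] & var_sets[j]:
            deg[i] += 1
            deg[j] += 1
    return max(deg, default=0)

def lll_guarantees_satisfiable(phi, k):
    """Check the sufficient condition d <= 2^(k-2) for an exact-k-CNF."""
    return max_degree(phi) <= 2 ** (k - 2)

# Example: an exact-3-CNF whose clauses pairwise share variables (d = 2 <= 2^(3-2)).
phi = [[1, -2, 3], [-1, 2, 4], [1, -3, -4]]
print(max_degree(phi), lll_guarantees_satisfiable(phi, k=3))
</pre>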
Moser's recursive fix algorithm
The above theorem basically says that if every individual clause of a CNF is easy to satisfy and depends on only a few other clauses, then the CNF is always satisfiable. However, the theorem only states the existence of a satisfying solution; it does not give a way to find such a solution.
In 2009, Moser gave a very simple randomized algorithm which efficiently finds a satisfying assignment with high probability, under a condition of the same form, <math>d<2^{k-c}</math> for a small universal constant <math>c</math> (stated precisely in the theorem below).
We need the following notations. Given as input a CNF formula <math>\phi</math>:
- Let <math>\mathcal{X}=\{x_1,x_2,\ldots,x_n\}</math> be the set of Boolean variables on which <math>\phi</math> is defined.
- For each clause <math>C</math> in <math>\phi</math>, we denote by <math>\mathsf{vbl}(C)\subseteq\mathcal{X}</math> the set of variables on which <math>C</math> is defined.
- We also abuse the notation and denote by <math>\Gamma(C)=\{\text{clause }D\text{ in }\phi\mid D\neq C,\ \mathsf{vbl}(C)\cap\mathsf{vbl}(D)\neq\emptyset\}</math> the neighborhood of <math>C</math>, i.e. the set of other clauses in <math>\phi</math> that share variables with <math>C</math>, and by <math>\Gamma^+(C)=\Gamma(C)\cup\{C\}</math> the inclusive neighborhood of <math>C</math>, i.e. the set of all clauses, including <math>C</math> itself, that share variables with <math>C</math>.
The algorithm consists of two functions: the main function Solve() and a recursive sub-routine Fix().
Solve(CNF <math>\phi</math>)
- Pick values of <math>x_1,x_2,\ldots,x_n</math> uniformly and independently at random;
- while there is an unsatisfied clause <math>C</math> in <math>\phi</math>
  - Fix(<math>C</math>);
The sub-routine Fix() is a recursive procedure:
Fix(Clause <math>C</math>)
- Replace the values of variables in <math>\mathsf{vbl}(C)</math> with uniform and independent random values;
- while there is an unsatisfied clause <math>D\in\Gamma^+(C)</math>
  - Fix(<math>D</math>);
It is quite amazing to see that this simple algorithm works very well.
Theorem
- Let <math>\phi</math> be a <math>k</math>-CNF formula with maximum degree at most <math>d</math>.
- There is a universal constant <math>c>0</math>, such that if <math>d<2^{k-c}</math> then the algorithm Solve(<math>\phi</math>) finds a satisfying assignment for <math>\phi</math> in time <math>O(n+km\log m)</math> with high probability.
Analysis of Moser's algorithm by entropy compression
Constructive Proof of General LLL
- <math>\mathcal{X}</math> is a set of mutually independent random variables.
- <math>\mathcal{A}</math> is a set of events defined on variables in <math>\mathcal{X}</math>, where each <math>A\in\mathcal{A}</math> is a predicate defined on a subset of random variables in <math>\mathcal{X}</math>.
- For each <math>A\in\mathcal{A}</math>, denote by <math>\mathsf{vbl}(A)</math> the set of variables on which <math>A</math> is defined.
- For each <math>A\in\mathcal{A}</math>, the neighborhood of <math>A</math>, denoted by <math>\Gamma(A)</math>, is defined as
  <math>\Gamma(A)=\{B\in\mathcal{A}\mid B\neq A,\ \mathsf{vbl}(A)\cap\mathsf{vbl}(B)\neq\emptyset\}</math>;
  and let <math>\Gamma^+(A)=\Gamma(A)\cup\{A\}</math> be the inclusive neighborhood of <math>A</math>.
We still interpret the events in <math>\mathcal{A}</math> as a series of bad events, and we want to make sure it is possible to avoid the occurrences of all bad events in <math>\mathcal{A}</math> simultaneously. Furthermore, we want to give an algorithm which can efficiently find an evaluation of the random variables in <math>\mathcal{X}</math> to make none of the bad events in <math>\mathcal{A}</math> occur.
The Moser-Tardos random solver
We make the following assumptions:
- It is efficient to draw independent samples for every random variable <math>X\in\mathcal{X}</math> according to its distribution.
- It is efficient to evaluate whether <math>A</math> occurs on an evaluation of the random variables in <math>\mathsf{vbl}(A)</math>.
Moser-Tardos Algorithm
- Sample all <math>X\in\mathcal{X}</math> independently;
- while there is a bad event <math>A\in\mathcal{A}</math> that occurs
  - resample all <math>X\in\mathsf{vbl}(A)</math>;
Theorem (Moser-Tardos 2010)
- If there is an <math>\alpha:\mathcal{A}\to[0,1)</math> such that for every <math>A\in\mathcal{A}</math>,
  <math>\Pr[A]\le \alpha(A)\prod_{B\in\Gamma(A)}(1-\alpha(B))</math>,
- then the Moser-Tardos algorithm returns an evaluation of random variables in <math>\mathcal{X}</math> satisfying <math>\bigwedge_{A\in\mathcal{A}}\overline{A}</math>, using at most <math>\sum_{A\in\mathcal{A}}\frac{\alpha(A)}{1-\alpha(A)}</math> resamples in expectation.
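For instance, under the symmetric condition <math>\mathrm{e}p\cdot(d+1)\le 1</math>, where <math>d</math> is the maximum degree of the dependency graph, one may take <math>\alpha(A)=\frac{1}{d+1}</math> for every <math>A\in\mathcal{A}</math>: the same calculation as for the symmetric LLL gives <math>\Pr[A]\le\frac{1}{\mathrm{e}(d+1)}<\alpha(A)\prod_{B\in\Gamma(A)}(1-\alpha(B))</math>, and the expected number of resamples is then at most <math>\sum_{A\in\mathcal{A}}\frac{\alpha(A)}{1-\alpha(A)}=\frac{|\mathcal{A}|}{d}</math>.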