Combinatorics (Spring 2015): The Probabilistic Method and Randomized Algorithms (Fall 2015): Lovász Local Lemma

== The Probabilistic Method ==
The probabilistic method provides another way of proving the existence of objects: instead of explicitly constructing an object, we define a probability space of objects in which the probability is positive that a randomly selected object has the required property.
 
The basic principle of the probabilistic method is very simple, and can be stated in intuitive ways:
*If an object chosen randomly from a universe satisfies a property with positive probability, then there must be an object in the universe that satisfies that property.
:For example, for a ball (the object) randomly chosen from a box (the universe) of balls, if the probability that the chosen ball is blue (the property) is positive, then there must be a blue ball in the box.
*Any random variable assumes at least one value that is no smaller than its expectation, and at least one value that is no greater than the expectation.
:For example, if we know the average height of the students in the class is <math>\ell</math>, then we know there is a student whose height is at least <math>\ell</math>, and there is a student whose height is at most <math>\ell</math>.
 
Although the idea of the probabilistic method is simple, it provides us with a powerful tool for existence proofs.
 
===Ramsey number===
 
Recall the Ramsey theorem which states that in a meeting of at least six people, there are either three people knowing each other or three people not knowing each other. In graph theoretical terms, this means that no matter how we color the edges of <math>K_6</math> (the complete graph on six vertices), there must be a '''monochromatic''' <math>K_3</math> (a triangle whose edges have the same color).
 
Generally, the '''Ramsey number''' <math>R(k,\ell)</math> is the smallest integer <math>n</math> such that in any two-coloring of the edges of a complete graph on <math>n</math> vertices <math>K_n</math> by red and blue, either there is a red <math>K_k</math> or there is a blue <math>K_\ell</math>.
 
Ramsey showed in 1929 that <math>R(k,\ell)</math> is finite for any <math>k</math> and <math>\ell</math>. It is extremely hard to compute the exact value of <math>R(k,\ell)</math>. Here we give a lower bound of <math>R(k,k)</math> by the probabilistic method.
 
{{Theorem
|Theorem (Erdős 1947)|
:If <math>{n\choose k}\cdot 2^{1-{k\choose 2}}<1</math> then it is possible to color the edges of <math>K_n</math> with two colors so that there is no monochromatic <math>K_k</math> subgraph.
}}
{{Proof| Consider a random two-coloring of edges of <math>K_n</math> obtained as follows:
* For each edge of <math>K_n</math>, independently flip a fair coin to decide the color of the edge.
 
For any fixed set <math>S</math> of <math>k</math> vertices, let <math>\mathcal{E}_S</math> be the event that the <math>K_k</math> subgraph induced by <math>S</math> is monochromatic. There are <math>{k\choose 2}</math> many edges in <math>K_k</math>, therefore
:<math>\Pr[\mathcal{E}_S]=2\cdot 2^{-{k\choose 2}}=2^{1-{k\choose 2}}.</math>
 
Since there are <math>{n\choose k}</math> possible choices of <math>S</math>, by the union bound
:<math>
\Pr[\exists S, \mathcal{E}_S]\le {n\choose k}\cdot\Pr[\mathcal{E}_S]={n\choose k}\cdot 2^{1-{k\choose 2}}.
</math>
By the assumption <math>{n\choose k}\cdot 2^{1-{k\choose 2}}<1</math>, with positive probability none of the events <math>\mathcal{E}_S</math> occurs. Thus there exists a two-coloring in which no <math>\mathcal{E}_S</math> occurs, which means there is no monochromatic <math>K_k</math> subgraph.
}}
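The theorem also suggests a simple experiment: since the expected number of monochromatic <math>K_k</math> subgraphs is less than 1, a uniformly random coloring avoids them with positive probability, so resampling until success finds a good coloring. The following is a minimal Python sketch of this experiment (the function names are ours, for illustration), run with <math>n=6</math>, <math>k=4</math>, where <math>{6\choose 4}\cdot 2^{1-6}=15/32<1</math>.
<pre>
import itertools, random

def has_mono_clique(n, k, color):
    # color maps each edge (u, v) with u < v to 0 or 1
    return any(
        len({color[e] for e in itertools.combinations(S, 2)}) == 1
        for S in itertools.combinations(range(n), k)
    )

def find_good_coloring(n, k):
    # Resample until a coloring with no monochromatic K_k is found;
    # the theorem guarantees one exists when C(n,k) * 2^(1-C(k,2)) < 1.
    while True:
        color = {e: random.randrange(2)
                 for e in itertools.combinations(range(n), 2)}
        if not has_mono_clique(n, k, color):
            return color

coloring = find_good_coloring(6, 4)
print(has_mono_clique(6, 4, coloring))  # False
</pre>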
 
For <math>k\ge 3</math>, take <math>n=\lfloor2^{k/2}\rfloor</math>. Then
:<math>
\begin{align}
{n\choose k}\cdot 2^{1-{k\choose 2}}
&<
\frac{n^k}{k!}\cdot\frac{2^{1+\frac{k}{2}}}{2^{k^2/2}}\\
&\le
\frac{2^{k^2/2}}{k!}\cdot\frac{2^{1+\frac{k}{2}}}{2^{k^2/2}}\\
&=
\frac{2^{1+\frac{k}{2}}}{k!}\\
&<1.
\end{align}
</math>
By the above theorem, there exists a two-coloring of <math>K_n</math> with no monochromatic <math>K_k</math>. Therefore, the Ramsey number <math>R(k,k)>\lfloor2^{k/2}\rfloor</math> for all <math>k\ge 3</math>.
 
===Tournament===
A '''[http://en.wikipedia.org/wiki/Tournament_(graph_theory) tournament]''' on a set <math>V</math> of <math>n</math> players is an '''orientation''' of the edges of the complete graph on the set of vertices <math>V</math>. Thus for every two distinct vertices <math>u,v</math> in <math>V</math>, either <math>(u,v)\in E</math> or <math>(v,u)\in E</math>, but not both.
 
We can think of the set <math>V</math> as a set of <math>n</math> players in which each pair participates in a single match, where <math>(u,v)</math> is in the tournament iff player <math>u</math> beats player <math>v</math>.
 
{{Theorem|Definition|
:We say that a tournament is '''<math>k</math>-paradoxical''' if for every set of <math>k</math> players there is a player who beats them all.
}}
 
Is it true that for every finite <math>k</math> there is a <math>k</math>-paradoxical tournament (on more than <math>k</math> vertices, of course)? This problem was first raised by Schütte, and as shown by Erdős, can be solved almost trivially by the probabilistic method.
 
{{Theorem|Theorem (Erdős 1963)|
:If <math>{n\choose k}\left(1-2^{-k}\right)^{n-k}<1</math> then there is a tournament on <math>n</math> vertices that is <math>k</math>-paradoxical.
}}
{{Proof|
Consider a uniformly random tournament <math>T</math> on the set <math>V=[n]</math>. For every fixed subset <math>S\in{V\choose k}</math> of <math>k</math> vertices, let <math>A_S</math> be the event defined as follows
:<math>A_S:\,</math> there is no vertex in <math>V\setminus S</math> that beats all vertices in <math>S</math>.
 
In a uniform random tournament, the orientations of edges are independent. For any <math>u\in V\setminus S</math>,
:<math>\Pr[u\mbox{ beats all }v\in S]=2^{-k}</math>.
Therefore, <math>\Pr[u\mbox{ does not beat all }v\in S]=1-2^{-k}</math> and
:<math>\Pr[A_S]=\prod_{u\in V\setminus S}\Pr[u\mbox{ does not beat all }v\in S]=(1-2^{-k})^{n-k}</math>.
 
It follows that
:<math>\Pr\left[\bigvee_{S\in{V\choose k}}A_S\right]\le \sum_{S\in{V\choose k}}\Pr[A_S]={n\choose k}(1-2^{-k})^{n-k}<1.</math>
Therefore,
:<math>\Pr[\,T\mbox{ is }k\mbox{-paradoxical }]=\Pr\left[\bigwedge_{S\in{V\choose k}}\overline{A_S}\right]=1-\Pr\left[\bigvee_{S\in{V\choose k}}A_S\right]>0.</math>
Therefore, a <math>k</math>-paradoxical tournament exists.
}}
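Again the proof gives positive probability to the good event, so a paradoxical tournament can be found by resampling. Below is a minimal Python sketch (function names ours) for <math>k=2</math> and <math>n=21</math>, where <math>{21\choose 2}(1-2^{-2})^{19}\approx 0.89<1</math>.
<pre>
import itertools, random

def random_tournament(n):
    # beats[u][v] is True iff the edge between u and v points from u to v
    beats = [[False] * n for _ in range(n)]
    for u, v in itertools.combinations(range(n), 2):
        if random.randrange(2):
            beats[u][v] = True
        else:
            beats[v][u] = True
    return beats

def is_k_paradoxical(beats, k):
    n = len(beats)
    return all(
        any(all(beats[u][v] for v in S)
            for u in set(range(n)) - set(S))
        for S in itertools.combinations(range(n), k)
    )

# The union bound only shows the failure probability is < 0.89,
# so we retry until a 2-paradoxical tournament shows up.
t = random_tournament(21)
while not is_k_paradoxical(t, 2):
    t = random_tournament(21)
print("found a 2-paradoxical tournament on 21 vertices")
</pre>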
 
=== Linearity of expectation ===
Let <math>X</math> be a discrete '''random variable'''.  The expectation of <math>X</math> is defined as follows.
{{Theorem
|Definition (Expectation)|
:The '''expectation''' of a discrete random variable <math>X</math>, denoted by <math>\mathbf{E}[X]</math>, is given by
::<math>\begin{align}
\mathbf{E}[X] &= \sum_{x}x\Pr[X=x],
\end{align}</math>
:where the summation is over all values <math>x</math> in the range of <math>X</math>.
}}
 
A fundamental fact regarding the expectation is its '''linearity'''.
 
{{Theorem
|Theorem (Linearity of Expectation)|
:For any discrete random variables <math>X_1, X_2, \ldots, X_n</math>, and any real constants <math>a_1, a_2, \ldots, a_n</math>,
::<math>\begin{align}
\mathbf{E}\left[\sum_{i=1}^n a_iX_i\right] &= \sum_{i=1}^n a_i\cdot\mathbf{E}[X_i].
\end{align}</math>
}}
 
;Hamiltonian paths
The following result of Szele in 1943 is often considered the first use of the probabilistic method.
{{Theorem|Theorem (Szele 1943)|
:There is a tournament on <math>n</math> players with at least <math>n!2^{-(n-1)}</math> Hamiltonian paths.
}}
{{Proof|
Consider the uniform random tournament <math>T</math> on <math>[n]</math>. For any permutation <math>\pi</math> of <math>[n]</math>, let <math>X_{\pi}</math> be the indicator random variable defined as
:<math>X_{\pi}=\begin{cases}
1 & \forall i\in[n-1], (\pi_i,\pi_{i+1})\in T,\\
0 & \mbox{otherwise}.
\end{cases}</math>
In other words, <math>X_{\pi}</math> indicates whether <math>\pi_1\rightarrow\pi_2\rightarrow\cdots\rightarrow\pi_{n}</math> gives a Hamiltonian path.
It holds that
:<math>\mathbf{E}[X_\pi]=1\cdot\Pr[X_\pi=1]+0\cdot\Pr[X_\pi=0]=\Pr[\forall i\in[n-1], (\pi_i,\pi_{i+1})\in T]=2^{-(n-1)}.</math>
 
Let <math>X=\sum_{\pi:\text{permutation of }[n]}X_\pi\,</math>. Clearly <math>X</math> is the number of Hamiltonian paths in the tournament <math>T</math>.
Due to the linearity of expectation,
:<math>\mathbf{E}[X]=\mathbf{E}\left[\sum_{\pi:\text{permutation of }[n]}X_\pi\right]=\sum_{\pi:\text{permutation of }[n]}\mathbf{E}[X_\pi]=n!2^{-(n-1)}.</math>
This is the average number of Hamiltonian paths in a tournament, where the average is taken over all tournaments.
Thus some tournament has at least <math>n!2^{-(n-1)}</math> Hamiltonian paths.
}}
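The expectation argument is easy to check empirically: for small <math>n</math>, one can count Hamiltonian paths by brute force and average over random tournaments. A minimal Python sketch (function names ours):
<pre>
import itertools, math, random

def random_tournament(n):
    # beats[u][v] is True iff the edge between u and v points from u to v
    beats = [[False] * n for _ in range(n)]
    for u, v in itertools.combinations(range(n), 2):
        if random.randrange(2):
            beats[u][v] = True
        else:
            beats[v][u] = True
    return beats

def hamiltonian_paths(beats):
    # Brute force over all n! orderings; fine for small n.
    n = len(beats)
    return sum(
        all(beats[p[i]][p[i + 1]] for i in range(n - 1))
        for p in itertools.permutations(range(n))
    )

n, trials = 7, 100
avg = sum(hamiltonian_paths(random_tournament(n))
          for _ in range(trials)) / trials
print(avg, math.factorial(n) / 2 ** (n - 1))  # both close to 78.75
</pre>
In particular, some tournament in the sample must attain at least the average, which is exactly how the theorem is read off from the expectation.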
 
===Independent sets===
An independent set of a graph is a set of vertices with no edges between them. The following theorem gives a lower bound on the size of the largest independent set.
{{Theorem
|Theorem|
:Let <math>G(V,E)</math> be a graph on <math>n</math> vertices with <math>m</math> edges. Then <math>G</math> has an independent set with at least <math>\frac{n^2}{4m}</math> vertices.
}}
{{Proof| Let <math>S</math> be a set of vertices constructed as follows:
:For each vertex <math>v\in V</math>:
:* <math>v</math> is included in <math>S</math> independently with probability <math>p</math>,
where <math>p</math> is to be determined.
 
Let <math>X=|S|</math>. It is obvious that <math>\mathbf{E}[X]=np</math>.
 
For each edge <math>e\in E</math>, let <math>Y_{e}</math> be the random variable which indicates whether both endpoints of <math>e=uv</math> are in <math>S</math>.
:<math>
\mathbf{E}[Y_{uv}]=\Pr[u\in S\wedge v\in S]=p^2.
</math>
Let <math>Y</math> be the number of edges in the subgraph of <math>G</math> induced by <math>S</math>. It holds that <math>Y=\sum_{e\in E}Y_e</math>. By linearity of expectation,
:<math>\mathbf{E}[Y]=\sum_{e\in E}\mathbf{E}[Y_e]=mp^2</math>.
 
Note that although <math>S</math> is not necessarily an independent set, it can be modified into one: for each edge <math>e</math> of the induced subgraph <math>G(S)</math>, we delete one of the endpoints of <math>e</math> from <math>S</math>. Let <math>S^*</math> be the resulting set. It is obvious that <math>S^*</math> is an independent set since there is no edge left in the induced subgraph <math>G(S^*)</math>.
 
Since there are <math>Y</math> edges in <math>G(S)</math>, at most <math>Y</math> vertices are deleted from <math>S</math> to obtain <math>S^*</math>. Therefore, <math>|S^*|\ge X-Y</math>. By linearity of expectation,
:<math>
\mathbf{E}[|S^*|]\ge\mathbf{E}[X-Y]=\mathbf{E}[X]-\mathbf{E}[Y]=np-mp^2.
</math>
The expectation is maximized when <math>p=\frac{n}{2m}</math> (note that <math>p\le 1</math> requires <math>m\ge n/2</math>), thus
:<math>
\mathbf{E}[|S^*|]\ge n\cdot\frac{n}{2m}-m\left(\frac{n}{2m}\right)^2=\frac{n^2}{4m}.
</math>
There exists an independent set which contains at least <math>\frac{n^2}{4m}</math> vertices.
}}
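The argument is itself an algorithm: sample <math>S</math>, then delete one endpoint of every surviving edge. A minimal Python sketch of this sample-and-delete procedure (names ours):
<pre>
import random

def independent_set(n, edges):
    # Keep each vertex with probability p = n/(2m), then delete one
    # endpoint of every edge whose endpoints both survived.
    m = len(edges)
    p = min(1.0, n / (2 * m))
    S = {v for v in range(n) if random.random() < p}
    for u, v in edges:
        if u in S and v in S:
            S.discard(v)
    return S  # independent: no edge has both endpoints left in S

n = 100
edges = [(u, v) for u in range(n) for v in range(u + 1, n)
         if random.random() < 0.05]
S = independent_set(n, edges)
print(len(S), n * n / (4 * len(edges)))  # E[|S|] >= n^2/(4m)
</pre>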
 
=== Coloring large-girth graphs ===
The girth of a graph is the length of the shortest cycle of the graph.
{{Theorem|Definition|
Let <math>G(V,E)</math> be an undirected graph.
* A '''cycle''' of length <math>k</math> in <math>G</math> is a sequence of distinct vertices <math>v_1,v_2,\ldots,v_{k}</math> such that <math>v_iv_{i+1}\in E</math> for all <math>i=1,2,\ldots,k-1</math> and <math>v_kv_1\in E</math>.
* The '''girth''' of <math>G</math>, denoted <math>g(G)</math>, is the length of the shortest cycle in <math>G</math>.
}}
 
The chromatic number of a graph is the minimum number of colors with which the graph can be ''properly'' colored.
{{Theorem|Definition (chromatic number)|
* The '''chromatic number''' of <math>G</math>, denoted <math>\chi(G)</math>, is the minimal number of colors which we need to color the vertices of <math>G</math> so that no two adjacent vertices have the same color. Formally,
::<math>\chi(G)=\min\{C\in\mathbb{N}\mid \exists f:V\rightarrow[C]\mbox{ such that }\forall uv\in E, f(u)\neq f(v)\}</math>.
}}
 
In 1959, Erdős proved the following theorem: for any fixed <math>k</math> and <math>\ell</math>, there exists a finite graph with chromatic number greater than <math>k</math> and girth greater than <math>\ell</math>. This is considered a striking example of the probabilistic method. The statement of the theorem itself says nothing about probability or randomness, and the result is highly counterintuitive: if the girth is large, there is no obvious reason why the graph could not be colored with a few colors, since we can always "locally" color a cycle with 2 or 3 colors. Erdős' result shows that there are "global" restrictions for coloring, and although such configurations are very difficult to construct explicitly, the probabilistic method shows that they commonly exist.
 
{{Theorem| Theorem (Erdős 1959)|
: For all <math>k,\ell</math> there exists a graph <math>G</math> with <math>g(G)>\ell</math> and <math>\chi(G)>k\,</math>.
}}
 
It is very hard to directly analyze the chromatic number of a graph. We find that the chromatic number can be related to the size of the maximum independent set.
 
{{Theorem|Definition (independence number)|
* The '''independence number''' of <math>G</math>, denoted <math>\alpha(G)</math>, is the size of the largest independent set in <math>G</math>. Formally,
::<math>\alpha(G)=\max\{|S|\mid S\subseteq V\mbox{ and }\forall u,v\in S, uv\not\in E\}</math>.
}}
 
We observe the following relationship between the chromatic number and the independence number.
{{Theorem|Lemma|
:For any <math>n</math>-vertex graph,
::<math>\chi(G)\ge\frac{n}{\alpha(G)}</math>.
}}
{{Proof|
*In the optimal coloring, <math>n</math> vertices are partitioned into <math>\chi(G)</math> color classes according to the vertex color.
*Every color class is an independent set, or otherwise there exist two adjacent vertices with the same color.
*By the pigeonhole principle, there is a color class (hence an independent set) of size at least <math>\frac{n}{\chi(G)}</math>. Therefore, <math>\alpha(G)\ge\frac{n}{\chi(G)}</math>.
 
The lemma follows.
}}
 
Therefore, it is sufficient to construct a graph <math>G</math> with <math>\alpha(G)<\frac{n}{k}</math> and <math>g(G)>\ell</math>.
 
{{Prooftitle|Proof of Erdős theorem|
Fix <math>\theta<\frac{1}{\ell}</math>. Let <math>G</math> be <math>G(n,p)</math> with <math>p=n^{\theta-1}</math>.
 
For any length-<math>i</math> simple cycle <math>\sigma</math>, let <math>X_\sigma</math> be the indicator random variable such that
:<math>
X_\sigma=
\begin{cases}
1 & \sigma\mbox{ is a cycle in }G,\\
0 & \mbox{otherwise}.
\end{cases}
</math>
 
The number of cycles of length at most <math>\ell</math> in graph <math>G</math> is
:<math>X=\sum_{i=3}^\ell\sum_{\sigma:i\text{-cycle}}X_\sigma</math>.
 
For any particular length-<math>i</math> simple cycle <math>\sigma</math>,
:<math>\mathbf{E}[X_\sigma]=\Pr[X_\sigma=1]=\Pr[\sigma\mbox{ is a cycle in }G]=p^i=n^{\theta i-i}</math>.
For any <math>3\le i\le n</math>, the number of length-<math>i</math> simple cycles is <math>\frac{n(n-1)\cdots (n-i+1)}{2i}</math>. By the linearity of expectation,
:<math>\mathbf{E}[X]=\sum_{i=3}^\ell\sum_{\sigma:i\text{-cycle}}\mathbf{E}[X_\sigma]=\sum_{i=3}^\ell\frac{n(n-1)\cdots (n-i+1)}{2i}n^{\theta i-i}\le \sum_{i=3}^\ell\frac{n^{\theta i}}{2i}=o(n)</math>.
Applying Markov's inequality,
:<math>
\Pr\left[X\ge \frac{n}{2}\right]\le\frac{\mathbf{E}[X]}{n/2}=o(1).
</math>
Therefore, with high probability the random graph has less than <math>n/2</math> short cycles.
 
Now we proceed to analyze the independence number. Let <math>m=\left\lceil\frac{3\ln n}{p}\right\rceil</math>, so that
:<math>
\begin{align}
\Pr[\alpha(G)\ge m]
&\le\Pr\left[\exists S\in{V\choose m}\forall \{u,v\}\in{S\choose 2}, uv\not\in G\right]\\
&\le{n\choose m}(1-p)^{m\choose 2}\\
&<n^m\mathrm{e}^{-p{m\choose 2}}\\
&=\left(n\mathrm{e}^{-p(m-1)/2}\right)^m=o(1)
\end{align}
</math>
The probability that either of the above events occurs is
:<math>
\begin{align}
\Pr\left[X\ge\frac{n}{2}\vee \alpha(G)\ge m\right]
\le \Pr\left[X\ge \frac{n}{2}\right]+\Pr\left[\alpha(G)\ge m\right]
=o(1).
\end{align}
</math>
Therefore, there exists a graph <math>G</math> with less than <math>n/2</math> "short" cycles, i.e., cycles of length at most <math>\ell</math>, and with <math>\alpha(G)<m\le 3n^{1-\theta}\ln n</math>.
 
Take each "short" cycle in <math>G</math> and remove a vertex from the cycle (and also remove all edges adjacent to the removed vertex). This gives a graph <math>G'</math> which has no short cycles, hence the girth <math>g(G')>\ell</math>. And <math>G'</math> has at least <math>n/2</math> vertices, because at most <math>n/2</math> vertices are removed.
 
Notice that removing vertices cannot make the independence number grow. It holds that <math>\alpha(G')\le\alpha(G)</math>. Thus
:<math>\chi(G')\ge\frac{n/2}{\alpha(G')}\ge\frac{n}{2m}\ge\frac{n^\theta}{6\ln n}</math>.
The theorem is proved by taking <math>n</math> sufficiently large so that this value is greater than <math>k</math>.
}}
 
The proof contains a very simple procedure which for any <math>k</math> and <math>\ell</math> ''generates'' such a graph <math>G'</math> with <math>g(G')>\ell</math> and <math>\chi(G')>k</math>. The procedure is as follows:
* Fix some <math>\theta<\frac{1}{\ell}</math>. Choose sufficiently large <math>n</math> with <math>\frac{n^\theta}{6\ln n}>k</math>, and let <math>p=n^{\theta-1}</math>.
* Generate a random graph <math>G</math> as <math>G(n,p)</math>.
* For each cycle of length at most <math>\ell</math> in <math>G</math>, remove a vertex from the cycle.
The resulting graph <math>G'</math> satisfies <math>g(G')>\ell</math> and <math>\chi(G')>k</math> with high probability.
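A minimal Python sketch of this generation procedure (standard library only; helper names ours): it samples <math>G(n,p)</math> and repeatedly finds, by bounded DFS, a vertex lying on a cycle of length at most <math>\ell</math> and isolates it. The values of <math>n</math> needed to also force <math>\chi(G')>k</math> are far too large to run, so the demo only exhibits the girth part.
<pre>
import itertools, random

def short_cycle_vertex(adj, l):
    # Return a vertex lying on a simple cycle of length <= l, or None.
    # Bounded DFS; 'path' holds the vertices of the current simple path.
    def dfs(start, v, path):
        for w in adj[v]:
            if w == start and len(path) >= 3:
                return True
            if w not in path and len(path) < l:
                if dfs(start, w, path | {w}):
                    return True
        return False
    return next((v for v in range(len(adj)) if dfs(v, v, {v})), None)

def short_cycle_free_subgraph(n, theta, l):
    # Sample G(n,p) with p = n^(theta-1), then delete one vertex
    # from each cycle of length at most l, as in the procedure above.
    p = n ** (theta - 1)
    adj = [set() for _ in range(n)]
    for u, v in itertools.combinations(range(n), 2):
        if random.random() < p:
            adj[u].add(v)
            adj[v].add(u)
    while (v := short_cycle_vertex(adj, l)) is not None:
        for w in adj[v]:
            adj[w].discard(v)
        adj[v].clear()  # v becomes isolated; the girth is unaffected
    return adj

adj = short_cycle_free_subgraph(200, 0.2, 4)
print(short_cycle_vertex(adj, 4))  # None: no cycle of length <= 4 is left
</pre>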
 
== Lovász Local Lemma==
Suppose that we are given a set of "bad" events <math>A_1,A_2,\ldots,A_n</math>. We want to know whether it is possible that none of them occurs, that is:
:<math>
\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]>0.
</math>
Obviously, a ''necessary'' condition for this is that the occurrence of none of the bad events is certain, i.e. <math>\Pr[A_i]<1</math> for all <math>i</math>. We are interested in ''sufficient'' conditions for the above. There are two easy cases:
;Case 1<nowiki>: mutual independence.</nowiki>
If all the bad events <math>A_1,A_2,\ldots,A_n</math> are mutually independent, then
:<math>
\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]=\prod_{i=1}^n(1-\Pr[A_i]),
</math>
and hence this probability is positive if <math>\Pr[A_i]<1</math> for all <math>i</math>.


;Case 2<nowiki>: arbitrary dependency.</nowiki>
On the other extreme, if we know nothing about the dependencies between these bad events, the best we can do is to apply the union bound:
:<math>
\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]\ge 1-\sum_{i=1}^n\Pr\left[A_i\right],
</math>
which is positive if <math>\sum_{i=1}^n\Pr\left[A_i\right]<1</math>. This is a very loose bound; however, it cannot be improved if no further information regarding the dependencies between the events is assumed.


----


In most situations, the dependencies between events are somewhere between these two extremal cases: the events are not independent of each other, but on the other hand the dependencies between them are not totally beyond control. For these more general cases, we would like to exploit the tradeoff between the probabilities of the bad events and the dependencies between them.


The Lovász local lemma is a powerful tool for showing the possibility of rare events under ''limited dependencies''. The structure of dependencies between a set of events is described by a '''dependency graph'''.


{{Theorem
|Definition|
:Let <math>A_1,A_2,\ldots,A_n</math> be a set of events. A graph <math>D=(V,E)</math> with set of vertices <math>V=\{A_1,A_2,\ldots,A_n\}</math> is called a '''dependency graph''' for the events <math>A_1,\ldots,A_n</math> if for each <math>i</math>, the event <math>A_i</math> is mutually independent of all the events in <math>\{A_j\mid (A_i,A_j)\not\in E\}</math>.
}}
The maximum degree <math>d</math> of the dependency graph <math>D</math> is a very useful piece of information, as it tells us on at most how many other events each event <math>A_i</math> may depend.
;Remark on mutual independence
:In probability theory, an event <math>A</math> is said to be mutually independent of events <math>B_1,B_2,\ldots,B_k</math> if for any disjoint <math>I^+,I^-\subseteq\{1,2,\ldots,k\}</math>, we have
:::<math>\Pr\left[A\mid \bigwedge_{i\in I^+}B_i,\bigwedge_{i\in I^-}\overline{B}_i \right]=\Pr[A]</math>,
:that is, occurrences of events among <math>B_1,B_2,\ldots,B_k</math> have no influence on the occurrence of <math>A</math>.


;Example
:Let <math>X_1,X_2,\ldots,X_m</math> be a set of ''mutually independent'' random variables. Each event <math>A_i</math> is a predicate defined on a number of variables among <math>X_1,X_2,\ldots,X_m</math>. Let <math>\mathsf{vbl}(A_i)</math> be the unique smallest set of variables which determine <math>A_i</math>. The dependency graph <math>D=(V,E)</math> is defined such that two events <math>A_i,A_j</math> are adjacent in <math>D</math> if and only if they share variables, i.e. <math>\mathsf{vbl}(A_i)\cap\mathsf{vbl}(A_j)\neq\emptyset</math>.


The following theorem was proved by Erdős and Lovász in 1975 and later improved by Lovász in 1977. It is now commonly referred to as the '''Lovász local lemma'''. It is a very powerful tool, especially when used with the probabilistic method, as it supplies a way for dealing with rare events.


{{Theorem
|Lovász Local Lemma (symmetric case)|
:Let <math>A_1,A_2,\ldots,A_n</math> be a set of events, and assume that the following hold:
:#for all <math>1\le i\le n</math>, <math>\Pr[A_i]\le p</math>;
:#every event <math>A_i</math> is mutually independent of all other events except at most <math>d</math> of them, and
:::<math>\mathrm{e}p(d+1)\le 1</math>.
:Then
::<math>\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]>0</math>.
}}
Here <math>d</math> is the maximum degree of the dependency graph <math>D</math> for the events <math>A_1,\ldots,A_n</math>.


Intuitively, the Lovász Local Lemma says that if a rare (but hopefully possible) event is formulated as avoiding a series of bad events simultaneously, then the rare event is indeed possible if:
* none of these bad events is too probable;
* none of these bad events depends on too many other bad events.
The tradeoff between "too probable" and "too many" is precisely captured by the condition <math>\mathrm{e}p(d+1)\le 1</math>.
== Non-constructive Proof of LLL ==
We will prove a general version of the local lemma, where the events <math>A_i</math> are not symmetric. This generalization is due to Spencer.
{{Theorem
|Lovász Local Lemma (general case)|
:Let <math>D=(V,E)</math> be the dependency graph of events <math>A_1,A_2,\ldots,A_n</math>. Suppose there exist real numbers <math>x_1,x_2,\ldots, x_n</math> such that <math>0\le x_i<1</math> and for all <math>1\le i\le n</math>,
::<math>\Pr[A_i]\le x_i\prod_{(i,j)\in E}(1-x_j)</math>.
:Then
::<math>\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]\ge\prod_{i=1}^n(1-x_i)</math>.
}}
This generalized version of the local lemma immediately implies the symmetric version: namely, <math>\Pr\left[\bigwedge_{i}\overline{A_i}\right]>0</math> if <math>\Pr[A_i]\le p</math> for all <math>A_i</math> and <math>\mathrm{e}p(d+1)\le 1</math>, where <math>d</math> is the maximum degree of the dependency graph.
To see this, let <math>x_i=\frac{1}{d+1}</math> for all <math>i=1,2,\ldots,n</math>. Note that <math>\left(1-\frac{1}{d+1}\right)^d>\frac{1}{\mathrm{e}}</math>.
If the following conditions are satisfied:
:#for all <math>1\le i\le n</math>, <math>\Pr[A_i]\le p</math>;
:#<math>\mathrm{e}p(d+1)\le 1</math>;
then for all <math>1\le i\le n</math>,
:<math>\Pr[A_i]\le p\le\frac{1}{\mathrm{e}(d+1)}<\frac{1}{d+1}\left(1-\frac{1}{d+1}\right)^d\le x_i\prod_{(i,j)\in E}(1-x_j)</math>.
Due to the local lemma for the general case, this implies that
:<math>\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]\ge\prod_{i=1}^n(1-x_i)=\left(1-\frac{1}{d+1}\right)^n>0</math>.
This proves the symmetric version of the local lemma.
We then give the proof of the generalized Lovász Local Lemma. The proof is non-constructive and is by induction.
{{Proof|
We can use the following probability identity to compute the probability of the intersection of events:
{{Theorem|Lemma 1|
:<math>\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]=\prod_{i=1}^n\Pr\left[\overline{A_i}\mid \bigwedge_{j=1}^{i-1}\overline{A_{j}}\right]</math>.
}}
{{Proof|
By definition of conditional probability,
:<math>\Pr\left[\overline{A_n}\mid\bigwedge_{i=1}^{n-1}\overline{A_{i}}\right]=\frac{\Pr\left[\bigwedge_{i=1}^n\overline{A_{i}}\right]}{\Pr\left[\bigwedge_{i=1}^{n-1}\overline{A_{i}}\right]}</math>,
so we have
:<math>\Pr\left[\bigwedge_{i=1}^n\overline{A_{i}}\right]=\Pr\left[\bigwedge_{i=1}^{n-1}\overline{A_{i}}\right]\Pr\left[\overline{A_n}\mid\bigwedge_{i=1}^{n-1}\overline{A_{i}}\right]</math>.
The lemma is proved by recursively applying this equation.
}}
Next we prove by induction on <math>m</math> that for any <math>m</math> events <math>A_{i_1},A_{i_2},\ldots,A_{i_m}</math>,
:<math>\Pr\left[A_{i_1}\mid \bigwedge_{j=2}^m\overline{A_{i_j}}\right]\le x_{i_1}</math>.
The local lemma is a direct consequence of this by applying Lemma 1.

For <math>m=1</math>, this is obvious. For general <math>m</math>, let <math>A_{i_2},\ldots,A_{i_k}</math> be the events among <math>A_{i_2},\ldots,A_{i_m}</math> that are adjacent to <math>A_{i_1}</math> in the dependency graph. Clearly <math>k-1\le d</math>. And it holds that
:<math>\Pr\left[A_{i_1}\mid \bigwedge_{j=2}^m\overline{A_{i_j}}\right]=\frac{\Pr\left[ A_{i_1}\wedge \bigwedge_{j=2}^k\overline{A_{i_j}}\mid \bigwedge_{j=k+1}^m\overline{A_{i_j}}\right]}{\Pr\left[\bigwedge_{j=2}^k\overline{A_{i_j}}\mid \bigwedge_{j=k+1}^m\overline{A_{i_j}}\right]}</math>,
which is due to the basic conditional probability identity
:<math>\Pr[A\mid BC]=\frac{\Pr[AB\mid C]}{\Pr[B\mid C]}</math>.
We bound the numerator as
:<math>\begin{align}
\Pr\left[ A_{i_1}\wedge \bigwedge_{j=2}^k\overline{A_{i_j}}\mid \bigwedge_{j=k+1}^m\overline{A_{i_j}}\right]
&\le\Pr\left[ A_{i_1}\mid \bigwedge_{j=k+1}^m\overline{A_{i_j}}\right]\\
&=\Pr[A_{i_1}]\\
&\le x_{i_1}\prod_{(i_1,j)\in E}(1-x_j).
\end{align}</math>
The equation is due to the mutual independence between <math>A_{i_1}</math> and <math>A_{i_{k+1}},\ldots,A_{i_m}</math>.

The denominator can be expanded using Lemma 1 as
:<math>\Pr\left[\bigwedge_{j=2}^k\overline{A_{i_j}}\mid \bigwedge_{j=k+1}^m\overline{A_{i_j}}\right]=\prod_{j=2}^k\Pr\left[\overline{A_{i_j}}\mid \bigwedge_{\ell=j+1}^m\overline{A_{i_\ell}}\right]</math>,
which by the induction hypothesis is at least
:<math>\prod_{j=2}^k(1-x_{i_j})</math>.
Since <math>A_{i_2},\ldots,A_{i_k}</math> are all adjacent to <math>A_{i_1}</math>, every factor <math>(1-x_{i_j})</math> in this product also appears in <math>\prod_{(i_1,j)\in E}(1-x_j)</math>. Therefore,
:<math>\Pr\left[A_{i_1}\mid \bigwedge_{j=2}^m\overline{A_{i_j}}\right]\le\frac{x_{i_1}\prod_{(i_1,j)\in E}(1-x_j)}{\prod_{j=2}^k(1-x_{i_j})}\le x_{i_1}.</math>
Applying Lemma 1,
:<math>\begin{align}
\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]
&=\prod_{i=1}^n\Pr\left[\overline{A_i}\mid \bigwedge_{j=1}^{i-1}\overline{A_{j}}\right]\\
&=\prod_{i=1}^n\left(1-\Pr\left[A_i\mid \bigwedge_{j=1}^{i-1}\overline{A_{j}}\right]\right)\\
&\ge\prod_{i=1}^n\left(1-x_i\right).
\end{align}</math>
}}


=== Ramsey number, revisited ===
{{Theorem|Ramsey number|
:Let <math>k,\ell</math> be positive integers. The Ramsey number <math>R(k,\ell)</math> is defined as the smallest integer satisfying:
:If <math>n\ge R(k,\ell)</math>, for any coloring of the edges of <math>K_n</math> with two colors red and blue, there exists a red <math>K_k</math> or a blue <math>K_\ell</math>.
}}

The Ramsey theorem says that for any <math>k,\ell</math>, <math>R(k,\ell)</math> is finite. The actual value of <math>R(k,\ell)</math> is extremely difficult to compute. We can use the local lemma to prove a lower bound for the diagonal Ramsey number.
{{Theorem|Theorem|
:<math>R(k,k)\ge Ck2^{k/2}</math> for some constant <math>C>0</math>.
}}
{{Proof|
To prove a lower bound <math>R(k,k)>n</math>, it is sufficient to show that there exists a 2-coloring of <math>K_n</math> without a monochromatic <math>K_k</math>. We prove this by the probabilistic method.

Pick a random 2-coloring of <math>K_n</math> by coloring each edge uniformly and independently with one of the two colors. For any set <math>S</math> of <math>k</math> vertices, let <math>A_S</math> denote the event that <math>S</math> forms a monochromatic <math>K_k</math>. It is easy to see that <math>\Pr[A_S]=2^{1-{k\choose 2}}=p</math>.

For any <math>k</math>-subset <math>T</math> of vertices, <math>A_S</math> and <math>A_T</math> are dependent if and only if <math>|S\cap T|\ge 2</math>. For each <math>S</math>, the number of <math>T</math> such that <math>|S\cap T|\ge 2</math> is at most <math>{k\choose 2}{n\choose k-2}</math>, so the maximum degree of the dependency graph is <math>d\le{k\choose 2}{n\choose k-2}</math>.

Take <math>n=Ck2^{k/2}</math> for some appropriate constant <math>C>0</math>. Then
:<math>
\begin{align}
\mathrm{e}p(d+1)
&\le \mathrm{e}2^{1-{k\choose 2}}\left({k\choose 2}{n\choose k-2}+1\right)\\
&\le 2^{3-{k\choose 2}}{k\choose 2}{n\choose k-2}\\
&\le 1.
\end{align}
</math>
Applying the local lemma, the probability that there is no monochromatic <math>K_k</math> is
:<math>\Pr\left[\bigwedge_{S\in{[n]\choose k}}\overline{A_S}\right]>0</math>.
Therefore, there exists a 2-coloring of <math>K_n</math> which has no monochromatic <math>K_k</math>, which means
:<math>R(k,k)>n=Ck2^{k/2}</math>.
}}
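As a quick numeric check of this bound, the following sketch (plain Python; the function name is ours) finds the largest <math>n</math> for which the condition <math>\mathrm{e}p(d+1)\le 1</math> holds with <math>p=2^{1-{k\choose 2}}</math> and <math>d={k\choose 2}{n\choose k-2}</math>, for <math>k=10</math>; the answer is indeed within a constant factor of <math>k2^{k/2}</math>.
<pre>
from math import comb, e

def lll_certifies(n, k):
    # e * p * (d+1) <= 1 with p = 2^(1-C(k,2)) and d = C(k,2)*C(n,k-2)
    p = 2.0 ** (1 - comb(k, 2))
    d = comb(k, 2) * comb(n, k - 2)
    return e * p * (d + 1) <= 1

k = 10
n = k  # grow n while the local lemma condition still holds
while lll_certifies(n + 1, k):
    n += 1
print(n, k * 2 ** (k / 2))  # n is within a constant factor of k*2^(k/2)
</pre>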
== Algorithmic Lovász Local Lemma ==
We consider a restrictive case.
 
Let <math>X_1,X_2,\ldots,X_m\in\{\mathrm{true},\mathrm{false}\}</math> be a set of ''mutually independent'' random variables which assume boolean values. Each event <math>A_i</math> is an AND of at most <math>k</math> literals (<math>X_i</math> or <math>\neg X_i</math>). Let <math>v(A_i)</math> be the set of the <math>k</math> variables that <math>A_i</math> depends on. The probability that none of the bad events occurs is
:<math>
\Pr\left[\bigwedge_{i=1}^n \overline{A_i}\right].
</math>
In this particular model, the dependency graph <math>D=(V,E)</math> is defined as that <math>(i,j)\in E</math> iff <math>v(A_i)\cap v(A_j)\neq \emptyset</math>.
 
Observe that <math>\overline{A_i}</math> is a clause (OR of literals). Thus, <math>\bigwedge_{i=1}^n \overline{A_i}</math> is a '''<math>k</math>-CNF''', i.e. a CNF in which each clause depends on <math>k</math> variables.
The probability
:<math>
\Pr\left[\bigwedge_{i=1}^n \overline{A_i}\right]>0
</math>
means that the <math>k</math>-CNF <math>\bigwedge_{i=1}^n \overline{A_i}</math> is satisfiable.
 
The satisfiability of <math>k</math>-CNF is a hard problem. In particular, 3SAT (the satisfiability of 3-CNF) is the first '''NP-complete''' problem (the Cook-Levin theorem). Given the widely believed conjecture that '''NP'''<math>\neq</math>'''P''', we do not expect to solve this problem efficiently in general.
 
However, the condition of the Lovász local lemma makes an extra assumption on the degree of the dependency graph. In our model, this means that each clause shares variables with at most <math>d</math> other clauses. We call a <math>k</math>-CNF with this property a <math>k</math>-CNF with bounded degree <math>d</math>.
 
Therefore, proving the Lovász local lemma for the restricted form of events described above can be reduced to the following problem:
;Problem
:Find a condition on <math>k</math> and <math>d</math>, such that any <math>k</math>-CNF with bounded degree <math>d</math> is satisfiable.


In 2009, Moser came up with the following procedure solving the problem. He later generalized the procedure to general forms of events. This not only gives a beautiful constructive proof of the Lovász local lemma, but also provides an efficient randomized algorithm for finding a satisfying assignment for a number of events with bounded dependencies.

Let <math>\phi</math> be a <math>k</math>-CNF of <math>n</math> clauses with bounded degree <math>d</math>, defined on variables <math>X_1,\ldots,X_m</math>. The following procedure finds a satisfying assignment for <math>\phi</math>.


{{Theorem
|Solve(<math>\phi</math>)|
:Pick a random assignment of <math>X_1,\ldots,X_m</math>.
:While there is an unsatisfied clause <math>C</math> in <math>\phi</math>
:: '''Fix'''(<math>C</math>).
}}


The sub-routine '''Fix''' is defined as follows:
{{Theorem
|Fix(<math>C</math>)|
:Replace the variables in <math>v(C)</math> with new random values.
:While there is an unsatisfied clause <math>D</math> with <math>v(C)\cap v(D)\neq \emptyset</math>
:: '''Fix'''(<math>D</math>).
}}
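For concreteness, here is a direct Python transcription of '''Solve''' and '''Fix''' (a minimal sketch; the clause encoding as tuples of (variable, required value) pairs is our own choice for illustration):
<pre>
import random

def solve(clauses, m):
    # A clause is a tuple of literals; a literal (var, want) is
    # satisfied iff x[var] == want.  A clause is an OR of its literals.
    x = [random.randrange(2) for _ in range(m)]

    def satisfied(c):
        return any(x[var] == want for var, want in c)

    def vbl(c):
        return {var for var, _ in c}

    def fix(c):
        # Replace the variables in vbl(c) with new random values.
        for var in vbl(c):
            x[var] = random.randrange(2)
        # While some clause sharing variables with c (including c
        # itself) is unsatisfied, fix it recursively.
        while (bad := next((e for e in clauses
                            if vbl(c) & vbl(e) and not satisfied(e)),
                           None)) is not None:
            fix(bad)

    while (bad := next((e for e in clauses if not satisfied(e)),
                       None)) is not None:
        fix(bad)
    return x

# Example: (X0 or not X1 or X2) and (not X0 or X1 or X3)
phi = [((0, 1), (1, 0), (2, 1)), ((0, 0), (1, 1), (3, 1))]
print(solve(phi, 4))
</pre>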

The procedure looks very simple. It just recursively fixes the unsatisfied clauses by randomly replacing the assignments of their variables.

We then prove that it works.
=== Number of top-level calls of Fix ===
In '''Solve'''(<math>\phi</math>), the subroutine '''Fix'''(<math>C</math>) is called. We now upper bound the number of times it is called (not including the recursive calls).
 
Assume '''Fix'''(<math>C</math>) always terminates.  
:;Observation
::Every clause that was satisfied before '''Fix'''(<math>C</math>) was called will still remain satisfied and <math>C</math> will also be satisfied after '''Fix'''(<math>C</math>) returns.
 
The observation can be proved by induction on the structure of recursion.  Since there are <math>n</math> clauses, '''Solve'''(<math>\phi</math>) makes at most <math>n</math> calls to '''Fix'''.
 
We then prove that '''Fix'''(<math>C</math>) terminates.
 
=== Termination of Fix ===
The idea of the proof is to '''reconstruct''' a random string.
 
Suppose that during the run of '''Solve'''(<math>\phi</math>), the '''Fix''' subroutine is called <math>t</math> times in total (including all the recursive calls).

Let <math>s</math> be the sequence of the random bits used by '''Solve'''(<math>\phi</math>). It is easy to see that the length of <math>s</math> is <math>|s|=m+tk</math>, because the initial random assignment of <math>m</math> variables takes <math>m</math> bits, and each call of '''Fix''' uses <math>k</math> fresh random bits.
 
We then reconstruct <math>s</math> in an alternative way.
 
Recall that '''Solve'''(<math>\phi</math>) calls '''Fix'''(<math>C</math>) at the top level at most <math>n</math> times. Each call of '''Fix'''(<math>C</math>) defines a recursion tree, rooted at clause <math>C</math>, in which each node corresponds to a clause (not necessarily distinct, since a clause might be fixed several times). Therefore, the entire running history of '''Solve'''(<math>\phi</math>) can be described by at most <math>n</math> recursion trees.
 
:;Observation 1
::Fix a <math>\phi</math>. The <math>n</math> recursion trees which capture the total running history of '''Solve'''(<math>\phi</math>) can be encoded in <math>n\log n+t(\log d+O(1))</math> bits.
Each root node corresponds to a clause. There are <math>n</math> clauses in <math>\phi</math>. The <math>n</math> root nodes can be represented in <math>n\log n</math> bits.
 
The clever part is how to encode the branches of the trees. Note that '''Fix'''(<math>C</math>) will call '''Fix'''(<math>D</math>) only for a <math>D</math> that shares variables with <math>C</math>. For a <math>k</math>-CNF with bounded degree <math>d</math>, each clause <math>C</math> can share variables with at most <math>d</math> other clauses. Thus, each branch in the recursion tree can be represented in <math>\log d</math> bits. An extra <math>O(1)</math> bits are needed to denote whether the recursion ends. So in total <math>n\log n+t(\log d+O(1))</math> bits are sufficient to encode all <math>n</math> recursion trees.
 
:;Observation 2
::The random sequence <math>s</math> can be encoded in <math>m+n\log n+t(\log d+O(1))</math> bits.
 
With <math>n\log n+t(\log d+O(1))</math> bits, the structure of all the recursion trees can be encoded. With extra <math>m</math> bits, the final assignment of the <math>m</math> variables is stored.

We then observe that with this information, the sequence of random bits <math>s</math> can be reconstructed backwards from the final assignment.


The key step is that a clause <math>C</math> is only fixed when it is unsatisfied (obvious), and the variables of an unsatisfied clause <math>C</math> have exactly one possible assignment (a clause is an OR of literals, thus has exactly one unsatisfying assignment). Thus, each node in the recursion tree tells us the <math>k</math> random bits of the random sequence <math>s</math> used in the call of '''Fix''' corresponding to that node. Therefore, <math>s</math> can be reconstructed from the final assignment plus the at most <math>n</math> recursion trees, which together can be encoded in at most <math>m+n\log n+t(\log d+O(1))</math> bits.
The following theorem lies at the heart of '''Kolmogorov complexity'''. The theorem states that a random sequence is '''incompressible'''.
{{Theorem
|Theorem (Kolmogorov)|
:For any encoding scheme, with high probability, a random sequence <math>s</math> is encoded in at least <math>|s|</math> bits.
}}
 
Applying the theorem, we have that with high probability,
:<math>m+n\log n+t(\log d+O(1))\ge |s|=m+tk</math>.
Therefore,
:<math>
t(k-O(1)-\log d)\le n\log n.
</math>
In order to bound <math>t</math>, we need
:<math>k-O(1)-\log d>0</math>,
which holds for <math>d< 2^{k-\alpha}</math> for some constant <math>\alpha>0</math>. In fact, in this case <math>t=O(n\log n)</math>, so the running time of the procedure is bounded by a polynomial!
 
=== Back to the local lemma ===
We showed that for <math>d<2^{k-O(1)}</math>, any <math>k</math>-CNF with bounded degree <math>d</math> is satisfiable, and a satisfying assignment can be found in polynomial time with high probability. Now we interpret this in the language of the local lemma.
 
Recall the symmetric version of the local lemma:
{{Theorem
|Theorem (The local lemma: symmetric case)|
:Let <math>A_1,A_2,\ldots,A_n</math> be a set of events, and assume that the following hold:
:#for all <math>1\le i\le n</math>, <math>\Pr[A_i]\le p</math>;
:#the maximum degree of the dependency graph for the events <math>A_1,A_2,\ldots,A_n</math> is <math>d</math>, and
:::<math>\mathrm{e}p(d+1)\le 1</math>.
:Then
::<math>\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]>0</math>.
}}
}}
Suppose the underlying probability space is a number of mutually independent uniform random boolean variables, and the events <math>\overline{A_i}</math> are clauses each defined on <math>k</math> variables. Then
:<math>
p=2^{-k},
</math>
and thus the condition <math>\mathrm{e}p(d+1)\le 1</math> means that
:<math>
d<2^{k}/\mathrm{e},
</math>
which means that Moser's procedure is asymptotically optimal with respect to the degree of dependency.
