Randomized Algorithms (Spring 2010)/The probabilistic method


The Basic Idea

Counting or sampling

Circuit complexity

A boolean function is a function of the form [math]\displaystyle{ f:\{0,1\}^n\rightarrow \{0,1\} }[/math].

Formally, a boolean circuit is a directed acyclic graph. Nodes with indegree zero are input nodes, labeled [math]\displaystyle{ x_1, x_2, \ldots , x_n }[/math]. A circuit has a unique node with outdegree zero, called the output node. Every other node is a gate. There are three types of gates: AND, OR (both with indegree two), and NOT (with indegree one).

Computations in Turing machines can be simulated by circuits, and any boolean function in P can be computed by a circuit with polynomially many gates. Thus, if we can find a function in NP that cannot be computed by any circuit with polynomially many gates, then NP[math]\displaystyle{ \neq }[/math]P.

The following theorem due to Shannon says that functions with exponentially large circuit complexity do exist.

Theorem (Shannon 1949)
There is a boolean function [math]\displaystyle{ f:\{0,1\}^n\rightarrow \{0,1\} }[/math] with circuit complexity greater than [math]\displaystyle{ \frac{2^n}{3n} }[/math].

Proof: There are [math]\displaystyle{ 2^{2^n} }[/math] boolean functions [math]\displaystyle{ f:\{0,1\}^n\rightarrow \{0,1\} }[/math].

Fix an integer [math]\displaystyle{ t }[/math]; we count the number of circuits with [math]\displaystyle{ t }[/math] gates. By De Morgan's laws, we can assume that all NOTs are pushed back to the inputs. Each gate has one of two types (AND or OR) and has two inputs. Each of the inputs to a gate is either a constant 0 or 1, an input variable [math]\displaystyle{ x_i }[/math], an inverted input variable [math]\displaystyle{ \neg x_i }[/math], or the output of another gate; thus, there are at most [math]\displaystyle{ 2+2n+t-1 }[/math] possible gate inputs. It follows that the number of circuits with [math]\displaystyle{ t }[/math] gates is at most [math]\displaystyle{ 2^t(t+2n+1)^{2t} }[/math].

Uniformly choose a boolean function [math]\displaystyle{ f }[/math] at random. Note that each circuit computes exactly one boolean function, while a boolean function may be computed by many circuits. The probability that [math]\displaystyle{ f }[/math] can be computed by some circuit with [math]\displaystyle{ t }[/math] gates is at most

[math]\displaystyle{ \frac{2^t(t+2n+1)^{2t}}{2^{2^n}}. }[/math]

If [math]\displaystyle{ t=2^n/3n }[/math], then

[math]\displaystyle{ \frac{2^t(t+2n+1)^{2t}}{2^{2^n}}=o(1)\lt 1. }[/math]

Therefore, there exists a boolean function [math]\displaystyle{ f }[/math] which cannot be computed by any circuit with [math]\displaystyle{ 2^n/3n }[/math] gates.

[math]\displaystyle{ \square }[/math]

Note that by Shannon's theorem, not only does there exist a boolean function with exponentially large circuit complexity, but in fact almost all boolean functions have exponentially large circuit complexity.
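To get a feel for the sizes involved, here is a small sanity check (our script, not part of the proof), comparing the logarithm of the bound [math]\displaystyle{ 2^t(t+2n+1)^{2t} }[/math] on the number of circuits with [math]\displaystyle{ t=2^n/3n }[/math] gates against the logarithm [math]\displaystyle{ 2^n }[/math] of the number of boolean functions:

import math

def log2_num_circuits(n, t):
    # log2 of the bound 2^t * (t + 2n + 1)^(2t) on circuits with t gates
    return t + 2 * t * math.log2(t + 2 * n + 1)

for n in [8, 12, 16, 20]:
    t = 2 ** n // (3 * n)
    # log2 of the number of boolean functions on n inputs is 2^n
    print(n, round(log2_num_circuits(n, t)), 2 ** n)

For each [math]\displaystyle{ n }[/math] the first quantity is far below the second, which is exactly the [math]\displaystyle{ o(1) }[/math] gap used in the proof.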


Ramsey number

Recall the Ramsey theorem which states that in a meeting of at least six people, there are either three people knowing each other or three people not knowing each other. In graph theoretical terms, this means that no matter how we color the edges of [math]\displaystyle{ K_6 }[/math] (the complete graph on six vertices), there must be a monochromatic [math]\displaystyle{ K_3 }[/math] (a triangle whose edges have the same color).

Generally, the Ramsey number [math]\displaystyle{ R(k,\ell) }[/math] is the smallest integer [math]\displaystyle{ n }[/math] such that in any two-coloring of the edges of a complete graph on [math]\displaystyle{ n }[/math] vertices [math]\displaystyle{ K_n }[/math] by red and blue, either there is a red [math]\displaystyle{ K_k }[/math] or there is a blue [math]\displaystyle{ K_\ell }[/math].

Ramsey showed in 1929 that [math]\displaystyle{ R(k,\ell) }[/math] is finite for any [math]\displaystyle{ k }[/math] and [math]\displaystyle{ \ell }[/math]. It is extremely hard to compute the exact value of [math]\displaystyle{ R(k,\ell) }[/math]. Here we give a lower bound of [math]\displaystyle{ R(k,k) }[/math] by the probabilistic method.

Theorem (Erdős 1947)
If [math]\displaystyle{ {n\choose k}\cdot 2^{1-{k\choose 2}}\lt 1 }[/math] then it is possible to color the edges of [math]\displaystyle{ K_n }[/math] with two colors so that there is no monochromatic [math]\displaystyle{ K_k }[/math] subgraph.

Proof: Consider a random two-coloring of edges of [math]\displaystyle{ K_n }[/math] obtained as follows:

  • For each edge of [math]\displaystyle{ K_n }[/math], independently flip a fair coin to decide the color of the edge.

For any fixed set [math]\displaystyle{ S }[/math] of [math]\displaystyle{ k }[/math] vertices, let [math]\displaystyle{ \mathcal{E}_S }[/math] be the event that the [math]\displaystyle{ K_k }[/math] subgraph induced by [math]\displaystyle{ S }[/math] is monochromatic. There are [math]\displaystyle{ {k\choose 2} }[/math] many edges in [math]\displaystyle{ K_k }[/math], therefore

[math]\displaystyle{ \Pr[\mathcal{E}_S]=2\cdot 2^{-{k\choose 2}}=2^{1-{k\choose 2}}. }[/math]

Since there are [math]\displaystyle{ {n\choose k} }[/math] possible choices of [math]\displaystyle{ S }[/math], by the union bound

[math]\displaystyle{ \Pr[\exists S, \mathcal{E}_S]\le {n\choose k}\cdot\Pr[\mathcal{E}_S]={n\choose k}\cdot 2^{1-{k\choose 2}}. }[/math]

By the assumption, [math]\displaystyle{ {n\choose k}\cdot 2^{1-{k\choose 2}}\lt 1 }[/math], thus there exists a two-coloring under which none of the events [math]\displaystyle{ \mathcal{E}_S }[/math] occurs, which means there is no monochromatic [math]\displaystyle{ K_k }[/math] subgraph.

[math]\displaystyle{ \square }[/math]

For [math]\displaystyle{ k\ge 3 }[/math], take [math]\displaystyle{ n=\lfloor2^{k/2}\rfloor }[/math]; then

[math]\displaystyle{ \begin{align} {n\choose k}\cdot 2^{1-{k\choose 2}} &\lt \frac{n^k}{k!}\cdot\frac{2^{1+\frac{k}{2}}}{2^{k^2/2}}\\ &\le \frac{2^{k^2/2}}{k!}\cdot\frac{2^{1+\frac{k}{2}}}{2^{k^2/2}}\\ &= \frac{2^{1+\frac{k}{2}}}{k!}\\ &\lt 1. \end{align} }[/math]

By the above theorem, there exists a two-coloring of [math]\displaystyle{ K_n }[/math] with no monochromatic [math]\displaystyle{ K_k }[/math]. Therefore, the Ramsey number [math]\displaystyle{ R(k,k)\gt \lfloor2^{k/2}\rfloor }[/math] for all [math]\displaystyle{ k\ge 3 }[/math].

Note that for sufficiently large [math]\displaystyle{ k }[/math], if [math]\displaystyle{ n= \lfloor 2^{k/2}\rfloor }[/math], then the probability that there exists a monochromatic [math]\displaystyle{ K_k }[/math] is bounded by

[math]\displaystyle{ {n\choose k}\cdot 2^{1-{k\choose 2}} \lt \frac{2^{1+\frac{k}{2}}}{k!} \ll 1, }[/math]

which means that a random two-coloring of [math]\displaystyle{ K_n }[/math] is very likely not to contain a monochromatic [math]\displaystyle{ K_{2\log n} }[/math]. This gives us a very simple randomized algorithm for finding a two-coloring of [math]\displaystyle{ K_n }[/math] without a monochromatic [math]\displaystyle{ K_{2\log n} }[/math].
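A minimal sketch of this algorithm in Python (our code; the brute-force check over all [math]\displaystyle{ k }[/math]-subsets is only feasible for small [math]\displaystyle{ n }[/math]):

import random
from itertools import combinations

def random_two_coloring(n):
    # independently flip a fair coin for each edge of K_n
    return {e: random.randint(0, 1) for e in combinations(range(n), 2)}

def has_monochromatic_clique(coloring, n, k):
    # brute force: look for a k-subset whose induced edges all share one color
    for S in combinations(range(n), k):
        if len({coloring[e] for e in combinations(S, 2)}) == 1:
            return True
    return False

n, k = 16, 8  # n = 2^(k/2)
print(has_monochromatic_clique(random_two_coloring(n), n, k))
# False with probability at least 1 - C(16,8) * 2^(1-28) > 0.999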


Blocking number

Let [math]\displaystyle{ S }[/math] be a set. Let [math]\displaystyle{ 2^{S}=\{A\mid A\subseteq S\} }[/math] be the power set of [math]\displaystyle{ S }[/math], and let [math]\displaystyle{ {S\choose k}=\{A\mid A\subseteq S\mbox{ and }|A|=k\} }[/math] be the family of all [math]\displaystyle{ k }[/math]-subsets of [math]\displaystyle{ S }[/math].

We call [math]\displaystyle{ \mathcal{F} }[/math] a set family (or a set system) with ground set [math]\displaystyle{ S }[/math] if [math]\displaystyle{ \mathcal{F}\subseteq 2^{S} }[/math]. The members of [math]\displaystyle{ \mathcal{F} }[/math] are subsets of [math]\displaystyle{ S }[/math].

Given a set family [math]\displaystyle{ \mathcal{F} }[/math] with ground set [math]\displaystyle{ S }[/math], a set [math]\displaystyle{ T\subseteq S }[/math] is a blocking set of [math]\displaystyle{ \mathcal{F} }[/math] if every [math]\displaystyle{ A\in\mathcal{F} }[/math] has [math]\displaystyle{ A\cap T\neq \emptyset }[/math], i.e. [math]\displaystyle{ T }[/math] intersects (blocks) every member set of [math]\displaystyle{ \mathcal{F} }[/math].

Theorem
Given a set family [math]\displaystyle{ \mathcal{F}\subseteq{S\choose k} }[/math], where [math]\displaystyle{ m=|\mathcal{F}| }[/math] and [math]\displaystyle{ n=|S| }[/math], [math]\displaystyle{ \mathcal{F} }[/math] has a blocking set of size [math]\displaystyle{ \left\lceil\frac{n\ln m}{k}\right\rceil }[/math].

Proof: Let [math]\displaystyle{ \tau=\left\lceil\frac{n\ln m}{k}\right\rceil }[/math]. Let [math]\displaystyle{ T }[/math] be a set chosen uniformly at random from [math]\displaystyle{ {S\choose \tau} }[/math]. We show that [math]\displaystyle{ T }[/math] is a blocking set of [math]\displaystyle{ \mathcal{F} }[/math] with positive probability.

Fix any [math]\displaystyle{ A\in\mathcal{F} }[/math]. Recall that [math]\displaystyle{ \mathcal{F}\subseteq{S\choose k} }[/math], thus [math]\displaystyle{ |A|=k }[/math]. Then

[math]\displaystyle{ \begin{align} \Pr[A\cap T=\emptyset] &= \frac{\left|{S\setminus A\choose \tau}\right|}{\left|{S\choose \tau}\right|}\\ &= \frac{{n-k\choose \tau}}{{n\choose\tau}}\\ &= \frac{(n-k)\cdot(n-k-1)\cdots(n-k-\tau+1)}{n\cdot(n-1)\cdots(n-\tau+1)}\\ &\lt \left(1-\frac{k}{n}\right)^{\tau}\\ &\le \exp\left(-\frac{k\tau}{n}\right)\\ &\le \frac{1}{m}. \end{align} }[/math]

By the union bound, the probability that there exists an [math]\displaystyle{ A\in\mathcal{F} }[/math] that misses [math]\displaystyle{ T }[/math] is

[math]\displaystyle{ \Pr[\exists A\in\mathcal{F}, A\cap T=\emptyset]\le m\Pr[A\cap T=\emptyset]\lt m\cdot\frac{1}{m}=1. }[/math]

Thus, the probability that [math]\displaystyle{ T }[/math] is a blocking set is

[math]\displaystyle{ \Pr[\forall A\in\mathcal{F}, A\cap T\neq\emptyset]\gt 0. }[/math]

There exists a blocking set of size [math]\displaystyle{ \tau=\left\lceil\frac{n\ln m}{k}\right\rceil }[/math].

[math]\displaystyle{ \square }[/math]

The theorem also suggests a randomized algorithm. In order to make the algorithm efficient, we relax the size of [math]\displaystyle{ T }[/math] to [math]\displaystyle{ \tau=\frac{2n\ln m}{k} }[/math]. Uniformly choose [math]\displaystyle{ \tau }[/math] elements from [math]\displaystyle{ S }[/math] to form the set [math]\displaystyle{ T }[/math]; by the above analysis, the probability that [math]\displaystyle{ T }[/math] is NOT a blocking set is at most

[math]\displaystyle{ m\exp\left(-\frac{k\tau}{n}\right)=m\exp(-2\ln m)=\frac{1}{m}. }[/math]

Thus, a blocking set is found with high probability.
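A sketch of this algorithm in Python (the names are ours); it draws [math]\displaystyle{ \tau }[/math] elements uniformly without replacement and verifies the blocking property:

import math
import random

def random_blocking_set(S, F):
    # S: list of ground-set elements; F: list of k-subsets of S, given as sets
    n, m = len(S), len(F)
    k = len(next(iter(F)))
    tau = min(n, math.ceil(2 * n * math.log(m) / k))
    T = set(random.sample(S, tau))
    # T blocks F iff it intersects every member set
    return T if all(A & T for A in F) else None

Since the failure probability is at most [math]\displaystyle{ 1/m }[/math], retrying on a None result yields a blocking set after a constant expected number of rounds.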

Linearity of expectation

Maximum cut

Given an undirected graph [math]\displaystyle{ G(V,E) }[/math], a set [math]\displaystyle{ C }[/math] of edges of [math]\displaystyle{ G }[/math] is called a cut if [math]\displaystyle{ G }[/math] is disconnected after removing the edges in [math]\displaystyle{ C }[/math]. We can represent a cut by [math]\displaystyle{ c(S,T) }[/math] where [math]\displaystyle{ (S,T) }[/math] is a bipartition of the vertex set [math]\displaystyle{ V }[/math], and [math]\displaystyle{ c(S,T)=\{uv\in E\mid u\in S,v\in T\} }[/math] is the set of edges crossing between [math]\displaystyle{ S }[/math] and [math]\displaystyle{ T }[/math].

We have seen how to compute a min-cut: either by a deterministic max-flow algorithm, or by Karger's randomized algorithm. On the other hand, max-cut is hard to compute, because it is NP-complete. Actually, the weighted version of max-cut is among Karp's 21 NP-complete problems.

We now show by the probabilistic method that a max-cut always has at least half the edges.

Theorem
Given an undirected graph [math]\displaystyle{ G }[/math] with [math]\displaystyle{ n }[/math] vertices and [math]\displaystyle{ m }[/math] edges, there is a cut of size at least [math]\displaystyle{ \frac{m}{2} }[/math].

Proof: Enumerate the vertices in an arbitrary order. Partition the vertex set [math]\displaystyle{ V }[/math] into two disjoint sets [math]\displaystyle{ S }[/math] and [math]\displaystyle{ T }[/math] as follows.

For each vertex [math]\displaystyle{ v\in V }[/math],
  • independently choose one of [math]\displaystyle{ S }[/math] and [math]\displaystyle{ T }[/math] with equal probability, and let [math]\displaystyle{ v }[/math] join the chosen set.

For each vertex [math]\displaystyle{ v\in V }[/math], let [math]\displaystyle{ X_v\in\{S,T\} }[/math] be the random variable which represents the set that [math]\displaystyle{ v }[/math] joins. For each edge [math]\displaystyle{ uv\in E }[/math], let [math]\displaystyle{ Y_{uv} }[/math] be the 0-1 random variable which indicates whether [math]\displaystyle{ uv }[/math] crosses between [math]\displaystyle{ S }[/math] and [math]\displaystyle{ T }[/math]. Clearly,

[math]\displaystyle{ \Pr[Y_{uv}=1]=\Pr[X_u\neq X_v]=\frac{1}{2}. }[/math]

The size of [math]\displaystyle{ c(S,T) }[/math] is given by [math]\displaystyle{ Y=\sum_{uv\in E}Y_{uv} }[/math]. By the linearity of expectation,

[math]\displaystyle{ \mathbf{E}[Y]=\sum_{uv\in E}\mathbf{E}[Y_{uv}]=\sum_{uv\in E}\Pr[Y_{uv}=1]=\frac{m}{2}. }[/math]

Therefore, there exists a bipartition [math]\displaystyle{ (S,T) }[/math] of [math]\displaystyle{ V }[/math] such that [math]\displaystyle{ |c(S,T)|\ge\frac{m}{2} }[/math], i.e. there exists a cut of [math]\displaystyle{ G }[/math] which contains at least [math]\displaystyle{ \frac{m}{2} }[/math] edges.

[math]\displaystyle{ \square }[/math]
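The proof is itself an algorithm: throw each vertex onto one of the two sides by a fair coin flip. A sketch in Python (our code):

import random

def random_cut(n, edges):
    # each vertex independently joins one of two sides with equal probability
    side = [random.randint(0, 1) for _ in range(n)]
    # the cut consists of the edges whose endpoints landed on different sides
    return sum(1 for u, v in edges if side[u] != side[v])

# E[random_cut] = m/2, so repeating and keeping the best run
# quickly finds a cut with at least m/2 edges
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(max(random_cut(4, edges) for _ in range(20)))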


Maximum satisfiability

Suppose that we have a number of boolean variables [math]\displaystyle{ x_1,x_2,\ldots,\in\{\mathrm{true},\mathrm{false}\} }[/math]. A literal is either a variable [math]\displaystyle{ x_i }[/math] itself or its negation [math]\displaystyle{ \neg x_i }[/math]. A logic expression is in conjunctive normal form (CNF) if it is written as the conjunction (AND) of a set of clauses, where each clause is a disjunction (OR) of literals. For example:

[math]\displaystyle{ (x_1\vee \neg x_2 \vee \neg x_3)\wedge (\neg x_1\vee \neg x_3)\wedge (x_1\vee x_2\vee x_4)\wedge (x_4\vee \neg x_3)\wedge (x_4\vee \neg x_1). }[/math]

The satisfiability (SAT) problem asks whether a CNF is satisfiable, i.e. whether there exists an assignment of the variables to true and false so that all clauses are true. The maximum satisfiability problem (MAXSAT) is the optimization version of SAT, which asks for an assignment maximizing the number of satisfied clauses.

SAT is the first problem known to be NP-complete (the Cook-Levin theorem), and MAXSAT is NP-hard as well. We now show that there always exists a reasonably good truth assignment, namely one satisfying at least half the clauses.

Theorem
For any set of [math]\displaystyle{ m }[/math] clauses, there is a truth assignment that satisfies at least [math]\displaystyle{ \frac{m}{2} }[/math] clauses.

Proof: For each variable, independently assign a random value in [math]\displaystyle{ \{\mathrm{true},\mathrm{false}\} }[/math] with equal probability. For the [math]\displaystyle{ i }[/math]th clause, let [math]\displaystyle{ X_i }[/math] be the random variable which indicates whether the [math]\displaystyle{ i }[/math]th clause is satisfied. Suppose that there are [math]\displaystyle{ k }[/math] literals in the clause. The probability that the clause is satisfied is

[math]\displaystyle{ \Pr[X_i=1]=1-2^{-k}\ge\frac{1}{2} }[/math].

Let [math]\displaystyle{ X=\sum_{i=1}^m X_i }[/math] be the number of satisfied clauses. By the linearity of expectation,

[math]\displaystyle{ \mathbf{E}[X]=\sum_{i=1}^{m}\mathbf{E}[X_i]\ge \frac{m}{2}. }[/math]

Therefore, there exists an assignment such that at least [math]\displaystyle{ \frac{m}{2} }[/math] clauses are satisfied.

[math]\displaystyle{ \square }[/math]
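The corresponding randomized algorithm is equally simple. A sketch (our encoding: a clause is a list of nonzero integers, with [math]\displaystyle{ i }[/math] standing for [math]\displaystyle{ x_i }[/math] and [math]\displaystyle{ -i }[/math] for [math]\displaystyle{ \neg x_i }[/math]):

import random

def random_assignment_satisfied(num_vars, clauses):
    # assign each variable true/false independently and uniformly
    value = {i: random.choice([True, False]) for i in range(1, num_vars + 1)}
    # a clause (an OR of literals) is satisfied if some literal is true
    return sum(1 for c in clauses if any(value[abs(l)] == (l > 0) for l in c))

# the example CNF from above; the expected count is at least 5/2
clauses = [[1, -2, -3], [-1, -3], [1, 2, 4], [4, -3], [4, -1]]
print(random_assignment_satisfied(4, clauses))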

Alterations

Independent sets

An independent set of a graph is a set of vertices with no edges between them. The following theorem gives a lower bound on the size of the largest independent set.

Theorem
Let [math]\displaystyle{ G(V,E) }[/math] be a graph on [math]\displaystyle{ n }[/math] vertices with [math]\displaystyle{ m }[/math] edges. Then [math]\displaystyle{ G }[/math] has an independent set with at least [math]\displaystyle{ \frac{n^2}{4m} }[/math] vertices.

Proof: Let [math]\displaystyle{ S }[/math] be a set of vertices constructed as follows:

For each vertex [math]\displaystyle{ v\in V }[/math]:
  • [math]\displaystyle{ v }[/math] is included in [math]\displaystyle{ S }[/math] independently with probability [math]\displaystyle{ p }[/math],

where [math]\displaystyle{ p\in[0,1] }[/math] is to be determined later.

Let [math]\displaystyle{ X=|S| }[/math]. It is obvious that [math]\displaystyle{ \mathbf{E}[X]=np }[/math].

For each edge [math]\displaystyle{ e=uv\in E }[/math], let [math]\displaystyle{ Y_{e} }[/math] be the random variable which indicates whether both endpoints of [math]\displaystyle{ e }[/math] are in [math]\displaystyle{ S }[/math]. Then

[math]\displaystyle{ \mathbf{E}[Y_{uv}]=\Pr[u\in S\wedge v\in S]=p^2. }[/math]

Let [math]\displaystyle{ Y }[/math] be the number of edges in the subgraph of [math]\displaystyle{ G }[/math] induced by [math]\displaystyle{ S }[/math]. It holds that [math]\displaystyle{ Y=\sum_{e\in E}Y_e }[/math]. By linearity of expectation,

[math]\displaystyle{ \mathbf{E}[Y]=\sum_{e\in E}\mathbf{E}[Y_e]=mp^2 }[/math].

Note that although [math]\displaystyle{ S }[/math] is not necessarily an independent set, it can be modified into one: for each edge [math]\displaystyle{ e }[/math] of the induced subgraph [math]\displaystyle{ G(S) }[/math], we delete one of the endpoints of [math]\displaystyle{ e }[/math] from [math]\displaystyle{ S }[/math]. Let [math]\displaystyle{ S^* }[/math] be the resulting set. It is obvious that [math]\displaystyle{ S^* }[/math] is an independent set, since there is no edge left in the induced subgraph [math]\displaystyle{ G(S^*) }[/math].

Since there are [math]\displaystyle{ Y }[/math] edges in [math]\displaystyle{ G(S) }[/math], at most [math]\displaystyle{ Y }[/math] vertices are deleted from [math]\displaystyle{ S }[/math] to obtain [math]\displaystyle{ S^* }[/math]. Therefore, [math]\displaystyle{ |S^*|\ge X-Y }[/math]. By linearity of expectation,

[math]\displaystyle{ \mathbf{E}[|S^*|]\ge\mathbf{E}[X-Y]=\mathbf{E}[X]-\mathbf{E}[Y]=np-mp^2. }[/math]

The expectation is maximized when [math]\displaystyle{ p=\frac{n}{2m} }[/math], thus

[math]\displaystyle{ \mathbf{E}[|S^*|]\ge n\cdot\frac{n}{2m}-m\left(\frac{n}{2m}\right)^2=\frac{n^2}{4m}. }[/math]

There exists an independent set which contains at least [math]\displaystyle{ \frac{n^2}{4m} }[/math] vertices.

[math]\displaystyle{ \square }[/math]

The proof actually proposes a randomized algorithm for constructing a large independent set:

Given a graph on [math]\displaystyle{ n }[/math] vertices with [math]\displaystyle{ m }[/math] edges, let [math]\displaystyle{ d=\frac{2m}{n} }[/math] be the average degree.
  1. For each vertex [math]\displaystyle{ v\in V }[/math], [math]\displaystyle{ v }[/math] is included in [math]\displaystyle{ S }[/math] independently with probability [math]\displaystyle{ \frac{1}{d} }[/math].
  2. For each remaining edge in the induced subgraph [math]\displaystyle{ G(S) }[/math], remove one of the endpoints from [math]\displaystyle{ S }[/math].

Let [math]\displaystyle{ S^* }[/math] be the resulting set. We have shown that [math]\displaystyle{ S^* }[/math] is an independent set and [math]\displaystyle{ \mathbf{E}[|S^*|]\ge\frac{n^2}{4m} }[/math].
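A sketch of the algorithm in Python (our code, assuming average degree [math]\displaystyle{ d\ge 1 }[/math] so that [math]\displaystyle{ 1/d }[/math] is a valid probability):

import random

def large_independent_set(n, edges):
    d = 2 * len(edges) / n  # average degree, assumed >= 1
    # step 1: keep each vertex independently with probability 1/d
    S = {v for v in range(n) if random.random() < 1 / d}
    # step 2: for each edge remaining inside S, delete one endpoint
    for u, v in edges:
        if u in S and v in S:
            S.discard(v)
    return S  # an independent set with E[|S|] >= n^2/(4m)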

Intersecting families

A family [math]\displaystyle{ \mathcal{F}\subseteq 2^S }[/math] is an intersecting family if for any [math]\displaystyle{ A,B\in\mathcal{F} }[/math] it holds that [math]\displaystyle{ A\cap B\neq\emptyset }[/math].

Suppose that [math]\displaystyle{ n\ge 2k }[/math]. For [math]\displaystyle{ \mathcal{F}\subseteq{S\choose k} }[/math], where [math]\displaystyle{ |S|=n }[/math], we can let all [math]\displaystyle{ A\in \mathcal{F} }[/math] contain one common element [math]\displaystyle{ a\in S }[/math], with [math]\displaystyle{ A-\{a\} }[/math] ranging over all [math]\displaystyle{ {n-1\choose k-1} }[/math] possible combinations of [math]\displaystyle{ (k-1) }[/math] elements of [math]\displaystyle{ S-\{a\} }[/math]. This gives us an intersecting family of size [math]\displaystyle{ |\mathcal{F}|={n-1\choose k-1} }[/math].
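For instance, the following snippet (ours) builds this "star" family for [math]\displaystyle{ n=6 }[/math], [math]\displaystyle{ k=3 }[/math] and checks both properties:

from itertools import combinations
from math import comb

n, k, a = 6, 3, 0  # a is the common element
F = [{a} | set(rest) for rest in combinations(range(1, n), k - 1)]
assert all(A & B for A in F for B in F)  # the family is intersecting
assert len(F) == comb(n - 1, k - 1)      # its size is C(5,2) = 10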

The following theorem says that this is the largest possible cardinality an intersecting [math]\displaystyle{ \mathcal{F} }[/math] can achieve. The theorem was first proved by Erdős, Ko, and Rado in 1938, but published only 23 years later. It is a fundamental result in extremal set theory, which studies the maximum (or minimum) possible cardinality of a set system satisfying certain structural assumptions. In this example, the structural assumption is that the family is intersecting.

Here we present a probabilistic proof by Katona.

Theorem (Erdős-Ko-Rado 1961)
Let [math]\displaystyle{ \mathcal{F}\subseteq{S\choose k} }[/math], where [math]\displaystyle{ |S|=n }[/math] and [math]\displaystyle{ n\ge 2k }[/math]. If [math]\displaystyle{ \mathcal{F} }[/math] is an intersecting family then [math]\displaystyle{ |\mathcal{F}|\le{n-1\choose k-1} }[/math].

Proof (due to Katona 1972).

Without loss of generality, let [math]\displaystyle{ S=[n] }[/math]. For [math]\displaystyle{ i\in[n] }[/math], let [math]\displaystyle{ A_i=\{(i+j)\bmod n\mid j\in[k]\} }[/math]. Then we make the following claim.

Claim 1: [math]\displaystyle{ \mathcal{F} }[/math] can contain at most [math]\displaystyle{ k }[/math] many [math]\displaystyle{ A_i }[/math].

To prove the claim, fix any [math]\displaystyle{ A_i\in\mathcal{F} }[/math]. The other [math]\displaystyle{ A_j }[/math] intersecting [math]\displaystyle{ A_i }[/math] can be partitioned into [math]\displaystyle{ k-1 }[/math] pairs [math]\displaystyle{ \{A_{i-\ell},A_{i+k-\ell}\} }[/math] for [math]\displaystyle{ 1\le \ell\le k-1 }[/math] (indices mod [math]\displaystyle{ n }[/math]), and since [math]\displaystyle{ n\ge 2k }[/math], the two sets in each pair are disjoint. As [math]\displaystyle{ \mathcal{F} }[/math] is intersecting, it can contain at most one set from each pair, hence at most [math]\displaystyle{ k }[/math] many [math]\displaystyle{ A_i }[/math] in total.

Now we prove the Erdős-Ko-Rado theorem. Let a permutation [math]\displaystyle{ \sigma }[/math] of [math]\displaystyle{ [n] }[/math] and an integer [math]\displaystyle{ i\in[n] }[/math] be chosen uniformly and independently at random. Let

[math]\displaystyle{ R=\{\sigma((i+j)\bmod n)\mid j\in[k]\}, \quad\mbox{ or equivalently }R=\sigma(A_i) }[/math].

By Claim 1, for any fixed permutation [math]\displaystyle{ \sigma }[/math], the family [math]\displaystyle{ \mathcal{F} }[/math] can contain at most [math]\displaystyle{ k }[/math] of the sets [math]\displaystyle{ \sigma(A_i) }[/math], thus conditioning on any particular [math]\displaystyle{ \sigma }[/math], [math]\displaystyle{ \Pr[R\in\mathcal{F}\mid \sigma]\le\frac{k}{n} }[/math]. Hence

[math]\displaystyle{ \Pr[R\in\mathcal{F}]\le\frac{k}{n}. }[/math]

On the other hand, by our construction, [math]\displaystyle{ R }[/math] is uniformly chosen from [math]\displaystyle{ {S\choose k} }[/math], thus

[math]\displaystyle{ \Pr[R\in\mathcal{F}]=\frac{|\mathcal{F}|}{{n\choose k}}. }[/math]

Therefore, [math]\displaystyle{ |\mathcal{F}|\le\frac{k}{n}{n\choose k}={n-1\choose k-1}. }[/math]

[math]\displaystyle{ \square }[/math]

The Lovász Local Lemma

Consider a set of "bad" events [math]\displaystyle{ A_1,A_2,\ldots,A_n }[/math]. Suppose that [math]\displaystyle{ \Pr[A_i]\le p }[/math] for all [math]\displaystyle{ 1\le i\le n }[/math]. We want to show that there is a situation in which none of the bad events occurs. By the probabilistic method, it suffices to prove that

[math]\displaystyle{ \Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]\gt 0. }[/math]
Case 1: mutually independent events.

If all the bad events [math]\displaystyle{ A_1,A_2,\ldots,A_n }[/math] are mutually independent, then

[math]\displaystyle{ \Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]\ge(1-p)^n\gt 0, }[/math]

for any [math]\displaystyle{ p\lt 1 }[/math].

Case 2: arbitrarily dependent events.

On the other hand, if we put no assumption on the dependencies between the events, then by the union bound (which holds unconditionally),

[math]\displaystyle{ \Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]=1-\Pr\left[\bigvee_{i=1}^n A_i\right]\ge 1-np, }[/math]

which is not an interesting bound for [math]\displaystyle{ p\ge\frac{1}{n} }[/math]. If we make no further assumption on the dependencies between the events, this bound is tight.

Example
Consider a ball thrown uniformly at random into one of [math]\displaystyle{ (n+1) }[/math] bins. For [math]\displaystyle{ 1\le i\le n }[/math], let the "bad" event [math]\displaystyle{ A_i }[/math] be that the ball falls into the [math]\displaystyle{ i }[/math]th bin. The only good event is that the ball falls into the [math]\displaystyle{ (n+1) }[/math]th bin. Clearly, [math]\displaystyle{ \Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]=1-n\cdot\frac{1}{n+1}=\frac{1}{n+1} }[/math]. Thus the above union bound is tight.

This example shows that dependencies between the events could cause troubles.


We would like to know what is going on between the two extreme cases: mutually independent events, and arbitrarily dependent events. The Lovász local lemma provides such a tool.

The local lemma

The local lemma is a powerful tool for showing the possibility of rare events under limited dependencies. The structure of dependencies between a set of events is described by a dependency graph.

Definition
Let [math]\displaystyle{ A_1,A_2,\ldots,A_n }[/math] be a set of events. A graph [math]\displaystyle{ D=(V,E) }[/math] on the set of vertices [math]\displaystyle{ V=\{1,2,\ldots,n\} }[/math] is called a dependency graph for the events [math]\displaystyle{ A_1,\ldots,A_n }[/math] if for each [math]\displaystyle{ i }[/math], [math]\displaystyle{ 1\le i\le n }[/math], the event [math]\displaystyle{ A_i }[/math] is mutually independent of all the events [math]\displaystyle{ \{A_j\mid (i,j)\not\in E\} }[/math].
Example
Let [math]\displaystyle{ X_1,X_2,\ldots,X_m }[/math] be a set of mutually independent random variables. Each event [math]\displaystyle{ A_i }[/math] is a predicate defined on a number of variables among [math]\displaystyle{ X_1,X_2,\ldots,X_m }[/math]. Let [math]\displaystyle{ v(A_i) }[/math] be the unique smallest set of variables which determine [math]\displaystyle{ A_i }[/math]. The dependency graph [math]\displaystyle{ D=(V,E) }[/math] is defined by
[math]\displaystyle{ (i,j)\in E }[/math] iff [math]\displaystyle{ v(A_i)\cap v(A_j)\neq \emptyset }[/math].

This construction gives a general framework for probability spaces with limited dependencies, and it is central to the constructive proof of the Lovász local lemma. In this example, each event is a predicate of variables, and two events are adjacent in the dependency graph iff they depend on some common variables.
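Building the dependency graph in this framework is mechanical; a sketch (our code), where each event is represented by its set [math]\displaystyle{ v(A_i) }[/math] of variable indices:

from itertools import combinations

def dependency_graph(v):
    # v: list of variable sets; (i, j) is an edge iff v[i] and v[j] share a variable
    return {(i, j) for i, j in combinations(range(len(v)), 2) if v[i] & v[j]}

print(dependency_graph([{1, 2}, {2, 3}, {4}]))  # {(0, 1)}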


The following lemma, known as the Lovász local lemma, first proved by Erdős and Lovász in 1975, is an extremely powerful tool, as it supplies a way for dealing with rare events.

Theorem (The local lemma: general case)
Let [math]\displaystyle{ A_1,A_2,\ldots,A_n }[/math] be a set of events. Suppose that [math]\displaystyle{ D=(V,E) }[/math] is a dependency graph for the events and suppose there are real numbers [math]\displaystyle{ x_1,x_2,\ldots, x_n }[/math] such that [math]\displaystyle{ 0\le x_i\lt 1 }[/math] and for all [math]\displaystyle{ 1\le i\le n }[/math],
[math]\displaystyle{ \Pr[A_i]\le x_i\prod_{(i,j)\in E}(1-x_j) }[/math].
Then
[math]\displaystyle{ \Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]\ge\prod_{i=1}^n(1-x_i) }[/math].

The following is a special case, the symmetric version of the Lovász local lemma.

Theorem (The local lemma: symmetric case)
Let [math]\displaystyle{ A_1,A_2,\ldots,A_n }[/math] be a set of events, and assume that the following hold:
  1. for all [math]\displaystyle{ 1\le i\le n }[/math], [math]\displaystyle{ \Pr[A_i]\le p }[/math];
  2. the maximum degree of the dependency graph for the events [math]\displaystyle{ A_1,A_2,\ldots,A_n }[/math] is [math]\displaystyle{ d }[/math], and
[math]\displaystyle{ ep(d+1)\le 1 }[/math].
Then
[math]\displaystyle{ \Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]\gt 0 }[/math].

The original proof of the local lemma is by induction. Here we will present a constructive proof for a special case, which is more algorithmic than the original proof. This proof is due to Moser, first presented in his talk at STOC 2009; a generalized version, joint with Tardos, appeared in JACM 2010.

Moser's proof

We consider a restricted case.

Let [math]\displaystyle{ X_1,X_2,\ldots,X_m\in\{\mathrm{true},\mathrm{false}\} }[/math] be a set of mutually independent random variables which assume boolean values. Each event [math]\displaystyle{ A_i }[/math] is an AND of at most [math]\displaystyle{ k }[/math] literals ([math]\displaystyle{ X_i }[/math] or [math]\displaystyle{ \neg X_i }[/math]). Let [math]\displaystyle{ v(A_i) }[/math] be the set of the [math]\displaystyle{ k }[/math] variables that [math]\displaystyle{ A_i }[/math] depends on. The probability that none of the bad events occurs is

[math]\displaystyle{ \Pr\left[\bigwedge_{i=1}^n \overline{A_i}\right]. }[/math]

In this particular model, the dependency graph [math]\displaystyle{ D=(V,E) }[/math] is defined as that [math]\displaystyle{ (i,j)\in E }[/math] iff [math]\displaystyle{ v(A_i)\cap v(A_j)\neq \emptyset }[/math].

Observe that [math]\displaystyle{ \overline{A_i} }[/math] is a clause (an OR of literals). Thus, [math]\displaystyle{ \bigwedge_{i=1}^n \overline{A_i} }[/math] is a [math]\displaystyle{ k }[/math]-CNF, a CNF in which each clause depends on at most [math]\displaystyle{ k }[/math] variables. The statement

[math]\displaystyle{ \Pr\left[\bigwedge_{i=1}^n \overline{A_i}\right]\gt 0 }[/math]

means that the [math]\displaystyle{ k }[/math]-CNF [math]\displaystyle{ \bigwedge_{i=1}^n \overline{A_i} }[/math] is satisfiable.

The satisfiability of [math]\displaystyle{ k }[/math]-CNF is a hard problem. In particular, 3SAT (the satisfiability of 3-CNF) is NP-complete (the Cook-Levin theorem). Given the current belief that NP[math]\displaystyle{ \neq }[/math]P, we do not expect to solve this problem in general.

However, the Lovász local lemma comes with an extra assumption on the degree of the dependency graph. In our model, this means that each clause shares variables with at most [math]\displaystyle{ d }[/math] other clauses. We call a [math]\displaystyle{ k }[/math]-CNF with this property a [math]\displaystyle{ k }[/math]-CNF with bounded degree [math]\displaystyle{ d }[/math].

Therefore, proving the Lovász local lemma for the restricted form of events described above reduces to the following problem:

Problem
Find a condition on [math]\displaystyle{ k }[/math] and [math]\displaystyle{ d }[/math], such that any [math]\displaystyle{ k }[/math]-CNF with bounded degree [math]\displaystyle{ d }[/math] is satisfiable.

In 2009, Moser came up with the following procedure solving the problem. He later generalized the procedure to events of general forms. This not only gives a beautiful constructive proof of the Lovász local lemma, but also provides an efficient randomized algorithm for finding a satisfying assignment for a number of events with bounded dependencies.

Let [math]\displaystyle{ \phi }[/math] be a [math]\displaystyle{ k }[/math]-CNF of [math]\displaystyle{ n }[/math] clauses with bounded degree [math]\displaystyle{ d }[/math], defined on variables [math]\displaystyle{ X_1,\ldots,X_m }[/math]. The following procedure finds a satisfying assignment for [math]\displaystyle{ \phi }[/math].

Solve([math]\displaystyle{ \phi }[/math])
Pick a random assignment of [math]\displaystyle{ X_1,\ldots,X_m }[/math].
While there is an unsatisfied clause [math]\displaystyle{ C }[/math] in [math]\displaystyle{ \phi }[/math]
Fix([math]\displaystyle{ C }[/math]).

The sub-routine Fix is defined as follows:

Fix([math]\displaystyle{ C }[/math])
Replace the variables in [math]\displaystyle{ v(C) }[/math] with new random values.
While there is an unsatisfied clause [math]\displaystyle{ D }[/math] with [math]\displaystyle{ v(C)\cap v(D)\neq \emptyset }[/math]
Fix([math]\displaystyle{ D }[/math]).

The procedure looks very simple: it just recursively fixes the unsatisfied clauses by randomly replacing the assignments of their variables.
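Here is a direct Python rendering of Solve and Fix (a sketch of ours; clauses are encoded as in the MAXSAT example above, with literal [math]\displaystyle{ i }[/math] for [math]\displaystyle{ X_i }[/math] and [math]\displaystyle{ -i }[/math] for [math]\displaystyle{ \neg X_i }[/math]):

import random

def satisfied(clause, value):
    # a clause (an OR of literals) holds if at least one literal is true
    return any(value[abs(l)] == (l > 0) for l in clause)

def fix(C, clauses, value):
    # replace the variables in v(C) with new random values
    for l in C:
        value[abs(l)] = random.choice([True, False])
    vars_C = {abs(l) for l in C}
    while True:
        # find an unsatisfied clause sharing a variable with C (possibly C itself)
        D = next((D for D in clauses
                  if {abs(l) for l in D} & vars_C and not satisfied(D, value)), None)
        if D is None:
            return
        fix(D, clauses, value)

def solve(num_vars, clauses):
    # pick a random assignment of X_1, ..., X_m
    value = {i: random.choice([True, False]) for i in range(1, num_vars + 1)}
    while True:
        C = next((C for C in clauses if not satisfied(C, value)), None)
        if C is None:
            return value
        fix(C, clauses, value)

The analysis below shows that when [math]\displaystyle{ d\lt 2^{k-O(1)} }[/math], this terminates after [math]\displaystyle{ O(n\log n) }[/math] calls to Fix with high probability.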

We then prove it works.

Number of top-level calls to Fix

In Solve([math]\displaystyle{ \phi }[/math]), the subroutine Fix([math]\displaystyle{ C }[/math]) is called. We now upper bound the number of times it is called (not including the recursive calls).

Assume Fix([math]\displaystyle{ C }[/math]) always terminates.

Observation
Every clause that was satisfied before Fix([math]\displaystyle{ C }[/math]) was called will still remain satisfied and [math]\displaystyle{ C }[/math] will also be satisfied after Fix([math]\displaystyle{ C }[/math]) returns.

The observation can be proved by induction on the structure of recursion. Since there are [math]\displaystyle{ n }[/math] clauses, Solve([math]\displaystyle{ \phi }[/math]) makes at most [math]\displaystyle{ n }[/math] calls to Fix.

We then prove that Fix([math]\displaystyle{ C }[/math]) terminates.

Termination of Fix

The idea of the proof is to reconstruct a random string.

Suppose that during the run of Solve([math]\displaystyle{ \phi }[/math]), the Fix subroutine is called [math]\displaystyle{ t }[/math] times in total (including all the recursive calls).

Let [math]\displaystyle{ s }[/math] be the sequence of the random bits used by Solve([math]\displaystyle{ \phi }[/math]). It is easy to see that the length of [math]\displaystyle{ s }[/math] is [math]\displaystyle{ |s|=m+tk }[/math], because the initial random assignment of the [math]\displaystyle{ m }[/math] variables takes [math]\displaystyle{ m }[/math] bits, and each call to Fix takes [math]\displaystyle{ k }[/math] fresh random bits.

We then reconstruct [math]\displaystyle{ s }[/math] in an alternative way.

Recall that Solve([math]\displaystyle{ \phi }[/math]) calls Fix([math]\displaystyle{ C }[/math]) at the top level at most [math]\displaystyle{ n }[/math] times. Each top-level call to Fix([math]\displaystyle{ C }[/math]) defines a recursion tree, rooted at clause [math]\displaystyle{ C }[/math], in which each node corresponds to a clause (not necessarily distinct, since a clause might be fixed several times). Therefore, the entire running history of Solve([math]\displaystyle{ \phi }[/math]) can be described by at most [math]\displaystyle{ n }[/math] recursion trees.

Observation 1
Fix a [math]\displaystyle{ \phi }[/math]. The [math]\displaystyle{ n }[/math] recursion trees which capture the total running history of Solve([math]\displaystyle{ \phi }[/math]) can be encoded in [math]\displaystyle{ n\log n+t(\log d+O(1)) }[/math] bits.

Each root node corresponds to a clause. There are [math]\displaystyle{ n }[/math] clauses in [math]\displaystyle{ \phi }[/math]. The [math]\displaystyle{ n }[/math] root nodes can be represented in [math]\displaystyle{ n\log n }[/math] bits.

The clever part is how to encode the branches of the trees. Note that Fix([math]\displaystyle{ C }[/math]) will call Fix([math]\displaystyle{ D }[/math]) only for clauses [math]\displaystyle{ D }[/math] that share variables with [math]\displaystyle{ C }[/math]. For a [math]\displaystyle{ k }[/math]-CNF with bounded degree [math]\displaystyle{ d }[/math], each clause [math]\displaystyle{ C }[/math] can share variables with at most [math]\displaystyle{ d }[/math] other clauses. Thus, each branch in a recursion tree can be represented in [math]\displaystyle{ \log d }[/math] bits, and an extra [math]\displaystyle{ O(1) }[/math] bits per node denote whether the recursion ends there. So in total [math]\displaystyle{ n\log n+t(\log d+O(1)) }[/math] bits are sufficient to encode all [math]\displaystyle{ n }[/math] recursion trees.

Observation 2
The random sequence [math]\displaystyle{ s }[/math] can be encoded in [math]\displaystyle{ m+n\log n+t(\log d+O(1)) }[/math] bits.

With [math]\displaystyle{ n\log n+t(\log d+O(1)) }[/math] bits, the structure of all the recursion trees can be encoded. With extra [math]\displaystyle{ m }[/math] bits, the final assignment of the [math]\displaystyle{ m }[/math] variables is stored.

We then observe that with this information, the sequence of random bits [math]\displaystyle{ s }[/math] can be reconstructed backwards from the final assignment.

The key step is that a clause [math]\displaystyle{ C }[/math] is fixed only when it is unsatisfied (obvious), and an unsatisfied clause [math]\displaystyle{ C }[/math] determines the values of its variables uniquely: a clause is an OR of literals, thus it has exactly one unsatisfying assignment. Thus, each node in a recursion tree tells us the [math]\displaystyle{ k }[/math] random bits of the random sequence [math]\displaystyle{ s }[/math] used in the call of Fix corresponding to that node. Therefore, [math]\displaystyle{ s }[/math] can be reconstructed from the final assignment plus the at most [math]\displaystyle{ n }[/math] recursion trees, which can be encoded in at most [math]\displaystyle{ m+n\log n+t(\log d+O(1)) }[/math] bits.

The following theorem lies at the heart of Kolmogorov complexity. It states that a random sequence is incompressible.

Theorem (Kolmogorov)
For any encoding scheme, with high probability, a random sequence [math]\displaystyle{ s }[/math] is encoded in at least [math]\displaystyle{ |s| }[/math] bits.

Applying the theorem, we have that with high probability,

[math]\displaystyle{ m+n\log n+t(\log d+O(1))\ge |s|=m+tk }[/math].

Therefore,

[math]\displaystyle{ t(k-O(1)-\log d)\le n\log n. }[/math]

In order to bound [math]\displaystyle{ t }[/math], we need

[math]\displaystyle{ k-O(1)-\log d\gt 0 }[/math],

which holds for [math]\displaystyle{ d\lt 2^{k-\alpha} }[/math] for some constant [math]\displaystyle{ \alpha\gt 0 }[/math]. In fact, in this case [math]\displaystyle{ t=O(n\log n) }[/math], so the running time of the procedure is bounded by a polynomial!

Back to the local lemma

We showed that for [math]\displaystyle{ d\lt 2^{k-O(1)} }[/math], any [math]\displaystyle{ k }[/math]-CNF with bounded degree [math]\displaystyle{ d }[/math] is satisfiable, and a satisfying assignment can be found in polynomial time with high probability. Now we interpret this in the language of the local lemma.

Recall that the symmetric version of the local lemma:

Theorem (The local lemma: symmetric case)
Let [math]\displaystyle{ A_1,A_2,\ldots,A_n }[/math] be a set of events, and assume that the following hold:
  1. for all [math]\displaystyle{ 1\le i\le n }[/math], [math]\displaystyle{ \Pr[A_i]\le p }[/math];
  2. the maximum degree of the dependency graph for the events [math]\displaystyle{ A_1,A_2,\ldots,A_n }[/math] is [math]\displaystyle{ d }[/math], and
[math]\displaystyle{ ep(d+1)\le 1 }[/math].
Then
[math]\displaystyle{ \Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]\gt 0 }[/math].

Suppose the underlying probability space is a collection of mutually independent uniform random boolean variables, and each event [math]\displaystyle{ \overline{A_i} }[/math] is a clause defined on [math]\displaystyle{ k }[/math] variables. Then,

[math]\displaystyle{ p=2^{-k} }[/math]

thus, the condition [math]\displaystyle{ ep(d+1)\le 1 }[/math] means that

[math]\displaystyle{ d\lt 2^{k}/e, }[/math]

which means that Moser's procedure, which works whenever [math]\displaystyle{ d\lt 2^{k-O(1)} }[/math], is asymptotically optimal in the degree of dependency.