Randomized Algorithms (Spring 2010)/The probabilistic method


The Basic Idea

Suppose we want to prove the existence of mathematical objects with certain properties. One way to do so is to explicitly construct such an object. This kind of proof can be interpreted as a deterministic algorithm which finds an object with the desirable properties.

The probabilistic method provides another way of proving the existence of objects: instead of explicitly constructing an object, we define a probability space of objects in which the probability is positive that a randomly selected object has the required property.

The basic principle of the probabilistic method is very simple, and can be stated in intuitive ways:

  • If an object chosen randomly from a universe satisfies a property with positive probability, then there must be an object in the universe that satisfies that property.
For example, for a ball (the object) randomly chosen from a box (the universe) of balls, if the probability that the chosen ball is blue (the property) is $>0$, then there must be a blue ball in the box.
  • Any random variable assumes at least one value that is no smaller than its expectation, and at least one value that is no greater than the expectation.
For example, if we know the average height of the students in the class is $\ell$, then we know there is a student whose height is at least $\ell$, and there is a student whose height is at most $\ell$.

Although the idea of the probabilistic method is simple, it provides us with a powerful tool for existential proofs. In some cases, the proof itself is a randomized algorithm, and if we are lucky, the algorithm can be very efficient.

Counting or sampling

Circuit complexity

A boolean function is a function of the form $f:\{0,1\}^n\rightarrow\{0,1\}$.

Formally, a boolean circuit is a directed acyclic graph. Nodes with indegree zero are input nodes, labeled $x_1,x_2,\ldots,x_n$. A circuit has a unique node with outdegree zero, called the output node. Every other node is a gate. There are three types of gates: AND, OR (both with indegree two), and NOT (with indegree one).

Computations in Turing machines can be simulated by circuits, and any boolean function in P can be computed by a circuit with polynomially many gates. Thus, if we can find a function in NP that cannot be computed by any circuit with polynomially many gates, then $\mathrm{NP}\neq\mathrm{P}$.

The following theorem due to Shannon says that functions with exponentially large circuit complexity do exist.

Theorem (Shannon 1949)
There is a boolean function $f:\{0,1\}^n\rightarrow\{0,1\}$ with circuit complexity greater than $\frac{2^n}{3n}$.
There are $2^{2^n}$ boolean functions $f:\{0,1\}^n\rightarrow\{0,1\}$.

Fix an integer $t$; we then count the number of circuits with $t$ gates. By De Morgan's laws, we can assume that all NOTs are pushed back to the inputs. Each gate has one of the two types (AND or OR), and has two inputs. Each of the inputs to a gate is either a constant 0 or 1, an input variable $x_i$, an inverted input variable $\neg x_i$, or the output of another gate; thus, there are at most $2+2n+t$ possible gate inputs. It follows that the number of circuits with $t$ gates is at most $2^t(2n+t+2)^{2t}$.

Uniformly choose a boolean function $f$ at random. Note that each circuit computes exactly one boolean function (the converse is not true). The probability that $f$ can be computed by a circuit with $t$ gates is at most
$\frac{2^t(2n+t+2)^{2t}}{2^{2^n}}.$

If $t=\frac{2^n}{3n}$, then
$\frac{2^t(2n+t+2)^{2t}}{2^{2^n}}=o(1)<1.$

Therefore, there exists a boolean function which cannot be computed by any circuit with $\frac{2^n}{3n}$ gates.

Note that by Shannon's theorem, not only does there exist a boolean function with exponentially large circuit complexity, but in fact almost all boolean functions have exponentially large circuit complexity.
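As a sanity check, the counting bound $\frac{2^t(2n+t+2)^{2t}}{2^{2^n}}$ can be evaluated exactly with Python's big integers. A minimal sketch (the choice $n=10$ is arbitrary):

```python
from fractions import Fraction

def num_circuits_bound(n, t):
    """Upper bound 2^t * (2n + t + 2)^(2t) on the number of circuits
    with t gates over n input variables, from the counting argument."""
    return 2**t * (2 * n + t + 2)**(2 * t)

def prob_computable_bound(n, t):
    """Upper bound on the probability that a uniformly random boolean
    function on n variables is computable by some circuit with t gates."""
    return Fraction(num_circuits_bound(n, t), 2**(2**n))

n = 10
t = 2**n // (3 * n)            # t = 2^n / (3n), rounded down
p = prob_computable_bound(n, t)
assert 0 < p < 1               # almost all functions need more gates
```

Already at $n=10$ the bound is astronomically small, illustrating that almost all functions are hard.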

Ramsey number

Recall the Ramsey theorem, which states that in a meeting of at least six people, there are either three people knowing each other or three people not knowing each other. In graph-theoretic terms, this means that no matter how we color the edges of $K_6$ (the complete graph on six vertices), there must be a monochromatic $K_3$ (a triangle whose edges all have the same color).

Generally, the Ramsey number $R(k,\ell)$ is the smallest integer $n$ such that in any two-coloring of the edges of a complete graph on $n$ vertices by red and blue, either there is a red $K_k$ or there is a blue $K_\ell$.

Ramsey showed in 1929 that $R(k,\ell)$ is finite for any $k$ and $\ell$. It is extremely hard to compute the exact value of $R(k,\ell)$. Here we give a lower bound on $R(k,k)$ by the probabilistic method.

Theorem (Erdős 1947)
If $\binom{n}{k}\cdot 2^{1-\binom{k}{2}}<1$, then it is possible to color the edges of $K_n$ with two colors so that there is no monochromatic $K_k$ subgraph.
Consider a random two-coloring of the edges of $K_n$ obtained as follows:
  • For each edge of $K_n$, independently flip a fair coin to decide the color of the edge.

For any fixed set $S$ of $k$ vertices, let $A_S$ be the event that the subgraph induced by $S$ is monochromatic. There are $\binom{k}{2}$ edges in the subgraph, therefore
$\Pr[A_S]=2\cdot 2^{-\binom{k}{2}}=2^{1-\binom{k}{2}}.$

Since there are $\binom{n}{k}$ possible choices of $S$, by the union bound
$\Pr[\exists S,\ A_S]\le\binom{n}{k}\cdot 2^{1-\binom{k}{2}}.$

Due to the assumption, $\binom{n}{k}\cdot 2^{1-\binom{k}{2}}<1$, thus there exists a two-coloring in which none of the events $A_S$ occurs, which means there is no monochromatic $K_k$ subgraph.

For $k\ge 3$ we take $n=\lfloor 2^{k/2}\rfloor$; then
$\binom{n}{k}\cdot 2^{1-\binom{k}{2}}<\frac{n^k}{k!}\cdot\frac{2^{1+\frac{k}{2}}}{2^{k^2/2}}\le\frac{2^{1+\frac{k}{2}}}{k!}<1.$

By the above theorem, there exists a two-coloring of $K_n$ with no monochromatic $K_k$. Therefore, the Ramsey number $R(k,k)\ge 2^{k/2}$ for all $k\ge 3$.

Note that for sufficiently large $k$, if $n=\lfloor 2^{k/2}\rfloor$, then the probability that there exists a monochromatic $K_k$ is bounded by
$\binom{n}{k}\cdot 2^{1-\binom{k}{2}}\le\frac{2^{1+\frac{k}{2}}}{k!}=o(1),$
which means that a random two-coloring of $K_n$ is very likely not to contain a monochromatic $K_k$. This gives us a very simple randomized algorithm for finding a two-coloring of $K_n$ without a monochromatic $K_k$.
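The algorithm is literally "sample a coloring and test it". A minimal Python sketch (the parameters $k=6$ and $n=8=\lfloor 2^{k/2}\rfloor$ are illustrative):

```python
import random
from itertools import combinations

def random_two_coloring(n):
    """Color each edge of K_n by an independent fair coin flip."""
    return {e: random.choice("RB") for e in combinations(range(n), 2)}

def has_monochromatic_clique(coloring, n, k):
    """Does some k-subset of vertices induce a monochromatic K_k?"""
    return any(len({coloring[e] for e in combinations(S, 2)}) == 1
               for S in combinations(range(n), k))

random.seed(0)
k, n = 6, 8                    # n = floor(2^(k/2))
coloring = random_two_coloring(n)
while has_monochromatic_clique(coloring, n, k):   # rarely loops at all
    coloring = random_two_coloring(n)
```

The expected number of resamplings is close to zero, since the failure probability per sample is $o(1)$.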

Blocking number

Let $S$ be a set. Let $2^S$ denote the power set of $S$, and let $\binom{S}{k}$ denote the $k$-uniform subfamily of $2^S$, i.e. the family of all $k$-element subsets of $S$.

We call $\mathcal{F}$ a set family (or a set system) with ground set $S$ if $\mathcal{F}\subseteq 2^S$. The members of $\mathcal{F}$ are subsets of $S$.

Given a set family $\mathcal{F}$ with ground set $S$, a set $T$ is a blocking set of $\mathcal{F}$ if all $F\in\mathcal{F}$ have $F\cap T\neq\emptyset$, i.e. $T$ intersects (blocks) every member set of $\mathcal{F}$.

Given a set family $\mathcal{F}\subseteq\binom{S}{k}$, where $|S|=n$ and $|\mathcal{F}|=m$, $\mathcal{F}$ has a blocking set of size $\frac{n}{k}\ln m$.
Let $\tau=\frac{n}{k}\ln m$. Let $T$ be a set chosen uniformly at random from $\binom{S}{\tau}$. We show that $T$ is a blocking set of $\mathcal{F}$ with probability $>0$.

Fix any $F\in\mathcal{F}$. Recall that $\mathcal{F}\subseteq\binom{S}{k}$, thus $|F|=k$. And
$\Pr[F\cap T=\emptyset]=\frac{\binom{n-k}{\tau}}{\binom{n}{\tau}}\le\left(1-\frac{k}{n}\right)^{\tau}<e^{-k\tau/n}=\frac{1}{m}.$

By the union bound, the probability that there exists an $F\in\mathcal{F}$ that misses $T$ is
$\Pr[\exists F\in\mathcal{F},\ F\cap T=\emptyset]<m\cdot\frac{1}{m}=1.$

Thus, the probability that $T$ is a blocking set
$\Pr[\forall F\in\mathcal{F},\ F\cap T\neq\emptyset]>0.$

There exists a blocking set of size $\frac{n}{k}\ln m$.

The theorem also hints at a randomized algorithm. In order to make the algorithm efficient, we relax the size of $T$ to $\tau=\frac{n}{k}(\ln m+\ln n)$. Uniformly choose $\tau$ elements from $S$ to form the set $T$; by the above analysis, the probability that $T$ is NOT a blocking set is at most
$m\cdot e^{-k\tau/n}=m\cdot e^{-(\ln m+\ln n)}=\frac{1}{n}.$

Thus, a blocking set is found with high probability.
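The relaxed sampling step translates directly into code. A minimal sketch, run on a hypothetical random family (the sizes $n=100$, $k=12$, $m=20$ are arbitrary; note that $\tau<n$ requires $k>\ln(mn)$):

```python
import math
import random

def is_blocking_set(T, family):
    """T blocks the family iff it intersects every member set."""
    return all(T & F for F in family)

random.seed(0)
n, k, m = 100, 12, 20
ground = list(range(n))
# A hypothetical family: m random k-subsets of the ground set.
family = [frozenset(random.sample(ground, k)) for _ in range(m)]

# Relaxed size tau = (n/k)(ln m + ln n); failure probability <= 1/n.
tau = math.ceil((n / k) * (math.log(m) + math.log(n)))
T = set(random.sample(ground, tau))
while not is_blocking_set(T, family):   # succeeds w.p. >= 1 - 1/n per try
    T = set(random.sample(ground, tau))
```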

Linearity of expectation

Maximum cut

Given an undirected graph $G(V,E)$, a set $C$ of edges of $G$ is called a cut if $G$ is disconnected after removing the edges in $C$. We can represent a cut by $c(S,T)$, where $(S,T)$ is a bipartition of the vertex set $V$, and $c(S,T)=\{uv\in E\mid u\in S, v\in T\}$ is the set of edges crossing between $S$ and $T$.

We have seen how to compute the min-cut: either by a deterministic max-flow algorithm, or by Karger's randomized algorithm. On the other hand, max-cut is hard to compute because it is NP-complete. In fact, the weighted version of max-cut is among Karp's 21 NP-complete problems.

We now show by the probabilistic method that a max-cut always has at least half the edges.

Given an undirected graph $G$ with $n$ vertices and $m$ edges, there is a cut of size at least $\frac{m}{2}$.
Enumerate the vertices in an arbitrary order. Partition the vertex set $V$ into two disjoint sets $S$ and $T$ as follows.
For each vertex $v\in V$,
  • independently choose one of $S$ and $T$ with equal probability, and let $v$ join the chosen set.

For each vertex $v$, let $X_v\in\{S,T\}$ be the random variable which represents the set that $v$ joins. For each edge $uv\in E$, let $Y_{uv}$ be the 0-1 random variable which indicates whether $uv$ crosses between $S$ and $T$. Clearly,
$\Pr[Y_{uv}=1]=\Pr[X_u\neq X_v]=\frac{1}{2}.$

The size of $c(S,T)$ is given by $|c(S,T)|=\sum_{uv\in E}Y_{uv}$. By the linearity of expectation,
$\mathbb{E}[|c(S,T)|]=\sum_{uv\in E}\mathbb{E}[Y_{uv}]=\sum_{uv\in E}\Pr[Y_{uv}=1]=\frac{m}{2}.$

Therefore, there exists a bipartition $(S,T)$ of $V$ such that $|c(S,T)|\ge\frac{m}{2}$, i.e. there exists a cut of $G$ which contains at least $\frac{m}{2}$ edges.
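The random bipartition is immediate to implement. A minimal sketch on a small hypothetical graph; since the expected cut size is $\frac{m}{2}$, resampling quickly finds a cut of at least that many edges:

```python
import random

def random_cut(vertices, edges):
    """Each vertex joins S or T by a fair coin; return the crossing edges."""
    side = {v: random.random() < 0.5 for v in vertices}
    return [(u, v) for (u, v) in edges if side[u] != side[v]]

random.seed(0)
vertices = range(6)
# A hypothetical example graph.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5), (1, 4), (0, 5)]
m = len(edges)
cut = random_cut(vertices, edges)
while len(cut) < m / 2:        # a cut this large exists, so this terminates
    cut = random_cut(vertices, edges)
```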

Maximum satisfiability

Suppose that we have a number of boolean variables $x_1,x_2,\ldots,x_n$. A literal is either a variable $x_i$ itself or its negation $\neg x_i$. A logic expression is in conjunctive normal form (CNF) if it is written as the conjunction (AND) of a set of clauses, where each clause is a disjunction (OR) of literals. For example:
$(x_1\vee\neg x_2\vee x_3)\wedge(\neg x_1\vee x_3)\wedge(x_2\vee\neg x_3\vee x_4).$

The satisfiability (SAT) problem asks whether a given CNF is satisfiable, i.e. whether there exists an assignment of the variables to the values true and false so that all clauses are satisfied. The maximum satisfiability problem (MAXSAT) is the optimization version of SAT, which asks for an assignment maximizing the number of satisfied clauses.

SAT is the first problem known to be NP-complete (the Cook-Levin theorem). MAXSAT is also NP-hard. We now show that there always exists a reasonably good truth assignment: one that satisfies at least half the clauses.

For any set of $m$ clauses, there is a truth assignment that satisfies at least $\frac{m}{2}$ clauses.
For each variable, independently assign a random value in $\{\text{true},\text{false}\}$ with equal probability. For the $i$-th clause, let $X_i$ be the random variable which indicates whether the $i$-th clause is satisfied. Suppose that there are $k$ literals in the clause. The probability that the clause is satisfied is
$\Pr[X_i=1]=1-2^{-k}\ge\frac{1}{2}.$

Let $X=\sum_{i=1}^m X_i$ be the number of satisfied clauses. By the linearity of expectation,
$\mathbb{E}[X]=\sum_{i=1}^m\mathbb{E}[X_i]\ge\frac{m}{2}.$

Therefore, there exists an assignment such that at least $\frac{m}{2}$ clauses are satisfied.
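The argument yields the same sample-and-retry algorithm as before. A minimal sketch on a small hypothetical CNF instance; literals are encoded as signed integers ($+i$ for $x_i$, $-i$ for $\neg x_i$):

```python
import random

def num_satisfied(clauses, assignment):
    """Count satisfied clauses; literal +i means x_i, -i means ¬x_i."""
    return sum(any(assignment[abs(l)] == (l > 0) for l in clause)
               for clause in clauses)

def random_assignment(num_vars):
    """Independently set each variable true/false with equal probability."""
    return {i: random.random() < 0.5 for i in range(1, num_vars + 1)}

random.seed(0)
# A hypothetical CNF instance on 4 variables.
clauses = [[1, -2, 3], [-1, 3], [2, -3, 4], [-2, -4], [1, 4]]
m = len(clauses)
a = random_assignment(4)
while num_satisfied(clauses, a) < m / 2:   # E[satisfied] >= m/2
    a = random_assignment(4)
```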


Independent sets

An independent set of a graph is a set of vertices with no edges between them. The following theorem gives a lower bound on the size of the largest independent set.

Let $G(V,E)$ be a graph on $n$ vertices with $m$ edges. Then $G$ has an independent set with at least $\frac{n^2}{4m}$ vertices.
Let $S$ be a set of vertices constructed as follows:
For each vertex $v\in V$:
  • $v$ is included in $S$ independently with probability $p$,

where $p$ is to be determined.

Let $X=|S|$. It is obvious that $\mathbb{E}[X]=np$.

For each edge $e\in E$, let $Y_e$ be the random variable which indicates whether both endpoints of $e$ are in $S$. Clearly, $\mathbb{E}[Y_e]=\Pr[Y_e=1]=p^2$.

Let $Y$ be the number of edges in the subgraph of $G$ induced by $S$. It holds that $Y=\sum_{e\in E}Y_e$. By linearity of expectation,
$\mathbb{E}[Y]=\sum_{e\in E}\mathbb{E}[Y_e]=mp^2.$


Note that although $S$ is not necessarily an independent set, it can be modified into one if, for each edge of the induced subgraph $G(S)$, we delete one of the endpoints of the edge from $S$. Let $S^*$ be the resulting set. It is obvious that $S^*$ is an independent set since there is no edge left in the induced subgraph $G(S^*)$.

Since there are $Y$ edges in $G(S)$, at most $Y$ vertices of $S$ are deleted to make it become $S^*$. Therefore, $|S^*|\ge X-Y$. By linearity of expectation,
$\mathbb{E}[|S^*|]\ge\mathbb{E}[X-Y]=\mathbb{E}[X]-\mathbb{E}[Y]=np-mp^2.$

The expectation is maximized when $p=\frac{n}{2m}$, thus
$\mathbb{E}[|S^*|]\ge\frac{n^2}{2m}-\frac{n^2}{4m}=\frac{n^2}{4m}.$

There exists an independent set which contains at least $\frac{n^2}{4m}$ vertices.

The proof actually proposes a randomized algorithm for constructing a large independent set:


Given a graph on $n$ vertices with $m$ edges, let $d=\frac{2m}{n}$ be the average degree.

  1. For each vertex $v$, $v$ is included in $S$ independently with probability $\frac{1}{d}$.
  2. For each remaining edge in the induced subgraph $G(S)$, remove one of the endpoints from $S$.

Let $S^*$ be the resulting set. We have shown that $S^*$ is an independent set and $\mathbb{E}[|S^*|]\ge\frac{n^2}{4m}=\frac{n}{2d}$.
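The two steps of this sample-and-modify algorithm can be sketched as follows (the example graph is a hypothetical one; the result is an independent set by construction):

```python
import random

def random_independent_set(n, edges):
    """Sample-and-modify: keep each vertex with probability 1/d, then
    delete one endpoint of every edge surviving in the induced subgraph."""
    d = max(1, 2 * len(edges) / n)       # average degree
    S = {v for v in range(n) if random.random() < 1 / d}
    for (u, v) in edges:
        if u in S and v in S:
            S.discard(u)                 # remove one endpoint arbitrarily
    return S

def is_independent(S, edges):
    return all(not (u in S and v in S) for (u, v) in edges)

random.seed(0)
n = 10
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (5, 6), (7, 8)]
S = random_independent_set(n, edges)
assert is_independent(S, edges)
```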

Intersecting families

A family $\mathcal{F}\subseteq 2^S$ is an intersecting family if for any $A,B\in\mathcal{F}$ it holds that $A\cap B\neq\emptyset$.

Suppose that $|S|=n$. For $\mathcal{F}\subseteq\binom{S}{k}$, where $k\le\frac{n}{2}$, we can let all members of $\mathcal{F}$ contain one fixed common element $a\in S$ and enumerate all possible combinations of the other $k-1$ elements from $S\setminus\{a\}$. This gives us an intersecting family of size $\binom{n-1}{k-1}$.

The following theorem says that this is the largest possible cardinality an intersecting family can achieve. The theorem was first proved by Erdős, Ko, and Rado in 1938, but published 23 years later. It is a fundamental result in the area of extremal set theory, which studies the maximum (or minimum) possible cardinality of a set system satisfying certain structural assumptions. In this example, the structural assumption is being intersecting.

Here we present a probabilistic proof by Katona.

Theorem (Erdős-Ko-Rado 1961)
Let $\mathcal{F}\subseteq\binom{[n]}{k}$, where $[n]=\{1,2,\ldots,n\}$ and $n\ge 2k$. If $\mathcal{F}$ is an intersecting family then $|\mathcal{F}|\le\binom{n-1}{k-1}$.
(due to Katona 1972).

Without loss of generality, identify $[n]$ with $\mathbb{Z}_n$. For $i\in\mathbb{Z}_n$, let $A_i=\{i,i+1,\ldots,i+k-1\}$, where the additions are modulo $n$. Then we make the following claim.

Claim 1: $\mathcal{F}$ can contain at most $k$ of the sets $A_i$.

The claim can be proved by observing that for any $1\le s\le k-1$, the arcs $A_{i+s}$ and $A_{i+s-k}$ are disjoint (because $n\ge 2k$). If $A_i\in\mathcal{F}$, then the only other arcs intersecting $A_i$ are covered by the $k-1$ disjoint pairs $\{A_{i+s},A_{i+s-k}\}$, and in order to keep $\mathcal{F}$ intersecting, $\mathcal{F}$ can contain at most one arc from each pair. This limits the number of arcs in $\mathcal{F}$ to at most $1+(k-1)=k$.

Now we prove the Erdős-Ko-Rado theorem. Let a permutation $\sigma$ of $[n]$ and an integer $i\in\mathbb{Z}_n$ be chosen uniformly and independently at random. Let
$R=\{\sigma(i),\sigma(i+1),\ldots,\sigma(i+k-1)\},$
where the additions in the indices are modulo $n$.

By Claim 1, for any fixed permutation $\sigma$, the family $\mathcal{F}$ can contain at most $k$ of the $n$ sets $\{\sigma(j),\sigma(j+1),\ldots,\sigma(j+k-1)\}$, $j\in\mathbb{Z}_n$; thus, conditioning on any particular $\sigma$, $\Pr[R\in\mathcal{F}\mid\sigma]\le\frac{k}{n}$. Hence
$\Pr[R\in\mathcal{F}]\le\frac{k}{n}.$

On the other hand, by our construction, $R$ is uniformly distributed over $\binom{[n]}{k}$, thus
$\Pr[R\in\mathcal{F}]=\frac{|\mathcal{F}|}{\binom{n}{k}}.$
Therefore,
$|\mathcal{F}|\le\frac{k}{n}\binom{n}{k}=\binom{n-1}{k-1}.$
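For tiny parameters, the Erdős-Ko-Rado bound can be checked by brute-force enumeration of all subfamilies; a minimal sketch (feasible only when $\binom{n}{k}$ is very small, here $n=5$, $k=2$):

```python
from itertools import combinations
from math import comb

def max_intersecting_family(n, k):
    """Size of a largest intersecting family in C([n], k), by brute force."""
    sets = [frozenset(c) for c in combinations(range(n), k)]
    best = 0
    for mask in range(1 << len(sets)):            # every subfamily
        fam = [s for i, s in enumerate(sets) if mask >> i & 1]
        if all(a & b for a in fam for b in fam):  # pairwise intersecting
            best = max(best, len(fam))
    return best

n, k = 5, 2
assert max_intersecting_family(n, k) == comb(n - 1, k - 1)  # == 4
```

For $k=2$ an intersecting family of edges is a star or a triangle, and the star of size $\binom{4}{1}=4$ wins, matching the theorem.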


The Lovász Local Lemma

Consider a set of "bad" events $A_1,A_2,\ldots,A_n$. Suppose that $\Pr[A_i]\le p$ for all $1\le i\le n$. We want to show that there is a situation in which none of the bad events occurs. By the probabilistic method, we need to prove that
$\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]>0.$

Case 1: mutually independent events.

If all the bad events are mutually independent, then
$\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]=\prod_{i=1}^n(1-\Pr[A_i])\ge(1-p)^n>0$

for any $p<1$.

Case 2: arbitrarily dependent events.

On the other hand, if we put no assumption on the dependencies between the events, then by the union bound (which holds unconditionally),
$\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]=1-\Pr\left[\bigvee_{i=1}^n A_i\right]\ge 1-np,$

which is not an interesting bound for $p\ge\frac{1}{n}$. If we make no further assumption on the dependencies between the events, this bound is tight.

Consider a ball uniformly thrown into one of $n$ bins. For $1\le i\le n-1$, let the "bad" event $A_i$ represent that the ball falls into the $i$-th bin. The only good event is that the ball falls into the $n$-th bin. Clearly, $\Pr[A_i]=\frac{1}{n}$, and
$\Pr\left[\bigwedge_{i=1}^{n-1}\overline{A_i}\right]=\frac{1}{n}=1-(n-1)\cdot\frac{1}{n}.$
Thus the above union bound is achieved.

This example shows that dependencies between the events could cause troubles.

We would like to know what is going on between the two extreme cases: mutually independent events, and arbitrarily dependent events. The Lovász local lemma provides such a tool.

The local lemma

The local lemma is a powerful tool for showing the possibility of rare events under limited dependencies. The structure of dependencies between a set of events is described by a dependency graph.

Let $A_1,A_2,\ldots,A_n$ be a set of events. A graph $D=(V,E)$ on the set of vertices $V=\{1,2,\ldots,n\}$ is called a dependency graph for the events $A_1,\ldots,A_n$ if for each $i$, $1\le i\le n$, the event $A_i$ is mutually independent of all the events $\{A_j\mid (i,j)\notin E\}$.
Let $X_1,X_2,\ldots,X_m$ be a set of mutually independent random variables. Each event $A_i$ is a predicate defined on a number of variables among $X_1,\ldots,X_m$. Let $\mathsf{vbl}(A_i)$ be the unique smallest set of variables which determine $A_i$. The dependency graph $D=(V,E)$ is defined by
$(i,j)\in E$ iff $\mathsf{vbl}(A_i)\cap\mathsf{vbl}(A_j)\neq\emptyset$.

This construction gives a general framework for probability spaces with limited dependencies and is central to the constructive proof of the Lovász local lemma. In this example, each event is a predicate of variables, and two events are dependent if they depend on some common variables.

The following lemma, known as the Lovász local lemma, first proved by Erdős and Lovász in 1975, is an extremely powerful tool, as it supplies a way for dealing with rare events.

Theorem (The local lemma: general case)
Let $A_1,A_2,\ldots,A_n$ be a set of events. Suppose that $D=(V,E)$ is a dependency graph for the events, and suppose there are real numbers $x_1,x_2,\ldots,x_n$ such that $0\le x_i<1$ and for all $1\le i\le n$,
$\Pr[A_i]\le x_i\prod_{(i,j)\in E}(1-x_j).$
Then
$\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]\ge\prod_{i=1}^n(1-x_i)>0.$

The following is a special case, the symmetric version of the Lovász local lemma.

Theorem (The local lemma: symmetric case)
Let $A_1,A_2,\ldots,A_n$ be a set of events, and assume that the following hold:
  1. for all $1\le i\le n$, $\Pr[A_i]\le p$;
  2. the maximum degree of the dependency graph for the events $A_1,\ldots,A_n$ is $d$, and
$ep(d+1)\le 1.$
Then
$\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]>0.$
The original proof of the local lemma is by induction. Here we will present a constructive proof for a special case, which is more algorithmic than the original proof. This proof is due to Moser, first presented in his talk at STOC 2009; a generalized version, in collaboration with Tardos, appeared in JACM 2010.

Moser's proof

We consider a restrictive case.

Let $X_1,X_2,\ldots,X_m$ be a set of mutually independent random variables which assume boolean values. Each event $A_i$ is an AND of at most $k$ literals ($X_j$ or $\neg X_j$). Let $\mathsf{vbl}(A_i)$ be the set of the variables that $A_i$ depends on. The probability that none of the bad events occurs is
$\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right].$

In this particular model, the dependency graph $D=(V,E)$ is defined by: $(i,j)\in E$ iff $\mathsf{vbl}(A_i)\cap\mathsf{vbl}(A_j)\neq\emptyset$.

Observe that $\overline{A_i}$ is a clause (an OR of literals). Thus, $\bigwedge_{i=1}^n\overline{A_i}$ is a $k$-CNF, a CNF in which each clause depends on at most $k$ variables. The probability
$\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]>0$

means that the $k$-CNF $\bigwedge_{i=1}^n\overline{A_i}$ is satisfiable.

The satisfiability of $k$-CNFs is a hard problem. In particular, 3SAT (the satisfiability of 3-CNFs) is NP-complete (by the Cook-Levin theorem). Given the current state of the NP vs P question, we do not expect to solve this problem efficiently in general.

However, the condition of the Lovász local lemma has an extra assumption on the degree of the dependency graph. In our model, this means that each clause shares variables with at most $d$ other clauses. We call a $k$-CNF with this property a $k$-CNF with bounded degree $d$.

Therefore, proving the Lovász local lemma for the restricted form of events described above reduces to the following problem:

Find a condition on $k$ and $d$ such that any $k$-CNF with bounded degree $d$ is satisfiable.

In 2009, Moser came up with the following procedure solving the problem. He later generalized the procedure to general forms of events. This not only gives a beautiful constructive proof of the Lovász local lemma, but also provides an efficient randomized algorithm for finding a satisfying assignment for a number of events with bounded dependencies.

Let $\phi$ be a $k$-CNF of $n$ clauses with bounded degree $d$, defined on variables $X_1,\ldots,X_m$. The following procedure finds a satisfying assignment for $\phi$.

Solve($\phi$)
  Pick a random assignment of $X_1,X_2,\ldots,X_m$.
  While there is an unsatisfied clause $C$ in $\phi$:
    Fix($C$).

The subroutine Fix is defined as follows:

Fix($C$)
  Replace the variables in $\mathsf{vbl}(C)$ with new random values.
  While there is an unsatisfied clause $D$ with $\mathsf{vbl}(C)\cap\mathsf{vbl}(D)\neq\emptyset$:
    Fix($D$).

The procedure looks very simple. It just recursively fixes the unsatisfied clauses by randomly replacing the assignment to the variables.
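Moser's procedure is easy to implement directly. A minimal Python sketch, run on a small hypothetical 3-CNF with low dependency degree (for this easy instance the recursion terminates almost surely; the worst-case guarantee is exactly what the analysis below establishes):

```python
import random

def solve(clauses, m):
    """Moser's procedure on a k-CNF over variables X_1..X_m.
    Literal +i stands for X_i, -i for ¬X_i."""
    assignment = {i: random.random() < 0.5 for i in range(1, m + 1)}

    def satisfied(c):
        return any(assignment[abs(l)] == (l > 0) for l in c)

    def vbl(c):
        return {abs(l) for l in c}

    def fix(c):
        # Replace the variables in vbl(C) with new random values.
        for v in vbl(c):
            assignment[v] = random.random() < 0.5
        # While a clause sharing variables with C is unsatisfied, fix it.
        while True:
            bad = next((d for d in clauses
                        if vbl(c) & vbl(d) and not satisfied(d)), None)
            if bad is None:
                return
            fix(bad)

    while True:  # top level: fix any remaining unsatisfied clause
        bad = next((c for c in clauses if not satisfied(c)), None)
        if bad is None:
            return assignment
        fix(bad)

random.seed(0)
# A hypothetical 3-CNF in which consecutive clauses share one variable.
clauses = [[1, 2, 3], [-3, 4, 5], [-5, 6, 7], [-7, 8, 9]]
a = solve(clauses, 9)
```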

We then prove it works.

Number of top-level callings of Fix

In Solve($\phi$), the subroutine Fix($C$) is called. We now upper bound the number of times it is called (not including the recursive calls).

Assume that Fix($C$) always terminates.

Every clause that was satisfied before Fix($C$) was called will still remain satisfied, and $C$ will also be satisfied after Fix($C$) returns.

The observation can be proved by induction on the structure of the recursion. Since there are $n$ clauses, Solve($\phi$) makes at most $n$ calls to Fix.

We then prove that Fix($C$) terminates.

Termination of Fix

The idea of the proof is to reconstruct a random string.

Suppose that during the running of Solve($\phi$), the Fix subroutine is called $t$ times in total (including all the recursive calls).

Let $s$ be the sequence of the random bits used by Solve($\phi$). It is easy to see that the length of $s$ is $|s|=m+tk$, because the initial random assignment of the $m$ variables takes $m$ bits, and each call of Fix takes $k$ fresh random bits (assuming, for simplicity, that every clause contains exactly $k$ variables).

We then reconstruct in an alternative way.

Recall that Solve($\phi$) calls Fix($C$) at the top level at most $n$ times. Each call of Fix($C$) defines a recursion tree, rooted at clause $C$, in which each node corresponds to a clause (not necessarily distinct, since a clause might be fixed several times). Therefore, the entire running history of Solve($\phi$) can be described by at most $n$ recursion trees.

Observation 1
Fix a $\phi$. The recursion trees which capture the total running history of Solve($\phi$) can be encoded in $n\log n+t(\log d+O(1))$ bits.

Each root node corresponds to a clause. There are $n$ clauses in $\phi$, so the (at most $n$) root nodes can be represented in $n\log n$ bits.

The smart part is how to encode the branches of the tree. Note that Fix($C$) calls Fix($D$) only for clauses $D$ that share variables with $C$. For a $k$-CNF with bounded degree $d$, each clause shares variables with at most $d$ other clauses. Thus, each branch in the recursion tree can be represented in $\log d$ bits. There are $O(1)$ extra bits needed to denote whether the recursion ends. So in total $n\log n+t(\log d+O(1))$ bits are sufficient to encode all recursion trees.

Observation 2
The random sequence $s$ can be encoded in $m+n\log n+t(\log d+O(1))$ bits.

With $n\log n+t(\log d+O(1))$ bits, the structure of all the recursion trees can be encoded. With $m$ extra bits, the final assignment of the variables is stored.

We then observe that with this information, the sequence $s$ of the random bits can be reconstructed backwards from the final assignment.

The key step is that a clause is only fixed when it is unsatisfied (obvious), and an unsatisfied clause must have exactly one assignment of its variables (a clause is an OR of $k$ literals, thus has exactly one unsatisfying assignment of the $k$ variables it depends on). Thus, each node in the recursion tree tells us the $k$ random bits of the random sequence used in the call of Fix corresponding to that node. Therefore, $s$ can be reconstructed from the final assignment plus at most $n$ recursion trees, which can be encoded in at most $m+n\log n+t(\log d+O(1))$ bits.

The following theorem lies at the heart of Kolmogorov complexity. It states that a random sequence is incompressible.

Theorem (Kolmogorov)
For any encoding scheme, with high probability, a uniformly random sequence $s$ is encoded in at least $|s|$ bits.

Applying the theorem, we have that with high probability,
$m+tk\le m+n\log n+t(\log d+O(1)),$
that is,
$t(k-\log d-O(1))\le n\log n.$

In order to bound $t$, we need
$k-\log d-O(1)>0,$
which gives
$t\le\frac{n\log n}{k-\log d-O(1)}.$

This holds for $d<2^{k-c}$ for some constant $c$. In fact, in this case $t=O(n\log n)$, so the running time of the procedure is bounded by a polynomial!

Back to the local lemma

We showed that for $d<2^{k-c}$ (for some constant $c$), any $k$-CNF with bounded degree $d$ is satisfiable, and a satisfying assignment can be found in polynomial time with high probability. Now we interpret this in the language of the local lemma.

Recall that the symmetric version of the local lemma:

Theorem (The local lemma: symmetric case)
Let $A_1,A_2,\ldots,A_n$ be a set of events, and assume that the following hold:
  1. for all $1\le i\le n$, $\Pr[A_i]\le p$;
  2. the maximum degree of the dependency graph for the events $A_1,\ldots,A_n$ is $d$, and
$ep(d+1)\le 1.$
Then
$\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]>0.$

Suppose the underlying probability space is a number of mutually independent uniform random boolean variables, and the events are clauses defined on $k$ variables. Then
$p=2^{-k},$

thus, the condition $ep(d+1)\le 1$ means that
$e\cdot 2^{-k}(d+1)\le 1,$
i.e. $d+1\le\frac{2^k}{e}$, which is $d<2^{k-c}$ for a constant $c$,

which means that Moser's procedure is asymptotically optimal in the degree of dependency.