Randomized Algorithms (Spring 2014)/The Probabilistic Method


MAX-SAT

Suppose that we have a number of boolean variables $x_1, x_2, \ldots, x_n$. A literal is either a variable $x_i$ itself or its negation $\neg x_i$. A logic expression is a conjunctive normal form (CNF) if it is written as the conjunction (AND) of a set of clauses, where each clause is a disjunction (OR) of literals. For example:

$(x_1 \vee \neg x_2 \vee \neg x_3) \wedge (\neg x_1 \vee \neg x_3) \wedge (x_1 \vee x_2 \vee x_4)$

The satisfiability (SAT) problem is that given as input a CNF formula $\phi$, decide whether $\phi$ is satisfiable, i.e. there exists an assignment of the variables to the values of true and false so that all clauses are true. SAT is the first problem known to be NP-complete (the Cook-Levin theorem).

We consider the optimization version of SAT, which asks for an assignment under which the number of satisfied clauses is maximized.

Problem (MAX-SAT)
Given a conjunctive normal form (CNF) formula $\phi$ of $m$ clauses defined on $n$ boolean variables $x_1, x_2, \ldots, x_n$, find a truth assignment to the boolean variables that maximizes the number of satisfied clauses.

The Probabilistic Method

A straightforward way to solve Max-SAT is to uniformly and independently assign each variable a random truth value. The following theorem is proved by the probabilistic method.

Theorem
For any set of $m$ clauses, there is a truth assignment that satisfies at least $\frac{m}{2}$ clauses.
Proof.
For each variable, independently assign a random value in $\{\text{true},\text{false}\}$ with equal probability. For the $i$-th clause, let $X_i$ be the random variable which indicates whether the $i$-th clause is satisfied. Suppose that there are $k$ literals in the clause. The probability that the clause is satisfied is

$\Pr[X_i=1]\ge 1-2^{-k}\ge\frac{1}{2}$.

Let $X=\sum_{i=1}^m X_i$ be the number of satisfied clauses. By the linearity of expectation,

$\mathbb{E}[X]=\sum_{i=1}^m\mathbb{E}[X_i]\ge\frac{m}{2}$.

Therefore, there exists an assignment such that at least $\frac{m}{2}$ clauses are satisfied.

Note that this gives a randomized algorithm which returns a truth assignment satisfying at least $\frac{m}{2}$ clauses in expectation. There are $m$ clauses in total, thus the optimal solution satisfies at most $m$ clauses, which means that this simple randomized algorithm is a $\frac{1}{2}$-approximation algorithm for the MAX-SAT problem.
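To make the algorithm concrete, here is a minimal Python sketch of this random-assignment procedure. The clause encoding (signed integers in the DIMACS style) and the function names are our own choices, not part of the original notes.

import random

def count_satisfied(cnf, assignment):
    # cnf: list of clauses; each clause is a list of nonzero integers,
    # where i stands for x_i and -i stands for the negation of x_i.
    # assignment: dict mapping each variable index to True/False.
    return sum(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in cnf
    )

def random_max_sat(cnf, n):
    # Assign each of the n variables True/False uniformly and
    # independently; by the theorem above, the expected number of
    # satisfied clauses is at least m/2.
    assignment = {i: random.random() < 0.5 for i in range(1, n + 1)}
    return assignment, count_satisfied(cnf, assignment)

# Example: (x1 OR NOT x2) AND (NOT x1 OR x3) AND (x2 OR x3)
cnf = [[1, -2], [-1, 3], [2, 3]]
assignment, satisfied = random_max_sat(cnf, n=3)
print(satisfied, "of", len(cnf), "clauses satisfied")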

LP Relaxation + Randomized Rounding

For a clause $C_j$, let $S_j^+$ be the set of indices of the variables that appear in the uncomplemented form in clause $C_j$, and let $S_j^-$ be the set of indices of the variables that appear in the complemented form in clause $C_j$. The Max-SAT problem can be formulated as the following integer linear programming:

maximize $\sum_{j=1}^m z_j$
subject to $\sum_{i\in S_j^+}y_i+\sum_{i\in S_j^-}(1-y_i)\ge z_j$, for every $1\le j\le m$,
$y_i\in\{0,1\}$, for every $1\le i\le n$,
$z_j\in\{0,1\}$, for every $1\le j\le m$.

Each $y_i$ in the programming indicates the truth assignment to the variable $x_i$, and each $z_j$ indicates whether the clause $C_j$ is satisfied. The inequalities ensure that a clause is deemed to be true only if at least one of the literals in the clause is assigned the value 1.

The integer linear programming is relaxed to the following linear programming:

maximize $\sum_{j=1}^m z_j$
subject to $\sum_{i\in S_j^+}y_i+\sum_{i\in S_j^-}(1-y_i)\ge z_j$, for every $1\le j\le m$,
$0\le y_i\le 1$, for every $1\le i\le n$,
$0\le z_j\le 1$, for every $1\le j\le m$.

Let $\hat{y}_i$ and $\hat{z}_j$ be the fractional optimal solutions to the above linear programming. Clearly, $\sum_{j=1}^m\hat{z}_j$ is an upper bound on the optimal number of satisfied clauses, i.e. we have

$\mathrm{OPT}\le\sum_{j=1}^m\hat{z}_j$.

We apply a very natural randomized rounding scheme. For each $1\le i\le n$, independently let

$y_i=\begin{cases}1 & \text{with probability }\hat{y}_i,\\ 0 & \text{with probability }1-\hat{y}_i.\end{cases}$

Correspondingly, each $x_i$ is assigned to TRUE independently with probability $\hat{y}_i$.
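As an illustration, the following Python sketch solves the LP relaxation with scipy.optimize.linprog and then applies this rounding scheme. The clause encoding and helper names are our own, and a real implementation would use whatever LP solver is at hand.

import random
import numpy as np
from scipy.optimize import linprog

def lp_round_max_sat(cnf, n):
    # LP variables: y_1..y_n (truth values) followed by z_1..z_m
    # (clause indicators), all bounded to [0, 1].
    m = len(cnf)
    c = np.concatenate([np.zeros(n), -np.ones(m)])  # maximize sum z_j
    A = np.zeros((m, n + m))
    b = np.zeros(m)
    for j, clause in enumerate(cnf):
        # Constraint: sum_{i in S+} y_i + sum_{i in S-} (1 - y_i) >= z_j,
        # rewritten as z_j - sum_{S+} y_i + sum_{S-} y_i <= |S-|.
        A[j, n + j] = 1.0
        for lit in clause:
            i = abs(lit) - 1
            if lit > 0:
                A[j, i] -= 1.0
            else:
                A[j, i] += 1.0
                b[j] += 1.0
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, 1)] * (n + m))
    y_hat = res.x[:n]
    # Randomized rounding: x_i is TRUE with probability y_hat_i.
    return {i + 1: random.random() < y_hat[i] for i in range(n)}

cnf = [[1, -2], [-1, 3], [2, 3]]
print(lp_round_max_sat(cnf, n=3))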

Lemma
Let $C_j$ be a clause with $k$ literals. The probability that it is satisfied by randomized rounding is at least
$\left(1-\left(1-\frac{1}{k}\right)^k\right)\hat{z}_j$.
Proof.
Without loss of generality, we assume that all $k$ variables appear in $C_j$ in the uncomplemented form, and we assume that

$C_j=x_1\vee x_2\vee\cdots\vee x_k$.

The complemented cases are symmetric.

Clause $C_j$ remains unsatisfied by randomized rounding only if every one of $x_1, x_2, \ldots, x_k$ is assigned to FALSE, which corresponds to that every one of $y_1, y_2, \ldots, y_k$ is rounded to 0. This event occurs with probability $\prod_{i=1}^k(1-\hat{y}_i)$. Therefore, the clause $C_j$ is satisfied by the randomized rounding with probability

$1-\prod_{i=1}^k(1-\hat{y}_i)$.

By the linear programming constraints,

$\hat{y}_1+\hat{y}_2+\cdots+\hat{y}_k\ge\hat{z}_j$.

Then the value of $1-\prod_{i=1}^k(1-\hat{y}_i)$ is minimized when all $\hat{y}_i$ are equal and $\hat{y}_i=\frac{\hat{z}_j}{k}$. Thus, the probability that $C_j$ is satisfied is

$1-\prod_{i=1}^k(1-\hat{y}_i)\ge 1-\left(1-\frac{\hat{z}_j}{k}\right)^k\ge\left(1-\left(1-\frac{1}{k}\right)^k\right)\hat{z}_j$,

where the last inequality is due to the concavity of the function $f(z)=1-\left(1-\frac{z}{k}\right)^k$ of the variable $z$: since $f(0)=0$ and $f(1)=1-\left(1-\frac{1}{k}\right)^k$, concavity gives $f(z)\ge z\cdot f(1)$ for all $z\in[0,1]$.

For any $k\ge 1$, it holds that $1-\left(1-\frac{1}{k}\right)^k>1-\frac{1}{e}$. Therefore, by the linearity of expectation, the expected number of clauses satisfied by the randomized rounding is at least

$\sum_{j=1}^m\left(1-\frac{1}{e}\right)\hat{z}_j=\left(1-\frac{1}{e}\right)\sum_{j=1}^m\hat{z}_j\ge\left(1-\frac{1}{e}\right)\cdot\mathrm{OPT}$.

The inequality is due to the fact that $\hat{y}_i$ and $\hat{z}_j$ are the optimal fractional solutions to the relaxed LP, whose value is no worse than that of the optimal integral solution.

Choose a better solution

For any instance of Max-SAT, let $N_1$ be the expected number of satisfied clauses when each variable is independently set to TRUE with probability $\frac{1}{2}$; and let $N_2$ be the expected number of satisfied clauses when we use the linear programming followed by randomized rounding.

We will show that on any instance of Max-SAT, one of the two algorithms is a $\frac{3}{4}$-approximation algorithm.

Theorem
$\max\{N_1,N_2\}\ge\frac{3}{4}\sum_{j=1}^m\hat{z}_j$.
Proof.
It suffices to show that $\frac{N_1+N_2}{2}\ge\frac{3}{4}\sum_{j=1}^m\hat{z}_j$. Letting $S_k$ denote the set of clauses that contain $k$ literals, we know that

$N_1=\sum_k\sum_{C_j\in S_k}\left(1-2^{-k}\right)\ge\sum_k\sum_{C_j\in S_k}\left(1-2^{-k}\right)\hat{z}_j$.

By the analysis of randomized rounding,

$N_2\ge\sum_k\sum_{C_j\in S_k}\left(1-\left(1-\frac{1}{k}\right)^k\right)\hat{z}_j$.

Thus

$\frac{N_1+N_2}{2}\ge\sum_k\sum_{C_j\in S_k}\frac{\left(1-2^{-k}\right)+\left(1-\left(1-\frac{1}{k}\right)^k\right)}{2}\hat{z}_j$.

An easy calculation shows that $\frac{\left(1-2^{-k}\right)+\left(1-\left(1-\frac{1}{k}\right)^k\right)}{2}\ge\frac{3}{4}$ for any $k$, so that we have

$\frac{N_1+N_2}{2}\ge\frac{3}{4}\sum_{j=1}^m\hat{z}_j$.
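A quick numeric check of that last calculation, as a Python sketch (the bound itself is what the notes assert):

# For every clause size k, the average of the two guarantees is at
# least 3/4, with equality exactly at k = 1 and k = 2.
for k in range(1, 21):
    avg = ((1 - 2 ** -k) + (1 - (1 - 1 / k) ** k)) / 2
    assert avg >= 0.75 - 1e-12
    print(k, round(avg, 4))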

Conditional Probability Method

{To be added}

MAX-Cut

Set Balancing

Lovász Local Lemma

Consider a set of "bad" events $A_1,A_2,\ldots,A_n$. Suppose that $\Pr[A_i]\le p$ for all $1\le i\le n$. We want to show that there is a situation in which none of the bad events occurs. Due to the probabilistic method, we need to prove that

$\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]>0$.

Case 1: mutually independent events.

If all the bad events are mutually independent, then

$\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]=\prod_{i=1}^n\left(1-\Pr[A_i]\right)\ge(1-p)^n>0$

for any $p<1$.

Case 2: arbitrarily dependent events.

On the other hand, if we put no assumption on the dependencies between the events, then by the union bound (which holds unconditionally),

$\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]=1-\Pr\left[\bigvee_{i=1}^n A_i\right]\ge 1-np$,

which is not an interesting bound for $p\ge\frac{1}{n}$. We cannot improve this bound without further information regarding the dependencies between the events.


We would like to know what is going on between the two extreme cases: mutually independent events, and arbitrarily dependent events. The Lovász local lemma provides such a tool.

The local lemma is a powerful tool for showing the possibility of rare events under limited dependencies. The structure of dependencies between a set of events is described by a dependency graph.

Definition
Let $A_1,A_2,\ldots,A_n$ be a set of events. A graph $D=(V,E)$ on the set of vertices $V=\{1,2,\ldots,n\}$ is called a dependency graph for the events $A_1,A_2,\ldots,A_n$ if, for each $i$, $1\le i\le n$, the event $A_i$ is mutually independent of all the events $\{A_j\mid (i,j)\notin E\}$.
Example
Let $X_1,X_2,\ldots,X_m$ be a set of mutually independent random variables. Each event $A_i$ is a predicate defined on a number of variables among $X_1,X_2,\ldots,X_m$. Let $v(A_i)$ be the unique smallest set of variables which determine $A_i$. The dependency graph $D=(V,E)$ is defined by

$(i,j)\in E$ iff $v(A_i)\cap v(A_j)\neq\emptyset$.
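For concreteness, here is a small Python sketch that builds this dependency graph from the sets $v(A_i)$; the representation of events as sets of variable indices is our own choice.

def dependency_graph(variable_sets):
    # variable_sets[i] is v(A_i), the smallest set of underlying
    # independent variables determining event A_i.  Vertices i and j
    # are adjacent iff v(A_i) and v(A_j) intersect.
    n = len(variable_sets)
    return {(i, j)
            for i in range(n)
            for j in range(i + 1, n)
            if variable_sets[i] & variable_sets[j]}

# Three events: A_0 depends on {X1, X2}, A_1 on {X2, X3}, and A_2 on
# {X4, X5}; only A_0 and A_1 share a variable.
print(dependency_graph([{1, 2}, {2, 3}, {4, 5}]))  # {(0, 1)}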

The following lemma, known as the Lovász local lemma, first proved by Erdős and Lovász in 1975, is an extremely powerful tool, as it supplies a way for dealing with rare events.

Lovász Local Lemma (symmetric case)
Let $A_1,A_2,\ldots,A_n$ be a set of events, and assume that the following hold:
  1. for all $1\le i\le n$, $\Pr[A_i]\le p$;
  2. the maximum degree of the dependency graph for the events $A_1,A_2,\ldots,A_n$ is $d$, and
$ep(d+1)\le 1$.
Then
$\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]>0$.


Non-constructive Proof of LLL

We will prove a general version of the local lemma, where the events are not symmetric. This generalization is due to Spencer.

Lovász Local Lemma (general case)
Let $D=(V,E)$ be the dependency graph of events $A_1,A_2,\ldots,A_n$. Suppose there exist real numbers $x_1,x_2,\ldots,x_n$ such that $0\le x_i<1$ and, for all $1\le i\le n$,
$\Pr[A_i]\le x_i\prod_{(i,j)\in E}(1-x_j)$.
Then
$\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]\ge\prod_{i=1}^n(1-x_i)$.
Proof.

We can use the following probability identity to compute the probability of the intersection of events:

Lemma 1
$\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]=\prod_{i=1}^n\Pr\left[\overline{A_i}\,\Big|\,\bigwedge_{j=1}^{i-1}\overline{A_j}\right]$.
Proof.

By definition of conditional probability,

$\Pr\left[\overline{A_n}\,\Big|\,\bigwedge_{i=1}^{n-1}\overline{A_i}\right]=\frac{\Pr\left[\bigwedge_{i=1}^{n}\overline{A_i}\right]}{\Pr\left[\bigwedge_{i=1}^{n-1}\overline{A_i}\right]}$,

so we have

$\Pr\left[\bigwedge_{i=1}^{n}\overline{A_i}\right]=\Pr\left[\bigwedge_{i=1}^{n-1}\overline{A_i}\right]\cdot\Pr\left[\overline{A_n}\,\Big|\,\bigwedge_{i=1}^{n-1}\overline{A_i}\right]$.

The lemma is proved by recursively applying this equation.

Next we prove by induction on $m$ that for any set of $m$ events $A_{i_1},A_{i_2},\ldots,A_{i_m}$,

$\Pr\left[A_{i_1}\,\Big|\,\bigwedge_{j=2}^{m}\overline{A_{i_j}}\right]\le x_{i_1}$.

The local lemma is a direct consequence of this by applying Lemma 1.

For $m=1$, this is obvious, since $\Pr[A_{i_1}]\le x_{i_1}\prod_{(i_1,j)\in E}(1-x_j)\le x_{i_1}$. For general $m$, assume without loss of generality that $i_2,\ldots,i_k$ are the indices among $i_2,\ldots,i_m$ that are adjacent to $i_1$ in the dependency graph, so that $A_{i_1}$ is mutually independent of $A_{i_{k+1}},\ldots,A_{i_m}$. It holds that

$\Pr\left[A_{i_1}\,\Big|\,\bigwedge_{j=2}^{m}\overline{A_{i_j}}\right]=\frac{\Pr\left[A_{i_1}\wedge\bigwedge_{j=2}^{k}\overline{A_{i_j}}\,\Big|\,\bigwedge_{j=k+1}^{m}\overline{A_{i_j}}\right]}{\Pr\left[\bigwedge_{j=2}^{k}\overline{A_{i_j}}\,\Big|\,\bigwedge_{j=k+1}^{m}\overline{A_{i_j}}\right]}$,

which is due to the basic conditional probability identity

$\Pr[A\mid B\wedge C]=\frac{\Pr[A\wedge B\mid C]}{\Pr[B\mid C]}$.

We bound the numerator:

$\Pr\left[A_{i_1}\wedge\bigwedge_{j=2}^{k}\overline{A_{i_j}}\,\Big|\,\bigwedge_{j=k+1}^{m}\overline{A_{i_j}}\right]\le\Pr\left[A_{i_1}\,\Big|\,\bigwedge_{j=k+1}^{m}\overline{A_{i_j}}\right]=\Pr[A_{i_1}]\le x_{i_1}\prod_{(i_1,j)\in E}(1-x_j)$.

The equation is due to the independence between $A_{i_1}$ and $A_{i_{k+1}},\ldots,A_{i_m}$.

The denominator can be expanded using Lemma 1 as

$\Pr\left[\bigwedge_{j=2}^{k}\overline{A_{i_j}}\,\Big|\,\bigwedge_{j=k+1}^{m}\overline{A_{i_j}}\right]=\prod_{j=2}^{k}\Pr\left[\overline{A_{i_j}}\,\Big|\,\bigwedge_{\ell=j+1}^{m}\overline{A_{i_\ell}}\right]$,

which by the induction hypothesis is at least

$\prod_{j=2}^{k}\left(1-x_{i_j}\right)\ge\prod_{(i_1,j)\in E}(1-x_j)$,

where $E$ is the edge set of the dependency graph.

Therefore,

$\Pr\left[A_{i_1}\,\Big|\,\bigwedge_{j=2}^{m}\overline{A_{i_j}}\right]\le\frac{x_{i_1}\prod_{(i_1,j)\in E}(1-x_j)}{\prod_{(i_1,j)\in E}(1-x_j)}=x_{i_1}$.

Applying Lemma 1,

$\Pr\left[\bigwedge_{i=1}^{n}\overline{A_i}\right]=\prod_{i=1}^{n}\Pr\left[\overline{A_i}\,\Big|\,\bigwedge_{j=1}^{i-1}\overline{A_j}\right]=\prod_{i=1}^{n}\left(1-\Pr\left[A_i\,\Big|\,\bigwedge_{j=1}^{i-1}\overline{A_j}\right]\right)\ge\prod_{i=1}^{n}(1-x_i)$.

To prove the symmetric case, let $x_i=\frac{1}{d+1}$ for all $1\le i\le n$. Note that $\left(1-\frac{1}{d+1}\right)^d>\frac{1}{e}$.

If the following conditions are satisfied:

  1. for all $1\le i\le n$, $\Pr[A_i]\le p$;
  2. $ep(d+1)\le 1$;

then for all $1\le i\le n$,

$\Pr[A_i]\le p\le\frac{1}{e(d+1)}\le\frac{1}{d+1}\left(1-\frac{1}{d+1}\right)^d\le x_i\prod_{(i,j)\in E}(1-x_j)$.

Due to the local lemma for the general case, this implies that

$\Pr\left[\bigwedge_{i=1}^{n}\overline{A_i}\right]\ge\prod_{i=1}^{n}(1-x_i)=\left(1-\frac{1}{d+1}\right)^n>0$.

This gives the symmetric version of the local lemma.

Constructive Proof of LLL

We consider a restrictive case.

Let $X_1,X_2,\ldots,X_m$ be a set of mutually independent random variables which assume boolean values. Each event $A_i$ is an AND of at most $k$ literals ($X_j$ or $\neg X_j$). Let $v(A_i)$ be the set of the variables that $A_i$ depends on. The probability that none of the bad events occurs is

$\Pr\left[\bigwedge_{i=1}^{n}\overline{A_i}\right]$.

In this particular model, the dependency graph $D=(V,E)$ is defined by $(i,j)\in E$ iff $v(A_i)\cap v(A_j)\neq\emptyset$.

Observe that $\overline{A_i}$ is a clause (an OR of literals). Thus, $\bigwedge_{i=1}^{n}\overline{A_i}$ is a $k$-CNF, a CNF in which each clause depends on $k$ variables. The probability

$\Pr\left[\bigwedge_{i=1}^{n}\overline{A_i}\right]>0$

means that the $k$-CNF $\bigwedge_{i=1}^{n}\overline{A_i}$ is satisfiable.

The satisfiability of $k$-CNFs is a hard problem. In particular, 3SAT (the satisfiability of 3-CNFs) is the first known NP-complete problem (the Cook-Levin theorem). Given the current suspicion that NP$\neq$P, we do not expect to solve this problem efficiently in general.

However, the condition of the Lovász local lemma has an extra assumption on the degree of the dependency graph. In our model, this means that each clause shares variables with at most $d$ other clauses. We call a $k$-CNF with this property a $k$-CNF with bounded degree $d$.

Therefore, proving the Lovász local lemma for the restricted form of events described above can be reduced to the following problem:

Problem
Find a condition on $k$ and $d$ such that any $k$-CNF with bounded degree $d$ is satisfiable.

In 2009, Moser came up with the following procedure solving the problem. He later generalized the procedure to general forms of events. This not only gives a beautiful constructive proof of the Lovász local lemma, but also provides an efficient randomized algorithm for finding a satisfying assignment for a number of events with bounded dependencies.

Let $\phi$ be a $k$-CNF of $n$ clauses with bounded degree $d$, defined on the variables $x_1,\ldots,x_m$. The following procedure finds a satisfying assignment for $\phi$.

Solve($\phi$)
Pick a random assignment of $x_1,\ldots,x_m$.
While there is an unsatisfied clause $C$ in $\phi$
Fix($C$).

The sub-routine Fix is defined as follows:

Fix($C$)
Replace the variables in $v(C)$ with new random values.
While there is an unsatisfied clause $D$ with $v(C)\cap v(D)\neq\emptyset$
Fix($D$).

The procedure looks very simple. It just recursively fixes the unsatisfied clauses by randomly replacing the assignments of their variables.
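The following Python sketch is a literal transcription of the two routines (clauses again encoded as lists of signed integers). The transcription is our own; note that it leaves the recursion depth unbounded, and it terminates with high probability only when the degree condition derived below holds.

import random

def solve(cnf, n):
    # cnf: list of clauses, each a list of signed integers
    # (i for x_i, -i for its negation); n: number of variables.
    assignment = {i: random.random() < 0.5 for i in range(1, n + 1)}

    def satisfied(clause):
        return any(assignment[abs(lit)] == (lit > 0) for lit in clause)

    def shares_vars(c, d):
        return {abs(lit) for lit in c} & {abs(lit) for lit in d}

    def fix(c):
        # Replace the variables in v(C) with new random values.
        for v in {abs(lit) for lit in c}:
            assignment[v] = random.random() < 0.5
        while True:
            # Any unsatisfied clause D with v(C) and v(D) intersecting
            # (possibly C itself) gets fixed recursively.
            broken = next((d for d in cnf
                           if shares_vars(c, d) and not satisfied(d)),
                          None)
            if broken is None:
                return
            fix(broken)

    while True:
        broken = next((c for c in cnf if not satisfied(c)), None)
        if broken is None:
            return assignment
        fix(broken)

# A tiny 3-CNF: every clause is satisfied by x1 = x2 = x3 = True.
print(solve([[1, 2, 3], [-1, 2, 3], [1, -2, 3], [1, 2, -3]], n=3))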

We then prove it works.

Number of top-level callings of Fix

In Solve($\phi$), the subroutine Fix($C$) is called. We now upper bound the number of times it is called (not including the recursive calls).

Assume Fix($C$) always terminates.

Observation
Every clause that was satisfied before Fix($C$) was called will remain satisfied after Fix($C$) returns, and the clause $C$ will also be satisfied after Fix($C$) returns.

The observation can be proved by induction on the structure of the recursion. Since there are $n$ clauses, Solve($\phi$) makes at most $n$ calls to Fix.

We then prove that Fix($C$) terminates.

Termination of Fix

The idea of the proof is to reconstruct a random string.

Suppose that during the running of Solve($\phi$), the Fix subroutine is called $t$ times in total (including all the recursive calls).

Let $s$ be the sequence of the random bits used by Solve($\phi$). It is easy to see that the length of $s$ is $|s|=m+tk$, because the initial random assignment of the $m$ variables takes $m$ bits, and each call of Fix takes $k$ bits.

We then reconstruct $s$ in an alternative way.

Recall that Solve($\phi$) calls Fix($C$) at the top level at most $n$ times. Each calling of Fix($C$) defines a recursion tree, rooted at the clause $C$, whose nodes correspond to clauses (not necessarily distinct, since a clause might be fixed several times). Therefore, the entire running history of Solve($\phi$) can be described by at most $n$ recursion trees.

Observation 1
Fix a $\phi$. The recursion trees which capture the total running history of Solve($\phi$) can be encoded in $n\log n+t\left(\log d+O(1)\right)$ bits.

Each root node corresponds to a clause. There are $n$ clauses in $\phi$, so the at most $n$ root nodes can be represented in $n\log n$ bits.

The smart part is how to encode the branches of the tree. Note that Fix($C$) will call Fix($D$) only for clauses $D$ that share variables with $C$. For a $k$-CNF with bounded degree $d$, each clause can share variables with at most $d$ other clauses. Thus, each branch in the recursion tree can be represented in $\log d$ bits. There are $O(1)$ extra bits needed to denote whether the recursion ends. So in total $n\log n+t\left(\log d+O(1)\right)$ bits are sufficient to encode all the recursion trees.

Observation 2
The random sequence $s$ can be encoded in $m+n\log n+t\left(\log d+O(1)\right)$ bits.

With $n\log n+t\left(\log d+O(1)\right)$ bits, the structure of all the recursion trees can be encoded. With $m$ extra bits, the final assignment of the $m$ variables is stored.

We then observe that with this information, the sequence $s$ of random bits can be reconstructed backwards from the final assignment.

The key step is that a clause is only fixed when it is unsatisfied (obvious), and an unsatisfied clause must have exactly one assignment of its variables (a clause is an OR of literals, thus it has exactly one unsatisfying assignment). Thus, each node in the recursion tree tells us the $k$ random bits of the random sequence used in the call of Fix corresponding to that node. Therefore, $s$ can be reconstructed from the final assignment plus at most $n$ recursion trees, which can be encoded in at most $m+n\log n+t\left(\log d+O(1)\right)$ bits.
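In code, the key step looks like this (a sketch; the surrounding encoding and decoding machinery is omitted): given a clause that was unsatisfied at the moment Fix was called on it, the values its variables held at that moment are completely forced.

def falsifying_assignment(clause):
    # A clause is an OR of literals, so it is unsatisfied under exactly
    # one assignment of its variables: the one making every literal
    # false.  Walking the recursion trees backwards from the final
    # assignment, this lets the decoder recover, node by node, the k
    # random bits consumed by each call of Fix.
    return {abs(lit): (lit < 0) for lit in clause}

# x1 OR (NOT x2) OR x3 is falsified only by x1=False, x2=True, x3=False.
print(falsifying_assignment([1, -2, 3]))  # {1: False, 2: True, 3: False}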

The following theorem lies at the heart of Kolmogorov complexity. The theorem states that a random sequence is incompressible.

Theorem (Kolmogorov)
For any encoding scheme, with high probability, a random sequence $s$ is encoded in at least $|s|$ bits.

Applying the theorem, we have that with high probability,

$m+tk\le m+n\log n+t\left(\log d+O(1)\right)$.

Therefore,

$t\left(k-\log d-O(1)\right)\le n\log n$.

In order to bound $t$, we need

$k-\log d-O(1)>0$,

which holds when $d<2^{k-\alpha}$ for some constant $\alpha>0$. In fact, in this case $t=O(n\log n)$, so the running time of the procedure is bounded by a polynomial!

Back to the local lemma

We showed that for $d<2^{k-\alpha}$, where $\alpha$ is some constant, any $k$-CNF with bounded degree $d$ is satisfiable, and a satisfying assignment can be found in polynomial time with high probability. Now we interpret this in the language of the local lemma.

Recall the symmetric version of the local lemma:

Theorem (The local lemma: symmetric case)
Let $A_1,A_2,\ldots,A_n$ be a set of events, and assume that the following hold:
  1. for all $1\le i\le n$, $\Pr[A_i]\le p$;
  2. the maximum degree of the dependency graph for the events $A_1,A_2,\ldots,A_n$ is $d$, and
$ep(d+1)\le 1$.
Then
$\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]>0$.

Suppose the underlying probability space is a number of mutually independent uniform random boolean variables, and each event $A_i$ is an AND of $k$ literals (so that each $\overline{A_i}$ is a clause on $k$ variables). Then,

$p=2^{-k}$,

thus, the condition $ep(d+1)\le 1$ means that

$d\le\frac{2^k}{e}-1=O\left(2^k\right)$,

which means that Moser's procedure, which works for $d<2^{k-\alpha}$, is asymptotically optimal on the degree of dependency.