Randomized Algorithms (Spring 2010)/Approximate counting, linear programming


Counting Problems

Complexity model

Recall the class NP of decision problems (the problems with "yes" or "no" answers). Formally, denoting by $\{0,1\}^*$ the set of all boolean strings of any length, a decision problem is a function $f:\{0,1\}^*\to\{0,1\}$.

Definition (NP)
A function $f:\{0,1\}^*\to\{0,1\}$ is in NP if there exist a polynomial $p$ and a poly-time algorithm $A$ with boolean output such that for every $x\in\{0,1\}^*$,
$f(x)=1\iff\exists y\in\{0,1\}^{p(|x|)}$ such that $A(x,y)=1$.

Intuitively, the NP class contains all the decision problems such that the answer is "yes" if and only if there exists a certificate which can be verified in poly-time.

In many contexts, we are interested not just in the existence of certificate but actually in counting the number of certificates. This leads to the definition of the class #P (pronounced "sharp p"). Only now the output of the function is a natural number.

Definition (#P)
A function $f:\{0,1\}^*\to\mathbb{N}$ is in #P if there exist a polynomial $p$ and a poly-time algorithm $A$ with boolean output such that for every $x\in\{0,1\}^*$,
$f(x)=\left|\left\{y\in\{0,1\}^{p(|x|)}\mid A(x,y)=1\right\}\right|$.

You may notice the similarity between the two definitions. The difference is that now $f(x)$ does not just indicate the existence of a certificate, but gives the actual number of certificates.

Why should we care about the number of "certificates" at all? In combinatorics, a counting problem is usually formulated as counting the number of combinatorial objects with some particular structure, such as the number of trees, or Latin squares. The #P class contains the counting problems whose "structure" requirements are easy to check.

Examples of problems in #P
#cycles: given as input a graph $G$, count the number of cycles in $G$.
#SAT: given as input a boolean formula $\phi$, count the number of satisfying assignments for $\phi$.

The second example shows that #P may be harder than NP, because we know that SAT (given a boolean formula, decide the existence of satisfying assignments) is NP-complete, and its counting version #SAT now asks for the exact number of satisfying assignments.

The class FP contains all the functions computable by poly-time algorithms. The classes FP and #P are the analogs of P and NP for counting problems.

A reduction from a problem $f$ to a problem $g$ is a mapping $\varphi$ which maps instances of $f$ to instances of $g$ such that for any instance $x$ of $f$,

$f(x)=g(\varphi(x))$.

In other words, $\varphi$ "reduces" the task of solving $f$ to the task of solving $g$. A problem $f$ is said to be poly-time reducible to the problem $g$, if there exists a reduction $\varphi$ from $f$ to $g$ and $\varphi$ is poly-time computable.

A problem $f$ is #P-hard if every problem in #P is poly-time reducible to $f$. A problem $f$ is #P-complete if $f$ is in #P and $f$ is #P-hard. That is to say, #P-complete problems are the "hardest" problems in #P.

Generally, #P-complete problems are very hard, because if we can solve any of these problems in poly-time, then #P=FP, which implies that NP=P. On the other hand, we do not know whether NP=P could imply that #P=FP.

FPRAS

Definition (FPRAS)
A polynomial randomized approximation scheme (PRAS) for a problem $f$ is a randomized algorithm that takes an input instance $x$ and a real number $\epsilon>0$, and in time polynomial in $n=|x|$ returns $\hat{f}$ such that
$\Pr\left[(1-\epsilon)f(x)\le\hat{f}\le(1+\epsilon)f(x)\right]\ge\frac{3}{4}$.
A fully polynomial randomized approximation scheme (FPRAS) is a PRAS whose running time is polynomial in both $n$ and $\frac{1}{\epsilon}$.

The constant $\frac{3}{4}$ in the definition can be replaced by any constant in the range $\left(\frac{1}{2},1\right)$ without changing the definition. In fact, it is usually more convenient to parameterize the FPRAS by the approximation error $\epsilon$ and the probability error $\delta$.

Definition ($(\epsilon,\delta)$-FPRAS)
An $(\epsilon,\delta)$-FPRAS for a problem $f$ is an FPRAS with
$\Pr\left[(1-\epsilon)f(x)\le\hat{f}\le(1+\epsilon)f(x)\right]\ge 1-\delta$,
running in time polynomial of $n$, $\frac{1}{\epsilon}$, and $\log\frac{1}{\delta}$.

Approximate Counting

Let us consider the following abstract problem.

Let $U$ be a finite set of known size, and let $G\subseteq U$. We want to compute the size of $G$, namely $|G|$.

We assume two devices:

  • A uniform sampler $\mathcal{U}$, which uniformly and independently samples a member of $U$ upon each calling.
  • A membership oracle of $G$, denoted $\mathcal{O}$. Given as the input an $x\in U$, $\mathcal{O}(x)$ indicates whether or not $x$ is a member of $G$.

Equipped with $\mathcal{U}$ and $\mathcal{O}$, we can have the following Monte Carlo algorithm:

  • Choose $N$ independent samples from $U$ by the uniform sampler $\mathcal{U}$, represented by the random variables $X_1,X_2,\ldots,X_N$.
  • Let $Y_i$ be the indicator random variable defined as $Y_i=\mathcal{O}(X_i)$, namely, $Y_i$ indicates whether $X_i\in G$.
  • Define the estimator random variable
$Z=\frac{|U|}{N}\sum_{i=1}^N Y_i$.

It is easy to see that $\mathbf{E}[Z]=|G|$ and we might hope that with high probability the value of $Z$ is close to $|G|$. Formally, $Z$ is called an $\epsilon$-approximation of $|G|$ if
$(1-\epsilon)|G|\le Z\le(1+\epsilon)|G|$.

The following theorem states that the probabilistic accuracy of the estimation depends on the number of samples and the ratio between $|G|$ and $|U|$.

Theorem (estimator theorem)
Let $\alpha=\frac{|G|}{|U|}$. Then the Monte Carlo method yields an $\epsilon$-approximation to $|G|$ with probability at least $1-\delta$ provided
$N\ge\frac{4}{\epsilon^2\alpha}\ln\frac{2}{\delta}$.
Proof.
Use the Chernoff bound.
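The estimator can be sketched in a few lines of Python (a minimal illustration; the function name `monte_carlo_size` and the assumption that a lower bound `alpha_lower` on $\alpha$ is known in advance are ours):

```python
import math
import random

def monte_carlo_size(universe_size, sample, member, eps, delta, alpha_lower):
    # Sample size from the estimator theorem: N >= 4/(eps^2 * alpha) * ln(2/delta),
    # where alpha_lower is a known lower bound on alpha = |G|/|U|.
    n_samples = math.ceil(4.0 / (eps ** 2 * alpha_lower) * math.log(2.0 / delta))
    # Z = |U|/N * sum of the indicators Y_i = O(X_i)
    hits = sum(1 for _ in range(n_samples) if member(sample()))
    return universe_size * hits / n_samples

# Toy usage: estimate |{x in {0,...,9999} : 3 divides x}|, where alpha is about 1/3.
estimate = monte_carlo_size(10000, lambda: random.randrange(10000),
                            lambda x: x % 3 == 0, eps=0.1, delta=0.1,
                            alpha_lower=0.3)
```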

A counting algorithm for the set $G$ has to deal with the following three issues:

  • Implement the membership oracle $\mathcal{O}$. This is usually straightforward, or assumed by the model.
  • Implement the uniform sampler $\mathcal{U}$. As we have seen, this is usually approximated by random walks. How to design the random walk and bound its mixing rate is usually technically challenging, if possible at all.
  • Deal with exponentially small $\alpha=\frac{|G|}{|U|}$. This requires us to cleverly choose the universe $U$. Sometimes this needs some nontrivial ideas.

Counting DNFs

A disjunctive normal form (DNF) formula is a disjunction (OR) of clauses, where each clause is a conjunction (AND) of literals. For example:

$\phi=(x_1\wedge\overline{x_2}\wedge x_3)\vee(x_2\wedge x_4)\vee(\overline{x_1}\wedge x_3)$.

Note the difference from the conjunctive normal forms (CNF).

Given a DNF formula $\phi$ as the input, the problem is to count the number of satisfying assignments of $\phi$. This problem is #P-complete.

Naively applying the Monte Carlo method will not give a good answer. Suppose that there are $n$ variables. Let $U$ be the set of all $2^n$ truth assignments of the $n$ variables. Let $G\subseteq U$ be the set of satisfying assignments for $\phi$. The straightforward use of the Monte Carlo method samples assignments from $U$ and checks how many of them satisfy $\phi$. This algorithm fails when $\frac{|G|}{|U|}$ is exponentially small, namely, when an exponentially small fraction of the assignments satisfy the input DNF formula.


The union of sets problem

We reformulate the DNF counting problem in a more abstract framework, called the union of sets problem.

Let $U$ be a finite universe. We are given $m$ subsets $H_1,H_2,\ldots,H_m\subseteq U$. The following assumptions hold:

  • For all $i$, $|H_i|$ is computable in poly-time.
  • It is possible to sample uniformly from each individual $H_i$.
  • For any $x\in U$, it can be determined in poly-time whether $x\in H_i$.

The goal is to compute the size of $H=\bigcup_{i=1}^m H_i$.

DNF counting can be interpreted in this general framework as follows. Suppose that the DNF formula $\phi$ is defined on $n$ variables, and contains $m$ clauses $C_1,C_2,\ldots,C_m$, where clause $C_i$ has $k_i$ literals. Without loss of generality, we assume that in each clause, each variable appears at most once.

  • $U$ is the set of all $2^n$ assignments.
  • Each $H_i$ is the set of satisfying assignments for the $i$-th clause $C_i$ of the DNF formula $\phi$. Then the union of sets $H=\bigcup_i H_i$ gives the set of satisfying assignments for $\phi$.
  • Each clause $C_i$ is a conjunction (AND) of $k_i$ literals. It is not hard to see that $|H_i|=2^{n-k_i}$, which is efficiently computable.
  • Sampling from an $H_i$ is simple: we just fix the assignments of the $k_i$ literals of that clause, and sample uniformly and independently the rest of the $n-k_i$ variable assignments.
  • For each assignment $x$, it is easy to check whether it satisfies a clause $C_i$, thus it is easy to determine whether $x\in H_i$.

The coverage algorithm

We now introduce the coverage algorithm for the union of sets problem.

Consider the multiset $\mathcal{U}$ defined by

$\mathcal{U}=H_1\uplus H_2\uplus\cdots\uplus H_m$,

where $\uplus$ denotes the multiset union. It is more convenient to define $\mathcal{U}$ as the set

$\mathcal{U}=\{(x,i)\mid x\in H_i\}$.

For each $x\in H$, there may be more than one instance of $(x,i)$. We can choose a unique representative among the multiple instances for the same $x$, by choosing the $(x,i)$ with the minimum $i$, and form a set $G$.

Formally, $G=\{(x,i)\in\mathcal{U}\mid \forall(x,j)\in\mathcal{U},\ j\ge i\}$. Every $x\in H$ corresponds to a unique $(x,i)\in G$ where $i$ is the smallest among $\{i\mid x\in H_i\}$.

It is obvious that $G\subseteq\mathcal{U}$ and

$|G|=|H|$.

Therefore, estimation of $|H|$ is reduced to estimation of $|G|$ with $G\subseteq\mathcal{U}$. Then $|G|$ can have an $\epsilon$-approximation with probability $1-\delta$ in poly-time, if we can uniformly sample from $\mathcal{U}$ and $\frac{|\mathcal{U}|}{|G|}$ is suitably small.

A uniform sample from $\mathcal{U}$ can be implemented as follows:

  • generate an $i\in\{1,2,\ldots,m\}$ with probability $\frac{|H_i|}{\sum_{j=1}^m|H_j|}$;
  • uniformly sample an $x\in H_i$, and return $(x,i)$.

It is easy to see that this gives a uniform member of $\mathcal{U}$. The above sampling procedure is poly-time because each $|H_i|$ can be computed in poly-time, and sampling uniformly from each $H_i$ is poly-time.

We now only need to lower bound the ratio

$\alpha=\frac{|G|}{|\mathcal{U}|}$.

We claim that

$\alpha\ge\frac{1}{m}$.

It is easy to see this, because each $x\in H$ has at most $m$ instances of $(x,i)$ in $\mathcal{U}$, and we already know that $|G|=|H|$.

Due to the estimator theorem, this needs $\frac{4m}{\epsilon^2}\ln\frac{2}{\delta}$ uniform random samples from $\mathcal{U}$.

This gives the coverage algorithm for the abstract problem of the union of sets. The DNF counting is a special case of it.
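For DNF counting, the whole coverage algorithm fits in a short Python sketch (the clause encoding and the function name `dnf_count` are our choices, and a fixed sample count stands in for the bound from the estimator theorem):

```python
import random

def dnf_count(clauses, n, samples=50000):
    # Each clause is a dict {variable index: required boolean value};
    # the formula is the OR of the clauses, over n variables.
    sizes = [2 ** (n - len(c)) for c in clauses]   # |H_i| = 2^(n - k_i)
    total = sum(sizes)                             # size of the multiset U
    hits = 0
    for _ in range(samples):
        # pick clause i with probability |H_i| / sum_j |H_j| ...
        i = random.choices(range(len(clauses)), weights=sizes)[0]
        # ... and a uniform satisfying assignment x of clause i
        x = [random.random() < 0.5 for _ in range(n)]
        for var, val in clauses[i].items():
            x[var] = val
        # (x, i) is in G iff i is the first clause satisfied by x
        if all(any(x[v] != b for v, b in clauses[j].items()) for j in range(i)):
            hits += 1
    return total * hits / samples                  # |U| * estimate of |G|/|U|
```

For instance, on $\phi=x_0\vee(x_1\wedge x_2)$ over three variables, which has exactly 5 satisfying assignments, the estimate concentrates around 5.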

Permanents and perfect matchings

Let $U=\{u_1,u_2,\ldots,u_n\}$ and $V=\{v_1,v_2,\ldots,v_n\}$. Consider a bipartite graph $G(U,V,E)$. An $M\subseteq E$ is a perfect matching of $G$ if every vertex of $G$ has exactly one edge in $M$ adjacent to it.

Given a bipartite graph $G(U,V,E)$ with $|U|=|V|=n$, we want to count the number of perfect matchings of $G$. This problem can be reduced to computing the permanent of a square matrix.

Definition (permanent)
Let $Q$ be an $n\times n$ matrix. The permanent of the matrix is defined as
$\mathrm{per}(Q)=\sum_{\pi\in S_n}\prod_{i=1}^n Q_{i,\pi(i)}$,
where $S_n$ is the symmetric group of permutations of size $n$.

If we multiply each term of the sum by the sign of the permutation, then it gives us the determinant of the matrix,

$\det(Q)=\sum_{\pi\in S_n}\mathrm{sgn}(\pi)\prod_{i=1}^n Q_{i,\pi(i)}$,

where $\mathrm{sgn}(\pi)$, the sign of a permutation, is either $-1$ or $+1$, according to whether the minimum number of pair-wise interchanges to achieve $\pi$ from the identity permutation is odd or even.

Unlike the determinants, which are computable in poly-time, permanents are hard to compute, as permanents can be used to count the number of perfect matchings in a bipartite graph, which is #P-complete.

A bipartite graph $G(U,V,E)$ with $|U|=|V|=n$ can be represented by an $n\times n$ matrix $Q$ with 0-1 entries as follows:

  • Each row of $Q$ corresponds to a vertex in $U$ and each column of $Q$ corresponds to a vertex in $V$, with $Q_{ij}=1$ if and only if $u_iv_j\in E$.

Note the subtle difference between the definition of $Q$ and the adjacency matrix.

Each perfect matching $M$ corresponds to a permutation $\pi$ of $[n]$ with $u_iv_{\pi(i)}\in M$ for every $i$, which corresponds to a permutation $\pi\in S_n$ with $\prod_{i=1}^n Q_{i,\pi(i)}=1$. It is then easy to see that $\mathrm{per}(Q)$ gives the number of perfect matchings in the bipartite graph $G$.

It is known that counting the number of perfect matchings in a bipartite graph is #P-hard. Since this problem can be reduced to computing the permanent, the problem of computing permanents is also #P-hard.
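For intuition, the correspondence between permanents and perfect matchings can be checked directly with a brute-force permanent (exponential time, of course; the function name is ours):

```python
from itertools import permutations

def permanent(Q):
    # per(Q) = sum over all permutations pi of prod_i Q[i][pi(i)]
    n = len(Q)
    total = 0
    for pi in permutations(range(n)):
        term = 1
        for i in range(n):
            term *= Q[i][pi[i]]
            if term == 0:       # prune: this pi does not encode a matching
                break
        total += term
    return total
```

For the all-ones $3\times 3$ matrix ($K_{3,3}$) this returns $3!=6$, and for the 0-1 matrix of a 6-cycle viewed as a bipartite graph it returns 2, matching its two perfect matchings.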

Now we show that with randomization, we can approximate the number of perfect matchings in a bipartite graph. In particular, we will give an FPRAS for counting the perfect matchings in a dense bipartite graph.

The Jerrum-Sinclair algorithm

Fix a bipartite graph $G(U,V,E)$ with $|U|=|V|=n$. Let $M_k$ be the set of matchings of size $k$ in $G$, and let $m_k=|M_k|$. Thus, $M_n$ is the set of perfect matchings in $G$, and our goal is to compute $m_n$.

Let $r_k=\frac{m_k}{m_{k-1}}$ for $2\le k\le n$. Then $m_n=m_1\prod_{k=2}^n r_k$, which gives us a recursion to compute the $m_n$, as

$m_n=m_1\prod_{k=2}^n r_k$,

where $m_1$ is the number of matchings of size 1 in the bipartite graph $G$, which is just the number of edges in $G$. Therefore, $m_n$ can be computed once we know $r_k$ for $2\le k\le n$.

Each $r_k$ can be estimated by sampling uniformly from the set $M_k\cup M_{k-1}$. The algorithm for estimating $m_n$ is outlined as:

  1. For each $2\le k\le n$, have an FPRAS for computing the $r_k$ by uniformly sampling sufficiently many members from $M_k\cup M_{k-1}$ as:
  • uniformly sample $N$ matchings from $M_k\cup M_{k-1}$, for some polynomially large $N$;
  • assuming that there are $X$ sampled matchings of size $k$, return $\hat{r}_k$ as $\hat{r}_k=\frac{X}{N-X}$.
  2. Compute $\hat{m}_n$ as $\hat{m}_n=m_1\prod_{k=2}^n\hat{r}_k$, where $m_1=|E|$.
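The outline can be sketched as follows, given a black-box (near-)uniform sampler over $M_k\cup M_{k-1}$; the sampler interface and the fixed sample count are our simplifications, standing in for the random-walk sampler and the sample bounds of the real algorithm:

```python
import random

def estimate_perfect_matchings(n, num_edges, sample_union, samples=20000):
    # m_n = m_1 * prod_{k=2}^n r_k, where r_k = m_k / m_{k-1}.
    # sample_union(k) must return a (near-)uniform matching from M_k ∪ M_{k-1};
    # r_k is estimated as X / (N - X), with X the number of size-k samples.
    estimate = float(num_edges)                 # m_1 = number of edges
    for k in range(2, n + 1):
        big = sum(1 for _ in range(samples) if len(sample_union(k)) == k)
        estimate *= big / (samples - big)       # estimated r_k
    return estimate
```

As a sanity check, for $K_{2,2}$ (four edges, two perfect matchings) an exact sampler over its six matchings of sizes 1 and 2 drives the estimate to about 2.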

There are several issues that we have to deal with in order to have a fully functional FPRAS for counting perfect matchings.

  • By taking the product of the estimates $\hat{r}_k$, the errors for the individual $r_k$'s add up.
  • In order to accurately estimate $r_k$ by sampling from $M_k\cup M_{k-1}$, the ratio $r_k$ should be within the range $\left[\frac{1}{p(n)},p(n)\right]$ for some $p(n)$ within polynomial of $n$.
  • Implement the uniform sampling from $M_k\cup M_{k-1}$.

Estimator for each $r_k$

Assuming that $r_k$ is well-bounded (meaning that $r_k\ge\frac{1}{p(n)}$ for some polynomial $p$), given a uniform sampler from the set $M_k\cup M_{k-1}$, we can directly apply the Monte Carlo method to estimate the ratio $r_k$, and prove the similar estimator theorem. See Exercise 11.7 of the textbook for details.

We need to guarantee that $r_k$ is well-bounded. It is known that this is true for dense bipartite graphs.

Theorem (Broder 1986)
For any bipartite graph $G(U,V,E)$ with $|U|=|V|=n$ and minimum degree at least $\frac{n}{2}$,
$\frac{m_{k-1}}{m_k}\le n^2$ for all $k$.
Sketch of the proof. The idea is to show that in a dense bipartite graph, any matching of size $k-1$ can be modified into a matching of size $k$ with at most two operations of rotation and augmentation, introduced in the last lecture.

Accumulation of errors

The Monte Carlo estimation of each individual $r_k$ leaves some bounded errors (there are two types of errors: the approximation error $\epsilon$ and the probabilistic error $\delta$). Within polynomial running time, we can make these errors sufficiently small, so that the accumulated error resulting from the product of the $n-1$ ratios is bounded. See Exercise 11.8 in the textbook for details.

Near-uniform sampling from $M_k\cup M_{k-1}$

In the last lecture, we have shown that by random walk, we can sample a near-uniform member of $M_n\cup M_{n-1}$ in poly-time. To sample from $M_k\cup M_{k-1}$, we can add additional vertices and edges to the original bipartite graph and sample near-uniformly from $M_n\cup M_{n-1}$ of the modified bipartite graph, which yields a near-uniform sampler from $M_k\cup M_{k-1}$ of the original graph. The detailed method and analysis are in Section 11.3.4 and Exercise 11.11 in the textbook.

Volume estimation

We consider the problem of computing the volume of a given convex body $K$ in $n$ dimensions.

We use $\mathrm{vol}(K)$ to denote the volume of the convex body $K$. Abstractly, the problem is that given as input a convex body $K$ in $n$ dimensions, return the $\mathrm{vol}(K)$.

We should be more specific about the input model. Since we allow an arbitrary convex body as input, it is not even clear how to describe the body. We assume that $K$ is described by means of a membership oracle $\mathcal{O}$, such that for a query of an $n$-dimensional point $x$, $\mathcal{O}(x)$ indicates whether $x\in K$.

For example, the $n$-dimensional convex body defined by the intersection of $m$ halfspaces, which is the set of feasible solutions to a system of $m$ linear constraints, can be described as

$K=\{x\in\mathbb{R}^n\mid Ax\le b\}$,

for some $m\times n$ matrix $A$, and $m$-dimensional vector $b$. For a query of an $x$, the membership oracle just checks whether $Ax\le b$.

For deterministic algorithms, the news for this problem is negative.

Theorem (Bárány-Füredi 1987)
Suppose that a deterministic poly-time algorithm uses the membership oracle for a convex body $K$ in $n$ dimensions, and generates an upper bound $\overline{\mathrm{vol}}(K)$ and a lower bound $\underline{\mathrm{vol}}(K)$ on the volume of $K$. Then, there is a convex body $K$ and a constant $c>0$ such that
$\frac{\overline{\mathrm{vol}}(K)}{\underline{\mathrm{vol}}(K)}\ge\left(\frac{cn}{\log n}\right)^n$.

That is to say, with deterministic algorithms, we cannot even approximate the volume within a wildly loose range.

Dyer, Frieze, and Kannan came up with the idea of estimating the volume of $K$ by sampling near-uniformly from convex sets. They reduce the problem of approximately computing the volume of convex bodies to this sampling problem and thus give the first FPRAS for the volume of convex bodies.

For any convex body $K$, it encloses some $n$-dimensional ball and is also enclosed by another ball with larger radius. We assume that the convex body $K$ encloses the unit ball around the origin, and is enclosed by a larger ball around the origin with polynomially large radius. Formally, we assume that

$B(0,1)\subseteq K\subseteq B(0,n^c)$ for some constant $c$,

where $B(x,r)$ denotes a ball of radius $r$ with $x$ as center, i.e. $B(x,r)=\{y\mid\|y-x\|\le r\}$. We can make this assumption because it was proved by Grötschel-Lovász-Schrijver that for any convex body $K$, there exists a linear transformation which can be found within poly-time such that it transforms $K$ to a convex body that satisfies the assumption, and preserves the ratio of the volumes of the balls to the convex body.

The volumes of the balls are easy to compute. If only the ratio between the volume of the convex body and that of the ball which encloses it is sufficiently large, then we can apply the Monte Carlo method to estimate $\mathrm{vol}(K)$ by uniformly sampling from the ball.

However, in high dimension, the ratio between the volume of a convex body and the volume of the ball which encloses it can be exponentially small. This is caused by the so-called "curse of dimensionality".

Instead of having an outer ball and an inner ball, we define a sequence of balls:

$B_0\subseteq B_1\subseteq\cdots\subseteq B_t$,

where $B_i=B(0,r_i)$ with $r_i=2^{i/n}$, and $t$ is the smallest positive integer such that $r_t\ge n^c$. Therefore, the inner ball is $B_0=B(0,1)$, the outer ball $B_t$ encloses $K$, and $t$ is within polynomial of $n$. In fact, $t=\lceil cn\log_2 n\rceil$.

This sequence of balls naturally defines a sequence of convex bodies by intersections as $K_i=K\cap B_i$. It is obvious that

$B(0,1)=K_0\subseteq K_1\subseteq\cdots\subseteq K_t=K$.

Balls are convex, and since the intersection of convex bodies is still convex, the sequence of the $K_i$ is a sequence of convex bodies.

We have the telescopic product:

$\frac{\mathrm{vol}(K)}{\mathrm{vol}(K_0)}=\prod_{i=1}^{t}\frac{\mathrm{vol}(K_i)}{\mathrm{vol}(K_{i-1})}$.

Therefore, the volume can be computed as

$\mathrm{vol}(K)=\mathrm{vol}(B(0,1))\prod_{i=1}^{t}\frac{\mathrm{vol}(K_i)}{\mathrm{vol}(K_{i-1})}$.

The volume of the unit ball $B(0,1)$ can be precisely computed in poly-time. Each ratio $\frac{\mathrm{vol}(K_{i-1})}{\mathrm{vol}(K_i)}$ is computed by near-uniform sampling from $K_i$, which encloses $K_{i-1}$.

Another observation is that the ratio $\frac{\mathrm{vol}(K_i)}{\mathrm{vol}(K_{i-1})}$ is well-bounded. Recall that $K_i=K\cap B_i$ where $B_i=B(0,2^{i/n})$. It can be proved that in $n$ dimensions, the volume of $K_i$ is at most $2$ times the $\mathrm{vol}(K_{i-1})$, thus the ratio is at most $2$. By the estimator theorem, we can have an FPRAS for the ratio if we can uniformly sample from $K_i$.
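In very low dimension the whole telescoping scheme can be simulated end to end, with rejection sampling standing in for the random walk (the names, the rejection sampler, and the fixed sample count below are all our illustrative scaffolding; rejection sampling is exactly what the random walk avoids in high dimension):

```python
import math
import random

def sample_ball(n, r):
    # uniform point in the ball B(0, r), by rejection from the bounding cube
    while True:
        x = [random.uniform(-r, r) for _ in range(n)]
        if sum(v * v for v in x) <= r * r:
            return x

def estimate_volume(n, in_body, c=1, samples=20000):
    # Assumes B(0,1) ⊆ K ⊆ B(0, n^c) and n >= 2.  Radii r_i = 2^(i/n), so
    # consecutive balls (hence consecutive K_i) have volume ratio at most 2.
    t = math.ceil(c * n * math.log2(n))        # smallest t with 2^(t/n) >= n^c
    radii = [2 ** (i / n) for i in range(t + 1)]

    def in_K(x, i):                            # membership in K_i = K ∩ B_i
        return in_body(x) and sum(v * v for v in x) <= radii[i] ** 2

    vol = math.pi ** (n / 2) / math.gamma(n / 2 + 1)   # vol(K_0) = vol(B(0,1))
    for i in range(1, t + 1):
        hits = 0
        for _ in range(samples):
            x = sample_ball(n, radii[i])
            while not in_K(x, i):              # rejection: uniform over K_i
                x = sample_ball(n, radii[i])
            if in_K(x, i - 1):
                hits += 1
        vol *= samples / hits                  # estimate of vol(K_i)/vol(K_{i-1})
    return vol
```

For example, for the square $[-1.2,1.2]^2$ in the plane, which satisfies $B(0,1)\subseteq K\subseteq B(0,2)$, the estimate concentrates around the true area $5.76$.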


Uniform sampling from an arbitrary convex body is replaced by near-uniform sampling achieved by random walks. In the original work of Dyer-Frieze-Kannan, they consider the random walk over the $n$-dimensional discrete grid points enclosed by $K$, and prove that the walk is rapidly mixing. This gives us the first FPRAS for volume estimation, which runs in time $O^*(n^{23})$, where $O^*(\cdot)$ ignores the polylogarithmic factors.

The time bound was later improved by a series of works, each introducing some new ideas.

algorithm                       complexity      new ingredient(s)
Dyer-Frieze-Kannan 1991         $O^*(n^{23})$   everything
Lovász-Simonovits 1990          $O^*(n^{16})$   localization lemma
Applegate-Kannan 1990           $O^*(n^{10})$   logconcave sampling
Lovász 1990                     $O^*(n^{10})$   ball walk
Dyer-Frieze 1991                $O^*(n^{8})$    better error analysis
Lovász-Simonovits 1993          $O^*(n^{7})$    many improvements
Kannan-Lovász-Simonovits 1997   $O^*(n^{5})$    isotropy, speedy walk
Lovász-Vempala 2003             $O^*(n^{4})$    simulated annealing, hit-and-run

(cited from "Geometric Random Walks: A Survey" by Santosh Vempala.)

The current best upper bound, $O^*(n^4)$, is due to Lovász and Vempala in 2003. It is conjectured that the optimal bound is $O^*(n^2)$.

Linear Programming

Given a function $f:\mathbb{R}^n\to\mathbb{R}$ and a set $\mathcal{F}\subseteq\mathbb{R}^n$, an optimization problem is the problem of finding an $x\in\mathcal{F}$ with the optimal (minimum) $f(x)$. Formally, the problem can be expressed as:

$\begin{array}{ll}\text{minimize} & f(x)\\ \text{subject to} & x\in\mathcal{F}\end{array}$

where $x=(x_1,x_2,\ldots,x_n)$ is a vector of $n$ variables. For the problem of maximizing a function $f$, we just minimize $-f$ instead.

We call the function $f$ the objective function and call any $x\in\mathcal{F}$ a feasible solution of the problem. A feasible solution that minimizes $f$ is called an optimal solution. Our task is to find an optimal solution.

The feasible set $\mathcal{F}$ is usually given by a number of constraints, which are predicates defined on $x$. An $x$ is a feasible solution if it satisfies all the constraints.

A linear programming (LP) problem is an optimization problem with a linear objective function subject to a number of linear constraints. Formally, an LP is a problem that can be expressed in the following form:

$\begin{array}{ll}\text{minimize} & \sum_{j=1}^n c_jx_j\\ \text{subject to} & \sum_{j=1}^n a_{ij}x_j\ge b_i,\quad i=1,2,\ldots,m\end{array}$

where the $a_{ij}$, $b_i$ and $c_j$ are constants and the $x_j$ are variables. For a maximization problem, or constraints given by "$\le$", we can multiply the coefficients by $-1$ and still write the LP in the above form.

We can describe the program in the vector form. Let $x$ be a vector of $n$ variables. Let $A$ be an $m\times n$ matrix of constant entries, $b$ be an $m$-dimensional vector of constant entries, and $c$ be an $n$-dimensional vector of constant entries. Then an LP is expressed as:

$\begin{array}{ll}\text{minimize} & c^{\mathrm{T}}x\\ \text{subject to} & Ax\ge b\end{array}$

We call it the canonical form of linear programming.
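As a concrete illustration of the rewriting into canonical form (the numbers in this example are ours, not from the lecture), a maximization LP with "$\le$" constraints is converted by negating coefficients:

```latex
\begin{array}{ll}
\text{maximize}   & x_1 + 2x_2\\
\text{subject to} & x_1 + x_2 \le 3\\
                  & x_1, x_2 \ge 0
\end{array}
\qquad\Longrightarrow\qquad
\begin{array}{ll}
\text{minimize}   & -x_1 - 2x_2\\
\text{subject to} & -x_1 - x_2 \ge -3\\
                  & x_1 \ge 0,\ x_2 \ge 0
\end{array}
```

Both problems have the same optimal solutions; only the optimal objective values differ in sign.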

The geometry of LPs

For the $n$-dimensional space $\mathbb{R}^n$, a linear constraint $a^{\mathrm{T}}x\ge b$ specifies a halfspace. The feasible set specified by $m$ constraints together is an intersection of $m$ halfspaces, thus, a convex polytope.

The convex polytope is defined by $Ax\ge b$.

The LP can be thought of as follows: given an $n$-dimensional polytope and a linear (affine) function $f(x)=c^{\mathrm{T}}x$, look for a point $x$ in the polytope with the smallest function value $c^{\mathrm{T}}x$.

We can choose a number of constraints in the system $Ax\ge b$, and make them hold with equality. This would define a subspace of $\mathbb{R}^n$. In particular, if we choose $n$ linearly independent constraints to form an $n\times n$ submatrix $A'$ and the corresponding $n$-dimensional vector $b'$, solving $A'x=b'$ would give us exactly one point $x$. If this $x$ is in the polytope, i.e. $x$ satisfies that $Ax\ge b$, we call such an $x$ a vertex of the polytope $Ax\ge b$. In the context of LP, it is also called a basic feasible solution (bfs) of the LP.

A key observation for LP is that whenever the LP has a (finite) optimal solution, there is a basic feasible solution which is optimal, i.e. the optimal solution is attained at a vertex of the polytope.
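The definition of a vertex suggests a (naively exponential) way to enumerate all basic feasible solutions: try every choice of $n$ constraints, solve them as equalities, and keep the feasible solutions. A small self-contained sketch (the function names and the Gaussian-elimination helper are ours):

```python
from itertools import combinations

def solve_square(M, v):
    # Gauss-Jordan elimination with partial pivoting; returns x with Mx = v,
    # or None if the chosen constraints are linearly dependent.
    n = len(v)
    aug = [row[:] + [v[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(aug[r][col]))
        if abs(aug[piv][col]) < 1e-12:
            return None
        aug[col], aug[piv] = aug[piv], aug[col]
        for r in range(n):
            if r != col:
                f = aug[r][col] / aug[col][col]
                aug[r] = [a - f * p for a, p in zip(aug[r], aug[col])]
    return [aug[i][n] / aug[i][i] for i in range(n)]

def vertices(A, b):
    # Basic feasible solutions of {x : Ax >= b}: solve every n-subset of
    # constraints with equality and keep the points satisfying Ax >= b.
    n = len(A[0])
    out = []
    for rows in combinations(range(len(A)), n):
        x = solve_square([A[i] for i in rows], [b[i] for i in rows])
        if x is not None and all(
            sum(a * xi for a, xi in zip(row, x)) >= bi - 1e-9
            for row, bi in zip(A, b)
        ):
            out.append(x)
    return out
```

On the triangle $\{x\ge 0,\ y\ge 0,\ x+y\le 2\}$ this recovers the three vertices $(0,0)$, $(2,0)$, $(0,2)$.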

The simplex algorithms

The simplex algorithm by George Dantzig solves linear programs by moving from vertex to vertex of the convex polytope, each time making some progress towards optimizing the objective function. Eventually the algorithm reaches a vertex which is a local optimum. In a convex set, a locally optimal point is also globally optimal, thus the simplex algorithm returns an optimal solution in finitely many steps.

The problem with the simplex algorithm is that on some bad polytopes, it takes the simplex algorithm exponentially many steps to reach the optimum. The simplex algorithm is actually a class of algorithms defined by various pivoting rules, which describe how to move locally from one vertex to another. The original pivoting rule proposed by Dantzig has an exponentially large worst-case time complexity (though it works very well in practice). To date, it is still unknown whether there exists any deterministic simplex algorithm with sub-exponential worst-case complexity.

People have tried randomized pivoting rules, and some amazing progress has been made:

  • Kalai 1992: there is a randomized simplex algorithm with sub-exponential time complexity.
  • Kelner-Spielman 2006: there is a polynomial time randomized simplex algorithm.

Although we do not know whether there exists a deterministic poly-time simplex algorithm, we do know that LP can be solved by deterministic poly-time algorithms, i.e. LP is in P. The following two algorithms use ideas different from the simplex algorithm, and both run in poly-time:

  • The ellipsoid algorithm.
  • Interior point methods.

Although these algorithms guarantee polynomial time complexity in the worst case, their practical performance is worse than that of the simplex algorithms. However, the ideas of these algorithms can be used to solve more general mathematical programs, such as convex programs.

An LP solver via random walks

We discuss a new algorithm for solving LP which is based on random sampling, introduced by Bertsimas and Vempala in 2004.

The problem of solving an LP can be reduced to the following: given a convex polytope $K$, test whether the polytope is empty. We call this problem the "feasibility test".

Suppose we have an LP:

$\begin{array}{ll}\text{minimize} & c^{\mathrm{T}}x\\ \text{subject to} & Ax\ge b\end{array}$

To see that this LP can be solved by feasibility testing, we treat the bound $c^{\mathrm{T}}x\le\lambda$ for some parameter $\lambda$ as an additional constraint, and define a new polytope $K_\lambda=\{x\mid Ax\ge b,\ c^{\mathrm{T}}x\le\lambda\}$. The smallest $\lambda$ such that the polytope $K_\lambda$ is not empty gives the optimal value of the original LP. We can find this smallest $\lambda$ by binary search if we can efficiently test the emptiness of a convex polytope.


We assume that the convex polytope $K$ is contained in the axis-aligned cube of width $R$ centered at the origin; further, if $K$ is non-empty then it contains a cube of width $r$. The parameter $L$ is equal to $\log_2\frac{R}{r}$.

Algorithm: feasibility test

Input: an $n$-dimensional convex polytope $K$ defined by the system $Ax\ge b$.
Output: a point $x\in K$, or a guarantee that $K$ is empty.

Let $P$ be the axis-aligned cube of side length $R$ and center $z=\mathbf{0}$.
Repeat for $O(nL)$ times do:
If $z\in K$, return $z$.
Pick a constraint violated by $z$, say that $a_i^{\mathrm{T}}z<b_i$, and define the halfspace
$H=\{x\mid a_i^{\mathrm{T}}x\ge a_i^{\mathrm{T}}z\}$.
Set $P=P\cap H$. Uniformly sample $N$ random points $y_1,y_2,\ldots,y_N$ from $P$ and let
$z=\frac{1}{N}\sum_{j=1}^N y_j$.
Report $K$ is empty.

The number of samples $N$ required in each iteration is polynomial in $n$ and $m$, where $m$ is the number of linear constraints.

It is easy to see that at any iteration, $P$ is a convex set, and $K\subseteq P$ for the current convex set $P$.

The idea of the algorithm is based on computing the centroid of the convex set $P$. It is known that if we could compute the exact centroid in each iteration, then the volume of $P$ would drop by a constant factor in each iteration, thus reaching the smallest possible volume of a cube of width $r$ within a polynomial number of iterations. But finding the centroid of a convex polytope is #P-hard.

The idea behind the algorithm is that an approximate centroid can be computed using $N$ random points, and the volume of $P$ drops by a constant factor with high probability in each iteration with this choice of $z$.

The uniform sampling in $P$ is not easy. But since $P$ is also a convex polytope, we can approximate the uniform sampling by near-uniform sampling from $P$ by rapidly mixing random walks. The random walks used here are similar to the ones used in the volume estimation of convex bodies. We could use the basic grid walk, or the more advanced ball walk, or the hit-and-run walk.

With rapid mixing random walks, the resulting LP solver runs in poly-time.
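A toy two-dimensional rendering of the feasibility test (the rejection sampler below stands in for the rapid-mixing random walk, so this sketch is only practical in very low dimension; all names and the iteration/sample counts are our choices):

```python
import random

def feasibility_test(constraints, R, iters=50, samples=200):
    # constraints: list of (a, b) pairs encoding the constraint a·x >= b;
    # the polytope K must lie inside the box [-R, R]^2.
    def dot(a, x):
        return a[0] * x[0] + a[1] * x[1]
    # the current body P, kept as a list of halfspaces (starts as the box)
    P = [((1, 0), -R), ((-1, 0), -R), ((0, 1), -R), ((0, -1), -R)]
    z = (0.0, 0.0)
    for _ in range(iters):
        violated = next((c for c in constraints if dot(c[0], z) < c[1]), None)
        if violated is None:
            return z                      # z is a feasible point of K
        a, _ = violated
        P.append((a, dot(a, z)))          # cut through z; K stays inside P
        pts = []
        while len(pts) < samples:         # rejection sampling stands in for
            y = (random.uniform(-R, R),   # the rapid-mixing random walk
                 random.uniform(-R, R))
            if all(dot(h, t) >= c for h, c in P for t in [y]):
                pts.append(y)
        # move z to the approximate centroid of P
        z = (sum(p[0] for p in pts) / samples, sum(p[1] for p in pts) / samples)
    return None                           # evidence (not proof) that K is empty
```

When the loop exits without finding a point, emptiness is only guaranteed with high probability, matching the Monte Carlo nature of the algorithm.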

The details are omitted. You may check the following paper if interested:

  • Bertsimas and Vempala, “Solving convex programs by random walks.” JACM 2004.