Randomized Algorithms (Spring 2010)/Approximate counting, linear programming

Counting Problems

Complexity model

Recall the class NP of decision problems (the problems with "yes" or "no" answers). Formally, denoting by [math]\displaystyle{ \{0,1\}^* }[/math] the set of all boolean strings of any lengths, a decision problem is a function [math]\displaystyle{ f:\{0,1\}^*\rightarrow\{0,1\} }[/math].

Definition (NP)
A function [math]\displaystyle{ f:\{0,1\}^*\rightarrow\{0,1\} }[/math] is in NP if there exist a polynomial [math]\displaystyle{ p }[/math] and a poly-time algorithm [math]\displaystyle{ V }[/math] with boolean output such that for every [math]\displaystyle{ x\in\{0,1\}^* }[/math],
[math]\displaystyle{ f(x)=1 \Leftrightarrow \exists y\in\{0,1\}^{p(|x|)}, \mbox{such that } V(x,y)=1\, }[/math].

Intuitively, the NP class contains all the decision problems such that the answer is "yes" if and only if there exists a certificate which can be verified in poly-time.

In many contexts, we are interested not just in the existence of a certificate but in counting the number of certificates. This leads to the definition of the class #P (pronounced "sharp P"), in which the output of the function is a natural number rather than a bit.

Definition (#P)
A function [math]\displaystyle{ f:\{0,1\}^*\rightarrow\mathbb{N} }[/math] is in #P if there exist a polynomial [math]\displaystyle{ p }[/math] and a poly-time algorithm [math]\displaystyle{ V }[/math] with boolean output such that for every [math]\displaystyle{ x\in\{0,1\}^* }[/math],
[math]\displaystyle{ f(x)=\left|\left\{y\in\{0,1\}^{p(|x|)}\mid V(x,y)=1\right\}\right| }[/math].

You may notice the similarity between the two definitions. The difference is that now [math]\displaystyle{ f(x) }[/math] does not just indicate the existence of a certificate, but gives the actual number of certificates.

Why should we care about the number of "certificates" at all? In combinatorics, a counting problem is usually formulated as counting the number of combinatorial objects with some particular structure, such as the number of trees, or Latin squares. The #P class contains those counting problems whose "structure" requirement is easy to check.

Examples of problems in #P
#cycles: given as input a graph [math]\displaystyle{ G }[/math], count the number of cycles in [math]\displaystyle{ G }[/math].
#SAT: given as input a boolean formula [math]\displaystyle{ \phi }[/math], count the number of satisfying assignments for [math]\displaystyle{ \phi }[/math].

The second example shows that #P may be harder than NP, because we know that SAT (given a boolean formula, decide the existence of a satisfying assignment) is NP-complete, and its counting version #SAT asks for the exact number of satisfying assignments.

The class FP contains all the functions [math]\displaystyle{ f:\{0,1\}^*\rightarrow\mathbb{N} }[/math] computable by poly-time algorithms. The classes FP and #P are the analogs of P and NP for counting problems.

A reduction from a problem [math]\displaystyle{ f }[/math] to a problem [math]\displaystyle{ g }[/math] is a mapping [math]\displaystyle{ \phi }[/math] which maps instances of [math]\displaystyle{ f }[/math] to instances of [math]\displaystyle{ g }[/math] such that for any instance [math]\displaystyle{ x }[/math] of [math]\displaystyle{ f }[/math],

[math]\displaystyle{ f(x)=g(\phi(x)) }[/math].

In other words, [math]\displaystyle{ \phi }[/math] "reduces" the task of solving [math]\displaystyle{ f }[/math] to the task of solving [math]\displaystyle{ g }[/math]. A problem [math]\displaystyle{ f }[/math] is said to be poly-time reducible to the problem [math]\displaystyle{ g }[/math], if there exists a reduction [math]\displaystyle{ \phi }[/math] from [math]\displaystyle{ f }[/math] to [math]\displaystyle{ g }[/math] and [math]\displaystyle{ \phi }[/math] is poly-time computable.

A problem [math]\displaystyle{ f }[/math] is #P-hard if every problem in #P is poly-time reducible to [math]\displaystyle{ f }[/math]. A problem [math]\displaystyle{ f }[/math] is #P-complete if [math]\displaystyle{ f\in }[/math]#P and [math]\displaystyle{ f }[/math] is #P-hard. That is to say, #P-complete problems are the "hardest" problems in #P.

Generally, #P-complete problems are believed to be very hard, because if we could solve any of them in poly-time, then #P=FP, which implies that NP=P. On the other hand, we do not know whether NP=P would imply that #P=FP.

FPRAS

Definition (FPRAS)
A polynomial randomized approximation scheme (PRAS) for a problem [math]\displaystyle{ f:\{0,1\}^*\rightarrow\mathbb{R} }[/math] is a randomized algorithm [math]\displaystyle{ A }[/math] that takes an input instance [math]\displaystyle{ x }[/math] and a real number [math]\displaystyle{ \epsilon\gt 0 }[/math], and in time polynomial in [math]\displaystyle{ n=|x| }[/math] returns [math]\displaystyle{ A(x) }[/math] such that
[math]\displaystyle{ \Pr[(1-\epsilon)f(x)\le A(x)\le (1+\epsilon)f(x)]\ge\frac{3}{4}. }[/math]
A fully polynomial randomized approximation scheme (FPRAS) is a PRAS whose running time is polynomial in both [math]\displaystyle{ n }[/math] and [math]\displaystyle{ 1/\epsilon }[/math].

The constant [math]\displaystyle{ \frac{3}{4} }[/math] in the definition can be replaced by any constant in the range [math]\displaystyle{ \left(\frac{1}{2},1\right) }[/math] without changing the definition. In fact, it is usually more convenient to parameterize the FPRAS by the approximation error and the probability error.

Definition ([math]\displaystyle{ (\epsilon,\delta) }[/math]-FPRAS)
An [math]\displaystyle{ (\epsilon,\delta) }[/math]-FPRAS for a problem [math]\displaystyle{ f:\{0,1\}^*\rightarrow\mathbb{R} }[/math] is an FPRAS with
[math]\displaystyle{ \Pr[(1-\epsilon)f(x)\le A(x)\le (1+\epsilon)f(x)]\ge 1-\delta, }[/math]
running in time polynomial in [math]\displaystyle{ n }[/math], [math]\displaystyle{ \frac{1}{\epsilon} }[/math], and [math]\displaystyle{ \log\frac{1}{\delta} }[/math].
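
To see why the constant [math]\displaystyle{ \frac{3}{4} }[/math] can be boosted to [math]\displaystyle{ 1-\delta }[/math]: run several independent copies of the algorithm and return the median. Below is a minimal Python sketch of this amplification; the estimator A and the constant in the number of runs are illustrative, not part of the definition.

  import math

  def amplify(A, x, eps, delta):
      # A(x, eps) is assumed to return an estimate within (1 +/- eps)*f(x) with
      # probability at least 3/4. Run O(log(1/delta)) independent copies and
      # return the median: the median is bad only if at least half of the runs
      # are bad, which by a Chernoff bound happens with probability at most delta.
      runs = max(1, math.ceil(36 * math.log(1.0 / delta)))   # the constant 36 is illustrative
      estimates = sorted(A(x, eps) for _ in range(runs))
      return estimates[len(estimates) // 2]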

Approximate Counting

Let us consider the following abstract problem.

Let [math]\displaystyle{ U }[/math] be a finite set of known size, and let [math]\displaystyle{ G\subseteq U }[/math]. We want to compute the size of [math]\displaystyle{ G }[/math], namely [math]\displaystyle{ |G| }[/math].

We assume two devices:

  • A uniform sampler [math]\displaystyle{ \mathcal{U} }[/math], which uniformly and independently samples a member of [math]\displaystyle{ U }[/math] upon each calling.
  • A membership oracle of [math]\displaystyle{ G }[/math], denoted [math]\displaystyle{ \mathcal{O} }[/math]. Given as input an [math]\displaystyle{ x\in U }[/math], [math]\displaystyle{ \mathcal{O}(x) }[/math] indicates whether or not [math]\displaystyle{ x }[/math] is a member of [math]\displaystyle{ G }[/math].

Equipped with [math]\displaystyle{ \mathcal{U} }[/math] and [math]\displaystyle{ \mathcal{O} }[/math], we can have the following Monte Carlo algorithm:

  • Choose [math]\displaystyle{ N }[/math] independent samples from [math]\displaystyle{ U }[/math] by the uniform sampler [math]\displaystyle{ \mathcal{U} }[/math], represented by the random variables [math]\displaystyle{ X_1,X_2,\ldots, X_N }[/math].
  • Let [math]\displaystyle{ Y_i }[/math] be the indicator random variable defined as [math]\displaystyle{ Y_i=\mathcal{O}(X_i) }[/math], namely, [math]\displaystyle{ Y_i }[/math] indicates whether [math]\displaystyle{ X_i\in G }[/math].
  • Define the estimator random variable
[math]\displaystyle{ Z=\frac{|U|}{N}\sum_{i=1}^N Y_i. }[/math]

It is easy to see that [math]\displaystyle{ \mathbf{E}[Z]=|G| }[/math] and we might hope that with high probability the value of [math]\displaystyle{ Z }[/math] is close to [math]\displaystyle{ |G| }[/math]. Formally, [math]\displaystyle{ Z }[/math] is called an [math]\displaystyle{ \epsilon }[/math]-approximation of [math]\displaystyle{ |G| }[/math] if

[math]\displaystyle{ (1-\epsilon)|G|\le Z\le (1+\epsilon)|G|. }[/math]

The following theorem states that the probabilistic accuracy of the estimation depends on the number of samples and the ratio between [math]\displaystyle{ |G| }[/math] and [math]\displaystyle{ |U| }[/math].

Theorem (estimator theorem)
Let [math]\displaystyle{ \alpha=\frac{|G|}{|U|} }[/math]. Then the Monte Carlo method yields an [math]\displaystyle{ \epsilon }[/math]-approximation to [math]\displaystyle{ |G| }[/math] with probability at least [math]\displaystyle{ 1-\delta }[/math] provided
[math]\displaystyle{ N\ge\frac{4}{\epsilon^2 \alpha}\ln\frac{2}{\delta} }[/math].

Proof: The sum [math]\displaystyle{ \sum_{i=1}^N Y_i }[/math] is a sum of independent Bernoulli variables, each with mean [math]\displaystyle{ \alpha }[/math]. By the Chernoff bound, the probability that it deviates from its mean [math]\displaystyle{ \alpha N }[/math] by a relative error of more than [math]\displaystyle{ \epsilon }[/math] is at most [math]\displaystyle{ 2e^{-\epsilon^2\alpha N/4} }[/math], which is at most [math]\displaystyle{ \delta }[/math] for the stated [math]\displaystyle{ N }[/math].

[math]\displaystyle{ \square }[/math]
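
A minimal Python sketch of this Monte Carlo estimator; the routines [math]\displaystyle{ \mathcal{U} }[/math] (written sample_U) and [math]\displaystyle{ \mathcal{O} }[/math] (written in_G) are assumed to be given, as in the abstract model above.

  import math

  def monte_carlo_count(sample_U, in_G, size_U, alpha, eps, delta):
      # alpha is a lower bound on |G|/|U|; N is taken from the estimator theorem.
      N = math.ceil(4.0 / (eps ** 2 * alpha) * math.log(2.0 / delta))
      hits = sum(1 for _ in range(N) if in_G(sample_U()))
      return size_U * hits / N    # the estimator Z = (|U|/N) * sum_i Y_i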

A counting algorithm for the set [math]\displaystyle{ G }[/math] has to deal with the following three complications:

  • Implement the membership oracle [math]\displaystyle{ \mathcal{O} }[/math]. This is usually straightforward, or assumed by the model.
  • Implement the uniform sampler [math]\displaystyle{ \mathcal{U} }[/math]. As we have seen, this is usually approximated by random walks. How to design the random walk and bound its mixing rate is usually technically challenging, if possible at all.
  • Deal with exponentially small [math]\displaystyle{ \alpha=\frac{|G|}{|U|} }[/math]. This requires us to cleverly choose the universe [math]\displaystyle{ U }[/math]. Sometimes this needs some nontrivial ideas.

Counting DNFs

A disjunctive normal form (DNF) formula is a disjunction (OR) of clauses, where each clause is a conjunction (AND) of literals. For example:

[math]\displaystyle{ (x_1\wedge \overline{x_2}\wedge x_3)\vee(x_2\wedge x_4)\vee(\overline{x_1}\wedge x_3\wedge x_4) }[/math].

Note the difference from the conjunctive normal forms (CNF).

Given a DNF formula [math]\displaystyle{ \phi }[/math] as the input, the problem is to count the number of satisfying assignments of [math]\displaystyle{ \phi }[/math]. This problem is #P-complete.

Naively applying the Monte Carlo method will not give a good answer. Suppose that there are [math]\displaystyle{ n }[/math] variables. Let [math]\displaystyle{ U=\{\mathrm{true},\mathrm{false}\}^n }[/math] be the set of all truth assignments of the [math]\displaystyle{ n }[/math] variables. Let [math]\displaystyle{ G=\{x\in U\mid \phi(x)=\mathrm{true}\} }[/math] be the set of satisfying assignments for [math]\displaystyle{ \phi }[/math]. The straightforward use of the Monte Carlo method samples [math]\displaystyle{ N }[/math] assignments from [math]\displaystyle{ U }[/math] and checks how many of them satisfy [math]\displaystyle{ \phi }[/math]. This algorithm fails when [math]\displaystyle{ |G|/|U| }[/math] is exponentially small, namely, when only an exponentially small fraction of the assignments satisfy the input DNF formula.


The union of sets problem

We reformulate the DNF counting problem in a more abstract framework, called the union of sets problem.

Let [math]\displaystyle{ V }[/math] be a finite universe. We are given [math]\displaystyle{ m }[/math] subsets [math]\displaystyle{ H_1,H_2,\ldots,H_m\subseteq V }[/math]. The following assumptions hold:

  • For all [math]\displaystyle{ i }[/math], [math]\displaystyle{ |H_i| }[/math] is computable in poly-time.
  • It is possible to sample uniformly from each individual [math]\displaystyle{ H_i }[/math].
  • For any [math]\displaystyle{ x\in V }[/math], it can be determined in poly-time whether [math]\displaystyle{ x\in H_i }[/math].

The goal is to compute the size of [math]\displaystyle{ H=\bigcup_{i=1}^m H_i }[/math].

DNF counting can be interpreted in this general framework as follows. Suppose that the DNF formula [math]\displaystyle{ \phi }[/math] is defined on [math]\displaystyle{ n }[/math] variables, and [math]\displaystyle{ \phi }[/math] contains [math]\displaystyle{ m }[/math] clauses [math]\displaystyle{ C_1,C_2,\ldots,C_m }[/math], where clause [math]\displaystyle{ C_i }[/math] has [math]\displaystyle{ k_i }[/math] literals. Without loss of generality, we assume that in each clause, each variable appears at most once.

  • [math]\displaystyle{ V }[/math] is the set of all assignments.
  • Each [math]\displaystyle{ H_i }[/math] is the set of satisfying assignments for the [math]\displaystyle{ i }[/math]-th clause [math]\displaystyle{ C_i }[/math] of the DNF formula [math]\displaystyle{ \phi }[/math]. Then the union of sets [math]\displaystyle{ H=\bigcup_i H_i }[/math] gives the set of satisfying assignments for [math]\displaystyle{ \phi }[/math].
  • Each clause [math]\displaystyle{ C_i }[/math] is a conjunction (AND) of literals. It is not hard to see that [math]\displaystyle{ |H_i|=2^{n-k_i} }[/math], which is efficiently computable.
  • Sampling from an [math]\displaystyle{ H_i }[/math] is simple: we fix the values of the [math]\displaystyle{ k_i }[/math] variables appearing in that clause so that all its literals are true, and assign the remaining [math]\displaystyle{ (n-k_i) }[/math] variables uniformly and independently at random.
  • For each assignment [math]\displaystyle{ x }[/math], it is easy to check whether it satisfies a clause [math]\displaystyle{ C_i }[/math], thus it is easy to determine whether [math]\displaystyle{ x\in H_i }[/math].
The coverage algorithm

We now introduce the coverage algorithm for the union of sets problem.

Consider the multiset [math]\displaystyle{ U }[/math] defined by

[math]\displaystyle{ U=H_1\uplus H_2\uplus\cdots \uplus H_m }[/math],

where [math]\displaystyle{ \uplus }[/math] denotes the multiset union. It is more convenient to define [math]\displaystyle{ U }[/math] as the set

[math]\displaystyle{ U=\{(x,i)\mid x\in H_i\} }[/math].

For each [math]\displaystyle{ x\in H }[/math], there may be more than one instance of [math]\displaystyle{ (x,i)\in U }[/math]. We can choose a unique representative among the multiple instances [math]\displaystyle{ (x,i)\in U }[/math] for the same [math]\displaystyle{ x\in H }[/math], by choosing the [math]\displaystyle{ (x,i) }[/math] with the minimum [math]\displaystyle{ i }[/math], and form a set [math]\displaystyle{ G }[/math].

Formally, [math]\displaystyle{ G=\{(x,i)\in U\mid \forall (x,j)\in U, j\ge i\} }[/math]. Every [math]\displaystyle{ x\in H }[/math] corresponds to a unique [math]\displaystyle{ (x,i)\in G }[/math], where [math]\displaystyle{ i }[/math] is the smallest index with [math]\displaystyle{ x\in H_i }[/math].

It is obvious that [math]\displaystyle{ G\subseteq U }[/math] and

[math]\displaystyle{ |G|=|H| }[/math].

Therefore, estimation of [math]\displaystyle{ |H| }[/math] is reduced to estimation of [math]\displaystyle{ |G| }[/math] with [math]\displaystyle{ G\subseteq U }[/math]. Then [math]\displaystyle{ |G| }[/math] can have an [math]\displaystyle{ \epsilon }[/math]-approximation with probability [math]\displaystyle{ (1-\delta) }[/math] in poly-time, if we can uniformly sample from [math]\displaystyle{ U }[/math] and the ratio [math]\displaystyle{ |G|/|U| }[/math] is not too small.

A uniform sample from [math]\displaystyle{ U }[/math] can be implemented as follows:

  • generate an [math]\displaystyle{ i\in\{1,2,\ldots,m\} }[/math] with probability [math]\displaystyle{ \frac{|H_i|}{\sum_{j=1}^m|H_j|} }[/math];
  • uniformly sample an [math]\displaystyle{ x\in H_i }[/math], and return [math]\displaystyle{ (x,i) }[/math].

It is easy to see that this gives a uniform member of [math]\displaystyle{ U }[/math]. The above sampling procedure is poly-time because each [math]\displaystyle{ |H_i| }[/math] can be computed in poly-time, and sampling uniformly from each [math]\displaystyle{ H_i }[/math] is poly-time.

We now only need to lower bound the ratio

[math]\displaystyle{ \alpha=\frac{|G|}{|U|} }[/math].

We claim that

[math]\displaystyle{ \alpha\ge\frac{1}{m} }[/math].

It is easy to see this, because each [math]\displaystyle{ x\in H }[/math] has at most [math]\displaystyle{ m }[/math] instances of [math]\displaystyle{ (x,i) }[/math] in [math]\displaystyle{ U }[/math], and we already know that [math]\displaystyle{ |G|=|H| }[/math].

Due to the estimator theorem, this needs [math]\displaystyle{ \frac{4m}{\epsilon^2}\ln\frac{2}{\delta} }[/math] uniform random samples from [math]\displaystyle{ U }[/math].

This gives the coverage algorithm for the abstract problem of the union of sets. The DNF counting is a special case of it.
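
A minimal Python sketch of the coverage algorithm specialized to DNF counting; here a clause is represented as a dict mapping a variable index to the boolean value that forces its literal to be true, and the representation and function names are illustrative.

  import math, random

  def count_dnf(clauses, n, eps, delta):
      # A clause is a dict {variable index: boolean value} forcing its literals true.
      sizes = [2 ** (n - len(c)) for c in clauses]    # |H_i| = 2^(n - k_i)
      size_U = sum(sizes)                             # |U| = sum_i |H_i|
      m = len(clauses)

      def sample_U():
          # pick clause i with probability |H_i|/|U|, then a uniform x in H_i
          i = random.choices(range(m), weights=sizes)[0]
          x = [random.random() < 0.5 for _ in range(n)]
          for var, val in clauses[i].items():
              x[var] = val
          return x, i

      def in_G(sample):
          # (x, i) is the canonical representative iff no clause j < i is satisfied by x
          x, i = sample
          return not any(all(x[var] == val for var, val in clauses[j].items())
                         for j in range(i))

      N = math.ceil(4.0 * m / eps ** 2 * math.log(2.0 / delta))   # estimator theorem with alpha >= 1/m
      hits = sum(1 for _ in range(N) if in_G(sample_U()))
      return size_U * hits / N

For the three-clause formula above (with [math]\displaystyle{ n=4 }[/math] variables indexed from 0), the clauses would be encoded as {0: True, 1: False, 2: True}, {1: True, 3: True}, and {0: False, 2: True, 3: True}.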

Permanents and perfect matchings

Let [math]\displaystyle{ U=\{u_1,u_2,\ldots,u_n\} }[/math] and [math]\displaystyle{ V=\{v_1,v_2,\ldots,v_n\} }[/math]. Consider a bipartite graph [math]\displaystyle{ G(U,V,E) }[/math]. An [math]\displaystyle{ M\subseteq E }[/math] is a perfect matching of [math]\displaystyle{ G }[/math] if every vertex of [math]\displaystyle{ G }[/math] has exactly one edge in [math]\displaystyle{ M }[/math] adjacent to it.

Given a bipartite graph [math]\displaystyle{ G(U,V,E) }[/math] with [math]\displaystyle{ |U|=|V|=n }[/math], we want to count the number of perfect matchings of [math]\displaystyle{ G }[/math]. This problem can be reduced to computing the permanent of a square matrix.

Definition (permanent)
Let [math]\displaystyle{ Q }[/math] be an [math]\displaystyle{ n\times n }[/math] matrix. The permanent of the matrix is defined as
[math]\displaystyle{ \mathrm{per}(Q)=\sum_{\pi\in\mathbb{S}_n}\prod_{i=1}^n Q_{i,\pi(i)} }[/math],
where [math]\displaystyle{ \mathbb{S}_n }[/math] is the symmetric group consisting of all permutations of [math]\displaystyle{ \{1,2,\ldots,n\} }[/math].

If we multiply each term of the sum by the sign of the permutation, then it gives us the determinant of the matrix,

[math]\displaystyle{ \det(Q)=\sum_{\pi\in\mathbb{S}_n}\mathrm{sgn}(\pi)\prod_{i=1}^n Q_{i,\pi(i)} }[/math],

where [math]\displaystyle{ \mathrm{sgn}(\pi) }[/math], the sign of a permutation, is either [math]\displaystyle{ -1 }[/math] or [math]\displaystyle{ +1 }[/math], according to whether the minimum number of pair-wise interchanges to achieve [math]\displaystyle{ \pi }[/math] from [math]\displaystyle{ (1,2,\ldots,n) }[/math] is odd or even.

Unlike the determinants, which are computable in poly-time, permanents are hard to compute, as permanents can be used to count the number of perfect matchings in a bipartite graph, which is #P-complete.

A bipartite graph [math]\displaystyle{ G(U,V,E) }[/math] with [math]\displaystyle{ |U|=|V|=n }[/math] can be represented by an [math]\displaystyle{ n\times n }[/math] matrix [math]\displaystyle{ Q }[/math] with 0-1 entries as follows:

  • Each row of [math]\displaystyle{ Q }[/math] corresponds to a vertex in [math]\displaystyle{ U }[/math] and each column of [math]\displaystyle{ Q }[/math] corresponds to a vertex in [math]\displaystyle{ V }[/math].
[math]\displaystyle{ Q_{ij}=\begin{cases} 1 & \mbox{if }i\sim j,\\ 0 & \mbox{otherwise}. \end{cases} }[/math]

Note the subtle difference between [math]\displaystyle{ Q }[/math] and the adjacency matrix: [math]\displaystyle{ Q }[/math] is the [math]\displaystyle{ n\times n }[/math] matrix indexed by [math]\displaystyle{ U\times V }[/math] (sometimes called the biadjacency matrix), not the [math]\displaystyle{ 2n\times 2n }[/math] adjacency matrix of [math]\displaystyle{ G }[/math].

Each perfect matching corresponds to a permutation [math]\displaystyle{ \pi }[/math] of [math]\displaystyle{ U }[/math] with [math]\displaystyle{ (u,\pi(u))\in E }[/math] for every [math]\displaystyle{ u\in U }[/math], which corresponds to a permutation [math]\displaystyle{ \pi\in\mathbb{S}_n }[/math] with [math]\displaystyle{ \prod_{i=1}^n Q_{i,\pi(i)}=1 }[/math]. It is then easy to see that [math]\displaystyle{ \mathrm{per}(Q) }[/math] gives the number of perfect matchings in the bipartite graph [math]\displaystyle{ G }[/math].
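
As a sanity check of this correspondence, the permanent of a small 0-1 matrix can be computed by brute force over all [math]\displaystyle{ n! }[/math] permutations; a minimal Python sketch (exponential time, for illustration only):

  from itertools import permutations

  def permanent(Q):
      # per(Q) = sum over all permutations pi of prod_i Q[i][pi(i)]
      n = len(Q)
      total = 0
      for pi in permutations(range(n)):
          prod = 1
          for i in range(n):
              prod *= Q[i][pi[i]]
          total += prod
      return total

  # The complete bipartite graph on 2+2 vertices has exactly 2 perfect matchings:
  assert permanent([[1, 1], [1, 1]]) == 2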

It is known that counting the number of perfect matchings in a bipartite graph is #P-hard. Since this problem can be reduced to computing the permanent, computing permanents is also #P-hard.

Now we show that with randomization, we can approximate the number of perfect matchings in a bipartite graph. In particular, we will give an FPRAS for counting the perfect matchings in a dense bipartite graph.

The Jerrum-Sinclair algorithm

Fix a bipartite graph [math]\displaystyle{ G(U,V,E) }[/math] with [math]\displaystyle{ |U|=|V|=n }[/math]. Let [math]\displaystyle{ \mathcal{M}_k }[/math] be the set of matchings of size [math]\displaystyle{ k }[/math] in [math]\displaystyle{ G }[/math], and let [math]\displaystyle{ m_k=|\mathcal{M}_k| }[/math]. Thus, [math]\displaystyle{ \mathcal{M}_n }[/math] is the set of perfect matchings in [math]\displaystyle{ G }[/math], and our goal is to compute [math]\displaystyle{ m_n }[/math].

Let [math]\displaystyle{ r_k=\frac{m_k}{m_{k-1}} }[/math] for [math]\displaystyle{ 1\lt k\le n }[/math]. Then [math]\displaystyle{ m_k=m_{k-1}r_k }[/math], which gives us a recursion to compute [math]\displaystyle{ m_n }[/math], as

[math]\displaystyle{ m_n=m_{1}\frac{m_2}{m_1}\cdot\frac{m_3}{m_2}\cdots\frac{m_n}{m_{n-1}}=m_1\prod_{k=2}^n r_k }[/math],

where [math]\displaystyle{ m_1=|\mathcal{M}_1| }[/math] is the number of matchings of size 1 in the bipartite graph [math]\displaystyle{ G }[/math], which is just the number of edges in [math]\displaystyle{ G }[/math]. Therefore, [math]\displaystyle{ m_n }[/math] can be computed once we know [math]\displaystyle{ r_k }[/math] for [math]\displaystyle{ 1\lt k\le n }[/math].

Each [math]\displaystyle{ r_k }[/math] can be estimated by sampling uniformly from the set [math]\displaystyle{ \mathcal{M}_k\cup\mathcal{M}_{k-1} }[/math]. The algorithm for estimating [math]\displaystyle{ m_n }[/math] is outlined as:

  1. For each [math]\displaystyle{ 1\lt k\le n }[/math], estimate [math]\displaystyle{ r_k }[/math] by uniformly sampling sufficiently many members of [math]\displaystyle{ \mathcal{M}_k\cup\mathcal{M}_{k-1} }[/math], as follows:
    • uniformly sample [math]\displaystyle{ N }[/math] matchings from [math]\displaystyle{ \mathcal{M}_k\cup\mathcal{M}_{k-1} }[/math], for some polynomially large [math]\displaystyle{ N }[/math];
    • assuming that there are [math]\displaystyle{ X }[/math] sampled matchings of size [math]\displaystyle{ k }[/math], return [math]\displaystyle{ r_k }[/math] as [math]\displaystyle{ r_k=\frac{X}{N-X} }[/math].
  2. Compute [math]\displaystyle{ m_n }[/math] as [math]\displaystyle{ m_n=m_1\prod_{k=2}^n r_k }[/math], where [math]\displaystyle{ m_1=|E| }[/math].
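
A minimal Python sketch of this outline; the near-uniform sampler sample_matching(k), returning a random member of [math]\displaystyle{ \mathcal{M}_k\cup\mathcal{M}_{k-1} }[/math], is assumed to be given (it is the component discussed below), and [math]\displaystyle{ N }[/math] is assumed large enough that [math]\displaystyle{ 0\lt X\lt N }[/math].

  def estimate_perfect_matchings(n, num_edges, sample_matching, N):
      # m_n = m_1 * prod_{k=2..n} r_k, where m_1 = |E| and r_k = m_k / m_{k-1}
      estimate = float(num_edges)
      for k in range(2, n + 1):
          # X = number of sampled matchings of size k (the rest have size k-1)
          X = sum(1 for _ in range(N) if len(sample_matching(k)) == k)
          estimate *= X / (N - X)    # estimate of r_k = m_k / m_{k-1}
      return estimate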

There are several issues that we have to deal with in order to have a fully functional FPRAS for counting perfect matchings.

  • By taking the product of [math]\displaystyle{ r_k }[/math]'s, the errors for individual [math]\displaystyle{ r_k }[/math]'s add up.
  • In order to accurately estimate [math]\displaystyle{ r_k }[/math] by sampling from [math]\displaystyle{ \mathcal{M}_k\cup\mathcal{M}_{k-1} }[/math], the ratio [math]\displaystyle{ r_k }[/math] should be within the range [math]\displaystyle{ \left[\frac{1}{\alpha},\alpha\right] }[/math] for some [math]\displaystyle{ \alpha }[/math] within polynomial of [math]\displaystyle{ n }[/math].
  • Implement the uniform sampling from [math]\displaystyle{ \mathcal{M}_k\cup\mathcal{M}_{k-1} }[/math].
Estimator for each [math]\displaystyle{ r_k }[/math]

Assuming that [math]\displaystyle{ r_k=m_{k}/m_{k-1} }[/math] is well-bounded (meaning that [math]\displaystyle{ n^{-c}\le r_k\le n^c }[/math] for some constant [math]\displaystyle{ c }[/math]), given a uniform sampler from the set [math]\displaystyle{ \mathcal{M}_k\cup\mathcal{M}_{k-1} }[/math], we can directly apply the Monte Carlo method to estimate the ratio [math]\displaystyle{ r_k }[/math], and prove a similar estimator theorem. See Exercise 11.7 of the textbook for details.

We need to guarantee that [math]\displaystyle{ r_k }[/math] is well-bounded. It is known that this is true for dense bipartite graphs.

Theorem (Broder 1986)
For any bipartite graph [math]\displaystyle{ G(V_1,V_2,E) }[/math] with [math]\displaystyle{ |V_1|=|V_2|=n }[/math], and minimum degree at least [math]\displaystyle{ n/2 }[/math],
[math]\displaystyle{ r_k=\Omega(n^{-2}) }[/math] and [math]\displaystyle{ r_k=O(n^2) }[/math] for all [math]\displaystyle{ k }[/math].

Sketch of the proof: The idea is to show that in a dense bipartite graph, any matching of size [math]\displaystyle{ k-1 }[/math] can be modified into a matching of size [math]\displaystyle{ k }[/math] with at most two operations of rotation and augmentation, introduced in the last lecture.

[math]\displaystyle{ \square }[/math]

Accumulation of errors

The Monte Carlo estimation of each individual [math]\displaystyle{ r_k }[/math] leaves some bounded errors (there are two types of errors: the approximation error [math]\displaystyle{ \epsilon }[/math] and the probabilistic error [math]\displaystyle{ \delta }[/math]). Within polynomial running time, we can make these errors sufficiently small, so that the accumulated error resulting from the product of [math]\displaystyle{ n }[/math] ratios is bounded. See Exercise 11.8 for details.
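
For instance, if every [math]\displaystyle{ r_k }[/math] is estimated within a factor of [math]\displaystyle{ 1\pm\frac{\epsilon}{2n} }[/math], then the product of the [math]\displaystyle{ n-1 }[/math] estimated ratios deviates from [math]\displaystyle{ \prod_{k=2}^n r_k }[/math] by a factor of at most [math]\displaystyle{ \left(1+\frac{\epsilon}{2n}\right)^{n}\le e^{\epsilon/2}\le 1+\epsilon }[/math] and at least [math]\displaystyle{ \left(1-\frac{\epsilon}{2n}\right)^{n}\ge 1-\frac{\epsilon}{2}\ge 1-\epsilon }[/math], for [math]\displaystyle{ \epsilon\le 1 }[/math]; likewise, if each estimate fails with probability at most [math]\displaystyle{ \delta/n }[/math], a union bound keeps the overall failure probability below [math]\displaystyle{ \delta }[/math].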

Near-uniform sampling from [math]\displaystyle{ \mathcal{M}_k\cup\mathcal{M}_{k-1} }[/math]

In the last lecture, we have shown that by a random walk, we can sample a near-uniform member of [math]\displaystyle{ \mathcal{M}_n\cup\mathcal{M}_{n-1} }[/math] in poly-time. A similar random walk can be used to sample near-uniformly from [math]\displaystyle{ \mathcal{M}_k\cup\mathcal{M}_{k-1} }[/math] for each [math]\displaystyle{ k }[/math], which is what the above algorithm requires.

Volume estimation

We consider the problem of computing the volume of a given convex body [math]\displaystyle{ K }[/math] in [math]\displaystyle{ n }[/math] dimensions.

We use [math]\displaystyle{ \Upsilon(K)\, }[/math] to denote the volume of the convex body [math]\displaystyle{ K }[/math]. Abstractly, the problem is: given as input a convex body [math]\displaystyle{ K }[/math] in [math]\displaystyle{ n }[/math] dimensions, return [math]\displaystyle{ \Upsilon(K)\, }[/math].

We should be more specific about the input model. Since we allow an arbitrary convex body as input, it is not even clear how to describe the body. We assume that [math]\displaystyle{ K }[/math] is described by means of a membership oracle [math]\displaystyle{ \mathcal{O} }[/math], such that for a query of an [math]\displaystyle{ n }[/math]-dimensional point [math]\displaystyle{ x }[/math], [math]\displaystyle{ \mathcal{O}(x) }[/math] indicates whether [math]\displaystyle{ x\in K }[/math].

For example, the [math]\displaystyle{ n }[/math]-dimensional convex body defined by the intersection of [math]\displaystyle{ m }[/math] half-spaces, which is the set of feasible solutions to a system of [math]\displaystyle{ m }[/math] linear constraints, can be described as

[math]\displaystyle{ A x\le \boldsymbol{b} }[/math],

for some [math]\displaystyle{ m\times n }[/math] matrix [math]\displaystyle{ A }[/math], and [math]\displaystyle{ m }[/math]-dimensional vector [math]\displaystyle{ b }[/math]. For a query of an [math]\displaystyle{ x }[/math], the membership oracle [math]\displaystyle{ \mathcal{O} }[/math] just checks whether [math]\displaystyle{ Ax\le b }[/math].

For deterministic algorithms, the news for this problem is negative.

Theorem (Bárány-Füredi 1987)
Suppose that a deterministic poly-time algorithm uses the membership oracle for a convex body [math]\displaystyle{ K }[/math] in [math]\displaystyle{ n }[/math] dimensions, and generates an upper bound [math]\displaystyle{ \Upsilon_u\, }[/math] and a lower bound [math]\displaystyle{ \Upsilon_\ell\, }[/math] on the volume [math]\displaystyle{ \Upsilon(K)\, }[/math] of [math]\displaystyle{ K }[/math]. Then, there is a convex body [math]\displaystyle{ K }[/math] and a constant [math]\displaystyle{ c }[/math] such that
[math]\displaystyle{ \frac{\Upsilon_u}{\Upsilon_\ell}\ge c\left(\frac{n}{\log n}\right)^n }[/math].

That is to say, with deterministic algorithms, we cannot even approximate the volume within a wildly loose range.

Dyer, Frieze, and Kannan came up with the idea of estimating the volume of [math]\displaystyle{ K }[/math] by sampling near-uniformly from convex sets. They reduced the problem of approximately computing the volume of a convex body to this sampling problem and thus gave the first FPRAS for the volume of convex bodies.

Any convex body [math]\displaystyle{ K }[/math] encloses some [math]\displaystyle{ n }[/math]-dimensional ball and is also enclosed by another ball with larger radius. We assume that the convex body [math]\displaystyle{ K }[/math] encloses the unit ball around the origin, and is enclosed by a larger ball around the origin with polynomially large radius. Formally, we assume that

[math]\displaystyle{ B(0,1)\subseteq K\subseteq B(0,n^c) }[/math] for some constant [math]\displaystyle{ c }[/math],

where [math]\displaystyle{ B(p,r) }[/math] denotes the ball of radius [math]\displaystyle{ r }[/math] centered at [math]\displaystyle{ p }[/math], i.e. [math]\displaystyle{ B(p,r)=\{x\mid \|x-p\|\le r \} }[/math]. We can make this assumption because it was proved by Grötschel, Lovász, and Schrijver that for any convex body [math]\displaystyle{ K }[/math], there exists a linear transformation [math]\displaystyle{ \tau }[/math], computable in poly-time, which maps [math]\displaystyle{ K }[/math] to a convex body [math]\displaystyle{ \tau K }[/math] satisfying the assumption; since a linear transformation scales all volumes by the same factor, the relevant volume ratios are preserved.

The volumes of the balls are easy to compute. If the ratio between the volume of the convex body and the volume of the ball enclosing it were not too small, we could apply the Monte Carlo method to estimate [math]\displaystyle{ \Upsilon(K)\, }[/math] by uniformly sampling from the ball.

However, in high dimensions, the ratio between the volume of a convex body and the volume of the ball which encloses it can be exponentially small. This is an instance of the so-called "curse of dimensionality".

Instead of having an outer ball and an inner ball, we define a sequence of balls:

[math]\displaystyle{ B_0=B(0,\lambda^0), B_1=B(0,\lambda^1), B_2=B(0,\lambda^2),\ldots, B_m=B(0,\lambda^m) }[/math],

where [math]\displaystyle{ \lambda=(1+\frac{1}{n}) }[/math] and [math]\displaystyle{ m }[/math] is the smallest positive integer such that [math]\displaystyle{ \lambda^m\ge n^c }[/math]. Therefore, the inner ball is [math]\displaystyle{ B(0,1)=B_0 }[/math], the outer ball satisfies [math]\displaystyle{ B(0,n^c)\subseteq B_m }[/math], and [math]\displaystyle{ m }[/math] is polynomial in [math]\displaystyle{ n }[/math]. In fact, [math]\displaystyle{ m\approx cn\ln n }[/math].

This sequence of balls naturally defines a sequence of convex bodies by intersections as [math]\displaystyle{ K_i=B_i\cap K }[/math]. It is obvious that

[math]\displaystyle{ B(0,1)=K_0\subseteq K_1\subseteq K_2\subseteq\cdots\subseteq K_m=K }[/math].

Balls are convex, and since the intersection of convex bodies is still convex, the sequence of [math]\displaystyle{ K_i }[/math] is a sequence of convex bodies.

We have the telescopic product:

[math]\displaystyle{ \frac{\Upsilon(K_0)}{\Upsilon(K_1)}\cdot\frac{\Upsilon(K_1)}{\Upsilon(K_2)}\cdots\frac{\Upsilon(K_{m-1})}{\Upsilon(K_m)}=\frac{\Upsilon(K_0)}{\Upsilon(K_m)}=\frac{\Upsilon(B(0,1))}{\Upsilon(K)}. }[/math]

Therefore, the volume [math]\displaystyle{ \Upsilon(K)\, }[/math] can be computed as

[math]\displaystyle{ \Upsilon(K)=\Upsilon(B(0,1))\cdot\prod_{i=1}^{m}\frac{\Upsilon(K_{i})}{\Upsilon(K_{i-1})} }[/math].

The volume of the unit ball [math]\displaystyle{ \Upsilon(B(0,1))\, }[/math] can be precisely computed in poly-time. Each ratio [math]\displaystyle{ \frac{\Upsilon(K_{i})}{\Upsilon(K_{i-1})} }[/math] is estimated by near-uniform sampling from [math]\displaystyle{ K_{i} }[/math], which encloses [math]\displaystyle{ K_{i-1} }[/math].

Another observation is that the ratio [math]\displaystyle{ \frac{\Upsilon(K_{i})}{\Upsilon(K_{i-1})} }[/math] is well-bounded. Recall that [math]\displaystyle{ K_i=B(0,\lambda^i)\cap K }[/math] where [math]\displaystyle{ \lambda=(1+\frac{1}{n}) }[/math]. Since [math]\displaystyle{ K }[/math] is convex and contains the origin, [math]\displaystyle{ K_i\subseteq \lambda K_{i-1} }[/math], so in [math]\displaystyle{ n }[/math] dimensions the volume of [math]\displaystyle{ K_i }[/math] is at most [math]\displaystyle{ \lambda^n }[/math] times that of [math]\displaystyle{ K_{i-1} }[/math]; thus the ratio [math]\displaystyle{ \frac{\Upsilon(K_{i})}{\Upsilon(K_{i-1})} }[/math] is at most [math]\displaystyle{ \lambda^n=(1+\frac{1}{n})^n\lt e=O(1) }[/math]. By the estimator theorem, we can have an FPRAS for the ratio [math]\displaystyle{ \frac{\Upsilon(K_{i})}{\Upsilon(K_{i-1})} }[/math] if we can uniformly sample from [math]\displaystyle{ K_i }[/math].
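
A minimal Python sketch of the telescoping estimator; the near-uniform sampler sample_K(i) from [math]\displaystyle{ K_i }[/math] and the membership test in_K(x, i) for [math]\displaystyle{ K_i }[/math] are assumed to be given (in the actual algorithm they are provided by the random walk and the membership oracle), and the choice of [math]\displaystyle{ N }[/math] is governed by the estimator theorem.

  import math

  def estimate_volume(n, c, sample_K, in_K, N):
      # K_i = B(0, lam^i) intersected with K, where lam = 1 + 1/n; K_0 = B(0,1), K_m = K.
      lam = 1.0 + 1.0 / n
      m = math.ceil(c * math.log(n) / math.log(lam))
      volume = math.pi ** (n / 2.0) / math.gamma(n / 2.0 + 1.0)   # volume of the unit n-ball
      for i in range(1, m + 1):
          # estimate Upsilon(K_{i-1}) / Upsilon(K_i) by sampling from K_i; the ratio
          # is at least 1/e, so hits > 0 with high probability for a moderate N
          hits = sum(1 for _ in range(N) if in_K(sample_K(i), i - 1))
          volume *= N / hits    # multiply by the estimate of Upsilon(K_i) / Upsilon(K_{i-1})
      return volume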


Uniformly sampling from an arbitrary convex body is replaced by near-uniform sampling achieved by random walks. In the original walk of Dyer, Frieze, and Kannan, they consider the random walk over [math]\displaystyle{ n }[/math]-dimensional discrete grid points enclosed by [math]\displaystyle{ K }[/math], and prove that the walk is rapidly mixing. This gives the first FPRAS for volume estimation, which runs in [math]\displaystyle{ \tilde{O}(n^{23}) }[/math] time, where [math]\displaystyle{ \tilde{O}(\cdot) }[/math] ignores polylogarithmic factors.

The time bound was later improved by a series of works, each introducing some new ideas.

  • Dyer-Frieze-Kannan 1991: complexity [math]\displaystyle{ n^{23} }[/math]; new ingredient(s): everything
  • Lovász-Simonovits 1990: complexity [math]\displaystyle{ n^{16} }[/math]; new ingredient(s): localization lemma
  • Applegate-Kannan 1990: complexity [math]\displaystyle{ n^{10} }[/math]; new ingredient(s): logconcave sampling
  • Lovász 1990: complexity [math]\displaystyle{ n^{10} }[/math]; new ingredient(s): ball walk
  • Dyer-Frieze 1991: complexity [math]\displaystyle{ n^8 }[/math]; new ingredient(s): better error analysis
  • Lovász-Simonovits 1993: complexity [math]\displaystyle{ n^7 }[/math]; new ingredient(s): many improvements
  • Kannan-Lovász-Simonovits 1997: complexity [math]\displaystyle{ n^5 }[/math]; new ingredient(s): isotropy, speedy walk
  • Lovász-Vempala 2003: complexity [math]\displaystyle{ n^4 }[/math]; new ingredient(s): simulated annealing, hit-and-run

(cited from "Geometric Random Walks: A Survey" by Santosh Vempala.)

The current best upper bound is [math]\displaystyle{ \tilde{O}(n^4) }[/math] due to Lovász and Vempala in 2003. It is conjectured that the optimal bound is [math]\displaystyle{ \Theta(n^3) }[/math].

Linear Programming

Given a function [math]\displaystyle{ f:\mathbb{R}^n\rightarrow\mathbb{R} }[/math] and a set [math]\displaystyle{ \mathcal{F}\subseteq\mathbb{R}^n }[/math], an optimization problem is the problem of finding an [math]\displaystyle{ x\in\mathcal{F} }[/math] with the optimal (minimum) [math]\displaystyle{ f(x) }[/math]. Formally, the problem can be expressed as:

[math]\displaystyle{ \begin{align} \mbox{minimize} & \quad f(x)\\ \mbox{subject to} &\quad x\in\mathcal{F} \end{align} }[/math]

where [math]\displaystyle{ x\in\mathbb{R}^n }[/math] is a vector of [math]\displaystyle{ n }[/math] variables. For the problem of maximizing a function [math]\displaystyle{ f }[/math], we just minimize [math]\displaystyle{ -f }[/math] instead.

We call the function [math]\displaystyle{ f }[/math] the objective function and call any [math]\displaystyle{ x\in\mathcal{F} }[/math] a feasible solution of the problem. A feasible solution that minimizes [math]\displaystyle{ f(x) }[/math] is called an optimal solution. Our task is to find an optimal solution.

The feasible set [math]\displaystyle{ \mathcal{F} }[/math] is usually given by a number of constraints [math]\displaystyle{ P_1,P_2,\ldots,P_m }[/math], which are predicates defined on [math]\displaystyle{ x }[/math]. An [math]\displaystyle{ x }[/math] is a feasible solution if it satisfies all the constraints.

A linear programming (LP) problem is an optimization problem with a linear objective function subject to a number of linear constraints. Formally, an LP is a problem that can be expressed in the following form:

[math]\displaystyle{ \begin{align} \mbox{minimize} & \quad c_1x_1+c_2x_2+\cdots+c_nx_n\\ \mbox{subject to} & \quad a_{11}x_1+a_{12}x_2+\cdots+a_{1n}x_n\le b_1\\ & \quad a_{21}x_1+a_{22}x_2+\cdots+a_{2n}x_n\le b_2\\ & \qquad\qquad \vdots\\ & \quad a_{m1}x_1+a_{m2}x_2+\cdots+a_{mn}x_n\le b_m\\ \end{align} }[/math]

where [math]\displaystyle{ a_{ij} }[/math], [math]\displaystyle{ b_i }[/math] and [math]\displaystyle{ c_j }[/math] are constants and [math]\displaystyle{ x_j }[/math] are variables. For a maximization problem, or for constraints given with "[math]\displaystyle{ \ge }[/math]", we can multiply the coefficients by [math]\displaystyle{ -1 }[/math] and still write the LP in the above form.

We can describe the program in vector form. Let [math]\displaystyle{ x }[/math] be a vector of [math]\displaystyle{ n }[/math] variables. Let [math]\displaystyle{ A=(a_{ij}) }[/math] be an [math]\displaystyle{ m\times n }[/math] matrix of constant entries, [math]\displaystyle{ b=(b_1,\ldots,b_m) }[/math] be a vector of [math]\displaystyle{ m }[/math] constant entries, and [math]\displaystyle{ c=(c_1,\ldots,c_n) }[/math] be a vector of [math]\displaystyle{ n }[/math] constant entries. Then an LP is expressed as:

[math]\displaystyle{ \begin{align} \mbox{minimize} & \quad c^T x\\ \mbox{subject to} & \quad Ax\le b \end{align} }[/math]

We call it the canonical form of linear programming.
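
As a concrete illustration of the canonical form, here is a small LP solved with SciPy's linprog (assuming SciPy is available); by default linprog restricts variables to be nonnegative, so the bounds are set explicitly to keep the variables free, matching the canonical form above.

  import numpy as np
  from scipy.optimize import linprog

  # minimize    -x1 - 2*x2
  # subject to   x1 +   x2 <= 4
  #              x1        <= 2
  #                     x2 <= 3
  c = np.array([-1.0, -2.0])
  A = np.array([[1.0, 1.0],
                [1.0, 0.0],
                [0.0, 1.0]])
  b = np.array([4.0, 2.0, 3.0])

  result = linprog(c, A_ub=A, b_ub=b, bounds=[(None, None)] * 2)
  print(result.x, result.fun)    # optimal solution (1, 3) with value -7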

The geometry of LPs

For [math]\displaystyle{ n }[/math]-dimensional space [math]\displaystyle{ \mathbb{R}^n }[/math], a linear constraint [math]\displaystyle{ ax\le b }[/math] specifies a halfspace. The feasible set specified by [math]\displaystyle{ m }[/math] constraints together is an intersection of [math]\displaystyle{ m }[/math] halfspaces, thus, a convex polytope.

The convex polytope is defined by [math]\displaystyle{ Ax\le b }[/math].

The LP can thus be thought of as follows: given an [math]\displaystyle{ n }[/math]-dimensional polytope and a linear (affine) function [math]\displaystyle{ f:\mathbb{R}^n\rightarrow \mathbb{R} }[/math], find a point [math]\displaystyle{ x }[/math] in the polytope with the smallest function value [math]\displaystyle{ f(x) }[/math].

We can choose a number of constraints in the system [math]\displaystyle{ Ax\le b }[/math], and make them hold with equality. This would define a subspace of [math]\displaystyle{ \mathbb{R}^n }[/math]. In particular, if we choose [math]\displaystyle{ n }[/math] linearly independent constraints to form an [math]\displaystyle{ n\times n }[/math] submatrix [math]\displaystyle{ A' }[/math] and the corresponding [math]\displaystyle{ n }[/math]-dimensional vector [math]\displaystyle{ b' }[/math], solving [math]\displaystyle{ A'x=b' }[/math] would give us exactly one point [math]\displaystyle{ x }[/math]. If this [math]\displaystyle{ x }[/math] is in the polytope, i.e. [math]\displaystyle{ x }[/math] satisfies that [math]\displaystyle{ Ax\le b }[/math], we call such [math]\displaystyle{ x }[/math] a vertex of the polytope [math]\displaystyle{ Ax\le b }[/math]. In the context of LP, it is also called a basic feasible solution (bfs) of the LP.

A key observation for LP is that if the LP is feasible and bounded, then there is a basic feasible solution which is optimal, i.e. some vertex of the polytope is an optimal solution.
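
This observation already gives a (very inefficient) exact algorithm: enumerate all choices of [math]\displaystyle{ n }[/math] linearly independent constraints, solve the corresponding [math]\displaystyle{ n\times n }[/math] systems, and return the best feasible vertex. A minimal Python sketch (assuming the LP is feasible and bounded):

  import numpy as np
  from itertools import combinations

  def solve_lp_by_vertices(c, A, b):
      # Brute force over basic feasible solutions: make n of the m constraints
      # tight, solve the n x n system, and keep the feasible vertex minimizing c^T x.
      m, n = A.shape
      best_x, best_val = None, None
      for rows in combinations(range(m), n):
          A_sub, b_sub = A[list(rows)], b[list(rows)]
          if abs(np.linalg.det(A_sub)) < 1e-12:    # constraints not linearly independent
              continue
          x = np.linalg.solve(A_sub, b_sub)
          if np.all(A @ x <= b + 1e-9):            # x is a vertex of the polytope
              if best_val is None or c @ x < best_val:
                  best_x, best_val = x, c @ x
      return best_x, best_val

On the small example above, this returns the vertex (1, 3) with value -7, the same answer as the solver.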

The simplex algorithms

The simplex algorithm by George Dantzig solves a linear program by moving from vertex to vertex of the convex polytope, each time making some progress towards optimizing the objective function. Eventually the algorithm reaches a vertex which is a local optimum. For a linear objective over a convex set, a locally optimal point is also globally optimal, thus the simplex algorithm returns an optimal solution in finitely many steps.

The problem with the simplex algorithm is that on some bad polytopes, it takes exponentially many steps to reach the optimum. The simplex algorithm is actually a class of algorithms defined by various pivoting rules, which describe how to move locally from one vertex to another. The original pivoting rule proposed by Dantzig has exponential worst-case time complexity (though it works very well in practice). To date, it is still unknown whether there exists any deterministic simplex algorithm with sub-exponential worst-case complexity.

People have tried randomized pivoting rules, and some amazing progress has been made:

  • Kalai 1992: there is a randomized simplex algorithm with sub-exponential time complexity.
  • Kelner-Spielman 2006: there is a polynomial time randomized simplex algorithm.

Although we do not know whether there exists a deterministic poly-time simplex algorithm, we do know that LP can be solved by deterministic poly-time algorithms, i.e. LP is in P. The following two algorithms use different ideas than the simplex algorithm, and both run in poly-time:

  • The ellipsoid algorithm.
  • Interior point methods.

Although these algorithms guarantee polynomial time complexity in the worst case, in practice their performance is often worse than that of the simplex algorithm. However, the ideas behind these algorithms can be used to solve more general mathematical programs, such as convex programs.

An LP solver via random walks

We discuss a new algorithm for solving LP which is based on random sampling, introduced by Bertsimas and Vempala in 2004.

The problem of solving an LP can be reduced to the following problem: given a convex polytope [math]\displaystyle{ Ax\le b }[/math], test whether the polytope is empty. We call this problem the "feasibility test".

Suppose we have an LP:

[math]\displaystyle{ \begin{align} \mbox{minimize} & \quad c^T x\\ \mbox{subject to} & \quad Ax\le b \end{align} }[/math]

To see that this LP can be solved by feasibility testing, we add [math]\displaystyle{ c^Tx\le d }[/math], for some parameter [math]\displaystyle{ d }[/math], as an additional constraint, and obtain a new polytope [math]\displaystyle{ A'x\le b' }[/math]. The smallest [math]\displaystyle{ d }[/math] for which the polytope [math]\displaystyle{ A'x\le b' }[/math] is nonempty is the optimal value of the original LP. We can find this smallest [math]\displaystyle{ d }[/math] by binary search if we can efficiently test the emptiness of a convex polytope.
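
A minimal Python sketch of this reduction; the routine feasible(A, b) is assumed to implement the feasibility test described next, and the initial bracket [lo, hi] and the precision tol are illustrative.

  import numpy as np

  def solve_lp_by_feasibility(c, A, b, lo, hi, tol, feasible):
      # Binary search for the smallest d with {x : Ax <= b, c^T x <= d} nonempty.
      while hi - lo > tol:
          d = (lo + hi) / 2.0
          A_ext = np.vstack([A, c])        # add the extra constraint c^T x <= d
          b_ext = np.append(b, d)
          if feasible(A_ext, b_ext):
              hi = d
          else:
              lo = d
      return hi    # approximately the optimal value of the LP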


We assume that the convex polytope [math]\displaystyle{ P }[/math] is contained in the axis-aligned cube of width [math]\displaystyle{ R }[/math] centered at the origin; further if [math]\displaystyle{ P }[/math] is non-empty then it contains a cube of width [math]\displaystyle{ r }[/math]. The parameter [math]\displaystyle{ L }[/math] is equal to [math]\displaystyle{ \log\frac{R}{r} }[/math].

Algorithm: feasibility test
Input: an [math]\displaystyle{ n }[/math]-dimensional convex polytope [math]\displaystyle{ P }[/math] defined by the system [math]\displaystyle{ Ax\le b }[/math].

Output: a point [math]\displaystyle{ x\in P }[/math] in the polytope, or a guarantee that [math]\displaystyle{ P }[/math] is empty.

Let [math]\displaystyle{ K }[/math] be the axis-aligned cube of side length [math]\displaystyle{ R }[/math] and center [math]\displaystyle{ z=\boldsymbol{0} }[/math].
Repeat for [math]\displaystyle{ 2nL }[/math] times do:
  If [math]\displaystyle{ Az\le b }[/math], return [math]\displaystyle{ z }[/math].
  Pick a constraint violated by [math]\displaystyle{ z }[/math], say [math]\displaystyle{ A_i z\gt b_i }[/math], and define the halfspace
  [math]\displaystyle{ H=\{x\mid A_i x\le A_i z\} }[/math].
  Set [math]\displaystyle{ K=K\cap H }[/math]. Uniformly sample [math]\displaystyle{ N }[/math] random points [math]\displaystyle{ y^{(1)},y^{(2)},\ldots,y^{(N)} }[/math] from [math]\displaystyle{ K }[/math] and let
  [math]\displaystyle{ z=\frac{1}{N}\sum_{i=1}^Ny^{(i)} }[/math].
Report [math]\displaystyle{ P }[/math] is empty.

The number of samples required in each iteration is [math]\displaystyle{ N=O((\log m)^2) }[/math], where [math]\displaystyle{ m }[/math] is the number of linear constraints.
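
A minimal Python sketch of the feasibility test; the routine sample_uniform(cuts, R, N), which returns an array of [math]\displaystyle{ N }[/math] (near-)uniform points from the current convex set [math]\displaystyle{ K }[/math], is assumed to be given (in the actual algorithm it is implemented by the random walks discussed below).

  import numpy as np

  def feasibility_test(A, b, R, L, N, sample_uniform):
      # K is the initial cube intersected with the halfspace cuts collected so far.
      m, n = A.shape
      cuts = []                    # list of halfspaces (a, t) meaning a . x <= t
      z = np.zeros(n)              # centre of the initial cube
      for _ in range(2 * n * L):
          if np.all(A @ z <= b):
              return z             # z lies in the polytope P
          i = int(np.argmax(A @ z - b))          # a constraint violated by z
          cuts.append((A[i], float(A[i] @ z)))   # cut K with {x : A_i x <= A_i z}
          points = sample_uniform(cuts, R, N)    # N near-uniform points of the new K
          z = points.mean(axis=0)                # approximate centroid of K
      return None                  # report that P is empty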

It is easy to see that at any iteration, [math]\displaystyle{ K }[/math] is a convex set, and [math]\displaystyle{ P\subseteq K }[/math] for the current convex set [math]\displaystyle{ K }[/math].

The idea of the algorithm is based on computing the centroid of the convex set. It is known that if we could compute the exact centroid in each iteration, then the volume of [math]\displaystyle{ K }[/math] would drop by a constant factor [math]\displaystyle{ (1-1/e) }[/math] in each iteration, thus reaching the smallest possible volume of a cube of width [math]\displaystyle{ r }[/math] within a polynomial number of iterations. But finding the centroid of a convex polytope is #P-hard.

The idea behind the algorithm is that an approximate centroid can be computed using [math]\displaystyle{ N=O((\log m)^2) }[/math] random points, and with this choice of [math]\displaystyle{ z }[/math] the volume of [math]\displaystyle{ K }[/math] drops by a constant factor with high probability in each iteration.

The uniform sampling in [math]\displaystyle{ K }[/math] is not easy. But since [math]\displaystyle{ K }[/math] is also a convex polytope, we can approximate the uniform sampling by near-uniform sampling from [math]\displaystyle{ K }[/math] via rapidly mixing random walks. The random walks used here are similar to the ones used in the volume estimation of convex bodies. We could use the basic grid walk, or the more advanced ball walk or hit-and-run walk.

With rapid mixing random walks, the resulting LP solver runs in poly-time.

The details are omitted. You may check the following paper if interested:

  • Bertsimas and Vempala, “Solving convex programs by random walks.” JACM 2004.