# Randomized Algorithms (Spring 2010)/Approximate counting, linear programming

## Counting Problems

### Complexity model

Recall the class NP of decision problems (the problems with "yes" or "no" answers). Formally, denoting by ${\displaystyle \{0,1\}^{*}}$ the set of all boolean strings of any lengths, a decision problem is a function ${\displaystyle f:\{0,1\}^{*}\rightarrow \{0,1\}}$.

 Definition (NP) A function ${\displaystyle f:\{0,1\}^{*}\rightarrow \{0,1\}}$ is in NP if there exist a polynomial ${\displaystyle p}$ and a poly-time algorithm ${\displaystyle V}$ with boolean output such that for every ${\displaystyle x\in \{0,1\}^{*}}$, ${\displaystyle f(x)=1\Leftrightarrow \exists y\in \{0,1\}^{p(|x|)},{\mbox{such that }}V(x,y)=1\,}$.

Intuitively, the NP class contains all the decision problems such that the answer is "yes" if and only if there exists a certificate which can be verified in poly-time.

In many contexts, we are interested not just in the existence of certificate but actually in counting the number of certificates. This leads to the definition of the class #P (pronounced "sharp p"). Only now the output of the function is a natural number.

 Definition (#P) A function ${\displaystyle f:\{0,1\}^{*}\rightarrow \mathbb {N} }$ is in #P if there exist a polynomial ${\displaystyle p}$ and a poly-time algorithm ${\displaystyle V}$ with boolean output such that for every ${\displaystyle x\in \{0,1\}^{*}}$, ${\displaystyle f(x)=\left|\left\{y\in \{0,1\}^{p(|x|)}\mid V(x,y)=1\right\}\right|}$.

You may notice the similarity between the two definitions. The difference is that now ${\displaystyle f(x)}$ does not just indicate the existence of a certificate, but gives the actual number of certificates.

Why should we care about the number of "certificates" at all? In combinatorics, a counting problem is usually formulated as counting the number of combinatorial objects with some particular structure, such as the number of trees, or Latin squares. The #P class contains the counting problems whose "structure" requirements are easy to check.

Examples of problems in #P
#cycles: given as input a graph ${\displaystyle G}$, count the number of cycles in ${\displaystyle G}$.
#SAT: given as input a boolean formula ${\displaystyle \phi }$, count the number of satisfying assignments for ${\displaystyle \phi }$.
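Both examples fit the definition directly: the certificate ${\displaystyle y}$ is an assignment and ${\displaystyle V}$ simply evaluates the formula on it. A brute-force counter illustrating the #P definition (exponential time, of course, not an efficient algorithm; the small formula below is a made-up example):

```python
from itertools import product

def count_sat(phi, n):
    """#SAT by brute force: count the assignments y with V(x, y) = 1.
    Exponential in n -- this only illustrates the #P definition."""
    return sum(1 for y in product([False, True], repeat=n) if phi(y))

# a made-up small formula: (x1 OR x2) AND (NOT x1 OR x3)
phi = lambda y: (y[0] or y[1]) and ((not y[0]) or y[2])
count = count_sat(phi, 3)
print(count)
```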

The second example shows that #P may be harder than NP, because we know that SAT (given a boolean formula, decide the existence of satisfying assignments) is NP-complete, and its counting version #SAT asks for the exact number of satisfying assignments.

The class FP contains all the functions ${\displaystyle f:\{0,1\}^{*}\rightarrow \mathbb {N} }$ computable by poly-time algorithms. The classes FP and #P are the analogs of P and NP for counting problems.

A reduction from a problem ${\displaystyle f}$ to a problem ${\displaystyle g}$ is a mapping ${\displaystyle \phi }$ which maps instances of ${\displaystyle f}$ to instances of ${\displaystyle g}$ such that for any instance ${\displaystyle x}$ of ${\displaystyle f}$,

${\displaystyle f(x)=g(\phi (x))}$.

In other words, ${\displaystyle \phi }$ "reduces" the task of solving ${\displaystyle f}$ to the task of solving ${\displaystyle g}$. A problem ${\displaystyle f}$ is said to be poly-time reducible to the problem ${\displaystyle g}$, if there exists a reduction ${\displaystyle \phi }$ from ${\displaystyle f}$ to ${\displaystyle g}$ and ${\displaystyle \phi }$ is poly-time computable.

A problem ${\displaystyle f}$ is #P-hard if every problem in #P is poly-time reducible to ${\displaystyle f}$. A problem ${\displaystyle f}$ is #P-complete if ${\displaystyle f\in }$#P and ${\displaystyle f}$ is #P-hard. That is to say, #P-complete problems are the "hardest" problems in #P.

Generally, #P-complete problems are very hard, because if we could solve any of them in poly-time, then #P=FP, which would imply that NP=P. On the other hand, we do not know whether NP=P would imply #P=FP.

### FPRAS

 Definition (FPRAS) A polynomial randomized approximation scheme (PRAS) for a problem ${\displaystyle f:\{0,1\}^{*}\rightarrow \mathbb {R} }$ is a randomized algorithm ${\displaystyle A}$ that takes an input instance ${\displaystyle x}$ and a real number ${\displaystyle \epsilon >0}$, and in time polynomial in ${\displaystyle n=|x|}$ returns ${\displaystyle A(x)}$ such that ${\displaystyle \Pr[(1-\epsilon )f(x)\leq A(x)\leq (1+\epsilon )f(x)]\geq {\frac {3}{4}}.}$ A fully polynomial randomized approximation scheme (FPRAS) is a PRAS whose running time is polynomial in both ${\displaystyle n}$ and ${\displaystyle 1/\epsilon }$.

The constant ${\displaystyle {\frac {3}{4}}}$ in the definition can be replaced by any constant in the range ${\displaystyle \left({\frac {1}{2}},1\right)}$ without changing the definition. In fact, it is usually more convenient to parameterize the FPRAS by the approximation error and the probability error.

 Definition (${\displaystyle (\epsilon ,\delta )}$-FPRAS) An ${\displaystyle (\epsilon ,\delta )}$-FPRAS for a problem ${\displaystyle f:\{0,1\}^{*}\rightarrow \mathbb {R} }$ is an FPRAS with ${\displaystyle \Pr[(1-\epsilon )f(x)\leq A(x)\leq (1+\epsilon )f(x)]\geq 1-\delta ,}$ running in time polynomial of ${\displaystyle n}$, ${\displaystyle {\frac {1}{\epsilon }}}$, and ${\displaystyle \log {\frac {1}{\delta }}}$.
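The boost from the constant ${\displaystyle {\frac {3}{4}}}$ to ${\displaystyle 1-\delta }$ at a cost of only ${\displaystyle \log {\frac {1}{\delta }}}$ is achieved by the standard "median trick": run the base estimator independently ${\displaystyle O(\log {\frac {1}{\delta }})}$ times and output the median. A minimal sketch, where the base estimator below is a hypothetical stand-in that is accurate with probability ${\displaystyle {\frac {3}{4}}}$:

```python
import math, random, statistics

def amplify(base_estimator, delta, rng):
    """Median trick: run a base estimator that is accurate with
    probability >= 3/4 independently t = O(log(1/delta)) times;
    by a Chernoff bound on the number of inaccurate runs, the
    median is accurate with probability >= 1 - delta."""
    t = 2 * math.ceil(12 * math.log(1.0 / delta)) + 1   # odd number of runs
    return statistics.median(base_estimator(rng) for _ in range(t))

# hypothetical base estimator: returns the true value 42 with prob. 3/4
base = lambda r: 42.0 if r.random() < 0.75 else 1000.0
result = amplify(base, delta=0.01, rng=random.Random(0))
print(result)
```

The median is accurate as long as fewer than half of the runs are inaccurate, which fails with probability exponentially small in the number of runs.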

## Approximate Counting

Let us consider the following abstract problem.

Let ${\displaystyle U}$ be a finite set of known size, and let ${\displaystyle G\subseteq U}$. We want to compute the size of ${\displaystyle G}$, namely ${\displaystyle |G|}$.

We assume two devices:

• A uniform sampler ${\displaystyle {\mathcal {U}}}$, which uniformly and independently samples a member of ${\displaystyle U}$ upon each calling.
• A membership oracle for ${\displaystyle G}$, denoted ${\displaystyle {\mathcal {O}}}$. Given as input an ${\displaystyle x\in U}$, ${\displaystyle {\mathcal {O}}(x)}$ indicates whether or not ${\displaystyle x}$ is a member of ${\displaystyle G}$.

Equipped with ${\displaystyle {\mathcal {U}}}$ and ${\displaystyle {\mathcal {O}}}$, we can run the following Monte Carlo algorithm:

• Choose ${\displaystyle N}$ independent samples from ${\displaystyle U}$ by the uniform sampler ${\displaystyle {\mathcal {U}}}$, represented by the random variables ${\displaystyle X_{1},X_{2},\ldots ,X_{N}}$.
• Let ${\displaystyle Y_{i}}$ be the indicator random variable defined as ${\displaystyle Y_{i}={\mathcal {O}}(X_{i})}$, namely, ${\displaystyle Y_{i}}$ indicates whether ${\displaystyle X_{i}\in G}$.
• Define the estimator random variable
${\displaystyle Z={\frac {|U|}{N}}\sum _{i=1}^{N}Y_{i}.}$

It is easy to see that ${\displaystyle \mathbf {E} [Z]=|G|}$ and we might hope that with high probability the value of ${\displaystyle Z}$ is close to ${\displaystyle |G|}$. Formally, ${\displaystyle Z}$ is called an ${\displaystyle \epsilon }$-approximation of ${\displaystyle |G|}$ if

${\displaystyle (1-\epsilon )|G|\leq Z\leq (1+\epsilon )|G|.}$

The following theorem states that the probabilistic accuracy of the estimation depends on the number of samples and the ratio between ${\displaystyle |G|}$ and ${\displaystyle |U|}$.

 Theorem (estimator theorem) Let ${\displaystyle \alpha ={\frac {|G|}{|U|}}}$. Then the Monte Carlo method yields an ${\displaystyle \epsilon }$-approximation to ${\displaystyle |G|}$ with probability at least ${\displaystyle 1-\delta }$ provided ${\displaystyle N\geq {\frac {4}{\epsilon ^{2}\alpha }}\ln {\frac {2}{\delta }}}$.
Proof.
 Use the Chernoff bound.
${\displaystyle \square }$
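A sketch of the Monte Carlo method with the sample size prescribed by the estimator theorem; the sampler, the oracle, and the toy instance below are stand-ins for illustration:

```python
import math, random

def estimate_size(sample_U, in_G, size_U, eps, delta, alpha_lb, rng):
    """Monte Carlo estimator from the estimator theorem: with
    N >= (4 / (eps^2 * alpha)) * ln(2/delta) uniform samples,
    Z = (|U|/N) * sum(Y_i) is an eps-approximation of |G|
    with probability at least 1 - delta."""
    N = math.ceil(4.0 / (eps * eps * alpha_lb) * math.log(2.0 / delta))
    hits = sum(1 for _ in range(N) if in_G(sample_U(rng)))   # sum of Y_i
    return size_U * hits / N

# toy instance: U = {0, ..., 999}, G = multiples of 3, so |G| = 334
rng = random.Random(1)
Z = estimate_size(lambda r: r.randrange(1000), lambda x: x % 3 == 0,
                  1000, eps=0.1, delta=0.05, alpha_lb=1/3, rng=rng)
print(Z)
```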

A counting algorithm for the set ${\displaystyle G}$ has to deal with the following three issues:

• Implement the membership oracle ${\displaystyle {\mathcal {O}}}$. This is usually straightforward, or assumed by the model.
• Implement the uniform sampler ${\displaystyle {\mathcal {U}}}$. As we have seen, this is usually approximated by random walks. Designing the random walk and bounding its mixing rate is usually technically challenging, if possible at all.
• Deal with exponentially small ${\displaystyle \alpha ={\frac {|G|}{|U|}}}$. This requires us to cleverly choose the universe ${\displaystyle U}$. Sometimes this needs some nontrivial ideas.

### Counting DNFs

A disjunctive normal form (DNF) formula is a disjunction (OR) of clauses, where each clause is a conjunction (AND) of literals. For example:

${\displaystyle (x_{1}\wedge {\overline {x_{2}}}\wedge x_{3})\vee (x_{2}\wedge x_{4})\vee ({\overline {x_{1}}}\wedge x_{3}\wedge x_{4})}$.

Note the difference from the conjunctive normal forms (CNF).

Given a DNF formula ${\displaystyle \phi }$ as the input, the problem is to count the number of satisfying assignments of ${\displaystyle \phi }$. This problem is #P-complete.

Naively applying the Monte Carlo method will not give a good answer. Suppose that there are ${\displaystyle n}$ variables. Let ${\displaystyle U=\{\mathrm {true} ,\mathrm {false} \}^{n}}$ be the set of all truth assignments of the ${\displaystyle n}$ variables. Let ${\displaystyle G=\{x\in U\mid \phi (x)=\mathrm {true} \}}$ be the set of satisfying assignments for ${\displaystyle \phi }$. The straightforward use of the Monte Carlo method samples ${\displaystyle N}$ assignments from ${\displaystyle U}$ and checks how many of them satisfy ${\displaystyle \phi }$. This algorithm fails when ${\displaystyle |G|/|U|}$ is exponentially small, namely, when only an exponentially small fraction of the assignments satisfy the input DNF formula.

**The union of sets problem**

We reformulate the DNF counting problem in a more abstract framework, called the union of sets problem.

Let ${\displaystyle V}$ be a finite universe. We are given ${\displaystyle m}$ subsets ${\displaystyle H_{1},H_{2},\ldots ,H_{m}\subseteq V}$. The following assumptions hold:

• For all ${\displaystyle i}$, ${\displaystyle |H_{i}|}$ is computable in poly-time.
• It is possible to sample uniformly from each individual ${\displaystyle H_{i}}$.
• For any ${\displaystyle x\in V}$, it can be determined in poly-time whether ${\displaystyle x\in H_{i}}$.

The goal is to compute the size of ${\displaystyle H=\bigcup _{i=1}^{m}H_{i}}$.

DNF counting can be interpreted in this general framework as follows. Suppose that the DNF formula ${\displaystyle \phi }$ is defined on ${\displaystyle n}$ variables, and ${\displaystyle \phi }$ contains ${\displaystyle m}$ clauses ${\displaystyle C_{1},C_{2},\ldots ,C_{m}}$, where clause ${\displaystyle C_{i}}$ has ${\displaystyle k_{i}}$ literals. Without loss of generality, we assume that in each clause, each variable appears at most once.

• ${\displaystyle V}$ is the set of all assignments.
• Each ${\displaystyle H_{i}}$ is the set of satisfying assignments for the ${\displaystyle i}$-th clause ${\displaystyle C_{i}}$ of the DNF formula ${\displaystyle \phi }$. Then the union of sets ${\displaystyle H=\bigcup _{i}H_{i}}$ gives the set of satisfying assignments for ${\displaystyle \phi }$.
• Each clause ${\displaystyle C_{i}}$ is a conjunction (AND) of literals. It is not hard to see that ${\displaystyle |H_{i}|=2^{n-k_{i}}}$, which is efficiently computable.
• Sampling from an ${\displaystyle H_{i}}$ is simple: fix the ${\displaystyle k_{i}}$ variables appearing in that clause to the values satisfying their literals, and sample uniformly and independently the assignments of the remaining ${\displaystyle (n-k_{i})}$ variables.
• For each assignment ${\displaystyle x}$, it is easy to check whether it satisfies a clause ${\displaystyle C_{i}}$, thus it is easy to determine whether ${\displaystyle x\in H_{i}}$.
**The coverage algorithm**

We now introduce the coverage algorithm for the union of sets problem.

Consider the multiset ${\displaystyle U}$ defined by

${\displaystyle U=H_{1}\uplus H_{2}\uplus \cdots \uplus H_{m}}$,

where ${\displaystyle \uplus }$ denotes the multiset union. It is more convenient to define ${\displaystyle U}$ as the set

${\displaystyle U=\{(x,i)\mid x\in H_{i}\}}$.

For each ${\displaystyle x\in H}$, there may be more than one instance of ${\displaystyle (x,i)\in U}$. Among the multiple instances ${\displaystyle (x,i)\in U}$ for the same ${\displaystyle x\in H}$, we choose a unique representative, namely the ${\displaystyle (x,i)}$ with the minimum ${\displaystyle i}$, and form a set ${\displaystyle G}$.

Formally, ${\displaystyle G=\{(x,i)\in U\mid \forall (x,j)\in U,j\geq i\}}$. Every ${\displaystyle x\in H}$ corresponds to a unique ${\displaystyle (x,i)\in G}$, where ${\displaystyle i}$ is the smallest index such that ${\displaystyle x\in H_{i}}$.

It is obvious that ${\displaystyle G\subseteq U}$ and

${\displaystyle |G|=|H|}$.

Therefore, estimation of ${\displaystyle |H|}$ is reduced to estimation of ${\displaystyle |G|}$ with ${\displaystyle G\subseteq U}$. Then ${\displaystyle |G|}$ can have an ${\displaystyle \epsilon }$-approximation with probability ${\displaystyle (1-\delta )}$ in poly-time, if we can uniformly sample from ${\displaystyle U}$ and the ratio ${\displaystyle |G|/|U|}$ is not too small.

A uniform sample from ${\displaystyle U}$ can be generated as follows:

• generate an ${\displaystyle i\in \{1,2,\ldots ,m\}}$ with probability ${\displaystyle {\frac {|H_{i}|}{\sum _{i=1}^{m}|H_{i}|}}}$;
• uniformly sample an ${\displaystyle x\in H_{i}}$, and return ${\displaystyle (x,i)}$.

It is easy to see that this gives a uniform member of ${\displaystyle U}$. The above sampling procedure is poly-time because each ${\displaystyle |H_{i}|}$ can be computed in poly-time, and sampling uniformly from each ${\displaystyle H_{i}}$ is poly-time.

We now only need to lower bound the ratio

${\displaystyle \alpha ={\frac {|G|}{|U|}}}$.

We claim that

${\displaystyle \alpha \geq {\frac {1}{m}}}$.

It is easy to see this, because each ${\displaystyle x\in H}$ has at most ${\displaystyle m}$ instances of ${\displaystyle (x,i)}$ in ${\displaystyle U}$, and we already know that ${\displaystyle |G|=|H|}$.

Due to the estimator theorem, this needs ${\displaystyle {\frac {4m}{\epsilon ^{2}}}\ln {\frac {2}{\delta }}}$ uniform random samples from ${\displaystyle U}$.

This gives the coverage algorithm for the abstract problem of the union of sets. The DNF counting is a special case of it.
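The whole pipeline (compute the ${\displaystyle |H_{i}|}$, sample from ${\displaystyle U}$, test membership in ${\displaystyle G}$) can be sketched for the DNF special case; this specialization is usually attributed to Karp and Luby. The clause representation as a dict from variable index to required value is our own convention for illustration:

```python
import math, random

def karp_luby(clauses, n, eps, delta, rng):
    """Coverage algorithm for DNF counting.  A clause is a dict
    mapping a variable index to the value its literal requires.
    U is the multiset union of the satisfying sets H_i; G keeps,
    for each assignment x, only the pair (x, i) with minimum i."""
    sizes = [2 ** (n - len(c)) for c in clauses]       # |H_i| = 2^(n - k_i)
    m = len(clauses)
    N = math.ceil(4.0 * m / (eps * eps) * math.log(2.0 / delta))
    hits = 0
    for _ in range(N):
        # sample i with probability |H_i| / |U|, then x uniformly from H_i
        i = rng.choices(range(m), weights=sizes)[0]
        x = [rng.random() < 0.5 for _ in range(n)]
        for var, val in clauses[i].items():
            x[var] = val
        # (x, i) is in G iff no clause with a smaller index is satisfied by x
        if not any(all(x[v] == b for v, b in clauses[j].items())
                   for j in range(i)):
            hits += 1
    return sum(sizes) * hits / N                       # |U| * (|G|/|U|)

# (x0) OR (x1) over 3 variables has 6 satisfying assignments
clauses = [{0: True}, {1: True}]
est = karp_luby(clauses, 3, eps=0.1, delta=0.05, rng=random.Random(2))
print(est)
```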

### Permanents and perfect matchings

Let ${\displaystyle U=\{u_{1},u_{2},\ldots ,u_{n}\}}$ and ${\displaystyle V=\{v_{1},v_{2},\ldots ,v_{n}\}}$. Consider a bipartite graph ${\displaystyle G(U,V,E)}$. An ${\displaystyle M\subseteq E}$ is a perfect matching of ${\displaystyle G}$ if every vertex of ${\displaystyle G}$ has exactly one edge in ${\displaystyle M}$ adjacent to it.

Given a bipartite graph ${\displaystyle G(U,V,E)}$ with ${\displaystyle |U|=|V|=n}$, we want to count the number of perfect matchings of ${\displaystyle G}$. This problem can be reduced to computing the permanent of a square matrix.

 Definition (permanent) Let ${\displaystyle Q}$ be an ${\displaystyle n\times n}$ matrix. The permanent of the matrix is defined as ${\displaystyle \mathrm {per} (Q)=\sum _{\pi \in \mathbb {S} _{n}}\prod _{i=1}^{n}Q_{i,\pi (i)}}$, where ${\displaystyle \mathbb {S} _{n}}$ is the symmetric group of permutations of ${\displaystyle n}$ elements.

If we multiply each term of the sum by the sign of the permutation, we get the determinant of the matrix,

${\displaystyle \det(Q)=\sum _{\pi \in \mathbb {S} _{n}}\operatorname {sgn} (\pi )\prod _{i=1}^{n}Q_{i,\pi (i)}}$,

where ${\displaystyle \operatorname {sgn} (\pi )}$, the sign of a permutation, is either ${\displaystyle -1}$ or ${\displaystyle +1}$, according to whether the minimum number of pair-wise interchanges to achieve ${\displaystyle \pi }$ from ${\displaystyle (1,2,\ldots ,n)}$ is odd or even.

Unlike the determinant, which is computable in poly-time, the permanent is hard to compute, as permanents can be used to count the number of perfect matchings in a bipartite graph, which is #P-complete.

A bipartite graph ${\displaystyle G(U,V,E)}$ with ${\displaystyle |U|=|V|=n}$ can be represented by an ${\displaystyle n\times n}$ matrix ${\displaystyle Q}$ with 0-1 entries as follows:

• Each row of ${\displaystyle Q}$ corresponds to a vertex in ${\displaystyle U}$ and each column of ${\displaystyle Q}$ corresponds to a vertex in ${\displaystyle V}$.
${\displaystyle Q_{ij}={\begin{cases}1&{\mbox{if }}i\sim j,\\0&{\mbox{otherwise}}.\end{cases}}}$

Note the subtle difference between the definition of ${\displaystyle Q}$ and the adjacency matrix.

Each perfect matching corresponds to a permutation ${\displaystyle \pi }$ of ${\displaystyle U}$ with ${\displaystyle (u,\pi (u))\in E}$ for every ${\displaystyle u\in U}$, which corresponds to a permutation ${\displaystyle \pi \in \mathbb {S} _{n}}$ with ${\displaystyle \prod _{i=1}^{n}Q_{i,\pi (i)}=1}$. It is then easy to see that ${\displaystyle \mathrm {per} (Q)}$ gives the number of perfect matchings in the bipartite graph ${\displaystyle G}$.
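The correspondence is easy to verify by brute force: computing the permanent directly from its definition, over all ${\displaystyle n!}$ permutations, recovers the number of perfect matchings. A small sketch (exponential time, as one expects for a #P-hard quantity):

```python
from itertools import permutations

def permanent(Q):
    """Permanent by brute force over all n! permutations
    (exponential time; the problem is #P-hard in general)."""
    n = len(Q)
    total = 0
    for pi in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= Q[i][pi[i]]        # product of Q[i][pi(i)]
        total += prod
    return total

# 0-1 matrix of the bipartite graph K_{3,3} minus one edge:
# its perfect matchings are the 3! - 2! = 4 permutations avoiding pi(0)=0
Q = [[0, 1, 1],
     [1, 1, 1],
     [1, 1, 1]]
print(permanent(Q))
```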

It is known that counting the number of perfect matchings in a bipartite graph is #P-hard. Since this problem can be reduced to computing the permanent, thus the problem of computing the permanents is also #P-hard.

Now we show that with randomization, we can approximate the number of perfect matchings in a bipartite graph. In particular, we will give an FPRAS for counting the perfect matchings in a dense bipartite graph.

#### The Jerrum-Sinclair algorithm

Fix a bipartite graph ${\displaystyle G(U,V,E)}$ with ${\displaystyle |U|=|V|=n}$. Let ${\displaystyle {\mathcal {M}}_{k}}$ be the set of matchings of size ${\displaystyle k}$ in ${\displaystyle G}$, and let ${\displaystyle m_{k}=|{\mathcal {M}}_{k}|}$. Thus, ${\displaystyle {\mathcal {M}}_{n}}$ is the set of perfect matchings in ${\displaystyle G}$, and our goal is to compute ${\displaystyle m_{n}}$.

Let ${\displaystyle r_{k}={\frac {m_{k}}{m_{k-1}}}}$ for ${\displaystyle 1<k\leq n}$. Then ${\displaystyle m_{k}=m_{k-1}r_{k}}$, which gives us a recursion to compute ${\displaystyle m_{n}}$, as

${\displaystyle m_{n}=m_{1}{\frac {m_{2}}{m_{1}}}\cdot {\frac {m_{3}}{m_{2}}}\cdots {\frac {m_{n}}{m_{n-1}}}=m_{1}\prod _{k=2}^{n}r_{k}}$,

where ${\displaystyle m_{1}=|{\mathcal {M}}_{1}|}$ is the number of matchings of size 1 in the bipartite graph ${\displaystyle G}$, which is just the number of edges in ${\displaystyle G}$. Therefore, ${\displaystyle m_{n}}$ can be computed once we know ${\displaystyle r_{k}}$ for all ${\displaystyle 1<k\leq n}$.

Each ${\displaystyle r_{k}}$ can be estimated by sampling uniformly from the set ${\displaystyle {\mathcal {M}}_{k}\cup {\mathcal {M}}_{k-1}}$. The algorithm for estimating ${\displaystyle m_{n}}$ is outlined as:

1. For each ${\displaystyle 1<k\leq n}$, estimate ${\displaystyle r_{k}}$ by uniformly sampling sufficiently many members of ${\displaystyle {\mathcal {M}}_{k}\cup {\mathcal {M}}_{k-1}}$ as follows:
• uniformly sample ${\displaystyle N}$ matchings from ${\displaystyle {\mathcal {M}}_{k}\cup {\mathcal {M}}_{k-1}}$, for some polynomially large ${\displaystyle N}$;
• assuming that ${\displaystyle X}$ of the sampled matchings have size ${\displaystyle k}$, return ${\displaystyle r_{k}}$ as ${\displaystyle r_{k}={\frac {X}{N-X}}}$.
2. Compute ${\displaystyle m_{n}}$ as ${\displaystyle m_{n}=m_{1}\prod _{k=2}^{n}r_{k}}$, where ${\displaystyle m_{1}=|E|}$.
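The outline above can be sketched in code. Here the uniform sampler over ${\displaystyle {\mathcal {M}}_{k}\cup {\mathcal {M}}_{k-1}}$ is idealized by brute-force enumeration on a tiny graph (in the real algorithm this is exactly the part replaced by the random walk); the sketch only shows how the ratio estimates compose into ${\displaystyle m_{n}}$:

```python
import random
from itertools import combinations

def matchings_of_size(edges, k):
    """All matchings of size k, by brute force.  This enumeration
    stands in for the random-walk sampler of the real algorithm."""
    result = []
    for sub in combinations(edges, k):
        if len({u for u, v in sub}) == k and len({v for u, v in sub}) == k:
            result.append(sub)          # k edges with no shared endpoint
    return result

def estimate_perfect_matchings(edges, n, N, rng):
    estimate = float(len(edges))        # m_1 = number of edges
    for k in range(2, n + 1):
        pool = matchings_of_size(edges, k) + matchings_of_size(edges, k - 1)
        X = sum(1 for _ in range(N) if len(rng.choice(pool)) == k)
        estimate *= X / (N - X)         # r_k is estimated by X / (N - X)
    return estimate

# complete bipartite graph K_{3,3}, which has 3! = 6 perfect matchings
edges = [(u, v) for u in range(3) for v in range(3)]
est = estimate_perfect_matchings(edges, 3, N=20000, rng=random.Random(3))
print(est)
```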

There are several issues that we have to deal with in order to have a fully functional FPRAS for counting perfect matchings.

• By taking the product of ${\displaystyle r_{k}}$'s, the errors for individual ${\displaystyle r_{k}}$'s add up.
• In order to accurately estimate ${\displaystyle r_{k}}$ by sampling from ${\displaystyle {\mathcal {M}}_{k}\cup {\mathcal {M}}_{k-1}}$, the ratio ${\displaystyle r_{k}}$ should be within the range ${\displaystyle \left[{\frac {1}{\alpha }},\alpha \right]}$ for some ${\displaystyle \alpha }$ polynomial in ${\displaystyle n}$.
• Implement the uniform sampling from ${\displaystyle {\mathcal {M}}_{k}\cup {\mathcal {M}}_{k-1}}$.
**Estimator for each ${\displaystyle r_{k}}$**

Assuming that ${\displaystyle r_{k}=m_{k}/m_{k-1}}$ is well-bounded (meaning that ${\displaystyle n^{-c}\leq r_{k}\leq n^{c}}$ for some constant ${\displaystyle c}$), given a uniform sampler from the set ${\displaystyle {\mathcal {M}}_{k}\cup {\mathcal {M}}_{k-1}}$, we can directly apply the Monte Carlo method to estimate the ratio ${\displaystyle r_{k}}$, and prove the similar estimator theorem. See Exercise 11.7 of the textbook for details.

We need to guarantee that ${\displaystyle r_{k}}$ is well-bounded. It is known that this is true for dense bipartite graphs.

 Theorem (Broder 1986) For any bipartite graph ${\displaystyle G(V_{1},V_{2},E)}$ with ${\displaystyle |V_{1}|=|V_{2}|=n}$ and minimum degree at least ${\displaystyle n/2}$, it holds that ${\displaystyle {\frac {1}{n^{2}}}\leq r_{k}\leq n^{2}}$ for all ${\displaystyle k}$.
 Sketch of the proof. The idea is to show that in a dense bipartite graph, any matching of size ${\displaystyle k-1}$ can be modified into a matching of size ${\displaystyle k}$ with at most two operations of rotation and augmentation, introduced in the last lecture. ${\displaystyle \square }$
**Accumulation of errors**

The Monte Carlo estimation of each individual ${\displaystyle r_{k}}$ incurs some bounded error (there are two types of errors: the approximation error ${\displaystyle \epsilon }$ and the probabilistic error ${\displaystyle \delta }$). Within polynomial running time, we can make these errors sufficiently small, so that the accumulated error resulting from the product of ${\displaystyle n}$ ratios is bounded. See Exercise 11.8 in the textbook for details.

**Near-uniform sampling from ${\displaystyle {\mathcal {M}}_{k}\cup {\mathcal {M}}_{k-1}}$**

In the last lecture, we have shown that by random walk, we can sample a near-uniform member of ${\displaystyle {\mathcal {M}}_{n}\cup {\mathcal {M}}_{n-1}}$ in poly-time. To sample from ${\displaystyle {\mathcal {M}}_{k}\cup {\mathcal {M}}_{k-1}}$, we can add additional vertices and edges to the original bipartite graph and sample near-uniformly from ${\displaystyle {\mathcal {M}}_{n}\cup {\mathcal {M}}_{n-1}}$ of the modified bipartite graph, which yields a near-uniform sampler from ${\displaystyle {\mathcal {M}}_{k}\cup {\mathcal {M}}_{k-1}}$ of the original graph. The detailed method and analysis are in Section 11.3.4 and Exercise 11.11 of the textbook.

### Volume estimation

We consider the problem of computing the volume of a given convex body ${\displaystyle K}$ in ${\displaystyle n}$ dimensions.

We use ${\displaystyle \Upsilon (K)\,}$ to denote the volume of the convex body ${\displaystyle K}$. Abstractly, the problem is: given as input a convex body ${\displaystyle K}$ in ${\displaystyle n}$ dimensions, return ${\displaystyle \Upsilon (K)\,}$.

We should be more specific about the input model. Since we allow an arbitrary convex body as input, it is not even clear how to describe the body. We assume that ${\displaystyle K}$ is described by means of a membership oracle ${\displaystyle {\mathcal {O}}}$, such that for a query of an ${\displaystyle n}$-dimensional point ${\displaystyle x}$, ${\displaystyle {\mathcal {O}}(x)}$ indicates whether ${\displaystyle x\in K}$.

For example, the ${\displaystyle n}$-dimensional convex body defined by the intersection of ${\displaystyle m}$ half-spaces, which is the set of feasible solutions to a system of ${\displaystyle m}$ linear constraints, can be described as

${\displaystyle Ax\leq {\boldsymbol {b}}}$,

for some ${\displaystyle m\times n}$ matrix ${\displaystyle A}$ and ${\displaystyle m}$-dimensional vector ${\displaystyle b}$. For a query ${\displaystyle x}$, the membership oracle ${\displaystyle {\mathcal {O}}}$ just checks whether ${\displaystyle Ax\leq b}$.
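A minimal oracle of this kind, with a small hypothetical two-dimensional instance:

```python
def make_oracle(A, b):
    """Membership oracle O for the convex body {x : Ax <= b}:
    O(x) checks every linear constraint against the query point."""
    def oracle(x):
        return all(sum(a * xj for a, xj in zip(row, x)) <= bi
                   for row, bi in zip(A, b))
    return oracle

# hypothetical instance: the square [-1, 1]^2 as 4 halfspaces
A = [[1, 0], [-1, 0], [0, 1], [0, -1]]
b = [1, 1, 1, 1]
inside = make_oracle(A, b)
print(inside([0.5, -0.5]), inside([2.0, 0.0]))
```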

For deterministic algorithms, there is bad news for this problem.

 Theorem (Bárány-Füredi 1987) Suppose that a deterministic poly-time algorithm uses the membership oracle for a convex body ${\displaystyle K}$ in ${\displaystyle n}$ dimensions, and generates an upper bound ${\displaystyle \Upsilon _{u}\,}$ and a lower bound ${\displaystyle \Upsilon _{\ell }\,}$ on the volume ${\displaystyle \Upsilon (K)\,}$ of ${\displaystyle K}$. Then, there is a convex body ${\displaystyle K}$ and a constant ${\displaystyle c}$ such that ${\displaystyle {\frac {\Upsilon _{u}}{\Upsilon _{\ell }}}\geq c\left({\frac {n}{\log n}}\right)^{n}}$.

That is to say, with deterministic algorithms, we cannot approximate the volume even within a wildly loose range.

Dyer, Frieze, and Kannan came up with the idea of estimating the volume of ${\displaystyle K}$ by sampling near-uniformly from convex sets. They reduced the problem of approximately computing the volume of a convex body to this sampling problem, and thus gave the first FPRAS for the volume of convex bodies.

Any convex body ${\displaystyle K}$ encloses some ${\displaystyle n}$-dimensional ball and is itself enclosed by another ball of larger radius. We assume that the convex body ${\displaystyle K}$ encloses the unit ball around the origin, and is enclosed by a larger ball around the origin with polynomially large radius. Formally, we assume that

${\displaystyle B(0,1)\subseteq K\subseteq B(0,n^{c})}$ for some constant ${\displaystyle c}$,

where ${\displaystyle B(p,r)}$ denotes the ball of radius ${\displaystyle r}$ centered at ${\displaystyle p}$, i.e. ${\displaystyle B(p,r)=\{x\mid \|x-p\|\leq r\}}$. We can make this assumption because Grötschel, Lovász, and Schrijver proved that for any convex body ${\displaystyle K}$, a linear transformation ${\displaystyle \tau }$ can be found in poly-time such that ${\displaystyle \tau K}$ satisfies the assumption; being linear, ${\displaystyle \tau }$ scales all volumes by the same factor, so ratios of volumes are preserved.

The volumes of the balls are easy to compute. If the ratio between the volume of the convex body and the volume of the ball enclosing it were not too small, we could apply the Monte Carlo method to estimate ${\displaystyle \Upsilon (K)\,}$ by uniformly sampling from the ball.

However, in high dimensions, the ratio between the volume of a convex body and the volume of a ball enclosing it can be exponentially small. This is an instance of the so-called "curse of dimensionality".

Instead of having an outer ball and an inner ball, we define a sequence of balls:

${\displaystyle B_{0}=B(0,\lambda ^{0}),B_{1}=B(0,\lambda ^{1}),B_{2}=B(0,\lambda ^{2}),\ldots ,B_{m}=B(0,\lambda ^{m})}$,

where ${\displaystyle \lambda =(1+{\frac {1}{n}})}$ and ${\displaystyle m}$ is the smallest positive integer such that ${\displaystyle \lambda ^{m}\geq n^{c}}$. Therefore, the inner ball ${\displaystyle B(0,1)=B_{0}}$, the outer ball ${\displaystyle B(0,n^{c})\subseteq B_{m}}$, and ${\displaystyle m}$ is within polynomial of ${\displaystyle n}$. In fact, ${\displaystyle m\approx cn\ln n}$.

This sequence of balls naturally defines a sequence of convex bodies by intersections as ${\displaystyle K_{i}=B_{i}\cap K}$. It is obvious that

${\displaystyle B(0,1)=K_{0}\subseteq K_{1}\subseteq K_{2}\subseteq \cdots \subseteq K_{m}=K}$.

Balls are convex, and since the intersection of convex bodies is still convex, the sequence of ${\displaystyle K_{i}}$ is a sequence of convex bodies.

We have the telescopic product:

${\displaystyle {\frac {\Upsilon (K_{0})}{\Upsilon (K_{1})}}\cdot {\frac {\Upsilon (K_{1})}{\Upsilon (K_{2})}}\cdots {\frac {\Upsilon (K_{m-1})}{\Upsilon (K_{m})}}={\frac {\Upsilon (K_{0})}{\Upsilon (K_{m})}}={\frac {\Upsilon (B(0,1))}{\Upsilon (K)}}.}$

Therefore, the volume ${\displaystyle \Upsilon (K)\,}$ can be computed as

${\displaystyle \Upsilon (K)=\Upsilon (B(0,1))\cdot \prod _{i=1}^{m}{\frac {\Upsilon (K_{i})}{\Upsilon (K_{i-1})}}}$.

The volume of the unit ball ${\displaystyle \Upsilon (B(0,1))\,}$ can be precisely computed in poly-time. Each ratio ${\displaystyle {\frac {\Upsilon (K_{i})}{\Upsilon (K_{i-1})}}}$ is estimated by near-uniform sampling from ${\displaystyle K_{i}}$, which encloses ${\displaystyle K_{i-1}}$.

Another observation is that the ratio ${\displaystyle {\frac {\Upsilon (K_{i})}{\Upsilon (K_{i-1})}}}$ is well-bounded. Recall that ${\displaystyle K_{i}=B(0,\lambda ^{i})\cap K}$ where ${\displaystyle \lambda =(1+{\frac {1}{n}})}$. It can be proved that in ${\displaystyle n}$ dimensions, the volume of ${\displaystyle K_{i}}$ is at most ${\displaystyle \lambda ^{n}}$ times that of ${\displaystyle K_{i-1}}$, thus the ratio ${\displaystyle {\frac {\Upsilon (K_{i})}{\Upsilon (K_{i-1})}}}$ is at most ${\displaystyle \lambda ^{n}=(1+{\frac {1}{n}})^{n}\leq \mathrm {e} =O(1)}$. By the estimator theorem, we can have an FPRAS for the ratio ${\displaystyle {\frac {\Upsilon (K_{i})}{\Upsilon (K_{i-1})}}}$ if we can uniformly sample from ${\displaystyle K_{i}}$.
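The multiphase scheme can be sketched in a low dimension, where exact uniform sampling from each ${\displaystyle K_{i}}$ is cheap by rejection; in high dimensions this sampler is exactly the part that the random walk replaces. The square body below is a made-up test instance with known area 9:

```python
import math, random

def estimate_volume(in_K, outer_radius, N, rng, n=2):
    """Multiphase Monte Carlo volume estimation in n = 2 dimensions,
    assuming B(0,1) is inside K and K is inside B(0, outer_radius).
    Uniform samples from K_i = B(0, lambda^i) ∩ K are drawn here by
    rejection (idealized; the real algorithm uses a random walk)."""
    lam = 1.0 + 1.0 / n
    m = math.ceil(math.log(outer_radius, lam))
    volume = math.pi                        # Υ(B(0,1)) in 2 dimensions
    for i in range(1, m + 1):
        r_i, r_prev = lam ** i, lam ** (i - 1)
        hits = 0
        for _ in range(N):
            while True:                     # rejection: uniform point of K_i
                x, y = rng.uniform(-r_i, r_i), rng.uniform(-r_i, r_i)
                if x * x + y * y <= r_i * r_i and in_K((x, y)):
                    break
            if x * x + y * y <= r_prev * r_prev:
                hits += 1                   # the sample also lies in K_{i-1}
        volume *= N / hits                  # multiply by Υ(K_i)/Υ(K_{i-1})
    return volume

in_K = lambda p: abs(p[0]) <= 1.5 and abs(p[1]) <= 1.5   # area exactly 9
vol = estimate_volume(in_K, outer_radius=1.5 * math.sqrt(2),
                      N=20000, rng=random.Random(4))
print(vol)
```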

Uniform sampling from an arbitrary convex body is replaced by near-uniform sampling achieved by random walks. In the original work of Dyer, Frieze, and Kannan, they considered the random walk over the ${\displaystyle n}$-dimensional discrete grid points enclosed by ${\displaystyle K}$, and proved that the walk is rapidly mixing. This gives us the first FPRAS for volume estimation, which runs in time ${\displaystyle {\tilde {O}}(n^{23})}$, where ${\displaystyle {\tilde {O}}(\cdot )}$ ignores polylogarithmic factors.

The time bound was later improved by a series of works, each introducing some new ideas.

| | complexity | new ingredient(s) |
|---|---|---|
| Dyer-Frieze-Kannan 1991 | ${\displaystyle n^{23}}$ | everything |
| Lovász-Simonovits 1990 | ${\displaystyle n^{16}}$ | localization lemma |
| Applegate-Kannan 1990 | ${\displaystyle n^{10}}$ | logconcave sampling |
| Lovász 1990 | ${\displaystyle n^{10}}$ | ball walk |
| Dyer-Frieze 1991 | ${\displaystyle n^{8}}$ | better error analysis |
| Lovász-Simonovits 1993 | ${\displaystyle n^{7}}$ | many improvements |
| Kannan-Lovász-Simonovits 1997 | ${\displaystyle n^{5}}$ | isotropy, speedy walk |
| Lovász-Vempala 2003 | ${\displaystyle n^{4}}$ | simulated annealing, hit-and-run |

(cited from "Geometric Random Walks: A Survey" by Santosh Vempala.)

The current best upper bound is ${\displaystyle {\tilde {O}}(n^{4})}$ due to Lovász and Vempala in 2003. It is conjectured that the optimal bound is ${\displaystyle \Theta (n^{3})}$.

## Linear Programming

Given a function ${\displaystyle f:\mathbb {R} ^{n}\rightarrow \mathbb {R} }$ and a set ${\displaystyle {\mathcal {F}}\subseteq \mathbb {R} ^{n}}$, an optimization problem is the problem of finding an ${\displaystyle x\in {\mathcal {F}}}$ with the optimal (minimum) ${\displaystyle f(x)}$. Formally, the problem can be expressed as:

{\displaystyle {\begin{aligned}{\mbox{minimize}}&\quad f(x)\\{\mbox{subject to}}&\quad x\in {\mathcal {F}}\end{aligned}}}

where ${\displaystyle x\in \mathbb {R} ^{n}}$ is a vector of ${\displaystyle n}$ variables. For the problem of maximizing a function ${\displaystyle f}$, we simply minimize ${\displaystyle -f}$ instead.

We call the function ${\displaystyle f}$ the objective function and call any ${\displaystyle x\in {\mathcal {F}}}$ a feasible solution of the problem. A feasible solution that minimizes ${\displaystyle f(x)}$ is called an optimal solution. Our task is to find an optimal solution.

The feasible set ${\displaystyle {\mathcal {F}}}$ is usually given by a number of constraints ${\displaystyle P_{1},P_{2},\ldots ,P_{m}}$, which are predicates defined on ${\displaystyle x}$. An ${\displaystyle x}$ is a feasible solution if it satisfies all the constraints.

A linear programming (LP) problem is an optimization problem with a linear objective function subject to a number of linear constraints. Formally, an LP is a problem that can be expressed in the following form:

{\displaystyle {\begin{aligned}{\mbox{minimize}}&\quad c_{1}x_{1}+c_{2}x_{2}+\cdots +c_{n}x_{n}\\{\mbox{subject to}}&\quad a_{11}x_{1}+a_{12}x_{2}+\cdots +a_{1n}x_{n}\leq b_{1}\\&\quad a_{21}x_{1}+a_{22}x_{2}+\cdots +a_{2n}x_{n}\leq b_{2}\\&\qquad \qquad \vdots \\&\quad a_{m1}x_{1}+a_{m2}x_{2}+\cdots +a_{mn}x_{n}\leq b_{m}\\\end{aligned}}}

where ${\displaystyle a_{ij}}$, ${\displaystyle b_{i}}$ and ${\displaystyle c_{j}}$ are constants and ${\displaystyle x_{j}}$ are variables. For a maximization problem, or for constraints given by "${\displaystyle \geq }$", we can multiply the coefficients by ${\displaystyle -1}$ and still write the LP in the above form.

We can also describe an LP in vector form. Let ${\displaystyle x}$ be a vector of ${\displaystyle n}$ variables. Let ${\displaystyle A=(a_{ij})}$ be an ${\displaystyle m\times n}$ matrix of constant entries, ${\displaystyle b=(b_{1},\ldots ,b_{m})}$ a vector of ${\displaystyle m}$ constant entries, and ${\displaystyle c=(c_{1},\ldots ,c_{n})}$ a vector of ${\displaystyle n}$ constant entries. Then an LP is expressed as:

{\displaystyle {\begin{aligned}{\mbox{minimize}}&\quad c^{T}x\\{\mbox{subject to}}&\quad Ax\leq b\end{aligned}}}

We call it the canonical form of linear programming.
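As a concrete illustration (a hypothetical toy instance, not from the notes), the sketch below stores an LP in canonical form as the triple ${\displaystyle (A,b,c)}$ and checks whether a candidate point is feasible; a maximization objective and a "${\displaystyle \geq }$" constraint are folded into canonical form by negation, as described above.

```python
import numpy as np

# Hypothetical toy LP:  maximize x1 + x2
#   subject to    x1 + 2*x2 <= 4
#                3*x1 +  x2 >= -1     (a ">=" constraint)
#
# Canonical form minimizes c^T x subject to A x <= b, so we negate the
# objective (max f becomes min -f) and negate the ">=" row.
c = np.array([-1.0, -1.0])            # minimize -x1 - x2
A = np.array([[ 1.0,  2.0],           #   x1 + 2*x2 <= 4
              [-3.0, -1.0]])          # -3*x1 -  x2 <= 1
b = np.array([4.0, 1.0])

def is_feasible(x, A, b, tol=1e-9):
    """Return True iff x satisfies every constraint of A x <= b."""
    return bool(np.all(A @ x <= b + tol))

print(is_feasible(np.array([0.0, 0.0]), A, b))    # the origin is feasible
print(is_feasible(np.array([10.0, 10.0]), A, b))  # this point is not
```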

### The geometry of LPs

For ${\displaystyle n}$-dimensional space ${\displaystyle \mathbb {R} ^{n}}$, a linear constraint ${\displaystyle ax\leq b}$ specifies a halfspace. The feasible set specified by ${\displaystyle m}$ constraints together is an intersection of ${\displaystyle m}$ halfspaces, thus, a convex polytope.

The convex polytope is defined by ${\displaystyle Ax\leq b}$.

An LP can thus be seen geometrically: given an ${\displaystyle n}$-dimensional polytope and a linear (affine) function ${\displaystyle f:\mathbb {R} ^{n}\rightarrow \mathbb {R} }$, find a point ${\displaystyle x}$ in the polytope with the smallest value ${\displaystyle f(x)}$.

We can choose a number of constraints in the system ${\displaystyle Ax\leq b}$ and make them hold with equality. This defines an affine subspace of ${\displaystyle \mathbb {R} ^{n}}$. In particular, if we choose ${\displaystyle n}$ linearly independent constraints to form an ${\displaystyle n\times n}$ submatrix ${\displaystyle A'}$ and the corresponding ${\displaystyle n}$-dimensional vector ${\displaystyle b'}$, solving ${\displaystyle A'x=b'}$ gives exactly one point ${\displaystyle x}$. If this ${\displaystyle x}$ is in the polytope, i.e. ${\displaystyle x}$ satisfies ${\displaystyle Ax\leq b}$, we call such an ${\displaystyle x}$ a vertex of the polytope ${\displaystyle Ax\leq b}$. In the context of LP, it is also called a basic feasible solution (bfs) of the LP.

A key observation for LP is that there is a basic feasible solution which is optimal, i.e. the optimal solution is a vertex of the polytope.
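Since some vertex is optimal, a grossly inefficient but correct LP solver simply enumerates all basic feasible solutions. The following sketch, on a hypothetical toy instance, assumes the polytope is bounded and non-empty, so the key observation above applies.

```python
import itertools
import numpy as np

def bfs_optimum(A, b, c, tol=1e-9):
    """Brute-force LP solver: for every choice of n constraints, solve
    A'x = b', keep the solutions that are feasible (the vertices, i.e.
    basic feasible solutions), and return one minimizing c^T x.
    Exponential in general; only meant for tiny instances."""
    m, n = A.shape
    best_x, best_val = None, float("inf")
    for rows in itertools.combinations(range(m), n):
        A_sub, b_sub = A[list(rows)], b[list(rows)]
        if abs(np.linalg.det(A_sub)) < tol:
            continue                      # rows not linearly independent
        x = np.linalg.solve(A_sub, b_sub)
        if np.all(A @ x <= b + tol) and c @ x < best_val:
            best_x, best_val = x, c @ x   # a better basic feasible solution
    return best_x, best_val

# Hypothetical instance: minimize -x1 - x2 over the unit square [0,1]^2.
A = np.array([[1.0, 0], [0, 1.0], [-1.0, 0], [0, -1.0]])
b = np.array([1.0, 1.0, 0.0, 0.0])
c = np.array([-1.0, -1.0])
x, val = bfs_optimum(A, b, c)             # optimum at the vertex (1, 1)
```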

### The simplex algorithms

The simplex algorithm, due to George Dantzig, solves a linear program by moving from vertex to vertex of the convex polytope, each time making some progress towards optimizing the objective function. Eventually the algorithm reaches a vertex which is a local optimum. Over a convex set, a locally optimal point is also globally optimal, thus the simplex algorithm returns an optimal solution in finitely many steps.

The problem with the simplex algorithm is that on some bad polytopes, it takes exponentially many steps to reach the optimum. The simplex algorithm is actually a class of algorithms defined by various pivoting rules, which describe how to move locally from one vertex to another. The original pivoting rule proposed by Dantzig has exponential worst-case time complexity (though it works very well in practice). To date, it is still unknown whether there exists any deterministic simplex algorithm with sub-exponential worst-case complexity.

People have tried randomized pivoting rules, and some amazing progress has been made:

• Kalai 1992: there is a randomized simplex algorithm with sub-exponential time complexity.
• Kelner-Spielman 2006: there is a polynomial time randomized simplex algorithm.

Although we do not know whether there exists a deterministic poly-time simplex algorithm, we do know that LP can be solved by deterministic poly-time algorithms, i.e. LP is in P. The following two algorithms use different ideas than the simplex algorithm, and both run in poly-time:

• The ellipsoid algorithm.
• Interior point methods.

Although these algorithms guarantee polynomial time complexity in the worst case, in practice their performance is often worse than that of the simplex algorithm. However, their ideas can be used to solve more general mathematical programs, such as convex programs.

### An LP solver via random walks

We discuss a new algorithm for solving LP which is based on random sampling, introduced by Bertsimas and Vempala in 2004.

The problem of solving an LP can be reduced to the following problem: given a convex polytope ${\displaystyle Ax\leq b}$, test whether the polytope is empty. We call this problem the "feasibility test".

Suppose we have an LP:

{\displaystyle {\begin{aligned}{\mbox{minimize}}&\quad c^{T}x\\{\mbox{subject to}}&\quad Ax\leq b\end{aligned}}}

To see that this LP can be solved by feasibility testing, we treat ${\displaystyle c^{T}x\leq d}$ for some parameter ${\displaystyle d}$ as an additional constraint, and define a new polytope ${\displaystyle A'x\leq b'}$. The smallest ${\displaystyle d}$ for which the polytope ${\displaystyle A'x\leq b'}$ is non-empty is the optimal value of the original LP. We can find this smallest ${\displaystyle d}$ by binary search, provided we can efficiently test the emptiness of a convex polytope.
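This reduction can be sketched as follows on a hypothetical toy instance. The `rejection_feasible` oracle here is a naive stand-in (uniform sampling from a bounding box) for the random-walk feasibility test; only the binary-search wrapper illustrates the reduction itself.

```python
import numpy as np

def rejection_feasible(A, b, R=2.0, trials=20000, seed=0):
    """Naive feasibility oracle: sample the box [-R, R]^n uniformly and
    report whether any sample satisfies A x <= b. (A stand-in for a
    proper feasibility test; only sensible in very low dimension.)"""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-R, R, size=(trials, A.shape[1]))
    return bool(np.any(np.all(pts @ A.T <= b, axis=1)))

def lp_via_feasibility(A, b, c, lo, hi, iters=60):
    """Binary-search the smallest d such that adding the constraint
    c^T x <= d keeps the polytope {x : A x <= b} non-empty.
    [lo, hi] must bracket the optimal value."""
    for _ in range(iters):
        d = (lo + hi) / 2.0
        A_ext = np.vstack([A, c])        # the new polytope A'x <= b'
        b_ext = np.append(b, d)
        if rejection_feasible(A_ext, b_ext):
            hi = d                       # optimum is at most d
        else:
            lo = d                       # optimum exceeds d
    return hi

# Hypothetical instance: minimize x1 + x2 over the unit square [0,1]^2;
# the true optimum is 0, attained at the origin.
A = np.array([[1.0, 0], [0, 1.0], [-1.0, 0], [0, -1.0]])
b = np.array([1.0, 1.0, 0.0, 0.0])
c = np.array([1.0, 1.0])
opt = lp_via_feasibility(A, b, c, lo=-4.0, hi=4.0)  # close to 0
```

With a fixed seed the oracle's answer is monotone in ${\displaystyle d}$, so the binary search converges; the accuracy is limited by how finely the naive oracle samples.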

We assume that the convex polytope ${\displaystyle P}$ is contained in the axis-aligned cube of width ${\displaystyle R}$ centered at the origin; further if ${\displaystyle P}$ is non-empty then it contains a cube of width ${\displaystyle r}$. The parameter ${\displaystyle L}$ is equal to ${\displaystyle \log {\frac {R}{r}}}$.

 Algorithm: feasibility test
 Input: an ${\displaystyle n}$-dimensional convex polytope ${\displaystyle P}$ defined by the system ${\displaystyle Ax\leq b}$.
 Output: a point ${\displaystyle x\in P}$ in the polytope, or a guarantee that ${\displaystyle P}$ is empty.
 
 Let ${\displaystyle K}$ be the axis-aligned cube of side length ${\displaystyle R}$ and center ${\displaystyle z={\boldsymbol {0}}}$.
 Repeat for ${\displaystyle 2nL}$ times:
  If ${\displaystyle Az\leq b}$, return ${\displaystyle z}$.
  Pick a constraint violated by ${\displaystyle z}$, say ${\displaystyle A_{i}z>b_{i}}$, and define the halfspace ${\displaystyle H=\{x\mid A_{i}x\leq A_{i}z\}}$.
  Set ${\displaystyle K=K\cap H}$.
  Uniformly sample ${\displaystyle N}$ random points ${\displaystyle y^{(1)},y^{(2)},\ldots ,y^{(N)}}$ from ${\displaystyle K}$ and let ${\displaystyle z={\frac {1}{N}}\sum _{i=1}^{N}y^{(i)}}$.
 Report ${\displaystyle P}$ is empty.

The number of samples required in each iteration is ${\displaystyle N=O((\log m)^{2})}$, where ${\displaystyle m}$ is the number of linear constraints.

It is easy to see that at any iteration, ${\displaystyle K}$ is a convex set, and ${\displaystyle P\subseteq K}$ for the current convex set ${\displaystyle K}$.

The idea of the algorithm is based on computing the centroid of the convex set. It is known that if we could compute the exact centroid in each iteration, then the volume of ${\displaystyle K}$ would drop by a constant factor ${\displaystyle (1-1/e)}$ in each iteration, thus reaching the smallest possible volume, that of a cube of width ${\displaystyle r}$, within a polynomial number of iterations. However, finding the exact centroid of a convex polytope is #P-hard.

The idea behind the algorithm is that an approximate centroid can be computed from ${\displaystyle N=O((\log m)^{2})}$ random points, and with this choice of ${\displaystyle z}$ the volume of ${\displaystyle K}$ drops by a constant factor with high probability in each iteration.
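A minimal sketch of the whole feasibility test on a hypothetical 2-dimensional instance: naive rejection sampling from the bounding cube stands in for the random-walk sampler (workable only in very low dimension), and fixed small constants replace ${\displaystyle 2nL}$ and ${\displaystyle O((\log m)^{2})}$.

```python
import numpy as np

def feasibility_test(A, b, R=4.0, N=200, max_iters=100, seed=1):
    """Cutting-plane feasibility test with an approximate centroid.

    K starts as the cube [-R/2, R/2]^n and shrinks by one halfspace cut
    per iteration; z is the mean of N near-uniform samples from K.
    (Rejection sampling is a naive stand-in for the random-walk sampler.)"""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    cuts_A, cuts_b = [], []              # cuts defining K inside the cube
    z = np.zeros(n)
    for _ in range(max_iters):
        violated = np.flatnonzero(A @ z > b)
        if violated.size == 0:
            return z                     # z lies in P: done
        i = violated[0]
        cuts_A.append(A[i])              # cut through z: A_i x <= A_i z
        cuts_b.append(A[i] @ z)
        CA, cb = np.array(cuts_A), np.array(cuts_b)
        samples = np.empty((0, n))
        while samples.shape[0] < N:      # rejection-sample the new K
            y = rng.uniform(-R / 2, R / 2, size=(4 * N, n))
            samples = np.vstack([samples, y[np.all(y @ CA.T <= cb, axis=1)]])
        z = samples[:N].mean(axis=0)     # approximate centroid of K
    return None                          # report P empty

# Hypothetical polytope P = [0.5, 1]^2, well inside the width-4 cube.
A = np.array([[1.0, 0], [0, 1.0], [-1.0, 0], [0, -1.0]])
b = np.array([1.0, 1.0, -0.5, -0.5])
z = feasibility_test(A, b)
```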

Exact uniform sampling from ${\displaystyle K}$ is not easy. But since ${\displaystyle K}$ is also a convex polytope, we can approximate uniform sampling by near-uniform sampling from ${\displaystyle K}$ via rapidly mixing random walks. The random walks used here are similar to the ones used in the volume estimation of convex bodies: we could use the basic grid walk, the more advanced ball walk, or the hit-and-run walk.
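One such walk, hit-and-run, restricted to a polytope ${\displaystyle Ax\leq b}$, can be sketched as follows (toy instance; the step count and numerical tolerances are arbitrary choices, and no mixing-time guarantee is claimed for these parameters).

```python
import numpy as np

def hit_and_run(A, b, x0, steps=500, seed=0):
    """Hit-and-run walk in the polytope {x : A x <= b}.

    From the current point, pick a uniformly random direction, compute
    the chord of the polytope along that line, and move to a uniformly
    random point on the chord. The walk never leaves the polytope and,
    run long enough, approaches the uniform distribution on it."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        d = rng.normal(size=x.size)
        d /= np.linalg.norm(d)               # random unit direction
        # a_i (x + t d) <= b_i  <=>  t * (a_i . d) <= b_i - a_i . x
        Ad, slack = A @ d, b - A @ x
        pos, neg = Ad > 1e-12, Ad < -1e-12
        t_hi = np.min(slack[pos] / Ad[pos])  # chord endpoints along d
        t_lo = np.max(slack[neg] / Ad[neg])
        x = x + rng.uniform(t_lo, t_hi) * d
    return x

# Walk inside the unit square; every step stays feasible.
A = np.array([[1.0, 0], [0, 1.0], [-1.0, 0], [0, -1.0]])
b = np.array([1.0, 1.0, 0.0, 0.0])
x = hit_and_run(A, b, x0=[0.5, 0.5])
```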

With rapid mixing random walks, the resulting LP solver runs in poly-time.

The details are omitted. You may check the following paper if interested:

• Bertsimas and Vempala, “Solving convex programs by random walks.” JACM 2004.