随机算法 (Spring 2014)/The Monte Carlo Method

Parameter Estimation

Consider the following abstract problem of parameter estimation.

Let [math]\displaystyle{ U }[/math] be a finite set of known size, and let [math]\displaystyle{ G\subseteq U }[/math]. We want to estimate the parameter [math]\displaystyle{ |G| }[/math], i.e. the size of [math]\displaystyle{ G }[/math], or equivalently, the density (or frequency) [math]\displaystyle{ \frac{|G|}{|U|} }[/math].

We assume two devices:

  • A uniform sampler [math]\displaystyle{ \mathcal{U} }[/math], which uniformly and independently samples a member of [math]\displaystyle{ U }[/math] upon each calling.
  • A membership oracle of [math]\displaystyle{ G }[/math], denoted [math]\displaystyle{ \mathcal{O} }[/math]. Given an [math]\displaystyle{ x\in U }[/math] as input, [math]\displaystyle{ \mathcal{O}(x) }[/math] indicates whether or not [math]\displaystyle{ x }[/math] is a member of [math]\displaystyle{ G }[/math].

Equipped with [math]\displaystyle{ \mathcal{U} }[/math] and [math]\displaystyle{ \mathcal{O} }[/math], we can run the following Monte Carlo method:

  • Choose [math]\displaystyle{ N }[/math] independent samples from [math]\displaystyle{ U }[/math] by the uniform sampler [math]\displaystyle{ \mathcal{U} }[/math], represented by the random variables [math]\displaystyle{ X_1,X_2,\ldots, X_N }[/math].
  • Let [math]\displaystyle{ Y_i }[/math] be the indicator random variable defined as [math]\displaystyle{ Y_i=\mathcal{O}(X_i) }[/math], namely, [math]\displaystyle{ Y_i }[/math] indicates whether [math]\displaystyle{ X_i\in G }[/math].
  • Define the estimator random variable for [math]\displaystyle{ |G| }[/math] as
[math]\displaystyle{ Z=\frac{|U|}{N}\sum_{i=1}^N Y_i }[/math]
  • Define the estimator random variable for [math]\displaystyle{ \alpha=\frac{|G|}{|U|} }[/math] as
[math]\displaystyle{ \overline{Y}=\frac{1}{N}\sum_{i=1}^N Y_i }[/math].

It is easy to see that [math]\displaystyle{ \mathbf{E}[Z]=|G| }[/math] and [math]\displaystyle{ \mathbf{E}[\overline{Y}]=\alpha }[/math]. We might hope that with high probability the values of [math]\displaystyle{ Z }[/math] and [math]\displaystyle{ \overline{Y} }[/math] are close to their respective expectations. Formally, [math]\displaystyle{ Z }[/math] is called an [math]\displaystyle{ \epsilon }[/math]-approximation of [math]\displaystyle{ |G| }[/math] if

[math]\displaystyle{ (1-\epsilon)|G|\le Z\le (1+\epsilon)|G|. }[/math]

And [math]\displaystyle{ \overline{Y} }[/math] is called an [math]\displaystyle{ (\epsilon,\delta) }[/math]-estimator for [math]\displaystyle{ \alpha }[/math] if

[math]\displaystyle{ \Pr[|\overline{Y}-\alpha|\le \epsilon]\ge 1-\delta. }[/math]

The following general theorem relates the accuracy of the estimation to the number of samples.

Theorem (estimator theorem)
Let [math]\displaystyle{ \alpha=\frac{|G|}{|U|} }[/math]. The following holds:
1. [math]\displaystyle{ Z }[/math] is an [math]\displaystyle{ \epsilon }[/math]-approximation to [math]\displaystyle{ |G| }[/math] with probability at least [math]\displaystyle{ 1-\delta }[/math] if
[math]\displaystyle{ N\ge\frac{4}{\epsilon^2 \alpha}\ln\frac{2}{\delta} }[/math].
2. [math]\displaystyle{ \overline{Y} }[/math] is an [math]\displaystyle{ (\epsilon,\delta) }[/math]-estimator for [math]\displaystyle{ \alpha }[/math] if
[math]\displaystyle{ N\ge\frac{4}{\epsilon^2}\ln\frac{2}{\delta} }[/math].
Proof.

Recall that [math]\displaystyle{ Y_i }[/math] indicates whether the [math]\displaystyle{ i }[/math]-th sample is in [math]\displaystyle{ G }[/math]. Let [math]\displaystyle{ Y=\sum_{i=1}^NY_i }[/math]. Then we have [math]\displaystyle{ Z=\frac{|U|}{N}Y }[/math], and hence the event [math]\displaystyle{ (1-\epsilon)|G|\le Z\le (1+\epsilon)|G| }[/math] is equivalent to the event [math]\displaystyle{ (1-\epsilon)\frac{|G|}{|U|}N\le Y\le (1+\epsilon)\frac{|G|}{|U|}N }[/math]. Note that each [math]\displaystyle{ Y_i }[/math] is a Bernoulli trial that succeeds with probability [math]\displaystyle{ \frac{|G|}{|U|} }[/math], thus [math]\displaystyle{ \mathbb{E}[Y]=\frac{|G|}{|U|}N }[/math]. The first claim then follows from the multiplicative Chernoff bound. The second claim follows in the same way by applying the additive Chernoff-Hoeffding bound to [math]\displaystyle{ \overline{Y} }[/math], whose expectation is [math]\displaystyle{ \alpha }[/math].

[math]\displaystyle{ \square }[/math]

A counting algorithm for the set [math]\displaystyle{ G }[/math] has to deal with the following three issues:

  1. Implement the membership oracle [math]\displaystyle{ \mathcal{O} }[/math]. This is usually straightforward, or assumed by the model.
  2. Implement the uniform sampler [math]\displaystyle{ \mathcal{U} }[/math]. This can be straightforward or highly nontrivial, depending on the problem.
  3. Deal with exponentially small [math]\displaystyle{ \alpha=\frac{|G|}{|U|} }[/math]. This requires us to cleverly choose the universe [math]\displaystyle{ U }[/math]. And this part usually uses some beautiful ideas.
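To make the method above concrete, here is a minimal sketch in Python, under the assumption that the uniform sampler and the membership oracle are given as callables; the function name and its interface are illustrative, not part of the original notes. The sample size is taken from the estimator theorem, using a known lower bound on [math]\displaystyle{ \alpha }[/math].

import math

def monte_carlo_estimate(sampler, oracle, universe_size, eps, delta, alpha_lower):
    # Sample size from the estimator theorem: N >= (4 / (eps^2 * alpha)) * ln(2 / delta),
    # where alpha_lower is a known lower bound on alpha = |G| / |U|.
    N = int(math.ceil(4.0 / (eps * eps * alpha_lower) * math.log(2.0 / delta)))
    hits = sum(1 for _ in range(N) if oracle(sampler()))  # number of samples falling in G
    Y_bar = hits / N             # (eps, delta)-estimator of alpha
    Z = universe_size * Y_bar    # eps-approximation of |G| with probability >= 1 - delta
    return Z, Y_bar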

Counting DNFs

A disjunctive normal form (DNF) formula is a disjunction (OR) of clauses, where each clause is a conjunction (AND) of literals. For example:

[math]\displaystyle{ (x_1\wedge \overline{x_2}\wedge x_3)\vee(x_2\wedge x_4)\vee(\overline{x_1}\wedge x_3\wedge x_4) }[/math].

Note the difference from the conjunctive normal forms (CNF).

Given a DNF formula [math]\displaystyle{ \phi }[/math] as the input, the problem is to count the number of satisfying assignments of [math]\displaystyle{ \phi }[/math]. This problem is #P-complete. So we do not expect to have a polynomial-time algorithm that solves the problem exactly. Instead, we seek approximation algorithms. Interestingly, even though the computational complexity of the problem is so high, it admits an approximation algorithm with a very good approximation ratio (theoretically speaking). We first need to introduce some general concepts for approximation algorithms.

FPTAS (Fully Polynomial-Time Approximation Scheme)
Consider a computational problem [math]\displaystyle{ f:\{0,1\}^*\to \mathbb{Z}^+ }[/math] whose outputs are positive integers.
An FPTAS (fully polynomial-time approximation scheme) for [math]\displaystyle{ f }[/math] is an algorithm [math]\displaystyle{ \mathcal{A} }[/math] that takes the following inputs:
  • an input instance [math]\displaystyle{ x }[/math] of problem [math]\displaystyle{ f }[/math], which we use [math]\displaystyle{ n=|x| }[/math] to denote its length;
  • a real number [math]\displaystyle{ 0<\epsilon<1 }[/math];
and in time polynomial in both [math]\displaystyle{ n }[/math] and [math]\displaystyle{ \frac{1}{\epsilon} }[/math], returns an output [math]\displaystyle{ \mathcal{A}(x,\epsilon) }[/math] such that
[math]\displaystyle{ (1-\epsilon)f(x)\le\mathcal{A}(x,\epsilon)\le (1+\epsilon)f(x). }[/math]

The adverb "fully" stresses the polynomial reliance on not just the input size [math]\displaystyle{ n }[/math] but also the approximation parameter [math]\displaystyle{ \frac{1}{\epsilon} }[/math]. An FPTAS may approximate the true value of the output within any accuracy in polynomial time, though the polynomial becomes larger when the approximation is more accurate. Moreover, the running time grows just polynomially with repeat to the accuracy bounds. Theoretically speaking, FPTAS is the best we can get for computational hard problems, though in practice sometimes the polynomial bound might be too large even for a moderate [math]\displaystyle{ \epsilon }[/math].

The FPRAS (fully polynomial-time randomized approximation scheme), on the other hand, is the randomized relaxation of FPTAS.

FPRAS (Fully Polynomial-time Randomized Approximation Scheme)
Consider a computational problem [math]\displaystyle{ f:\{0,1\}^*\to \mathbb{Z}^+ }[/math] whose outputs are positive integers.
An FPRAS (fully polynomial-time randomized approximation scheme) for [math]\displaystyle{ f }[/math] is a randomized algorithm [math]\displaystyle{ \mathcal{A} }[/math] that takes the following inputs:
  • an input instance [math]\displaystyle{ x }[/math] of problem [math]\displaystyle{ f }[/math], which we use [math]\displaystyle{ n=|x| }[/math] to denote its length;
  • two real numbers [math]\displaystyle{ 0<\epsilon,\delta<1 }[/math];
and in time polynomial in [math]\displaystyle{ n }[/math], [math]\displaystyle{ \frac{1}{\epsilon} }[/math], and [math]\displaystyle{ \log\frac{1}{\delta} }[/math], returns an output [math]\displaystyle{ \mathcal{A}(x,\epsilon,\delta) }[/math] such that
[math]\displaystyle{ \Pr\left[(1-\epsilon)f(x)\le\mathcal{A}(x,\epsilon,\delta)\le (1+\epsilon)f(x)\right]\ge 1-\delta. }[/math]

Note that the dependence on the confidence error [math]\displaystyle{ \delta }[/math] is polynomial in [math]\displaystyle{ \log\frac{1}{\delta} }[/math], instead of [math]\displaystyle{ \frac{1}{\delta} }[/math]. This is because, by the Chernoff bound, the confidence error can be reduced at an exponential rate by independent repetitions (for instance, by returning the median of independent runs). This motivates the following simpler but equivalent definition of FPRAS, which only considers the special confidence error [math]\displaystyle{ \delta=\frac{1}{3} }[/math]. This is arguably a more natural way to relax the FPTAS to randomized algorithms, as it mimics the way the class P is generalized to BPP for decision problems. If you are familiar with the Chernoff bound, it is quite easy to observe the equivalence between the following definition and the one above. The simple definition is also easier to check (you only need to verify the case [math]\displaystyle{ \delta=\frac{1}{3} }[/math]), which makes it more convenient to apply.

FPRAS (equivalent simple version)
Consider a computational problem [math]\displaystyle{ f:\{0,1\}^*\to \mathbb{Z}^+ }[/math] whose outputs are positive integers.
An FPRAS (fully polynomial-time randomized approximation scheme) for [math]\displaystyle{ f }[/math] is a randomized algorithm [math]\displaystyle{ \mathcal{A} }[/math] that takes the following inputs:
  • an input instance [math]\displaystyle{ x }[/math] of problem [math]\displaystyle{ f }[/math], which we use [math]\displaystyle{ n=|x| }[/math] to denote its length;
  • a real number [math]\displaystyle{ 0<\epsilon<1 }[/math];
and in time polynomial in [math]\displaystyle{ n }[/math] and [math]\displaystyle{ \frac{1}{\epsilon} }[/math], returns an output [math]\displaystyle{ \mathcal{A}(x,\epsilon) }[/math] such that
[math]\displaystyle{ \Pr\left[(1-\epsilon)f(x)\le\mathcal{A}(x,\epsilon)\le (1+\epsilon)f(x)\right]\ge \frac{2}{3}. }[/math]
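The equivalence claimed above rests on a standard amplification trick: run the weak (confidence 2/3) scheme several times independently and return the median. The following sketch, with an unoptimized constant and a hypothetical callable approx_scheme standing for any algorithm meeting the simple definition, illustrates how an arbitrary confidence error [math]\displaystyle{ \delta }[/math] is achieved.

import math
import statistics

def amplify(approx_scheme, x, eps, delta):
    # Each run lands in [(1-eps)f(x), (1+eps)f(x)] with probability >= 2/3.
    # The median of t runs fails only if at least half of the runs fail,
    # which by a Chernoff bound happens with probability exp(-Omega(t)),
    # so t = O(log(1/delta)) runs suffice (the constant below is generous).
    t = 1 + 36 * int(math.ceil(math.log(1.0 / delta)))  # odd number of runs
    return statistics.median(approx_scheme(x, eps) for _ in range(t))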

The Monte Carlo method for the parameter estimation problem would give us an FPRAS if the sampler and the membership oracle are poly-time realizable and the fraction [math]\displaystyle{ \alpha=\frac{|G|}{|U|} }[/math] is bounded from below by [math]\displaystyle{ \frac{1}{\mathrm{poly}(n)} }[/math].

However, for the DNF counting problem, naively applying the Monte Carlo method (treat the set of all [math]\displaystyle{ 2^n }[/math] truth assignments as [math]\displaystyle{ U }[/math] and the set of DNF solutions as [math]\displaystyle{ G }[/math]) will not give a good algorithm. The problem is that the fraction [math]\displaystyle{ \alpha=\frac{|G|}{|U|} }[/math] can be exponentially small:

Suppose that there are [math]\displaystyle{ n }[/math] variables. Let [math]\displaystyle{ U=\{\mathrm{true},\mathrm{false}\}^n }[/math] be the set of all truth assignments of the [math]\displaystyle{ n }[/math] variables. Let [math]\displaystyle{ G=\{x\in U\mid \phi(x)=\mathrm{true}\} }[/math] be the set of satisfying assignments for [math]\displaystyle{ \phi }[/math]. The straightforward use of the Monte Carlo method samples [math]\displaystyle{ N }[/math] assignments from [math]\displaystyle{ U }[/math] and checks how many of them satisfy [math]\displaystyle{ \phi }[/math]. This algorithm fails when [math]\displaystyle{ |G|/|U| }[/math] is exponentially small, namely, when an exponentially small fraction of the assignments satisfy the input DNF formula.

So instead of naively considering the space of satisfying solutions and the space of all solutions, we represent the space of satisfying solutions to a DNF formula as a union of sets of satisfying solutions to the individual clauses, and count this union of sets. This gives us an abstract problem called the union of sets problem.

The union of sets problem

We reformulate the DNF counting problem in a more abstract framework, called the union of sets problem.

The union of sets problem
Let [math]\displaystyle{ V }[/math] be a finite universe.
Input: [math]\displaystyle{ m }[/math] subsets [math]\displaystyle{ H_1,H_2,\ldots,H_m\subseteq V }[/math] of the universe;
Output: the size of the union set [math]\displaystyle{ H=\bigcup_{i=1}^m H_i }[/math]

To make the algorithm work, we make the following assumptions:

  1. For all [math]\displaystyle{ i }[/math], the value of [math]\displaystyle{ |H_i| }[/math] can be computed efficiently.
  2. It is possible to sample uniformly from each individual [math]\displaystyle{ H_i }[/math].
  3. For any [math]\displaystyle{ x\in V }[/math], it can be determined efficiently whether [math]\displaystyle{ x\in H_i }[/math].

DNF counting can be easily put into this general framework: Suppose that the DNF formula is a disjunction ([math]\displaystyle{ \vee }[/math]) of [math]\displaystyle{ m }[/math] clauses [math]\displaystyle{ C_1,C_2,\ldots,C_m }[/math] on [math]\displaystyle{ n }[/math] variables, where each clause [math]\displaystyle{ C_i }[/math] is a conjunction ([math]\displaystyle{ \wedge }[/math]) of [math]\displaystyle{ k_i }[/math] literals. Without loss of generality, we may assume that in each clause, each variable appears at most once. The satisfying solutions to this DNF can be represented as a union of [math]\displaystyle{ m }[/math] sets as follows:

  • [math]\displaystyle{ V=\{\text{true},\text{false}\}^n }[/math] is the set of all [math]\displaystyle{ 2^n }[/math] truth assignments.
  • Each [math]\displaystyle{ H_i=\{x\in\{\text{true},\text{false}\}^n\mid x\mbox{ satisfies }C_i\} }[/math] is the set of satisfying assignments for the [math]\displaystyle{ i }[/math]-th clause [math]\displaystyle{ C_i }[/math] of the DNF formula.

Then the union of sets [math]\displaystyle{ H=\bigcup_i H_i }[/math] gives the set of satisfying assignments for the DNF formula [math]\displaystyle{ C_1\vee C_2\vee\cdots \vee C_m }[/math]. Its size [math]\displaystyle{ |H| }[/math] gives the number of satisfying assignments for the DNF.

Furthermore, the three assumptions above can also be guaranteed in this construction:

  1. Since each clause [math]\displaystyle{ C_i }[/math] is a conjunction of [math]\displaystyle{ k_i }[/math] literals, it is easy to see that [math]\displaystyle{ |H_i|=2^{n-k_i} }[/math], which is obviously efficiently computable.
  2. Sampling from each [math]\displaystyle{ H_i }[/math] is very easy: we just fix the assignments of the [math]\displaystyle{ k_i }[/math] variables involved in the corresponding clause [math]\displaystyle{ C_i }[/math] so that [math]\displaystyle{ C_i }[/math] is satisfied (there is exactly one such choice), and sample the assignments of the remaining [math]\displaystyle{ (n-k_i) }[/math] variables uniformly and independently (see the sketch after this list).
  3. For each assignment [math]\displaystyle{ x }[/math], it is easy to check whether it satisfies a clause [math]\displaystyle{ C_i }[/math], thus it is easy to determine whether [math]\displaystyle{ x\in H_i }[/math].
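As a quick illustration of how these three assumptions are met, here is a small Python sketch; the representation of a clause as a dictionary from variable indices to required truth values, and the helper names, are choices made here for illustration rather than anything fixed by the notes.

import random

def clause_size(clause, n):
    # |H_i| = 2^(n - k_i): the k_i variables appearing in the clause are forced.
    return 2 ** (n - len(clause))

def sample_from_clause(clause, n):
    # Fix the variables of the clause to their unique satisfying values,
    # and assign the remaining variables uniformly and independently at random.
    return tuple(clause[v] if v in clause else (random.random() < 0.5) for v in range(n))

def satisfies_clause(x, clause):
    # Membership test for H_i: does assignment x satisfy the clause?
    return all(x[v] == val for v, val in clause.items())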

The coverage algorithm

We now introduce the coverage algorithm for the union of sets problem.

Consider the multiset [math]\displaystyle{ U }[/math] defined by

[math]\displaystyle{ U=H_1\uplus H_2\uplus\cdots \uplus H_m }[/math],

where [math]\displaystyle{ \uplus }[/math] denotes the multiset union. It is more convenient to define [math]\displaystyle{ U }[/math] as the set

[math]\displaystyle{ U=\{(x,i)\mid x\in H_i\} }[/math].

For each [math]\displaystyle{ x\in H }[/math], there may be more than one instance of [math]\displaystyle{ (x,i)\in U }[/math]. We may choose the [math]\displaystyle{ (x,i) }[/math] with the minimum [math]\displaystyle{ i }[/math], and collect all such [math]\displaystyle{ (x,i) }[/math] to form a set [math]\displaystyle{ G }[/math].

Formally, the set [math]\displaystyle{ G }[/math] is defined as

[math]\displaystyle{ G=\{(x,i)\in U\mid \forall (x,j)\in U, j\ge i\} }[/math].

Every [math]\displaystyle{ x\in H }[/math] corresponds to a unique [math]\displaystyle{ (x,i)\in G }[/math], where [math]\displaystyle{ i }[/math] is the smallest index such that [math]\displaystyle{ x\in H_i }[/math].

The following proposition is obvious.

Proposition
  1. [math]\displaystyle{ G\subseteq U }[/math];
  2. [math]\displaystyle{ |G|=|H|\, }[/math].

Therefore, the estimation of the size of the union set [math]\displaystyle{ H=\bigcup_{i=1}^mH_i }[/math] is reduced to the estimation of the size of the subset [math]\displaystyle{ G\subseteq U }[/math], where [math]\displaystyle{ U }[/math] is the multiset union of [math]\displaystyle{ H_1, H_2,\ldots, H_m }[/math] and [math]\displaystyle{ |G|=|H| }[/math]. Note that now the fraction [math]\displaystyle{ \alpha=|G|/|U| }[/math] is no longer too small: it is at least [math]\displaystyle{ \frac{1}{m} }[/math], as shown below. As long as we can implement a uniform sampler for [math]\displaystyle{ U }[/math] and a membership oracle for [math]\displaystyle{ G }[/math], we can apply the estimator theorem to get a good approximation algorithm for [math]\displaystyle{ |G|=|H| }[/math].

A uniform sampler for [math]\displaystyle{ U }[/math] can be implemented as follows.

Uniform sampler for [math]\displaystyle{ U }[/math]
  • generate an [math]\displaystyle{ i\in\{1,2,\ldots,m\} }[/math] with probability [math]\displaystyle{ \frac{|H_i|}{\sum_{j=1}^m|H_j|} }[/math];
  • uniformly sample an [math]\displaystyle{ x\in H_i }[/math], and return [math]\displaystyle{ (x,i) }[/math].

It is easy to see that this gives a uniform member of [math]\displaystyle{ U }[/math]: each pair [math]\displaystyle{ (x,i)\in U }[/math] is returned with probability [math]\displaystyle{ \frac{|H_i|}{\sum_{j=1}^m|H_j|}\cdot\frac{1}{|H_i|}=\frac{1}{|U|} }[/math]. And this uniform sampler is efficient as long as each [math]\displaystyle{ |H_i| }[/math] is efficiently computable and we have an efficient uniform sampler for each [math]\displaystyle{ H_i }[/math], both of which are assumed.

It is also easy to implement the membership oracle for [math]\displaystyle{ G }[/math]:

Membership oracle for [math]\displaystyle{ G }[/math]

upon each query [math]\displaystyle{ (x,i) }[/math]:

if [math]\displaystyle{ i }[/math] is the smallest among all [math]\displaystyle{ j }[/math] such that [math]\displaystyle{ x\in H_j }[/math]
return "yes";
else return "no";

By definition of [math]\displaystyle{ G }[/math], it is easy to see this membership oracle works, and it is efficient if the membership of each [math]\displaystyle{ H_i }[/math] can be efficiently determined, which is also assumed.

We now only need to lower bound the fraction

[math]\displaystyle{ \alpha=\frac{|G|}{|U|} }[/math].
Proposition
[math]\displaystyle{ \alpha\ge\frac{1}{m} }[/math].
Proof.

For every [math]\displaystyle{ x\in H }[/math], there are at most [math]\displaystyle{ m }[/math] different pairs [math]\displaystyle{ (x,i) }[/math] in [math]\displaystyle{ U }[/math], thus [math]\displaystyle{ |U|\le m|H| }[/math]. And we already know that [math]\displaystyle{ |G|=|H| }[/math], hence [math]\displaystyle{ \alpha=\frac{|G|}{|U|}\ge\frac{|H|}{m|H|}=\frac{1}{m} }[/math].

[math]\displaystyle{ \square }[/math]

Applying the estimator theorem, this gives us an algorithm which returns an [math]\displaystyle{ \epsilon }[/math]-approximation of [math]\displaystyle{ |H| }[/math] with probability at least [math]\displaystyle{ 1-\delta }[/math], by making at most [math]\displaystyle{ \frac{4m}{\epsilon^2}\ln\frac{2}{\delta} }[/math] calls to the uniform random sampler for [math]\displaystyle{ U }[/math] and the membership oracle for [math]\displaystyle{ G }[/math], both of which can be efficiently implemented thanks to the three efficiency assumptions. The resulting algorithm is called the coverage algorithm.
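Putting the pieces together, a sketch of the coverage algorithm specialized to DNF counting might look as follows; it reuses the hypothetical clause helpers sketched earlier, and the interface (a list of clause dictionaries together with the number of variables [math]\displaystyle{ n }[/math]) is again an illustrative assumption.

import math
import random

def coverage_algorithm(clauses, n, eps, delta):
    m = len(clauses)
    sizes = [clause_size(c, n) for c in clauses]   # |H_i| = 2^(n - k_i)
    U_size = sum(sizes)                            # |U| = sum of |H_i|
    N = int(math.ceil(4.0 * m / (eps * eps) * math.log(2.0 / delta)))
    hits = 0
    for _ in range(N):
        # Uniform sampler for U: pick clause i with probability |H_i| / |U|,
        # then a uniform satisfying assignment x of that clause.
        i = random.choices(range(m), weights=sizes)[0]
        x = sample_from_clause(clauses[i], n)
        # Membership oracle for G: is i the smallest j with x in H_j?
        if min(j for j in range(m) if satisfies_clause(x, clauses[j])) == i:
            hits += 1
    return U_size * hits / N                       # estimate of |G| = |H|

For instance, the example DNF formula from the Counting DNFs section would be encoded as clauses = [{0: True, 1: False, 2: True}, {1: True, 3: True}, {0: False, 2: True, 3: True}] with n = 4, where indices 0-3 stand for the variables x_1, ..., x_4.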

Applying the coverage algorithm to DNF counting, which, as discussed above, is a special case of the union of sets problem, we obtain an FPRAS.

Theorem (Karp-Luby, 1983)
There exists an FPRAS for the DNF counting problem.

Count Matchings

Recall that a matching [math]\displaystyle{ M }[/math] in a graph [math]\displaystyle{ G(V,E) }[/math] is a subset [math]\displaystyle{ M\subseteq E }[/math] of edges such that no vertex in [math]\displaystyle{ V }[/math] is incident to more than one edge in [math]\displaystyle{ M }[/math], i.e. a matching is an "edge version" of an independent set.

Let us consider a more complicated counting problem: counting the number of matchings in an input graph. The problem is formally defined as follows.

Input: an undirected graph [math]\displaystyle{ G(V,E) }[/math];
Output: [math]\displaystyle{ Z(G)= }[/math] number of matchings in [math]\displaystyle{ G }[/math].

Note that we count all matchings, not just maximum or maximal matchings. For example, an empty set (containing no edge) is always a matching.
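For intuition about what is being counted, the following brute-force sketch (usable only for tiny graphs, with an edge-list representation chosen here for illustration) computes [math]\displaystyle{ Z(G) }[/math] by enumerating all subsets of edges.

from itertools import combinations

def count_matchings(edges):
    # A subset of edges is a matching iff no vertex appears in two of its edges.
    count = 0
    for k in range(len(edges) + 1):
        for subset in combinations(edges, k):
            endpoints = [v for e in subset for v in e]
            if len(endpoints) == len(set(endpoints)):
                count += 1
    return count

# Example: the path on three vertices has matchings {}, {(0,1)}, {(1,2)},
# so count_matchings([(0, 1), (1, 2)]) returns 3.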

This problem is #P-complete. In fact, it is among the first few problems known to be #P-complete. The study of its computational complexity and of efficient approximation algorithms for it pioneered the fields of counting complexity and randomized algorithms.

A self-reduction procedure

We show a generic self-reduction procedure due to Jerrum-Valiant-Vazirani, which reduces a counting problem to the same problem on smaller instances. This technique does not just work for counting matchings; it also applies to a wide range of counting versions of CSPs (constraint satisfaction problems) that exhibit the self-reducibility property.

Fix an input graph [math]\displaystyle{ G(V,E) }[/math]. Let [math]\displaystyle{ Z(G) }[/math] be the number of matchings in [math]\displaystyle{ G }[/math].

We consider the uniform probability distribution over all matchings in [math]\displaystyle{ G }[/math]. Let [math]\displaystyle{ X }[/math] be such a random matching sampled uniformly at random from all matchings in [math]\displaystyle{ G }[/math]. More formally, for any [math]\displaystyle{ M\subseteq E }[/math],

[math]\displaystyle{ \Pr[X=M]= \begin{cases} \frac{1}{Z(G)} & \mbox{if }M\mbox{ is a matching in }G,\\ 0 & \mbox{otherwise.}\\ \end{cases} }[/math]

Consider the special matching [math]\displaystyle{ M=\emptyset }[/math]. It clearly holds that [math]\displaystyle{ \Pr[X=\emptyset]=\frac{1}{Z(G)} }[/math], thus

[math]\displaystyle{ Z(G)=\frac{1}{\Pr[X=\emptyset]}. }[/math]

Thus we can estimate [math]\displaystyle{ Z(G) }[/math] by estimating the probability [math]\displaystyle{ \Pr[X=\emptyset] }[/math].
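For example, if [math]\displaystyle{ G }[/math] is the path on three vertices with edges [math]\displaystyle{ e_1,e_2 }[/math], then the matchings in [math]\displaystyle{ G }[/math] are [math]\displaystyle{ \emptyset,\{e_1\},\{e_2\} }[/math], so [math]\displaystyle{ Z(G)=3 }[/math] and indeed [math]\displaystyle{ \Pr[X=\emptyset]=\frac{1}{3}=\frac{1}{Z(G)} }[/math].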

We denote [math]\displaystyle{ E=\{e_1,e_2,\ldots,e_m\} }[/math], where the order of edges [math]\displaystyle{ e_1,e_2,\ldots,e_m }[/math] is arbitrary.

The random matching [math]\displaystyle{ X }[/math] can be seen as a random vector [math]\displaystyle{ X=(X_1,X_2,\ldots,X_m) }[/math], where each [math]\displaystyle{ X_i }[/math] indicates whether [math]\displaystyle{ e_i }[/math] appears in the random matching [math]\displaystyle{ X }[/math].

Thus the probability [math]\displaystyle{ \Pr[X=\emptyset] }[/math] is usually called a joint probability of the random vector [math]\displaystyle{ X }[/math]. The joint probability can be represented as a product of marginal probabilities by the chain rule.

[math]\displaystyle{ \Pr[X=(0,0,\ldots,0)] = \prod_{i=1}^m\Pr[X_i=0\mid \forall j<i,X_j=0]. }[/math]

Back to the matching problem, note that the event [math]\displaystyle{ X=\emptyset }[/math] is equivalent to the event that [math]\displaystyle{ \forall i, e_i\not\in X }[/math]. Due to the chain rule, we have

[math]\displaystyle{ \begin{align} \Pr[X=\emptyset] &= \Pr[\forall i, e_i\not\in X]\\ &= \prod_{i=1}^m\Pr[e_i\not\in X\mid \forall j\lt i,e_j\not\in X]\\ &= \prod_{i=1}^m(1-\Pr[e_i\in X\mid \forall j\lt i,e_j\not\in X]). \end{align} }[/math]

For a graph [math]\displaystyle{ G(V,E) }[/math] and an edge [math]\displaystyle{ e }[/math], we use

[math]\displaystyle{ P_G(e)=\Pr[e\in X] }[/math]

to denote the marginal probability of [math]\displaystyle{ e }[/math] being occupied by a uniformly random matching [math]\displaystyle{ X }[/math] in [math]\displaystyle{ G }[/math].

Observation (self-reduction)
Let [math]\displaystyle{ G_i=G\setminus\{e_1,e_2,\ldots,e_{i-1}\} }[/math] be the subgraph of [math]\displaystyle{ G }[/math] obtained from removing the first [math]\displaystyle{ (i-1) }[/math] edges from [math]\displaystyle{ G }[/math].
It holds that
[math]\displaystyle{ P_{G_i}(e_i)=\Pr[e_i\in X\mid \forall j\lt i,e_j\not\in X]. }[/math]

This observation can be easily proved by the following "telescopic product" version of the chain rule. Note that with the notation [math]\displaystyle{ G_i }[/math] we have [math]\displaystyle{ G_1=G }[/math] and [math]\displaystyle{ G_{m+1}=\emptyset }[/math], the empty graph with all edges removed.

[math]\displaystyle{ \begin{align} \Pr[X=\emptyset] &= \frac{1}{Z(G)} = \frac{Z(G_{m+1})}{Z(G_1)} = \prod_{i=1}^m\frac{Z(G_{i+1})}{Z(G_i)}. \end{align} }[/math]

By definition of conditional probability, we have

[math]\displaystyle{ \frac{Z(G_{i+1})}{Z(G_i)}=\frac{|\{\text{matchings formed by edges }e_{i+1},e_{i+2},\ldots,e_{m}\}|}{|\{\text{matchings formed by edges }e_i,e_{i+1},\ldots,e_{m}\}|} =\Pr[e_i\not\in X\mid \forall j\lt i,e_j\not\in X]. }[/math]

On the other hand, it is also obvious that

[math]\displaystyle{ \frac{Z(G_{i+1})}{Z(G_i)}=\frac{|\{\text{matchings in }G_i\text{ that do not contain }e_{i}\}|}{|\{\text{matchings in }G_i\}|} =1-P_{G_i}(e_i). }[/math]

So the above observation is proved.

From counting to sampling

With the above observation and recalling that [math]\displaystyle{ Z(G)=\frac{1}{\Pr[X=\emptyset]} }[/math], we have

[math]\displaystyle{ Z(G)=\frac{1}{\Pr[X=\emptyset]}=\prod_{i=1}^m\frac{1}{(1-P_{G_i}(e_i))}. }[/math]

The number of matchings in graph [math]\displaystyle{ G }[/math], [math]\displaystyle{ Z(G) }[/math], can be efficiently approximated if the marginal probabilities [math]\displaystyle{ P_{G_i}(e_i) }[/math] can be efficiently approximated, and the latter can be achieved by a statistical experiment: sample uniform random matchings in the graph [math]\displaystyle{ G_i }[/math] and count how often [math]\displaystyle{ e_i }[/math] is occupied.

This generic process is described in more details as follows:

  1. If we can sample a uniform (or almost uniform, a notion to be made precise later) random matching from any graph [math]\displaystyle{ G }[/math] in polynomial time, then due to the estimator theorem, we can have an [math]\displaystyle{ (\epsilon,\delta) }[/math]-estimator of each [math]\displaystyle{ P_{G_i}(e_i) }[/math]. This estimator runs in time polynomial in [math]\displaystyle{ n=|V| }[/math], [math]\displaystyle{ \frac{1}{\epsilon} }[/math], and [math]\displaystyle{ \log\frac{1}{\delta} }[/math], and returns an estimate [math]\displaystyle{ \hat{P}_i }[/math] for each [math]\displaystyle{ P_{G_i}(e_i) }[/math] such that [math]\displaystyle{ |\hat{P}_i-P_{G_i}(e_i)|\gt \epsilon }[/math] holds with probability at most [math]\displaystyle{ \delta }[/math].
  2. The above [math]\displaystyle{ (\epsilon,\delta) }[/math]-estimators are efficient approximations of the marginal probabilities with small additive errors. To get an FPRAS for [math]\displaystyle{ Z(G) }[/math], whose error is multiplicative, we still need each [math]\displaystyle{ P_{G_i}(e_i) }[/math] to be bounded away from 1, i.e. there is a constant [math]\displaystyle{ c\gt 0 }[/math] such that it always holds that [math]\displaystyle{ P_{G_i}(e_i)\le 1-c }[/math]. For matchings, this is always guaranteed. We have the following lemma.
Lemma
For any graph [math]\displaystyle{ G }[/math] and edge [math]\displaystyle{ e }[/math], it always holds that [math]\displaystyle{ P_G(e)\le \frac{1}{2} }[/math].
This lemma holds for quite obvious reasons: if [math]\displaystyle{ e }[/math] is an isolated edge (with both endpoints of degree 1), then it is covered by a uniformly random matching [math]\displaystyle{ X }[/math] with probability exactly 1/2, and adding edges that share vertices with [math]\displaystyle{ e }[/math] cannot increase this probability.
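The reduction just described can be summarized by the following sketch, in which sample_matching is an assumed black box (passed in as a callable) that returns a nearly uniform random matching of a given graph as a collection of its edges, and samples_per_edge controls the accuracy of each estimated marginal; the names and the edge-list representation are illustrative, not part of the notes.

def estimate_num_matchings(edges, sample_matching, samples_per_edge):
    Z = 1.0
    remaining = list(edges)                 # remaining = G_i, initially G_1 = G
    for e in edges:
        # Estimate P_{G_i}(e_i): the fraction of sampled matchings of G_i containing e_i.
        hits = sum(1 for _ in range(samples_per_edge)
                   if e in sample_matching(remaining))
        # The true marginal is at most 1/2 by the lemma above, so clamp the estimate
        # away from 1 to keep the product from blowing up.
        p_hat = min(hits / samples_per_edge, 0.5)
        Z *= 1.0 / (1.0 - p_hat)            # telescoping product for Z(G)
        remaining.remove(e)                 # G_{i+1} = G_i with e_i removed
    return Z

Choosing samples_per_edge according to the estimator theorem (polynomial in n, 1/ε and log(1/δ)) turns this sketch into the FPRAS outlined in the two-step process above, provided the sampler itself runs in polynomial time.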

With the above lemma, the task of designing an FPRAS for the number of matchings in an input graph is reduced to the task of uniformly sampling a matching of the input graph in polynomial time. In fact, exactly uniform sampling of such complicated structures is quite difficult, but there are ways of sampling matchings and other complicated structures almost uniformly in polynomial time. This is achieved by random walks in a very large but fast-converging Markov chain.