=The Bounded Difference Method=
Combining Azuma's inequality with the construction of Doob martingales, we have the powerful ''Bounded Difference Method'' for concentration of measure.


== For arbitrary random variables ==
Given a sequence of random variables <math>X_1,\ldots,X_n</math> and a function <math>f</math>, the Doob sequence constructs a martingale from them. Combining this construction with Azuma's inequality, we get a very powerful theorem called the "method of averaged bounded differences", which bounds the concentration of an arbitrary function of arbitrary random variables (not necessarily a martingale).


{{Theorem
|Theorem (Method of averaged bounded differences)|
:Let <math>\boldsymbol{X}=(X_1,\ldots, X_n)</math> be arbitrary random variables and let <math>f</math> be a function of <math>X_1,\ldots, X_n</math> satisfying that, for all <math>1\le i\le n</math>,
::<math>
|\mathbf{E}[f(\boldsymbol{X})\mid X_1,\ldots,X_i]-\mathbf{E}[f(\boldsymbol{X})\mid X_1,\ldots,X_{i-1}]|\le c_i,
</math>
:Then
::<math>\begin{align}
\Pr\left[|f(\boldsymbol{X})-\mathbf{E}[f(\boldsymbol{X})]|\ge t\right]\le 2\exp\left(-\frac{t^2}{2\sum_{i=1}^nc_i^2}\right).
\end{align}</math>
}}
{{Proof| Define the Doob martingale sequence <math>Y_0,Y_1,\ldots,Y_n</math> by setting <math>Y_0=\mathbf{E}[f(X_1,\ldots,X_n)]</math> and, for <math>1\le i\le n</math>, <math>Y_i=\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_i]</math>. Then the above theorem is a restatement of Azuma's inequality applied to the martingale <math>Y_0,Y_1,\ldots,Y_n</math>.
}}
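
To make the construction concrete, here is a minimal numeric check (a sketch; the choice of <math>n</math> and of the toy function <math>f</math> is ours, purely for illustration). It enumerates all outcomes of <math>n</math> fair coin flips, computes the Doob sequence <math>Y_i</math> by brute-force conditional expectation, and verifies the bounded-differences condition along every sample path.

<syntaxhighlight lang="python">
import itertools

# A sketch verifying the Doob martingale construction on a toy example:
# X_1,...,X_n are fair coin flips and f counts maximal runs of 1s
# (a function that is 1-Lipschitz in every coordinate).
n = 4

def f(xs):
    return sum(1 for i, x in enumerate(xs) if x == 1 and (i == 0 or xs[i - 1] == 0))

def Y(prefix):
    """Y_i = E[f(X) | X_1..X_i = prefix], by enumerating the free coordinates."""
    free = n - len(prefix)
    vals = [f(prefix + tail) for tail in itertools.product((0, 1), repeat=free)]
    return sum(vals) / len(vals)

# Check the martingale differences: |Y_i - Y_{i-1}| <= 1 on every sample path.
for xs in itertools.product((0, 1), repeat=n):
    for i in range(1, n + 1):
        assert abs(Y(xs[:i]) - Y(xs[:i - 1])) <= 1
print("bounded differences hold; Y_0 = E[f] =", Y(()))
</syntaxhighlight>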


== For independent random variables ==
The condition of averaged bounded differences is usually hard to check, which severely limits the usefulness of the method. To overcome this, we introduce a property that is much easier to check, called the Lipschitz condition.


{{Theorem
|Definition (Lipschitz condition)|
:A function <math>f(x_1,\ldots,x_n)</math> satisfies the Lipschitz condition, if for any <math>x_1,\ldots,x_n</math> and any <math>y_i</math>,
::<math>\begin{align}
|f(x_1,\ldots,x_{i-1},x_i,x_{i+1},\ldots,x_n)-f(x_1,\ldots,x_{i-1},y_i,x_{i+1},\ldots,x_n)|\le 1.
\end{align}</math>
}}
In other words, the function satisfies the Lipschitz condition if an arbitrary change in the value of any one argument does not change the value of the function by more than 1.


The difference bound of 1 can be replaced by arbitrary constants, which gives a generalized version of the Lipschitz condition.
{{Theorem
|Definition (Lipschitz condition, general version)|
:A function <math>f(x_1,\ldots,x_n)</math> satisfies the Lipschitz condition with constants <math>c_i</math>, <math>1\le i\le n</math>, if for any <math>x_1,\ldots,x_n</math> and any <math>y_i</math>,
::<math>\begin{align}
|f(x_1,\ldots,x_{i-1},x_i,x_{i+1},\ldots,x_n)-f(x_1,\ldots,x_{i-1},y_i,x_{i+1},\ldots,x_n)|\le c_i.
\end{align}</math>
}}


The following "method of bounded differences" can be developed for functions satisfying the Lipschitz condition. Unfortunately, in order to imply the condition of averaged bounded differences from the Lipschitz condition, we have to restrict the method to independent random variables.
{{Theorem
|Corollary (Method of bounded differences)|
:Let <math>\boldsymbol{X}=(X_1,\ldots, X_n)</math> be <math>n</math> '''independent''' random variables and let <math>f</math> be a function satisfying the Lipschitz condition with constants <math>c_i</math>, <math>1\le i\le n</math>. Then
::<math>\begin{align}
\Pr\left[|f(\boldsymbol{X})-\mathbf{E}[f(\boldsymbol{X})]|\ge t\right]\le 2\exp\left(-\frac{t^2}{2\sum_{i=1}^nc_i^2}\right).
\end{align}</math>
}}


{{Proof| For convenience, we denote <math>\boldsymbol{X}_{[i,j]}=(X_i,X_{i+1},\ldots, X_j)</math> for any <math>1\le i\le j\le n</math>.


We first show that the Lipschitz condition with constants <math>c_i</math>, <math>1\le i\le n</math>, implies another condition called the averaged Lipschitz condition (ALC): for any <math>a_i,b_i</math>, <math>1\le i\le n</math>,
:<math>
\left|\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a_i\right]-\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=b_i\right]\right|\le c_i.
</math>
And this condition implies the averaged bounded difference condition: for all <math>1\le i\le n</math>,
::<math>
\left|\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}\right]-\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]}\right]\right|\le c_i.
</math>
Then by applying the method of averaged bounded differences, the corollary can be proved.


For any <math>a</math>, by the law of total expectation,
:<math>
\begin{align}
&\quad\, \mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a\right]\\
&=\sum_{a_{i+1},\ldots,a_n}\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a, \boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right]\cdot\Pr\left[\boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\mid \boldsymbol{X}_{[1,i-1]},X_i=a\right]\\
&=\sum_{a_{i+1},\ldots,a_n}\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a, \boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right]\cdot\Pr\left[\boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right] \qquad (\mbox{independence})\\
&= \sum_{a_{i+1},\ldots,a_n} f(\boldsymbol{X}_{[1,i-1]},a,\boldsymbol{a}_{[i+1,n]})\cdot\Pr\left[\boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right].
\end{align}
</math>


Setting <math>a=a_i</math> and <math>a=b_i</math> respectively and taking the difference, we get
:<math>
\begin{align}
&\quad\, \left|\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a_i\right]-\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=b_i\right]\right|\\
&=\left|\sum_{a_{i+1},\ldots,a_n}\left(f(\boldsymbol{X}_{[1,i-1]},a_i,\boldsymbol{a}_{[i+1,n]})-f(\boldsymbol{X}_{[1,i-1]},b_i,\boldsymbol{a}_{[i+1,n]})\right)\Pr\left[\boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right]\right|\\
&\le \sum_{a_{i+1},\ldots,a_n}\left|f(\boldsymbol{X}_{[1,i-1]},a_i,\boldsymbol{a}_{[i+1,n]})-f(\boldsymbol{X}_{[1,i-1]},b_i,\boldsymbol{a}_{[i+1,n]})\right|\Pr\left[\boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right]\\
&\le \sum_{a_{i+1},\ldots,a_n}c_i\Pr\left[\boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right] \qquad (\mbox{Lipschitz condition})\\
&=c_i.
\end{align}
</math>


Thus, the Lipschitz condition is transformed to the ALC. We then deduce the averaged bounded difference condition from ALC.


By the law of total expectation,
:<math>
\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]}\right]=\sum_{a}\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a\right]\cdot\Pr[X_i=a\mid \boldsymbol{X}_{[1,i-1]}].
</math>


We can trivially write <math>\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}\right]</math> as
:<math>
\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}\right]=\sum_{a}\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}\right]\cdot\Pr\left[X_i=a\mid \boldsymbol{X}_{[1,i-1]}\right].
</math>


Hence, the difference is
:<math>
\begin{align}
&\quad \left|\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}\right]-\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]}\right]\right|\\
&=\left|\sum_{a}\left(\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}\right]-\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a\right]\right)\cdot\Pr\left[X_i=a\mid \boldsymbol{X}_{[1,i-1]}\right]\right| \\
&\le \sum_{a}\left|\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}\right]-\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a\right]\right|\cdot\Pr\left[X_i=a\mid \boldsymbol{X}_{[1,i-1]}\right] \\
&\le \sum_a c_i\Pr\left[X_i=a\mid \boldsymbol{X}_{[1,i-1]}\right] \qquad (\mbox{due to ALC})\\
&=c_i.
\end{align}
</math>


The averaged bounded difference condition is implied. Applying the method of averaged bounded differences, the corollary follows.
}}


== Applications ==


=== Occupancy problem ===
Throwing <math>m</math> balls uniformly and independently at random to <math>n</math> bins, we ask for the occupancies of bins by the balls. In particular, we are interested in the number of empty bins.


This problem can be described equivalently as follows. Let <math>f:[m]\rightarrow[n]</math> be a uniform random function. We ask for the number of <math>i\in[n]</math> such that <math>f^{-1}(i)</math> is empty.


For any <math>i\in[n]</math>, let <math>X_i</math> indicate the emptiness of bin <math>i</math>. Let <math>X=\sum_{i=1}^nX_i</math> be the number of empty bins.
:<math>
\mathbf{E}[X_i]=\Pr[\mbox{bin }i\mbox{ is empty}]=\left(1-\frac{1}{n}\right)^m.
</math>
By the linearity of expectation,
:<math>
\mathbf{E}[X]=\sum_{i=1}^n\mathbf{E}[X_i]=n\left(1-\frac{1}{n}\right)^m.
</math>


We want to know how <math>X</math> deviates from this expectation. The complication here is that the <math>X_i</math> are not independent. So we alternatively look at a sequence of independent random variables <math>Y_1,\ldots, Y_m</math>, where <math>Y_j\in[n]</math> represents the bin into which the <math>j</math>-th ball falls. Clearly <math>X</math> is a function of <math>Y_1,\ldots, Y_m</math>.


We then observe that changing the value of any <math>Y_j</math> can change the value of <math>X</math> by at most 1, because one ball can affect the emptiness of at most one bin.
Thus, as a function of the independent random variables <math>Y_1,\ldots, Y_m</math>, <math>X</math> satisfies the Lipschitz condition with constant 1. Applying the method of bounded differences, it holds that
:<math>
\Pr\left[\left|X-n\left(1-\frac{1}{n}\right)^m\right|\ge t\sqrt{m}\right]=\Pr[|X-\mathbf{E}[X]|\ge t\sqrt{m}]\le 2e^{-t^2/2}
</math>


Thus, for sufficiently large <math>n</math> and <math>m</math>, the number of empty bins is tightly concentrated around <math>n\left(1-\frac{1}{n}\right)^m\approx \frac{n}{e^{m/n}}</math>.
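
A quick simulation (a sketch; the parameters <math>n</math>, <math>m</math>, <math>t</math> and the number of trials are arbitrary choices for illustration) lets one watch this concentration:

<syntaxhighlight lang="python">
import math
import random

# Sketch: throw m balls into n bins and compare the number of empty bins
# with its expectation n(1 - 1/n)^m.
n, m, trials, t = 1000, 2000, 200, 3
expected = n * (1 - 1 / n) ** m

big_dev = 0
for _ in range(trials):
    occupied = {random.randrange(n) for _ in range(m)}   # bins hit by some ball
    empty = n - len(occupied)
    big_dev += abs(empty - expected) >= t * math.sqrt(m)

# The bound predicts Pr[|X - E[X]| >= t*sqrt(m)] <= 2exp(-t^2/2) ~ 0.022 for t = 3.
print(f"E[X] = {expected:.1f}, fraction deviating >= t*sqrt(m): {big_dev / trials:.3f}")
</syntaxhighlight>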


=== Pattern Matching ===
Let <math>\boldsymbol{X}=(X_1,\ldots,X_n)</math> be a sequence of characters chosen independently and uniformly at random from an alphabet <math>\Sigma</math>, where <math>m=|\Sigma|</math>. Let <math>\pi\in\Sigma^k</math> be an arbitrarily fixed string of <math>k</math> characters from <math>\Sigma</math>, called a ''pattern''. Let <math>Y</math> be the number of occurrences of the pattern <math>\pi</math> as a substring of the random string <math>X</math>.


By the linearity of expectation, it is obvious that
:<math>
\mathbf{E}[Y]=(n-k+1)\left(\frac{1}{m}\right)^k.
</math>


We now look at the concentration of <math>Y</math>. The complication again lies in the dependencies between the matches. Yet we will see that <math>Y</math> is tightly concentrated around its expectation if <math>k</math> is relatively small compared to <math>n</math>.


For a fixed pattern <math>\pi</math>, the random variable <math>Y</math> is a function of the independent random variables <math>(X_1,\ldots,X_n)</math>. Any character <math>X_i</math> participates in no more than <math>k</math> matches, thus changing the value of any <math>X_i</math> can affect the value of <math>Y</math> by at most <math>k</math>; that is, <math>Y</math> satisfies the Lipschitz condition with constant <math>k</math>. Applying the method of bounded differences,
:<math>
\Pr\left[\left|Y-\frac{n-k+1}{m^k}\right|\ge tk\sqrt{n}\right]=\Pr\left[\left|Y-\mathbf{E}[Y]\right|\ge  tk\sqrt{n}\right]\le 2e^{-t^2/2}
</math>
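
Again, a small simulation (a sketch; the alphabet, the pattern and all parameters are arbitrary choices) can be run against the bound:

<syntaxhighlight lang="python">
import math
import random

# Sketch: count occurrences of a fixed pattern in a uniformly random string
# and compare with the expectation (n - k + 1)/m^k.
alphabet, pattern = "ab", "aba"          # m = 2, k = 3
n, trials, t = 10000, 200, 2
m, k = len(alphabet), len(pattern)
expected = (n - k + 1) / m ** k

big_dev = 0
for _ in range(trials):
    s = "".join(random.choice(alphabet) for _ in range(n))
    y = sum(s[i:i + k] == pattern for i in range(n - k + 1))
    big_dev += abs(y - expected) >= t * k * math.sqrt(n)

# The bound predicts Pr[|Y - E[Y]| >= t*k*sqrt(n)] <= 2exp(-t^2/2).
print(f"E[Y] = {expected:.1f}, tail fraction: {big_dev / trials:.3f}")
</syntaxhighlight>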
 
=== Combining unit vectors ===
Let <math>u_1,\ldots,u_n</math> be <math>n</math> unit vectors from some normed space. That is, <math>\|u_i\|=1</math> for any <math>1\le i\le n</math>, where <math>\|\cdot\|</math> denotes the vector norm (e.g. <math>\ell_1,\ell_2,\ell_\infty</math>) of the space.
 
Let <math>\epsilon_1,\ldots,\epsilon_n\in\{-1,+1\}</math> be independently chosen and <math>\Pr[\epsilon_i=-1]=\Pr[\epsilon_i=1]=1/2</math>.
 
Let
:<math>v=\epsilon_1u_1+\cdots+\epsilon_nu_n,
</math>
and
:<math>
X=\|v\|.
</math>
This kind of construction is very useful in combinatorial proofs of metric problems. We will show that by this construction, the random variable <math>X</math> is well concentrated around its mean.
 
<math>X</math> is a function of independent random variables <math>\epsilon_1,\ldots,\epsilon_n</math>.
By the triangle inequality for norms, it is easy to verify that changing the sign of any one <math>\epsilon_i</math> can change the value of <math>X</math> by at most 2, thus <math>X</math> satisfies the Lipschitz condition with constant 2. The concentration result follows by applying the method of bounded differences:
:<math>
\Pr[|X-\mathbf{E}[X]|\ge 2t\sqrt{n}]\le 2e^{-t^2/2}.
</math>
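
A simulation sketch (the dimension, the random choice of <math>\ell_2</math> unit vectors, and all parameters are our own assumptions; the empirical mean stands in for <math>\mathbf{E}[X]</math>):

<syntaxhighlight lang="python">
import math
import random

# Sketch: concentration of X = ||eps_1 u_1 + ... + eps_n u_n|| for
# randomly chosen l_2 unit vectors in R^3.
n, dim, trials, t = 100, 3, 2000, 2

def unit():
    v = [random.gauss(0, 1) for _ in range(dim)]
    s = math.sqrt(sum(x * x for x in v))
    return [x / s for x in v]

us = [unit() for _ in range(n)]

def X():
    eps = [random.choice((-1, 1)) for _ in range(n)]
    v = [sum(e * u[j] for e, u in zip(eps, us)) for j in range(dim)]
    return math.sqrt(sum(x * x for x in v))

samples = [X() for _ in range(trials)]
mean = sum(samples) / trials             # empirical proxy for E[X]
# The bound predicts Pr[|X - E[X]| >= 2t*sqrt(n)] <= 2exp(-t^2/2).
frac = sum(abs(x - mean) >= 2 * t * math.sqrt(n) for x in samples) / trials
print(f"mean = {mean:.2f}, fraction deviating >= 2t*sqrt(n): {frac:.3f}")
</syntaxhighlight>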


= Parameter Estimation =

Consider the following abstract problem of parameter estimation.

Let <math>U</math> be a finite set of known size, and let <math>G\subseteq U</math>. We want to estimate the ''parameter'' <math>|G|</math>, i.e. the size of <math>G</math>.

We assume two devices:

* A '''uniform sampler''' <math>\mathcal{U}</math>, which uniformly and independently samples a member of <math>U</math> upon each calling.
* A '''membership oracle''' of <math>G</math>, denoted <math>\mathcal{O}</math>. Given as the input an <math>x\in U</math>, <math>\mathcal{O}(x)</math> indicates whether or not <math>x</math> is a member of <math>G</math>.

Equipped with <math>\mathcal{U}</math> and <math>\mathcal{O}</math>, we have the following Monte Carlo algorithm (a code sketch follows the description):

* Choose <math>N</math> independent samples from <math>U</math> by the uniform sampler <math>\mathcal{U}</math>, represented by the random variables <math>X_1,X_2,\ldots, X_N</math>.
* Let <math>Y_i</math> be the indicator random variable defined as <math>Y_i=\mathcal{O}(X_i)</math>, namely, <math>Y_i</math> indicates whether <math>X_i\in G</math>.
* Define the estimator random variable
::<math>Z=\frac{|U|}{N}\sum_{i=1}^N Y_i.</math>
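
The algorithm translates directly into code. Below is a minimal sketch in which the universe and the set <math>G</math> are hypothetical stand-ins (multiples of 7 among the first million integers), chosen so that the true answer is known:

<syntaxhighlight lang="python">
import random

# A minimal sketch of the estimator Z with hypothetical stand-ins for U and G.
U_size = 10 ** 6

def uniform_sampler():                   # the uniform sampler for U
    return random.randrange(U_size)

def membership_oracle(x):                # the membership oracle for G
    return x % 7 == 0

def estimate(N):
    hits = sum(membership_oracle(uniform_sampler()) for _ in range(N))
    return U_size / N * hits             # Z = (|U|/N) * sum_i Y_i

print(estimate(100000), "vs exact", (U_size + 6) // 7)
</syntaxhighlight>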

It is easy to see that <math>\mathbf{E}[Z]=|G|</math>, and we might hope that with high probability the value of <math>Z</math> is close to <math>|G|</math>. Formally, <math>Z</math> is called an '''<math>\epsilon</math>-approximation''' of <math>|G|</math> if
:<math>(1-\epsilon)|G|\le Z\le (1+\epsilon)|G|.</math>

The following theorem states that the probabilistic accuracy of the estimation depends on the number of samples and the ratio between <math>|G|</math> and <math>|U|</math>.

{{Theorem
|Theorem (estimator theorem)|
:Let <math>\alpha=\frac{|G|}{|U|}</math>. Then the Monte Carlo method yields an <math>\epsilon</math>-approximation to <math>|G|</math> with probability at least <math>1-\delta</math>, provided
::<math>N\ge\frac{4}{\epsilon^2 \alpha}\ln\frac{2}{\delta}</math>.
}}
{{Proof|
Recall that <math>Y_i</math> indicates whether the <math>i</math>-th sample is in <math>G</math>. Let <math>Y=\sum_{i=1}^NY_i</math>. Then <math>Z=\frac{|U|}{N}Y</math>, and hence the event <math>(1-\epsilon)|G|\le Z\le (1+\epsilon)|G|</math> is equivalent to <math>(1-\epsilon)\frac{|G|}{|U|}N\le Y\le (1+\epsilon)\frac{|G|}{|U|}N</math>. Each <math>Y_i</math> is a Bernoulli trial that succeeds with probability <math>\frac{|G|}{|U|}</math>, thus <math>\mathbf{E}[Y]=\frac{|G|}{|U|}N</math>. The rest follows from the Chernoff bound.
}}
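
The theorem gives a directly computable sample size; a small helper (a sketch, with illustrative numbers):

<syntaxhighlight lang="python">
import math

# Sufficient sample size from the estimator theorem:
# N >= 4/(eps^2 * alpha) * ln(2/delta).
def samples_needed(eps, delta, alpha):
    return math.ceil(4 / (eps ** 2 * alpha) * math.log(2 / delta))

# e.g. an eps = 0.1 approximation with confidence 0.99 at alpha = 1/2:
print(samples_needed(0.1, 0.01, 0.5))    # 4239
</syntaxhighlight>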

A counting algorithm for the set <math>G</math> has to deal with the following three issues:
# Implement the membership oracle <math>\mathcal{O}</math>. This is usually straightforward, or assumed by the model.
# Implement the uniform sampler <math>\mathcal{U}</math>. This can be straightforward or highly nontrivial, depending on the problem.
# Deal with an exponentially small <math>\alpha=\frac{|G|}{|U|}</math>. This requires us to choose the universe <math>U</math> cleverly, which sometimes takes nontrivial ideas.

= Counting DNFs =

A disjunctive normal form (DNF) formula is a disjunction (OR) of clauses, where each clause is a conjunction (AND) of literals. For example:
:<math>(x_1\wedge \overline{x_2}\wedge x_3)\vee(x_2\wedge x_4)\vee(\overline{x_1}\wedge x_3\wedge x_4)</math>.
Note the difference from conjunctive normal form (CNF).

Given a DNF formula <math>\phi</math> as the input, the problem is to count the number of satisfying assignments of <math>\phi</math>. This problem is [http://en.wikipedia.org/wiki/Sharp-P-complete '''#P-complete'''].

Naively applying the Monte Carlo method will not give a good answer. Suppose that there are <math>n</math> variables. Let <math>U=\{\mathrm{true},\mathrm{false}\}^n</math> be the set of all truth assignments of the <math>n</math> variables, and let <math>G=\{x\in U\mid \phi(x)=\mathrm{true}\}</math> be the set of satisfying assignments for <math>\phi</math>. The straightforward use of the Monte Carlo method samples <math>N</math> assignments from <math>U</math> and checks how many of them satisfy <math>\phi</math>. This algorithm fails when <math>|G|/|U|</math> is exponentially small, namely, when only an exponentially small fraction of the assignments satisfy the input DNF formula.


;The union of sets problem

We reformulate the DNF counting problem in a more abstract framework, called the '''union of sets''' problem.

Let <math>V</math> be a finite universe. We are given <math>m</math> subsets <math>H_1,H_2,\ldots,H_m\subseteq V</math>. The following assumptions hold:
*For all <math>i</math>, <math>|H_i|</math> is computable in poly-time.
*It is possible to sample uniformly from each individual <math>H_i</math>.
*For any <math>x\in V</math>, it can be determined in poly-time whether <math>x\in H_i</math>.

The goal is to compute the size of <math>H=\bigcup_{i=1}^m H_i</math>.

DNF counting can be interpreted in this general framework as follows. Suppose that the DNF formula <math>\phi</math> is defined on <math>n</math> variables and contains <math>m</math> clauses <math>C_1,C_2,\ldots,C_m</math>, where clause <math>C_i</math> has <math>k_i</math> literals. Without loss of generality, we assume that each variable appears at most once in each clause.

* <math>V</math> is the set of all assignments.
* Each <math>H_i</math> is the set of satisfying assignments for the <math>i</math>-th clause <math>C_i</math> of the DNF formula <math>\phi</math>. Then the union of sets <math>H=\bigcup_i H_i</math> gives the set of satisfying assignments for <math>\phi</math>.
* Each clause <math>C_i</math> is a conjunction (AND) of <math>k_i</math> literals, so <math>|H_i|=2^{n-k_i}</math>, which is efficiently computable.
* Sampling from an <math>H_i</math> is simple: we just fix the assignments of the <math>k_i</math> literals of that clause and sample the remaining <math>n-k_i</math> variables uniformly and independently (see the sketch after this list).
* For each assignment <math>x</math>, it is easy to check whether it satisfies a clause <math>C_i</math>; thus it is easy to determine whether <math>x\in H_i</math>.
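
These per-clause primitives are easy to implement. A sketch follows; the clause encoding (a map from variable index to the truth value its literal requires) is our own choice for illustration:

<syntaxhighlight lang="python">
import random

# Sketch of the per-clause primitives for a toy formula over n = 4 variables:
# (x1 AND NOT x2 AND x3) OR (x2 AND x4), encoded as index -> required value.
n = 4
clauses = [{0: True, 1: False, 2: True}, {1: True, 3: True}]

def clause_size(c):
    return 2 ** (n - len(c))             # |H_i| = 2^(n - k_i)

def sample_clause(c):
    # fix the k_i literals, sample the remaining variables uniformly
    return tuple(c[v] if v in c else random.choice((False, True)) for v in range(n))

def satisfies(x, c):
    # x is in H_i iff x agrees with every literal of clause i
    return all(x[v] == val for v, val in c.items())
</syntaxhighlight>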

==The coverage algorithm==

We now introduce the coverage algorithm for the union of sets problem.

Consider the multiset <math>U</math> defined by
:<math>U=H_1\uplus H_2\uplus\cdots \uplus H_m</math>,
where <math>\uplus</math> denotes the multiset union. It is more convenient to define <math>U</math> as the set
:<math>U=\{(x,i)\mid x\in H_i\}</math>.
For each <math>x\in H</math>, there may be more than one instance <math>(x,i)\in U</math>. We can choose a unique representative among the multiple instances <math>(x,i)\in U</math> for the same <math>x\in H</math>, by choosing the <math>(x,i)</math> with the minimum <math>i</math>, and form a set <math>G</math>.

Formally, <math>G=\{(x,i)\in U\mid \forall (x,j)\in U, j\ge i\}</math>. Every <math>x\in H</math> corresponds to a unique <math>(x,i)\in G</math>, where <math>i</math> is the smallest index with <math>x\in H_i</math>.

It is obvious that <math>G\subseteq U</math> and
:<math>|G|=|H|</math>.

Therefore, estimation of <math>|H|</math> is reduced to estimation of <math>|G|</math> with <math>G\subseteq U</math>. Then <math>|G|</math> can be <math>\epsilon</math>-approximated with probability <math>(1-\delta)</math> in poly-time, provided that we can sample uniformly from <math>U</math> and that the ratio <math>|G|/|U|</math> is not too small.

A uniform sample from <math>U</math> can be generated as follows:

* generate an <math>i\in\{1,2,\ldots,m\}</math> with probability <math>\frac{|H_i|}{\sum_{i=1}^m|H_i|}</math>;
* uniformly sample an <math>x\in H_i</math>, and return <math>(x,i)</math>.

It is easy to see that this gives a uniform member of <math>U</math>. The above sampling procedure is poly-time because each <math>|H_i|</math> can be computed in poly-time, and sampling uniformly from each <math>H_i</math> is poly-time.
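
A sketch of this two-step sampler, reusing the hypothetical clause helpers from the DNF discussion above:

<syntaxhighlight lang="python">
import random

# Sketch of the uniform sampler for U = {(x, i) | x in H_i}, built on the
# helpers clauses, clause_size and sample_clause defined earlier.
sizes = [clause_size(c) for c in clauses]
U_total = sum(sizes)                     # |U| = sum_i |H_i|

def sample_U():
    # pick clause i with probability |H_i|/|U|, then x uniformly from H_i
    i = random.choices(range(len(clauses)), weights=sizes)[0]
    return sample_clause(clauses[i]), i
</syntaxhighlight>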

We now only need to lower bound the ratio
:<math>\alpha=\frac{|G|}{|U|}</math>.

We claim that
:<math>\alpha\ge\frac{1}{m}</math>.
It is easy to see this, because each <math>x\in H</math> has at most <math>m</math> instances of <math>(x,i)</math> in <math>U</math>, and we already know that <math>|G|=|H|</math>.

Due to the estimator theorem, <math>\frac{4m}{\epsilon^2}\ln\frac{2}{\delta}</math> uniform random samples from <math>U</math> suffice.

This gives the coverage algorithm for the abstract problem of the union of sets. DNF counting is a special case of it.
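
Putting the pieces together, a sketch of the complete coverage estimator for the toy DNF instance above:

<syntaxhighlight lang="python">
# Sketch of the complete coverage estimator, built on the helpers above.
def coverage_estimate(N):
    hits = 0
    for _ in range(N):
        x, i = sample_U()
        # (x, i) is in G iff i is the first clause index that x satisfies
        if min(j for j, c in enumerate(clauses) if satisfies(x, c)) == i:
            hits += 1
    return U_total * hits / N            # estimates |G| = |H|

# For the toy formula the two clauses are disjoint, so |H| = 2 + 4 = 6.
print(coverage_estimate(100000))
</syntaxhighlight>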