高级算法 (Fall 2021)/Limited independence and 高级算法 (Fall 2021)/Problem Set 2: Difference between pages

= <math>k</math>-wise independence =
Recall the definition of independence between events:
{{Theorem
|Definition (Independent events)|
:Events <math>\mathcal{E}_1, \mathcal{E}_2, \ldots, \mathcal{E}_n</math> are '''mutually independent''' if, for any subset <math>I\subseteq\{1,2,\ldots,n\}</math>,
::<math>\begin{align}
\Pr\left[\bigwedge_{i\in I}\mathcal{E}_i\right]
&=
\prod_{i\in I}\Pr[\mathcal{E}_i].
\end{align}</math>
}}
Similarly, we can define independence between random variables:
{{Theorem
|Definition (Independent variables)|
:Random variables <math>X_1, X_2, \ldots, X_n</math> are '''mutually independent''' if, for any subset <math>I\subseteq\{1,2,\ldots,n\}</math> and any values <math>x_i</math>, where <math>i\in I</math>,
::<math>\begin{align}
\Pr\left[\bigwedge_{i\in I}(X_i=x_i)\right]
&=
\prod_{i\in I}\Pr[X_i=x_i].
\end{align}</math>
}}


Mutual independence is an ideal condition of independence. A more limited notion of independence is usually given by '''k-wise independence'''.
{{Theorem
|Definition (k-wise Independence)|
:1. Events <math>\mathcal{E}_1, \mathcal{E}_2, \ldots, \mathcal{E}_n</math> are '''k-wise independent''' if, for any subset <math>I\subseteq\{1,2,\ldots,n\}</math> with <math>|I|\le k</math>,
:::<math>\begin{align}
\Pr\left[\bigwedge_{i\in I}\mathcal{E}_i\right]
&=
\prod_{i\in I}\Pr[\mathcal{E}_i].
\end{align}</math>
:2. Random variables <math>X_1, X_2, \ldots, X_n</math> are '''k-wise independent''' if, for any subset <math>I\subseteq\{1,2,\ldots,n\}</math> with <math>|I|\le k</math> and any values <math>x_i</math>, where <math>i\in I</math>,
:::<math>\begin{align}
\Pr\left[\bigwedge_{i\in I}(X_i=x_i)\right]
&=
\prod_{i\in I}\Pr[X_i=x_i].
\end{align}</math>
}}


A very common case is pairwise independence, i.e., 2-wise independence.
{{Theorem
|Definition (pairwise independent random variables)|
:Random variables <math>X_1, X_2, \ldots, X_n</math> are '''pairwise independent''' if, for any <math>X_i,X_j</math> where <math>i\neq j</math> and any values <math>a,b</math>,
::<math>\begin{align}
\Pr\left[X_i=a\wedge X_j=b\right]
&=
\Pr[X_i=a]\cdot\Pr[X_j=b].
\end{align}</math>
}}


Note that the definition of k-wise independence is hereditary:
* If <math>X_1, X_2, \ldots, X_n</math> are k-wise independent, then they are also <math>\ell</math>-wise independent for any <math>\ell<k</math>.
* If <math>X_1, X_2, \ldots, X_n</math> are NOT k-wise independent, then they cannot be <math>\ell</math>-wise independent for any <math>\ell>k</math>.


== Pairwise Independent Bits ==
Suppose we have <math>m</math> mutually independent and uniform random bits <math>X_1,\ldots, X_m</math>. We are going to extract <math>n=2^m-1</math> pairwise independent bits from these <math>m</math> mutually independent bits.


Enumerate all the nonempty subsets of <math>\{1,2,\ldots,m\}</math> in some order. Let <math>S_j</math>  be the <math>j</math>th subset. Let
:<math>
Y_j=\bigoplus_{i\in S_j} X_i,
</math>
where <math>\oplus</math> is the exclusive-or, whose truth table is as follows.
:{|cellpadding="4" border="1"
|-
|<math>a</math>
|<math>b</math>
|<math>a</math><math>\oplus</math><math>b</math>
|-
| 0 || 0 ||align="center"| 0
|-
| 0 || 1 ||align="center"| 1
|-
| 1 || 0 ||align="center"| 1
|-
| 1 || 1 ||align="center"| 0
|}
 
There are <math>n=2^m-1</math> such <math>Y_j</math>, because there are <math>2^m-1</math> nonempty subsets of <math>\{1,2,\ldots,m\}</math>. An equivalent definition of <math>Y_j</math> is
:<math>Y_j=\left(\sum_{i\in S_j}X_i\right)\bmod 2</math>.
Sometimes, <math>Y_j</math> is called the '''parity''' of the bits in <math>S_j</math>.
 
We claim that <math>Y_j</math> are pairwise independent and uniform.
 
{{Theorem
|Theorem|
:For any <math>Y_j</math> and any <math>b\in\{0,1\}</math>,
::<math>\begin{align}
\Pr\left[Y_j=b\right]
&=
\frac{1}{2}.
\end{align}</math>
:For any <math>Y_j,Y_\ell</math> that <math>j\neq\ell</math> and any <math>a,b\in\{0,1\}</math>,
::<math>\begin{align}
\Pr\left[Y_j=a\wedge Y_\ell=b\right]
&=
\frac{1}{4}.
\end{align}</math>
}}
 
The proof is left as an exercise.


Therefore, we extract exponentially many pairwise independent uniform random bits from a sequence of mutually independent uniform random bits.


Note that <math>Y_j</math> are not 3-wise independent. For example, consider the subsets <math>S_1=\{1\},S_2=\{2\},S_3=\{1,2\}</math> and the corresponding random bits <math>Y_1,Y_2,Y_3</math>. Any two of <math>Y_1,Y_2,Y_3</math> would decide the value of the third one.
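
Since the construction is finite, the theorem can be checked exhaustively for small <math>m</math>. The following is a minimal C sketch (assuming a GCC/Clang compiler for the <code>__builtin_parity</code> builtin): it encodes each subset <math>S_j</math> by the binary representation of <math>j</math>, and for <math>m=4</math> enumerates all <math>2^m</math> outcomes of the source bits to verify that every pair <math>Y_j,Y_\ell</math> is uniform and independent.

 #include <stdio.h>
 
 /* Extract n = 2^m - 1 bits Y_1..Y_n from m source bits: Y_j is the
    parity of the source bits indexed by the j-th nonempty subset S_j
    of {1,...,m}, encoded by the binary representation of j. */
 static int Y(unsigned j, unsigned x) {   /* x packs X_1..X_m into bits */
     return __builtin_parity(j & x);
 }
 
 int main(void) {
     const unsigned m = 4, n = (1u << m) - 1;
     /* Enumerate all 2^m equally likely outcomes of the source bits and
        check Pr[Y_j = a, Y_l = b] = 1/4 for every pair j != l. */
     for (unsigned j = 1; j <= n; j++)
         for (unsigned l = j + 1; l <= n; l++) {
             int count[2][2] = {{0, 0}, {0, 0}};
             for (unsigned x = 0; x < (1u << m); x++)
                 count[Y(j, x)][Y(l, x)]++;
             for (int a = 0; a < 2; a++)
                 for (int b = 0; b < 2; b++)
                     if (4 * count[a][b] != (1 << m)) {
                         printf("pair (%u,%u) fails\n", j, l);
                         return 1;
                     }
         }
     printf("all %u extracted bits are uniform and pairwise independent\n", n);
     return 0;
 }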
 
==  Pairwise Independent Variables ==
We now consider constructing pairwise independent random variables ranging over <math>[p]=\{0,1,2,\ldots,p-1\}</math> for some prime <math>p</math>. Unlike the above construction, now we only need two independent random sources <math>X_0,X_1</math>, which are uniformly and independently distributed over <math>[p]</math>.
 
Let <math>Y_0,Y_1,\ldots, Y_{p-1}</math> be defined as:
:<math>
\begin{align}
Y_i=(X_0+i\cdot X_1)\bmod p &\quad \mbox{for }i\in[p].
\end{align}
</math>


{{Theorem
|Theorem|
: The random variables <math>Y_0,Y_1,\ldots, Y_{p-1}</math> are pairwise independent uniform random variables over <math>[p]</math>.
}}
{{Proof| We first show that <math>Y_i</math> are uniform. That is, we will show that for any <math>i,a\in[p]</math>,
:<math>\begin{align}
\Pr\left[(X_0+i\cdot X_1)\bmod p=a\right]
&=
\frac{1}{p}.
\end{align}</math>
Due to the law of total probability,
:<math>\begin{align}
\Pr\left[(X_0+i\cdot X_1)\bmod p=a\right]
&=
\sum_{j\in[p]}\Pr[X_1=j]\cdot\Pr\left[(X_0+ij)\bmod p=a\right]\\
&=\frac{1}{p}\sum_{j\in[p]}\Pr\left[X_0\equiv(a-ij)\pmod{p}\right].
\end{align}</math>
For prime <math>p</math>, for any <math>i,j,a\in[p]</math>, there is exactly one value in <math>[p]</math> of <math>X_0</math> satisfying <math>X_0\equiv(a-ij)\pmod{p}</math>. Thus, <math>\Pr\left[X_0\equiv(a-ij)\pmod{p}\right]=1/p</math> and the above probability is <math>\frac{1}{p}</math>.
 
We then show that <math>Y_i</math> are pairwise independent, i.e. we will show that for any <math>Y_i,Y_j</math> that <math>i\neq j</math> and any <math>a,b\in[p]</math>,
:<math>\begin{align}
\Pr\left[Y_i=a\wedge Y_j=b\right]
&=
\frac{1}{p^2}.
\end{align}</math>
 
The event <math>Y_i=a\wedge Y_j=b</math> is equivalent to the system
:<math>
\begin{cases}
(X_0+iX_1)\equiv a\pmod{p}\\
(X_0+jX_1)\equiv b\pmod{p}
\end{cases}
</math>
Due to the [http://en.wikipedia.org/wiki/Chinese_remainder_theorem Chinese remainder theorem], there exists a unique solution of <math>X_0</math> and <math>X_1</math> in <math>[p]</math> to the above linear congruential system. Thus the probability of the event is <math>\frac{1}{p^2}</math>.
}}
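
The theorem can likewise be verified by brute force for a small prime. The following C sketch (with the arbitrary choice <math>p=7</math>) enumerates all <math>p^2</math> equally likely outcomes of <math>(X_0,X_1)</math> and checks that each pair <math>(Y_i,Y_j)</math> with <math>i\neq j</math> attains every value in <math>[p]^2</math> exactly once.

 #include <stdio.h>
 
 /* Exhaustive check, for the arbitrary small prime p = 7, that the
    Y_i = (X0 + i*X1) mod p are pairwise independent and uniform over
    [p] when X0 and X1 are independent uniform samples from [p]. */
 int main(void) {
     const int p = 7;
     for (int i = 0; i < p; i++)
         for (int j = 0; j < p; j++) {
             if (i == j) continue;
             for (int a = 0; a < p; a++)
                 for (int b = 0; b < p; b++) {
                     /* count (X0, X1) pairs with Y_i = a and Y_j = b;
                        pairwise independence demands exactly 1 of p^2 */
                     int count = 0;
                     for (int x0 = 0; x0 < p; x0++)
                         for (int x1 = 0; x1 < p; x1++)
                             if ((x0 + i * x1) % p == a && (x0 + j * x1) % p == b)
                                 count++;
                     if (count != 1) {
                         printf("failed at i=%d j=%d a=%d b=%d\n", i, j, a, b);
                         return 1;
                     }
                 }
         }
     printf("Y_0..Y_%d are pairwise independent and uniform over [%d]\n", p - 1, p);
     return 0;
 }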


==Application: Derandomizing MAX-CUT==
Let <math>G(V,E)</math> be an undirected graph, and <math>S\subset V</math> be a vertex set. The '''cut''' defined by <math>S</math> is <math>C(S,\bar{S})=|\{uv\in E\mid u\in S, v\not\in S\}|</math>.  
 


Given as input an undirected graph <math>G(V,E)</math>, find the <math>S\subset V</math> whose cut value <math>C(S,\bar{S})</math> is maximized. This problem is called the [http://en.wikipedia.org/wiki/Maximum_cut maximum cut (MAX-CUT) problem], which is NP-hard. The decision version of a weighted version of the problem is one of [http://en.wikipedia.org/wiki/Karp%27s_21_NP-complete_problems Karp's 21 NP-complete problems]. The problem has a <math>0.878</math>-approximation algorithm by rounding a [http://en.wikipedia.org/wiki/Semidefinite_programming semidefinite program]. Assuming the [http://en.wikipedia.org/wiki/Unique_games_conjecture unique games conjecture (UGC)], there does not exist a poly-time algorithm with a better approximation ratio unless <math>P=NP</math>.


Here we give a very simple <math>0.5</math>-approximation algorithm. The "algorithm" has a one-line description:
*Put each <math>v\in V</math> into <math>S</math> independently with probability 1/2.
We then analyze the approximation ratio of this algorithm.


For each <math>v\in V</math>, let <math>Y_v</math> indicate whether <math>v\in S</math>, that is
:<math>Y_v=\begin{cases}1& v\in S,\\
0& v\not\in S.\end{cases}</math>
For each edge <math>uv\in E</math>, let <math>Y_{uv}</math> indicate whether <math>uv</math> contribute to the cut <math>C(S,\bar{S})</math>, i.e. whether <math>u\in S, v\not\in S</math> or <math>u\not\in S, v\in S</math>, that is
:<math>Y_{uv}=\begin{cases}1&Y_u\neq Y_v,\\0&\text{otherwise}.\end{cases}</math>
Then <math>C(S,\bar{S})=\sum_{uv\in E}Y_{uv}</math>. Due to the linearity of expectation,
:<math>\mathbf{E}\left[C(S,\bar{S})\right]=\sum_{uv\in E}\mathbf{E}[Y_{uv}]=\sum_{uv\in E}\Pr[Y_u\neq Y_v]=\frac{|E|}{2}</math>.
The maximum cut of a graph is at most <math>|E|</math>. Thus, the algorithm returns in expectation a cut with size at least half of the maximum cut.


We then show how to derandomize this algorithm using pairwise independent bits.


Suppose that <math>|V|=n</math> and enumerate the <math>n</math> vertices by <math>v_1,v_2,\ldots, v_n</math> in an arbitrary order. Let <math>m=\lceil\log_2 (n+1)\rceil</math>. Sample <math>m</math> bits <math>X_1,\ldots, X_m\in\{0,1\}</math> uniformly and independently at random. Enumerate all nonempty subsets of <math>\{1,2,\ldots,m\}</math> by <math>S_1,S_2,\ldots,S_{2^m-1}</math>. For each vertex <math>v_j</math>, let <math>Y_{v_j}=\bigoplus_{i\in S_j}X_i</math>. The MAX-CUT algorithm uses these bits to construct the solution <math>S</math>:
* For <math>j=1,2,\ldots,n</math>, put <math>v_j</math> into <math>S</math> if <math>Y_{v_j}=1</math>.


We have shown that <math>Y_{v_j}</math>, <math>j=1,2,\ldots,n</math>, are uniform and pairwise independent. Thus we still have that <math>\Pr[Y_{u}\neq Y_{v}]=\frac{1}{2}</math>. The above analysis still holds, so that the algorithm returns in expectation a cut with size at least <math>\frac{|E|}{2}</math>.  


Finally, we notice that there are only <math>m=\lceil\log_2 (n+1)\rceil</math> total random bits in the new algorithm. We can enumerate all <math>2^m\le 2(n+1)</math> possible strings of <math>m</math> bits, run the above algorithm with the bit strings as the "random sources", and output the maximum cut returned. There must exist a bit string <math>X_1,\ldots, X_m\in\{0,1\}</math> on which the algorithm returns a cut of size <math>\ge \frac{|E|}{2}</math> (why?). This gives us a deterministic polynomial time (actually <math>O(n^2)</math> time) <math>1/2</math>-approximation algorithm.
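
The following C sketch illustrates the whole derandomized algorithm on a toy instance (a 5-cycle, chosen only for illustration; <code>__builtin_parity</code> is a GCC/Clang builtin): it enumerates all <math>2^m</math> bit strings, derives each <math>Y_{v_j}</math> as a subset parity, and outputs the largest cut found, which by the argument above is at least <math>|E|/2</math>.

 #include <stdio.h>
 
 /* Derandomized 0.5-approximate MAX-CUT: try all 2^m assignments of the
    source bits X_1..X_m, derive the side Y_{v_j} of vertex v_j as the
    parity of the bits indexed by the j-th nonempty subset S_j (encoded
    by the binary representation of j), and keep the best cut found.
    The graph below is a toy 5-cycle, chosen only for illustration. */
 #define NV 5
 static const int edges[][2] = {{0,1},{1,2},{2,3},{3,4},{4,0}};
 #define NE ((int)(sizeof(edges)/sizeof(edges[0])))
 
 int main(void) {
     int m = 0;
     while ((1 << m) < NV + 1) m++;               /* m = ceil(log2(n+1)) */
     int best = 0;
     for (unsigned x = 0; x < (1u << m); x++) {   /* all 2^m bit strings */
         int cut = 0;
         for (int e = 0; e < NE; e++) {
             /* vertex v_j corresponds to subset j = vertex index + 1 */
             int yu = __builtin_parity((unsigned)(edges[e][0] + 1) & x);
             int yv = __builtin_parity((unsigned)(edges[e][1] + 1) & x);
             cut += (yu != yv);
         }
         if (cut > best) best = cut;
     }
     /* E[cut] = |E|/2 guarantees best >= |E|/2, here >= 3 of 5 edges. */
     printf("best cut found: %d of %d edges\n", best, NE);
     return 0;
 }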


= Universal Hashing =
Hashing is one of the oldest tools in Computer Science. Knuth's memorandum in 1963 on analysis of hash tables is now considered to be the birth of the area of analysis of algorithms.
* Knuth. Notes on "open" addressing, July 22 1963. Unpublished memorandum.


The idea of hashing is simple: an unknown set <math>S</math> of <math>n</math> data '''items''' (or keys) are drawn from a large '''universe''' <math>U=[N]</math> where <math>N\gg n</math>; in order to store <math>S</math> in a table of <math>M</math> entries (slots), we assume a consistent mapping (called a '''hash function''') from the universe <math>U</math> to a small range <math>[M]</math>.
'''(b)''' Show that the classical Chernoff bounds can be applied as is to <math>X=\sum_{i\in[n]}X_i</math> if the random variables <math>X_1,\cdots,X_n</math> are negatively associated. (Consider both the upper tail and the lower tail.)


This idea seems clever: we use a consistent mapping to deal with an arbitrary unknown data set. However, there is a fundamental flaw with hashing:
* For a sufficiently large universe (<math>N> M(n-1)</math>), for any function, there exists a bad data set <math>S</math>, such that all items in <math>S</math> are mapped to the same entry in the table.


A simple use of the pigeonhole principle proves the above statement.


To overcome this situation, randomization is introduced into hashing. We assume that the hash function is a random mapping from <math>[N]</math> to <math>[M]</math>. In order to ease the analysis, the following ideal assumption is used:
 
'''Simple Uniform Hash Assumption''' ('''SUHA''' or '''UHA''', a.k.a. the random oracle model):
:A ''uniform'' random function <math>h:[N]\rightarrow[M]</math> is available and the computation of <math>h</math> is efficient.
 
== Families of universal hash functions ==
The assumption of a completely random function simplifies the analysis. However, in practice, a truly uniform random hash function is extremely expensive to compute and store. Thus, this simple assumption can hardly represent reality.
 
There are two approaches to implementing practical hash functions. One is to use ''ad hoc'' implementations and hope they work. The other approach is to construct classes of hash functions which are efficient to compute and store but have weaker randomness guarantees, and then analyze the applications of hash functions based on this weaker assumption of randomness.
 
This route was taken by Carter and Wegman in 1977 when they introduced universal families of hash functions.
 
{{Theorem
|Definition (universal hash families)|
:Let <math>[N]</math> be a universe with <math>N\ge M</math>. A family of hash functions <math>\mathcal{H}</math> from <math>[N]</math> to <math>[M]</math> is said to be '''<math>k</math>-universal''' if, for any items <math>x_1,x_2,\ldots,x_k\in [N]</math> and for a hash function <math>h</math> chosen uniformly at random from <math>\mathcal{H}</math>, we have
::<math>
\Pr[h(x_1)=h(x_2)=\cdots=h(x_k)]\le\frac{1}{M^{k-1}}.
</math>
 
:A family of hash functions <math>\mathcal{H}</math> from <math>[N]</math> to <math>[M]</math> is said to be '''strongly <math>k</math>-universal''' if, for any items <math>x_1,x_2,\ldots,x_k\in [N]</math>, any values <math>y_1,y_2,\ldots,y_k\in[M]</math>, and for a hash function <math>h</math> chosen uniformly at random from <math>\mathcal{H}</math>, we have
::<math>
\Pr[h(x_1)=y_1\wedge h(x_2)=y_2 \wedge \cdots \wedge h(x_k)=y_k]=\frac{1}{M^{k}}.
</math>
}}
In particular, for a 2-universal family <math>\mathcal{H}</math>, for any elements <math>x_1,x_2\in[N]</math>, a uniform random <math>h\in\mathcal{H}</math> has
:<math>
\Pr[h(x_1)=h(x_2)]\le\frac{1}{M}.
</math>
For a strongly 2-universal family <math>\mathcal{H}</math>, for any elements <math>x_1,x_2\in[N]</math> and any values <math>y_1,y_2\in[M]</math>, a uniform random <math>h\in\mathcal{H}</math> has
:<math>
\Pr[h(x_1)=y_1\wedge h(x_2)=y_2]=\frac{1}{M^2}.
</math>
This behavior is exactly the same as that of a uniform random hash function on any pair of inputs. For this reason, a strongly 2-universal hash family is also called a family of pairwise independent hash functions.
 
== 2-universal hash families ==
 
The construction of pairwise independent random variables via modulo a prime introduced in Section 1 already provides a way of constructing a strongly 2-universal hash family.
 
Let <math>p</math> be a prime. The function <math>h_{a,b}:[p]\rightarrow [p]</math> is defined by
:<math>
h_{a,b}(x)=(ax+b)\bmod p,
</math>
and the family is
:<math>
\mathcal{H}=\{h_{a,b}\mid a,b\in[p]\}.
</math>
 
{{Theorem
|Lemma|
:<math>\mathcal{H}</math> is strongly 2-universal.
}}
{{Proof| In Section 1, we have proved the pairwise independence of the sequence of <math>(a i+b)\bmod p</math>, for <math>i=0,1,\ldots, p-1</math>, which directly implies that <math>\mathcal{H}</math> is strongly 2-universal.
}}
 
;The original construction of Carter-Wegman
What if we want to have hash functions from <math>[N]</math> to <math>[M]</math> for non-prime <math>N</math> and <math>M</math>? Carter and Wegman developed the following method.
 
Suppose that the universe is <math>[N]</math>, and the functions map <math>[N]</math> to <math>[M]</math>, where <math>N\ge M</math>. For some prime <math>p\ge N</math>, let
:<math>
h_{a,b}(x)=((ax+b)\bmod p)\bmod M,
</math>
and the family
:<math>
\mathcal{H}=\{h_{a,b}\mid 1\le a\le p-1, b\in[p]\}.
</math>
Note that unlike the first construction, now <math>a\neq 0</math>.
{{Theorem
|Lemma (Carter-Wegman)|
:<math>\mathcal{H}</math> is 2-universal.
}}
{{Proof| Due to the definition of <math>\mathcal{H}</math>, there are <math>p(p-1)</math> many different hash functions in <math>\mathcal{H}</math>, because each hash function in <math>\mathcal{H}</math> corresponds to a pair of <math>1\le a\le p-1</math> and <math>b\in[p]</math>. We only need to count for any particular pair of <math>x_1,x_2\in[N]</math> that <math>x_1\neq x_2</math>, the number of hash functions that <math>h(x_1)=h(x_2)</math>.
 
We first note that for any <math>x_1\neq x_2</math>, <math>a x_1+b\not\equiv a x_2+b \pmod p</math>. This is because <math>a x_1+b\equiv a x_2+b \pmod p</math> would imply that <math>a(x_1-x_2)\equiv 0\pmod p</math>, which can never happen since <math>1\le a\le p-1</math> and <math>x_1\neq x_2</math> (note that <math>x_1,x_2\in[N]</math> for an <math>N\le p</math>). Therefore, we can assume that <math>(a x_1+b)\bmod p=u</math> and <math>(a x_2+b)\bmod p=v</math> for <math>u\neq v</math>.
 
Due to the Chinese remainder theorem, for any <math>x_1,x_2\in[N]</math> that <math>x_1\neq x_2</math>, for any <math>u,v\in[p]</math> that <math>u\neq v</math>, there is exactly one solution <math>(a,b)</math> satisfying:
:<math>
\begin{cases}
a x_1+b \equiv u \pmod p\\
a x_2+b \equiv v \pmod p.
\end{cases}
</math>
After modulo <math>M</math>, every <math>u\in[p]</math> has at most <math>\lceil p/M\rceil -1</math> many <math>v\in[p]</math> that <math>v\neq u</math> but <math>v\equiv u\pmod M</math>. Therefore, for every pair of <math>x_1,x_2\in[N]</math> that <math>x_1\neq x_2</math>, there exist at most <math>p(\lceil p/M\rceil -1)\le p(p-1)/M</math> pairs of <math>1\le a\le p-1</math> and <math>b\in[p]</math> such that <math>((ax_1+b)\bmod p)\bmod M=((ax_2+b)\bmod p)\bmod M</math>, which means there are at most <math> p(p-1)/M</math> many hash functions <math>h\in\mathcal{H}</math> having <math>h(x_1)=h(x_2)</math> for <math>x_1\neq x_2</math>. For <math>h</math> uniformly chosen from <math>\mathcal{H}</math>, for any <math>x_1\neq x_2</math>,
:<math>
\Pr[h(x_1)=h(x_2)]\le \frac{p(p-1)/M}{p(p-1)}=\frac{1}{M}.
</math>
This proves that <math>\mathcal{H}</math> is 2-universal.
}}
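
The following is a minimal C sketch of the construction with illustrative parameters (<math>p=101</math>, <math>M=10</math>): it enumerates all <math>p(p-1)</math> functions in <math>\mathcal{H}</math> and checks the collision probability for one fixed pair of distinct inputs against the <math>1/M</math> bound.

 #include <stdio.h>
 
 /* Sketch of the Carter-Wegman family from [N] to [M]:
    h_{a,b}(x) = ((a*x + b) mod p) mod M with prime p >= N and a != 0.
    The parameters p = 101 and M = 10 are illustrative choices. */
 static const unsigned p = 101, M = 10;
 
 static unsigned hash_cw(unsigned a, unsigned b, unsigned x) {
     return ((a * x + b) % p) % M;
 }
 
 int main(void) {
     /* Check 2-universality for one fixed pair x1 != x2: over a uniform
        choice of (a, b), at most a 1/M fraction of the p(p-1) functions
        may map x1 and x2 to the same value. */
     unsigned x1 = 3, x2 = 42, collisions = 0, total = 0;
     for (unsigned a = 1; a < p; a++)         /* a ranges over 1..p-1 */
         for (unsigned b = 0; b < p; b++) {   /* b ranges over [p]    */
             total++;
             if (hash_cw(a, b, x1) == hash_cw(a, b, x2)) collisions++;
         }
     printf("collisions: %u of %u functions (bound: %u)\n",
            collisions, total, total / M);
     return 0;
 }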
 
;A construction used in practice
The main issue with the Carter-Wegman construction is efficiency: the mod operation is very slow, and has been so for more than 30 years.
 
The following construction is due to Dietzfelbinger ''et al''. It was published in 1997 and has been practically used in various applications of universal hashing.
 
The family of hash functions is from <math>[2^u]</math> to <math>[2^v]</math>. With a binary representation, the functions map binary strings of length <math>u</math> to binary strings of length <math>v</math>.
Let
:<math>
h_{a}(x)=\left\lfloor\frac{a\cdot x\bmod 2^u}{2^{u-v}}\right\rfloor,
</math>
and the family
:<math>
\mathcal{H}=\{h_{a}\mid a\in[2^u]\mbox{ and }a\mbox{ is odd}\}.
</math>
 
This family of hash functions does not exactly meet the requirement of a 2-universal family. However, Dietzfelbinger ''et al.'' proved that <math>\mathcal{H}</math> is close to a 2-universal family. Specifically, for any distinct input values <math>x_1,x_2\in[2^u]</math>, for a uniformly random <math>h\in\mathcal{H}</math>,
:<math>
\Pr[h(x_1)=h(x_2)]\le\frac{1}{2^{v-1}}.
</math>
So <math>\mathcal{H}</math> is within an approximation ratio of 2 of being 2-universal. The proof uses the fact that odd numbers are relatively prime to any power of 2.
 
The function is extremely simple to compute in the C language.
We exploit the fact that C multiplication (*) of unsigned <math>u</math>-bit numbers is done <math>\bmod 2^u</math>, and have a one-line C code for computing the hash function:
 h_a(x) = (a*x)>>(u-v)
Bit-wise shifting is a lot faster than the mod operation, which explains why this scheme is more popular in practice than the original Carter-Wegman construction.
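
For concreteness, here is a runnable C sketch of the scheme for <math>u=64</math> and <math>v=16</math>; the multiplier below is an arbitrary odd constant chosen only for illustration.

 #include <stdint.h>
 #include <stdio.h>
 
 /* Runnable sketch of the multiply-shift scheme for u = 64, v = 16,
    i.e. hashing 64-bit keys to 16-bit values. The multiplier A is an
    arbitrary odd 64-bit constant, chosen here only for illustration. */
 static const uint64_t A = 0x9e3779b97f4a7c15u;   /* must be odd */
 
 static uint16_t hash_ms(uint64_t x) {
     /* unsigned 64-bit multiplication is automatically mod 2^64 */
     return (uint16_t)((A * x) >> (64 - 16));
 }
 
 int main(void) {
     printf("%u %u\n", (unsigned)hash_ms(12345), (unsigned)hash_ms(67890));
     return 0;
 }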
 
== Collision number ==
Consider a 2-universal family <math>\mathcal{H}</math> of hash functions from <math>[N]</math> to <math>[M]</math>. Let <math>h</math> be a hash function chosen uniformly from <math>\mathcal{H}</math>. For a fixed set <math>S</math> of <math>n</math> distinct elements from <math>[N]</math>, say <math>S=\{x_1,x_2,\ldots,x_n\}</math>, the elements are mapped to the hash values <math>h(x_1), h(x_2), \ldots, h(x_n)</math>. This can be seen as throwing <math>n</math> balls into <math>M</math> bins, with pairwise independent choices of bins.
 
As in the balls-into-bins model with full independence, we are curious about questions such as the birthday problem or the maximum load. These questions are interesting not only because they are natural to ask in a balls-into-bins setting, but also because, in the context of hashing, they are closely related to the performance of hash functions.
 
The old techniques for analyzing balls-into-bins rely too much on the independence of the choice of bin for each ball, and therefore can hardly be extended to the setting of 2-universal hash families. However, it turns out that several balls-into-bins questions can somehow be answered by analyzing a very natural quantity: the number of '''collision pairs'''.
 
A collision pair for hashing is a pair of elements <math>x_1,x_2\in S</math> which are mapped to the same hash value, i.e. <math>h(x_1)=h(x_2)</math>. Formally, for a fixed set of elements <math>S=\{x_1,x_2,\ldots,x_n\}</math>, for any <math>1\le i,j\le n</math>, let the random variable
:<math>
X_{ij}
=
\begin{cases}
1 & \text{if }h(x_i)=h(x_j),\\
0 & \text{otherwise.}
\end{cases}
</math>
The total number of collision pairs among the <math>n</math> items <math>x_1,x_2,\ldots,x_n</math> is
:<math>X=\sum_{i<j} X_{ij}.\,</math>
 
Since <math>\mathcal{H}</math> is 2-universal, for any <math>i\neq j</math>,
:<math>
\Pr[X_{ij}=1]=\Pr[h(x_i)=h(x_j)]\le\frac{1}{M}.
</math>
 
The expected number of collision pairs is
:<math>\mathbf{E}[X]=\mathbf{E}\left[\sum_{i<j}X_{ij}\right]=\sum_{i<j}\mathbf{E}[X_{ij}]=\sum_{i<j}\Pr[X_{ij}=1]\le{n\choose 2}\frac{1}{M}<\frac{n^2}{2M}.
</math>
 
In particular, for <math>n=M</math>, i.e. <math>n</math> items are mapped to <math>n</math> hash values by a pairwise independent hash function, the expected collision number is <math>\mathbf{E}[X]<\frac{n^2}{2M}=\frac{n}{2}</math>.
 
=== Birthday problem ===
In the context of hash functions, the birthday problem asks for the probability that there is no collision at all. Since collisions are something we want to avoid in applications of hash functions, we would like to lower bound the probability of zero collisions, i.e. to upper bound the probability that there exists a collision pair.
 
The above analysis gives us an estimate of the expected number of collision pairs: <math>\mathbf{E}[X]<\frac{n^2}{2M}</math>. Applying Markov's inequality, for <math>0<\epsilon<1</math>, we have
:<math>
\Pr\left[X\ge \frac{n^2}{2\epsilon M}\right]\le\Pr\left[X\ge \frac{1}{\epsilon}\mathbf{E}[X]\right]\le\epsilon.
</math>
 
When <math>n\le\sqrt{2\epsilon M}</math>, the number of collision pairs is <math>X\ge1</math> with probability at most <math>\epsilon</math>, therefore with probability at least <math>1-\epsilon</math>, there is no collision at all. Therefore, we have the following theorem.
{{Theorem
|Theorem|
:If <math>h</math> is chosen uniformly from a 2-universal family of hash functions mapping the universe <math>[N]</math> to <math>[M]</math> where <math>N\ge M</math>, then for any set <math>S\subset [N]</math> of <math>n</math> items, where <math>n\le\sqrt{2\epsilon M}</math>, the probability that there exists a collision pair is
::<math>
\Pr[\mbox{collision occurs}]\le\epsilon.
</math>
}}


Recall that for mutually independent choices of bins, for some <math>n=\sqrt{2M\ln(1/\epsilon)}</math>, the probability that a collision occurs is about <math>\epsilon</math>. For constant <math>\epsilon</math>, this gives essentially the same bound as the pairwise independent setting. Therefore, the behavior of pairwise independent hash functions is essentially the same as that of uniform random hash functions for the birthday problem. This is easy to understand, because the birthday problem is about the behavior of collisions, and the definition of 2-universal hash functions can be interpreted as "functions for which the probability of collision is as low as for a uniform random function".

= Problem Set 2 =

* Every problem's solution must include a '''complete solution process'''. Both Chinese and English are acceptable.

== Problem 1 ==

Fix a universe <math>U</math> and two subsets <math>A,B \subseteq U</math>, both of size <math>n</math>. We create Bloom filters <math>F_A</math> and <math>F_B</math> for <math>A</math> and <math>B</math> respectively, using the same number of bits <math>m</math> and the same <math>k</math> hash functions.

* Let <math>F_C = F_A \land F_B</math> be the Bloom filter formed by computing the bitwise AND of <math>F_A</math> and <math>F_B</math>. Argue that <math>F_C</math> may not always be the same as the Bloom filter that is created for <math>A\cap B</math>.
* Bloom filters can be used to estimate set differences. Express the expected number of bits where <math>F_A</math> and <math>F_B</math> differ as a function of <math>m, n, k</math> and <math>|A\cap B|</math>.

== Problem 2 ==

In the balls-and-bins model, we throw <math>m</math> balls independently and uniformly at random into <math>n</math> bins. We know that the maximum load is <math>\Theta\left(\frac{\log n}{\log\log n}\right)</math> with high probability when <math>m=\Theta(n)</math>. The two-choice paradigm is another way to throw <math>m</math> balls into <math>n</math> bins: each ball is thrown into the least loaded of two bins chosen independently and uniformly at random (it could be the case that the two chosen bins are exactly the same, in which case the ball is thrown into that bin), breaking ties arbitrarily. When <math>m=\Theta(n)</math>, the maximum load of the two-choice paradigm is known to be <math>\Theta(\log\log n)</math> with high probability, which is exponentially smaller than the maximum load when there is only one random choice. This phenomenon is called '''''the power of two choices'''''. (A small simulation sketch for experimenting with these paradigms follows the questions below.)

Here are the questions:

* Consider the following paradigm: we throw <math>n</math> balls into <math>n</math> bins. The first <math>\frac{n}{2}</math> balls are thrown into bins independently and uniformly at random. The remaining <math>\frac{n}{2}</math> balls are thrown into bins using the two-choice paradigm. What is the maximum load with high probability? You need to give an asymptotically tight bound (in the form of <math>\Theta(\cdot)</math>).
* Replace the above paradigm with the following: the first <math>\frac{n}{2}</math> balls are thrown into bins using the two-choice paradigm while the remaining <math>\frac{n}{2}</math> balls are thrown into bins independently and uniformly at random. What is the maximum load with high probability in this case? You need to give an asymptotically tight bound.
* Replace the above paradigm with the following: assume all <math>n</math> balls are thrown in a sequence. For every <math>1\le i\le n</math>, if <math>i</math> is odd, we throw the <math>i</math>-th ball into bins independently and uniformly at random; otherwise, we throw it into bins using the two-choice paradigm. What is the maximum load with high probability in this case? You need to give an asymptotically tight bound.
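
The following minimal C sketch (illustrative only; it uses the standard <code>rand()</code> for simplicity and a bin count small enough for the guaranteed <code>RAND_MAX</code>) simulates the first variant; swapping or interleaving the two throwing loops gives the other two variants.

 #include <stdio.h>
 #include <stdlib.h>
 
 /* Simulation sketch of the mixed paradigms above; illustrative only. */
 #define NBINS 10000   /* kept below RAND_MAX so rand() % NBINS is usable */
 
 static int bins[NBINS];
 
 static int one_choice(void) { return rand() % NBINS; }
 
 static int two_choice(void) {
     int a = rand() % NBINS, b = rand() % NBINS;  /* the two bins may coincide */
     return (bins[a] <= bins[b]) ? a : b;         /* least loaded, ties -> a   */
 }
 
 int main(void) {
     srand(12345);
     /* First n/2 balls uniformly at random, remaining n/2 by two-choice. */
     for (int i = 0; i < NBINS / 2; i++) bins[one_choice()]++;
     for (int i = 0; i < NBINS / 2; i++) bins[two_choice()]++;
     int max = 0;
     for (int i = 0; i < NBINS; i++) if (bins[i] > max) max = bins[i];
     printf("maximum load: %d\n", max);
     return 0;
 }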

== Problem 3 ==

Let <math>X</math> be a random variable with expectation <math>0</math> such that the moment generating function <math>\mathbf{E}[\exp(t|X|)]</math> is finite for some <math>t > 0</math>. We can use the following two kinds of tail inequalities for <math>X</math>.

'''''Chernoff Bound'''''

:<math>\begin{align}
\mathbf{Pr}[|X| \geq \delta] \leq \min_{t \geq 0} \frac{\mathbf{E}[e^{t|X|}]}{e^{t\delta}}
\end{align}</math>

'''''<math>k</math>th-Moment Bound'''''

:<math>\begin{align}
\mathbf{Pr}[|X| \geq \delta] \leq \frac{\mathbf{E}[|X|^k]}{\delta^k}
\end{align}</math>
* Show that for each <math>\delta</math>, there exists a choice of <math>k</math> such that the <math>k</math>th-moment bound is stronger than the Chernoff bound.
:'''''Hint''''': Consider the Taylor expansion of the moment generating function and apply the probabilistic method.
* Why would we still prefer the Chernoff bound to the (seemingly) stronger <math>k</math>th-moment bound?

== Problem 4 ==

In this problem, we will explore the idea of negative association, show that the classical Chernoff bounds also hold for sums of negatively associated random variables, and see negative association in action by considering occupancy numbers in the balls and bins model.

Let <math>\boldsymbol{X}=(X_1,\cdots,X_n)</math> be a vector of random variables. We say the random variables <math>\boldsymbol{X}</math> are ''negatively associated'' if for all disjoint subsets <math>I,J\subseteq[n]</math>,

:<math>\mathbb{E}[f(X_i,i\in I)g(X_j,j\in J)]\leq \mathbb{E}[f(X_i,i\in I)]\mathbb{E}[g(X_j,j\in J)]</math>

for all non-decreasing functions <math>f:\mathbb{R}^{|I|}\rightarrow\mathbb{R}</math> and <math>g:\mathbb{R}^{|J|}\rightarrow\mathbb{R}</math>.

Intuitively, if a set of random variables is negatively associated, then whenever some monotone increasing function <math>f</math> of one subset of the variables increases, any monotone increasing function <math>g</math> of a disjoint subset of the variables tends to decrease.

'''(a)''' Let <math>X_1,\cdots,X_n</math> be a set of negatively associated random variables. Show that for any non-negative non-decreasing functions <math>f_i</math>, where <math>i\in[n]</math>,

:<math>\mathbb{E}\left[\prod_{i\in[n]}f_i(X_i)\right]\leq\prod_{i\in[n]}\mathbb{E}[f_i(X_i)]</math>

'''(b)''' Show that the classical Chernoff bounds can be applied as is to <math>X=\sum_{i\in[n]}X_i</math> if the random variables <math>X_1,\cdots,X_n</math> are negatively associated. (Consider both the upper tail and the lower tail.)

To establish the negative association condition, the following two properties are usually very helpful:

* (Closure under products). If <math>\boldsymbol{X}=(X_1,\cdots,X_n)</math> is a set of negatively associated random variables, and <math>\boldsymbol{Y}=(Y_1,\cdots,Y_m)</math> is also a set of negatively associated random variables, but <math>\boldsymbol{X}</math> and <math>\boldsymbol{Y}</math> are mutually independent, then the augmented vector <math>(\boldsymbol{X},\boldsymbol{Y})=(X_1,\cdots,X_n,Y_1,\cdots,Y_m)</math> is a set of negatively associated random variables.
* (Disjoint monotone aggregation). Let <math>\boldsymbol{X}=(X_1,\cdots,X_n)</math> be a set of negatively associated random variables. Let <math>I_1,\cdots,I_k\subseteq[n]</math> be disjoint index sets for some positive integer <math>k</math>. For <math>j\in[k]</math>, let <math>f_j:\mathbb{R}^{|I_j|}\rightarrow\mathbb{R}</math> be functions that are all non-decreasing or all non-increasing, and define <math>Y_j=f_j(X_i,i\in I_j)</math>. Then, <math>\boldsymbol{Y}=(Y_1,\cdots,Y_k)</math> is also a set of negatively associated random variables. (That is, non-decreasing or non-increasing functions of disjoint subsets of negatively associated variables are also negatively associated.)

We now consider the paradigmatic example of negative dependence: ''occupancy numbers'' in the balls and bins model. Again, <math>m</math> balls are thrown independently into <math>n</math> bins. However, this time, the balls and the bins are not necessarily identical: ball <math>k</math> has probability <math>p_{i,k}</math> of landing in bin <math>i</math>, for <math>k\in[m]</math> and <math>i\in[n]</math>, with <math>\sum_{i\in[n]}p_{i,k}=1</math> for each <math>k\in[m]</math>. Define the indicator random variable <math>B_{i,k}</math> which takes value one if and only if ball <math>k</math> lands in bin <math>i</math>. The occupancy numbers are <math>B_i=\sum_{k\in[m]}B_{i,k}</math>. That is, <math>B_i</math> denotes the number of balls that land in bin <math>i</math>.

'''(c)''' Intuitively, <math>B_1,\cdots,B_n</math> are negatively associated: if we know one bin has more balls, then other bins are more likely to have fewer balls. Now, show formally that the occupancy numbers <math>B_1,\cdots,B_n</math> are negatively associated.