Advanced Algorithms (Fall 2023)/Hashing and Sketching
=Distinct Elements=
Consider the following problem of '''counting distinct elements''': Suppose that <math>\Omega</math> is a sufficiently large universe.
*'''Input:''' a sequence of (not necessarily distinct) elements <math>x_1,x_2,\ldots,x_n\in\Omega</math>;
*'''Output:''' an estimation of the total number of distinct elements <math>z=|\{x_1,x_2,\ldots,x_n\}|</math>.
A straightforward way of solving this problem is to maintain a dictionary data structure, which costs at least linear (<math>\Omega(n)</math>) space. For ''big data'', where <math>n</math> is very large, this is still too expensive. Unfortunately, by an information-theoretical argument, linear space is necessary if one wants to compute the ''exact'' value of <math>z</math>.
Our goal is to relax the problem a little bit to significantly reduce the space cost by tolerating ''approximate'' answers. The form of approximation we consider is the '''<math>(\epsilon,\delta)</math>-estimator'''.
{{Theorem|<math>(\epsilon,\delta)</math>-estimator|
: A random variable <math>\widehat{Z}</math> is an '''<math>(\epsilon,\delta)</math>-estimator''' of a quantity <math>z</math> if
::<math>\Pr[\,(1-\epsilon)z\le \widehat{Z}\le (1+\epsilon)z\,]\ge 1-\delta</math>.
: <math>\widehat{Z}</math> is said to be an '''unbiased estimator''' of <math>z</math> if <math>\mathbb{E}[\widehat{Z}]=z</math>.
}}
Usually <math>\epsilon</math> is called '''approximation error''' and <math>\delta</math> is called '''confidence error'''.
We now present an elegant algorithm introduced by [https://en.wikipedia.org/wiki/Flajolet–Martin_algorithm Flajolet and Martin] in 1984. The algorithm can be implemented in the [https://en.wikipedia.org/wiki/Streaming_algorithm '''data stream model''']: the input elements <math>x_1,x_2,\ldots,x_n</math> are presented to the algorithm one at a time, where the size of data <math>n</math> is unknown to the algorithm. The algorithm maintains a value <math>\widehat{Z}</math> which is an <math>(\epsilon,\delta)</math>-estimator of the total number of distinct elements <math>z=|\{x_1,x_2,\ldots,x_n\}|</math>, using only a small amount of memory space to memorize (with loss) the data set <math>\{x_1,x_2,\ldots,x_n\}</math>.
A famous quotation of Flajolet describes the performance of this algorithm as:
"Using only memory equivalent to 5 lines of printed text, you can estimate with a typical accuracy of 5% and in a single pass the total vocabulary of Shakespeare."
== An estimator by hashing ==
Suppose that we have access to an idealized random hash function <math>h:\Omega\to[0,1]</math> which is uniformly distributed over all mappings from the universe <math>\Omega</math> to the unit interval <math>[0,1]</math>.
Recall that the input sequence <math>x_1,x_2,\ldots,x_n\in\Omega</math> consists of <math>z=|\{x_1,x_2,\ldots,x_n\}|</math> distinct elements. These elements are mapped by the random function <math>h</math> to <math>z</math> hash values uniformly and independently distributed in <math>[0,1]</math>. We could maintain these hash values instead of the original elements, but this would still be too expensive because in the worst case we still have up to <math>n</math> distinct values to maintain. However, due to the idealized random hash function, the unit interval <math>[0,1]</math> will be partitioned into <math>z+1</math> subintervals by these <math>z</math> uniform and independent hash values. The typical length of a subinterval gives an estimation of the number <math>z</math>.
{{Theorem|Proposition|
:<math>\mathbb{E}\left[\min_{1\le i\le n}h(x_i)\right]=\frac{1}{z+1}</math>.
}}
{{Proof|
The input sequence <math>x_1,x_2,\ldots,x_n\in\Omega</math> consists of <math>z</math> distinct elements, which are mapped to <math>z</math> random hash values uniformly and independently distributed in <math>[0,1]</math>. These <math>z</math> hash values partition the unit interval <math>[0,1]</math> into <math>z+1</math> subintervals <math>[0,v_1],[v_1,v_2],[v_2,v_3],\ldots,[v_{z-1},v_z],[v_z,1]</math>, where <math>v_i</math> denotes the <math>i</math>-th smallest value among all hash values <math>\{h(x_1),h(x_2),\ldots,h(x_n)\}</math>. Clearly we have
:<math>v_1=\min_{1\le i\le n}h(x_i)</math>.
Meanwhile, since all hash values are uniformly and independently distributed in <math>[0,1]</math>, the lengths of all subintervals <math>v_1, v_2-v_1, v_3-v_2,\ldots, v_z-v_{z-1}, 1-v_z</math> are identically distributed. By symmetry, they have the same expectation, therefore
:<math>
(z+1)\mathbb{E}[v_1]=
\mathbb{E}[v_1]+\sum_{i=1}^{z-1}\mathbb{E}[v_{i+1}-v_i]+\mathbb{E}[1-v_z]
=\mathbb{E}\left[v_1+(v_2-v_1)+(v_3-v_2)+\cdots+(v_{z}-v_{z-1})+1-v_z\right]
=1,
</math>
which implies that
:<math>\mathbb{E}\left[\min_{1\le i\le n}h(x_i)\right]=\mathbb{E}[v_1]=\frac{1}{z+1}</math>.
}}
The quantity <math>\min_{1\le i\le n}h(x_i)</math> can be computed with small space cost (for storing the current smallest hash value) by scanning the input sequence in a single pass. Since, as we proved, its expectation is <math>\frac{1}{z+1}</math>, the smallest hash value <math>Y=\min_{1\le i\le n}h(x_i)</math> gives an unbiased estimator for <math>\frac{1}{z+1}</math>. However, <math>\frac{1}{Y}-1</math> is not necessarily a good estimator for <math>z</math>. Actually, it is a rather poor estimator. Consider for example the case <math>z=1</math>, where all input elements are the same. In this case there is only one hash value and <math>Y=\min_{1\le i\le n}h(x_i)</math> is distributed uniformly over <math>[0,1]</math>, thus <math>\frac{1}{Y}-1</math> fails to be close enough to the correct answer 1 with high probability.
==Flajolet-Martin algorithm==
The reason that the above single-hash-function estimator performs poorly is that the unbiased estimator <math>\min_{1\le i\le n}h(x_i)</math> has large variance. A natural way to reduce this variance is to use multiple independent hash functions and take the average. This is precisely what the [https://en.wikipedia.org/wiki/Flajolet–Martin_algorithm '''''Flajolet-Martin algorithm'''''] does.
Suppose that we have access to <math>k</math> independent random hash functions <math>h_1,h_2,\ldots,h_k</math>, where each <math>h_j:\Omega\to[0,1]</math> is uniformly and independently distributed over all functions mapping <math>\Omega</math> to <math>[0,1]</math>. Here <math>k</math> is a parameter to be fixed by the desired approximation error <math>\epsilon</math> and confidence error <math>\delta</math>. The ''Flajolet-Martin algorithm'' is given by the following pseudocode.
{{Theorem|''Flajolet-Martin algorithm'' (Flajolet and Martin 1984)|
:Suppose that <math>h_1,h_2,\ldots,h_k:\Omega\to[0,1]</math> are <math>k</math> uniform and independent random hash functions, where <math>k</math> is a parameter to be fixed later.
-----
:Scan the input sequence <math>x_1,x_2,\ldots,x_n\in\Omega</math> in a single pass to compute:
::* <math>Y_j=\min_{1\le i\le n}h_j(x_i)</math> for every <math>j=1,2,\ldots,k</math>;
::* the average value <math>\overline{Y}=\frac{1}{k}\sum_{j=1}^kY_j</math>;
:return <math>\widehat{Z}=\frac{1}{\overline{Y}}-1</math> as the estimator.
}}
The algorithm is easy to implement in the data stream model, with a space cost of storing <math>k</math> hash values. The following theorem guarantees that the algorithm returns an <math>(\epsilon,\delta)</math>-estimator of the total number of distinct elements for a suitable <math>k=O\left(\frac{1}{\epsilon^2\delta}\right)</math>.
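The following is a minimal Python sketch of the algorithm above, included only to make the pseudocode concrete. The idealized hash functions <math>h_j</math> are simulated here by a keyed cryptographic hash (an assumption standing in for UHA); the function names and parameter values are ours and not part of the original algorithm.
<pre>
import hashlib

def h(j, x):
    # Simulated idealized hash function h_j : Omega -> [0,1].
    # Any way of producing independent-looking uniform values would do here.
    digest = hashlib.blake2b(f"{j}|{x}".encode(), digest_size=8).digest()
    return int.from_bytes(digest, "big") / 2**64

def flajolet_martin(stream, k):
    # One-pass estimate of the number of distinct elements in `stream`.
    Y = [1.0] * k                      # Y[j] = current minimum of h_j over the stream
    for x in stream:
        for j in range(k):
            Y[j] = min(Y[j], h(j, x))
    avg = sum(Y) / k                   # \overline{Y}
    return 1.0 / avg - 1.0             # \widehat{Z} = 1/\overline{Y} - 1

# Example: a stream with z = 300 distinct elements.
stream = [i % 300 for i in range(6000)]
print(flajolet_martin(stream, k=400))  # typically within roughly 10% of 300
</pre>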
{{Theorem|Theorem|
:For any <math>\epsilon,\delta<1/2</math>, if <math>k\ge\left\lceil\frac{4}{\epsilon^2\delta}\right\rceil</math> then the output <math>\widehat{Z}</math> gives an <math>(\epsilon,\delta)</math>-estimator of the correct answer <math>z</math>.
}}
In the following we prove this main theorem for the Flajolet-Martin algorithm.
An obstacle to analyzing the estimator <math>\widehat{Z}=\frac{1}{\overline{Y}}-1</math> is that it is a nonlinear function of <math>\overline{Y}</math>, and it is <math>\overline{Y}</math> that is easy to analyze. Nevertheless, we observe that <math>\widehat{Z}</math> is an <math>(\epsilon,\delta)</math>-estimator of <math>z</math> as long as <math>\overline{Y}</math> is an <math>(\epsilon/2,\delta)</math>-estimator of <math>\frac{1}{z+1}</math>. This can be deduced by just verifying the following:
:<math>\frac{1-\epsilon/2}{z+1}\le \overline{Y}\le \frac{1+\epsilon/2}{z+1} \implies (1-\epsilon)z\le\frac{1}{\overline{Y}}-1\le (1+\epsilon)z</math>,
for <math>\epsilon<\frac{1}{2}</math>. Therefore,
:<math>\Pr\left[\,(1-\epsilon)z\le \widehat{Z} \le (1+\epsilon)z\,\right]\ge \Pr\left[\,\frac{1-\epsilon/2}{z+1}\le \overline{Y}\le \frac{1+\epsilon/2}{z+1}\,\right]
=\Pr\left[\,\left|\overline{Y}-\frac{1}{z+1}\right|\le \frac{\epsilon/2}{z+1}\,\right]</math>.
It is then sufficient to show that <math>\Pr\left[\,\left|\overline{Y}-\frac{1}{z+1}\right|\le \frac{\epsilon/2}{z+1}\,\right]\ge 1-\delta</math> in order to prove the main theorem above. We will see that this is equivalent to showing the concentration inequality
:<math>\Pr\left[\,\left|\overline{Y}-\mathbb{E}\left[\overline{Y}\right]\right|\le \frac{\epsilon/2}{z+1}\,\right]\ge 1-\delta\quad\qquad({\color{red}*})</math>.
{{Theorem|Lemma|
:The following holds for each <math>Y_j</math>, <math>j=1,2,\ldots,k</math>, and <math>\overline{Y}=\frac{1}{k}\sum_{j=1}^kY_j</math>:
:*<math>\mathbb{E}\left[\overline{Y}\right]=\mathbb{E}\left[Y_j\right]=\frac{1}{z+1}</math>;
:*<math>\mathbf{Var}\left[Y_j\right]\le\frac{1}{(z+1)^2}</math>, and consequently <math>\mathbf{Var}\left[\overline{Y}\right]\le\frac{1}{k(z+1)^2}</math>.
}}
{{Proof|
As in the case of a single hash function, by symmetry it holds that <math>\mathbb{E}[Y_j]=\frac{1}{z+1}</math> for every <math>j=1,2,\ldots,k</math>. Therefore,
:<math>\mathbb{E}\left[\overline{Y}\right]=\frac{1}{k}\sum_{j=1}^k\mathbb{E}[Y_j]=\frac{1}{z+1}</math>.
Recall that each <math>Y_j</math> is the minimum of <math>z</math> random hash values uniformly and independently distributed over <math>[0,1]</math>. By geometric probability, it holds for any <math>y\in[0,1]</math> that
:<math>\Pr[Y_j>y]=(1-y)^z</math>,
which means <math>\Pr[Y_j\le y]=1-(1-y)^z</math>. Taking the derivative with respect to <math>y</math>, we obtain the probability density function of the random variable <math>Y_j</math>, which is <math>z(1-y)^{z-1}</math>.
We then compute the second moment:
:<math>\mathbb{E}[Y_j^2]=\int^{1}_0y^2z(1-y)^{z-1}\,\mathrm{d}y=\frac{2}{(z+1)(z+2)}</math>.
The variance is bounded as
:<math>\mathbf{Var}\left[Y_j\right]=\mathbb{E}\left[Y_j^2\right]-\mathbb{E}\left[Y_j\right]^2=\frac{2}{(z+1)(z+2)}-\frac{1}{(z+1)^2}\le\frac{1}{(z+1)^2}</math>.
Due to the (pairwise) independence between the <math>Y_j</math>'s,
:<math>\mathbf{Var}\left[\overline{Y}\right]=\mathbf{Var}\left[\frac{1}{k}\sum_{j=1}^kY_j\right]=\frac{1}{k^2}\sum_{j=1}^k\mathbf{Var}\left[Y_j\right]\le \frac{1}{k(z+1)^2}</math>.
}}
We now prove the inequality <math>({\color{red}*})</math>. By [[高级算法_(Fall_2018)/Basic_tail_inequalities#Chebyshev.27s_inequality|Chebyshev's inequality]], it holds that
:<math>\Pr\left[\,\left|\overline{Y}-\mathbb{E}\left[\overline{Y}\right]\right|> \frac{\epsilon/2}{z+1}\,\right]
\le\frac{4}{\epsilon^2}(z+1)^2\mathbf{Var}\left[\overline{Y}\right]
\le\frac{4}{\epsilon^2k}</math>.
When <math>k\ge\left\lceil\frac{4}{\epsilon^2\delta}\right\rceil</math>, this probability is at most <math>\delta</math>, and the inequality <math>({\color{red}*})</math> is proved. As we discussed above, this proves the main theorem for the Flajolet-Martin algorithm.
==Uniform Hash Assumption (UHA)==
Above, we assumed access to idealized random hash functions <math>h:\Omega\to[0,1]</math> with real values. With a more careful calculation, one can show the same performance guarantee for hash functions with discrete values, i.e. <math>h:\Omega\to[M]</math> where <math>M=\mathrm{poly}(n)</math>, that is, the hash values are strings of <math>O(\log n)</math> bits.
Even with such improved analysis, a uniform random discrete function of the form <math>h:[N]\to[M]</math> is not really efficient to store or to compute. By an information-theoretical argument, it takes at least <math>\Omega(N\log M)</math> bits to represent such a random hash function, because this is the entropy of a uniform random function.
For the convenience of analysis, it is common to adopt the following '''Uniform Hash Assumption (UHA)''', also known as the '''Simple Uniform Hash Assumption (SUHA)'''.
{{Theorem|Uniform Hash Assumption (UHA)|
:A ''uniform'' random function <math>h:[N]\rightarrow[M]</math> is available and the computation of <math>h</math> is efficient.
}}
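In simulations, UHA is often realized by lazily sampling the table of the random function: each hash value is drawn uniformly the first time its key is queried and then memorized. The Python sketch below is our illustration only, not part of the assumption itself; note that the memo table costs memory proportional to the number of distinct keys queried, which is exactly why UHA is only an idealization.
<pre>
import random

class LazyUniformHash:
    """Simulates a uniform random function h : [N] -> [M] by sampling each
    value independently on first query and memoizing it for consistency."""
    def __init__(self, M):
        self.M = M
        self.table = {}
    def __call__(self, x):
        if x not in self.table:
            self.table[x] = random.randrange(self.M)
        return self.table[x]

h = LazyUniformHash(M=10)
print(h("alice"), h("bob"), h("alice"))  # the value of h("alice") stays consistent
</pre>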
= Set Membership=
A basic question in Computer Science is:
:"<math>\mbox{Is }x\in S?</math>"
for a set <math>S</math> and an element <math>x</math>. This is the '''set membership''' problem.
Formally, given an arbitrary set <math>S</math> of <math>n</math> elements from a universe <math>\Omega</math>, we want to use a succinct '''data structure''' to represent this set <math>S</math>, so that upon each '''query''' of any element <math>x</math> from the universe <math>\Omega</math>, the question of whether <math>x\in S</math> is efficiently answered. The complexity of such a data structure is measured in two respects:
* '''space cost''': the size of the data structure representing a set <math>S</math> of size <math>n</math>;
* '''time cost''': the time complexity of answering each query by accessing the data structure.
Suppose that the universe <math>\Omega</math> is of size <math>N</math>. Clearly, the membership problem can be solved by a '''dictionary data structure''', e.g.:
* '''sorted table / balanced search tree''': with space cost <math>O(n\log N)</math> bits and time cost <math>O(\log n)</math>;
* '''perfect hashing''' of ''Fredman, Komlós & Szemerédi'': with space cost <math>O(n\log N)</math> bits and time cost <math>O(1)</math>.
Note that <math>\log{N\choose n}=\Theta\left(n\log \frac{N}{n}\right)</math> is the entropy of a set <math>S</math> of <math>n</math> elements from a universe <math>\Omega</math> of size <math>N</math>, so this many bits are necessary to represent such a set without losing any information. Nevertheless, we can do better than this if we use a lossy representation of the input set <math>S</math> and tolerate a bounded error in answering queries. Such a lossy representation of data is sometimes called a '''''sketch'''''.
== Bloom filter ==
The Bloom filter is a space-efficient hash table that solves the '''approximate membership''' problem with one-sided error (''false positives'').
Given a set <math>S</math> of <math>n</math> elements from a universe <math>\Omega</math>, a Bloom filter consists of an array <math>A</math> of <math>cn</math> bits and <math>k</math> hash functions <math>h_1,h_2,\ldots,h_k</math> that map <math>\Omega</math> to <math>[cn]</math>, where both <math>c</math> and <math>k</math> are parameters to be optimized later.
As before, we adopt the '''Uniform Hash Assumption (UHA)''': <math>h_1,h_2,\ldots,h_k</math> are mutually independent hash functions, where each <math>h_i</math> is a uniform random hash function <math>h_i:\Omega\to[cn]</math>.
The Bloom filter works as follows:
{{Theorem|''Bloom filter'' (Bloom 1970)|
:Suppose <math>h_1,h_2,\ldots,h_k:\Omega\to[cn]</math> are uniform and independent random hash functions.
-----
:'''Data structure construction:''' Given a set <math>S\subset\Omega</math> of size <math>n=|S|</math>, the data structure is a Boolean array <math>A</math> of <math>cn</math> bits constructed as
:* initialize all <math>cn</math> bits of the Boolean array <math>A</math> to 0;
:* for each <math>x\in S</math>, let <math>A[h_i(x)]=1</math> for all <math>1\le i\le k</math>.
----
:'''Query resolution:''' Upon each query of an arbitrary <math>x\in\Omega</math>,
:* answer "yes" if <math>A[h_i(x)]=1</math> for all <math>1\le i\le k</math> and "no" otherwise.
}}
The Boolean array is our data structure, whose size is <math>cn</math> bits. Under the Uniform Hash Assumption (UHA), the time cost of the data structure for answering each query is <math>O(k)</math>.
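A minimal Python sketch of the data structure is given below, assuming that the hash functions are simulated by a keyed cryptographic hash (standing in for UHA); the class name and default parameters are ours. With <math>c=8</math> and <math>k=\lceil c\ln 2\rceil=6</math>, the false positive probability derived below is roughly <math>(0.6185)^8\approx 2\%</math>.
<pre>
import hashlib

class BloomFilter:
    def __init__(self, n, c=8, k=6):
        self.m = c * n                    # array size cn
        self.k = k                        # number of hash functions
        self.A = bytearray(self.m)        # one byte per bit, for simplicity
    def _h(self, i, x):
        # Simulated uniform hash function h_i : Omega -> [cn].
        d = hashlib.blake2b(f"{i}|{x}".encode(), digest_size=8).digest()
        return int.from_bytes(d, "big") % self.m
    def insert(self, x):
        for i in range(self.k):
            self.A[self._h(i, x)] = 1
    def query(self, x):
        return all(self.A[self._h(i, x)] for i in range(self.k))

bf = BloomFilter(n=1000)
for x in range(1000):
    bf.insert(x)
print(bf.query(42), bf.query(123456))   # True, and with high probability False
</pre>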
When the answer returned by the algorithm is "no", it holds that <math>A[h_i(x)]=0</math> for some <math>1\le i\le k</math>, in which case the query <math>x</math> cannot belong to the set <math>S</math>. Thus, the Bloom filter has no false negatives.
On the other hand, when the answer returned by the algorithm is "yes", <math>A[h_i(x)]=1</math> for all <math>1\le i\le k</math>. It is still possible for some <math>x\not\in S</math> that all bits <math>A[h_i(x)]</math> are set by elements in <math>S</math>. We want to bound the probability of such a false positive, that is, the following probability for an <math>x\not\in S</math>:
:<math>\Pr[\,\forall 1\le i\le k, A[h_i(x)]=1\,]</math>,
which by the independence between different hash functions and by symmetry is equal to:
:<math>\Pr[\, A[h_1(x)]=1\,]^k=(1-\Pr[\, A[h_1(x)]=0\,])^k</math>.
For an element <math>x\not\in S</math>, its hash value <math>h_1(x)</math> is independent of all hash values <math>h_i(y)</math> for all <math>1\le i\le k</math> and all <math>y\in S</math>. This is due to the Uniform Hash Assumption. The hash value <math>h_1(x)</math> of <math>x\not\in S</math> is then independent of the content of the array <math>A</math>. Therefore, the probability that the position <math>A[h_1(x)]</math> is missed by all <math>kn</math> updates to the Boolean array <math>A</math> caused by the <math>n</math> elements of <math>S</math> is:
:<math>
\Pr[\, A[h_1(x)]=0\,]=\left(1-\frac{1}{cn}\right)^{kn}\approx e^{-k/c}.
</math>
Putting everything together, for any <math>x\not\in S</math>, the false positive probability is bounded as:
:<math>
\begin{align}
\Pr[\,\text{wrongly answer ''yes''}\,]
&=\Pr[\,\forall 1\le i\le k, A[h_i(x)]=1\,]\\
&=\Pr[\, A[h_1(x)]=1\,]^k=(1-\Pr[\, A[h_1(x)]=0\,])^k\\
&=\left(1-\left(1-\frac{1}{cn}\right)^{kn}\right)^k\\
&\approx \left(1- e^{-k/c}\right)^k,
\end{align}
</math>
which is approximately <math>(0.6185)^c</math> for the choice <math>k=c\ln 2</math> minimizing this bound.
The Bloom filter thus solves approximate membership queries with a small constant probability of false positives, using a data structure of <math>O(n)</math> bits and answering each query in <math>O(1)</math> time.
= Frequency Estimation=
Suppose that <math>\Omega</math> is the data universe. The '''frequency estimation''' problem is defined as follows.
*'''Data:''' a sequence of (not necessarily distinct) elements <math>x_1,x_2,\ldots,x_n\in\Omega</math>;
*'''Query:''' an element <math>x\in\Omega</math>;
*'''Output:''' an estimation <math>\hat{f}_x</math> of the frequency <math>f_x\triangleq|\{i\mid x_i=x\}|</math> of <math>x</math> in the input data.
We again want an algorithm in the data stream model: the algorithm scans the input sequence <math>x_1,x_2,\ldots,x_n</math> to construct a succinct data structure, such that upon each query of <math>x\in\Omega</math>, the algorithm returns an estimation of the frequency <math>f_x</math>.
Clearly this problem can always be solved by storing all distinct elements that have appeared along with their frequencies. However, the space cost of this straightforward solution is rather high. Instead, we want to use a lossy representation (a ''sketch'') of the input data which uses significantly less space but can still answer queries with tolerable accuracy.
Formally, upon each query of <math>x\in\Omega</math>, the algorithm should return an answer <math>\hat{f}_x</math> satisfying:
:<math>\Pr\left[\,\left|\hat{f}_x-f_x\right|\le \epsilon n\,\right]\ge 1-\delta</math>.
Note that this notion of approximation is with bounded ''additive'' error, which is weaker than the notion of <math>(\epsilon,\delta)</math>-estimator, whose error bound is ''multiplicative''.
With such a weak accuracy guarantee, it is possible to give a succinct data structure whose size is determined only by the error bounds <math>\epsilon</math> and <math>\delta</math> and is independent of <math>n</math>, because only the frequencies of the '''heavy hitters''' (elements <math>x</math> with high frequencies <math>f_x>\epsilon n</math>) need to be memorized, and there are at most <math>1/\epsilon</math> such heavy hitters.
== Count-min sketch==
The [https://en.wikipedia.org/wiki/Count–min_sketch count-min sketch] given by Cormode and Muthukrishnan is an elegant data structure for frequency estimation.
The data structure is a two-dimensional <math>k\times m</math> integer array, where <math>k</math> and <math>m</math> are two parameters to be determined by the error bounds <math>\epsilon</math> and <math>\delta</math>. We still adopt the Uniform Hash Assumption to assume that we have access to <math>k</math> mutually independent uniform random hash functions <math>h_1,h_2,\ldots,h_k:\Omega\to[m]</math>.
{{Theorem|''Count-min sketch'' (Cormode and Muthukrishnan 2003)|
:Suppose <math>h_1,h_2,\ldots,h_k:\Omega\to[m]</math> are uniform and independent random hash functions.
-----
:'''Data structure construction:''' Given a sequence <math>x_1,x_2,\ldots,x_n\in\Omega</math>, the data structure is a two-dimensional <math>k\times m</math> integer array <math>CMS[k][m]</math> constructed as
:*initialize all entries of <math>CMS[k][m]</math> to 0;
:*for <math>i=1,2,\ldots,n</math>, upon receiving <math>x_i</math>:
::: for every <math>1\le j\le k</math>, evaluate <math>h_j(x_i)</math> and <math>CMS[j][h_j(x_i)]++</math>.
----
:'''Query resolution:''' Upon each query of an arbitrary <math>x\in\Omega</math>,
:* return <math>\hat{f}_x=\min_{1\le j\le k}CMS[j][h_j(x)]</math>.
}}
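The following Python sketch mirrors the pseudocode above, with the hash functions again simulated by a keyed cryptographic hash (an assumption in place of UHA); the parameter choices <math>m=\lceil \mathrm{e}/\epsilon\rceil</math> and <math>k=\lceil\ln(1/\delta)\rceil</math> anticipate the analysis below.
<pre>
import hashlib, math

class CountMinSketch:
    def __init__(self, eps, delta):
        self.m = math.ceil(math.e / eps)         # number of columns
        self.k = math.ceil(math.log(1 / delta))  # number of rows (hash functions)
        self.CMS = [[0] * self.m for _ in range(self.k)]
    def _h(self, j, x):
        # Simulated uniform hash function h_j : Omega -> [m].
        d = hashlib.blake2b(f"{j}|{x}".encode(), digest_size=8).digest()
        return int.from_bytes(d, "big") % self.m
    def add(self, x):                            # process one stream element
        for j in range(self.k):
            self.CMS[j][self._h(j, x)] += 1
    def estimate(self, x):                       # \hat{f}_x
        return min(self.CMS[j][self._h(j, x)] for j in range(self.k))

cms = CountMinSketch(eps=0.01, delta=0.01)
stream = [i % 100 for i in range(10000)] + ["heavy"] * 500
for x in stream:
    cms.add(x)
print(cms.estimate("heavy"))   # at least 500, and with high probability not much larger
</pre>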
It is easy to see that the space cost of the count-min sketch is <math>O(km)</math> memory words, or <math>O(km\log n)</math> bits. Each query is answered within time cost <math>O(k)</math>, assuming that each evaluation of a hash function takes constant time. We then analyze the error bounds.
First, it is easy to observe that for any query <math>x\in\Omega</math> and every <math>1\le j\le k</math>, the corresponding entry of the count-min sketch always satisfies
:<math>CMS[j][h_j(x)]\ge f_x</math>,
because the appearances of element <math>x</math> in the input sequence contribute at least <math>f_x</math> to the value of <math>CMS[j][h_j(x)]</math>.
Therefore, for any query <math>x\in\Omega</math> the answer always satisfies <math>\hat{f}_x=\min_{1\le j\le k}CMS[j][h_j(x)]\ge f_x</math>, which means
:<math>\Pr\left[\,\left|\hat{f}_x- f_x\right|\ge\epsilon n\,\right]=\Pr\left[\,\hat{f}_x- f_x\ge\epsilon n\,\right]=\prod_{j=1}^k\Pr[\,CMS[j][h_j(x)]-f_x\ge\epsilon n\,],\quad\qquad({\color{red}\diamondsuit})</math>
where the second equality is due to the mutual independence of the random hash functions <math>h_1,h_2,\ldots,h_k</math>.
It remains to upper bound the probability <math>\Pr[\,CMS[j][h_j(x)]-f_x\ge\epsilon n\,]</math>, which can be done by calculating the expectation of <math>CMS[j][h_j(x)]</math>.
{{Theorem|Proposition|
:For any <math>x\in\Omega</math> and every <math>1\le j\le k</math>, it holds that <math>\mathbb{E}\left[CMS[j][h_j(x)]\right]\le f_x+\frac{n}{m}</math>.
}}
{{Proof|
The value of <math>CMS[j][h_j(x)]</math> is constituted by the frequency <math>f_x</math> of <math>x</math> and the frequencies <math>f_y</math> of all other elements <math>y\neq x</math> among <math>x_1,x_2,\ldots,x_n</math>, thus
:<math>
\begin{align}
CMS[j][h_j(x)]
&=f_x+\sum_{\scriptstyle y\in\{x_1,\ldots,x_n\}\setminus\{x\}\atop\scriptstyle h_j(y)=h_j(x)} f_y\\
&=f_x+\sum_{y\in\{x_1,\ldots,x_n\}\setminus\{x\}} f_y \cdot I[h_j(y)=h_j(x)]
\end{align}
</math>
where <math>I[h_j(y)=h_j(x)]</math> denotes the Boolean random variable that indicates the occurrence of the event <math>h_j(y)=h_j(x)</math>.
By linearity of expectation,
:<math>\mathbb{E}[CMS[j][h_j(x)]]=f_x+\sum_{y\in\{x_1,x_2,\ldots,x_n\}\setminus\{x\}} f_y \cdot \Pr[h_j(y)=h_j(x)]</math>.
Due to the Uniform Hash Assumption (UHA), <math>h_j:\Omega\to[m]</math> is a uniform random function. For any <math>y\neq x</math>, the probability of a hash collision is
:<math>\Pr[h_j(y)=h_j(x)]=\frac{1}{m}</math>.
Therefore,
:<math>
\begin{align}
\mathbb{E}[CMS[j][h_j(x)]]
&=f_x+\frac{1}{m}\sum_{y\in\{x_1,\ldots,x_n\}\setminus\{x\}} f_y \\
&\le f_x+\frac{1}{m}\sum_{y\in\{x_1,\ldots,x_n\}} f_y\\
&=f_x+\frac{n}{m},
\end{align}
</math>
where the last equation is due to the obvious identity <math>\sum_{y\in\{x_1,\ldots,x_n\}}f_y=n</math>.
}}
The above proposition shows that for any <math>x\in\Omega</math> and every <math>1\le j\le k</math>,
:<math>\mathbb{E}\left[CMS[j][h_j(x)]-f_x\right]\le \frac{n}{m}</math>.
Recall that <math>CMS[j][h_j(x)]\ge f_x</math> always holds, thus <math>CMS[j][h_j(x)]-f_x</math> is a nonnegative random variable. By Markov's inequality, we have
:<math>\Pr[\,CMS[j][h_j(x)]-f_x\ge\epsilon n\,]\le \frac{1}{\epsilon m}</math>.
Combining this with equation <math>({\color{red}\diamondsuit})</math> above, we have
:<math>\Pr\left[\,\left|\hat{f}_x- f_x\right|\ge\epsilon n\,\right]=\prod_{j=1}^k\Pr[\,CMS[j][h_j(x)]-f_x\ge\epsilon n\,]\le \frac{1}{(\epsilon m)^k}</math>.
By setting <math>m=\left\lceil\frac{\mathrm{e}}{\epsilon}\right\rceil</math> and <math>k=\left\lceil\ln\frac{1}{\delta}\right\rceil</math>, the above error probability is bounded as <math>\frac{1}{(\epsilon m)^k}\le\delta</math>.
For any positive <math>\epsilon</math> and <math>\delta</math>, the count-min sketch therefore gives a data structure of size <math>O(km)=O\left(\frac{1}{\epsilon}\log\frac{1}{\delta}\right)</math> (in memory words) that answers each query <math>x\in\Omega</math> in time <math>O(k)=O\left(\log\frac{1}{\delta}\right)</math>, with the following accuracy guarantee:
:<math>\Pr\left[\,\left|\hat{f}_x- f_x\right|\le\epsilon n\,\right]\ge 1-\delta</math>.
=Balls into Bins=
Consider throwing <math>m</math> balls into <math>n</math> bins uniformly and independently at random. This is equivalent to a random mapping <math>f:[m]\to[n]</math>. Needless to say, random mapping is an important random model and may have many applications in Computer Science, e.g. hashing.
We are concerned with the following three questions regarding the balls into bins model:
* birthday problem: the probability that every bin contains at most one ball (the mapping is 1-1);
* coupon collector problem: the probability that every bin contains at least one ball (the mapping is onto);
* occupancy problem: the maximum load of the bins.
==Birthday Problem==
There are <math>m</math> students in the class. Assume that each student's birthday is uniformly and independently distributed over the 365 days of a year. We wonder what the probability is that no two students share a birthday.
Due to the pigeonhole principle, it is obvious that for <math>m>365</math>, there must be two students with the same birthday. Surprisingly, for any <math>m>57</math> this event occurs with more than 99% probability. This is called the '''birthday paradox'''. Despite the name, the birthday paradox is not a real paradox.
We can model this problem as a balls-into-bins problem: <math>m</math> different balls (students) are uniformly and independently thrown into 365 bins (days). More generally, let <math>n</math> be the number of bins. We ask for the probability of the following event <math>\mathcal{E}</math>:
:<math>\mathcal{E}</math>: there is no bin with more than one ball (i.e. no two students share a birthday).
We first analyze this by counting. There are in total <math>n^m</math> ways of assigning <math>m</math> balls to <math>n</math> bins. The number of assignments in which no two balls share a bin is <math>{n\choose m}m!</math>.
Thus the probability is given by:
:<math>\Pr[\mathcal{E}] = \frac{{n\choose m}m!}{n^m}.</math>
Recall that <math>{n\choose m}=\frac{n!}{(n-m)!m!}</math>. Then
:<math>\Pr[\mathcal{E}] = \frac{{n\choose m}m!}{n^m} = \frac{n!}{n^m(n-m)!} = \frac{n}{n}\cdot\frac{n-1}{n}\cdot\frac{n-2}{n}\cdots\frac{n-(m-1)}{n} = \prod_{k=1}^{m-1}\left(1-\frac{k}{n}\right).</math>
There is also a more "probabilistic" argument for the above equation. Consider again that <math>m</math> students are mapped to <math>n</math> possible birthdays uniformly at random.
The first student has a birthday for sure. The probability that the second student has a different birthday from the first student is <math>\left(1-\frac{1}{n}\right)</math>. Given that the first two students have different birthdays, the probability that the third student has a different birthday from the first two students is <math>\left(1-\frac{2}{n}\right)</math>. Continuing in this way, assuming that the first <math>k-1</math> students all have different birthdays, the probability that the <math>k</math>-th student has a different birthday from the first <math>k-1</math> is <math>\left(1-\frac{k-1}{n}\right)</math>. By the chain rule, the probability that all <math>m</math> students have different birthdays is:
:<math>\Pr[\mathcal{E}]=\left(1-\frac{1}{n}\right)\cdot \left(1-\frac{2}{n}\right)\cdots \left(1-\frac{m-1}{n}\right) = \prod_{k=1}^{m-1}\left(1-\frac{k}{n}\right),</math>
which is the same as what we got by the counting argument.
There are several ways of analyzing this formula. Here is a convenient one: due to Taylor's expansion, <math>e^{-k/n}\approx 1-k/n</math>. Then
:<math>
\begin{align}
\prod_{k=1}^{m-1}\left(1-\frac{k}{n}\right)
&\approx \prod_{k=1}^{m-1}e^{-\frac{k}{n}}\\
&= \exp\left(-\sum_{k=1}^{m-1}\frac{k}{n}\right)\\
&= e^{-m(m-1)/2n}\\
&\approx e^{-m^2/2n}.
\end{align}
</math>
The quality of this approximation is shown in the figure.
Therefore, for <math>m=\sqrt{2n\ln \frac{1}{\epsilon}}</math>, the probability that no two balls share a bin is <math>\Pr[\mathcal{E}]\approx\epsilon</math>.
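The quality of the approximation can also be checked numerically. The short computation below is our illustration; it compares the exact product with <math>e^{-m^2/2n}</math> for the classical case <math>n=365</math>.
<pre>
import math

def exact(m, n):
    # Exact probability that m balls all land in distinct bins.
    p = 1.0
    for k in range(1, m):
        p *= 1 - k / n
    return p

n = 365
for m in (23, 40, 57):
    print(m, exact(m, n), math.exp(-m * m / (2 * n)))
# e.g. m = 23 gives roughly 0.493 (exact) vs 0.484 (approximation)
</pre>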
==Coupon Collector==
Suppose that a chocolate company releases <math>n</math> different types of coupons. Each box of chocolates contains one coupon with a uniformly random type. Once you have collected all <math>n</math> types of coupons, you will get a prize. So how many boxes of chocolates do you expect to buy to win the prize?
The coupon collector problem can be described in the balls-into-bins model as follows. We keep throwing balls one-by-one into <math>n</math> bins (coupons), such that each ball is thrown into a bin uniformly and independently at random. Each ball corresponds to a box of chocolates, and each bin corresponds to a type of coupon. Thus, the number of boxes bought to collect all <math>n</math> coupons is just the number of balls thrown until none of the <math>n</math> bins is empty.
{{Theorem|Theorem|
:Let <math>X</math> be the number of balls thrown uniformly and independently to <math>n</math> bins until no bin is empty. Then <math>\mathbf{E}[X]=nH(n)</math>, where <math>H(n)</math> is the <math>n</math>th harmonic number.
}}
{{Proof|
Let <math>X_i</math> be the number of balls thrown while there are exactly <math>i-1</math> nonempty bins, so clearly <math>X=\sum_{i=1}^n X_i</math>. When there are exactly <math>i-1</math> nonempty bins, upon throwing a ball, the probability that the number of nonempty bins increases (i.e. the ball is thrown to an empty bin) is
:<math>p_i=1-\frac{i-1}{n}.</math>
<math>X_i</math> is the number of balls thrown to make the number of nonempty bins increase from <math>i-1</math> to <math>i</math>, i.e. the number of balls thrown until a ball lands in a currently empty bin. Thus, <math>X_i</math> follows the geometric distribution, such that
:<math>\Pr[X_i=k]=(1-p_i)^{k-1}p_i.</math>
For a geometric random variable, <math>\mathbf{E}[X_i]=\frac{1}{p_i}=\frac{n}{n-i+1}</math>.
Applying the linearity of expectations,
:<math>
\begin{align}
\mathbf{E}[X]
&= \mathbf{E}\left[\sum_{i=1}^nX_i\right]\\
&= \sum_{i=1}^n\mathbf{E}\left[X_i\right]\\
&= \sum_{i=1}^n\frac{n}{n-i+1}\\
&= n\sum_{i=1}^n\frac{1}{i}\\
&= nH(n),
\end{align}
</math>
where <math>H(n)</math> is the <math>n</math>th harmonic number, and <math>H(n)=\ln n+O(1)</math>. Thus, for the coupon collector problem, the expected number of boxes required to obtain all <math>n</math> types of coupons is <math>n\ln n+O(n)</math>.
}}
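A quick simulation (our illustration, not part of the proof) confirms the expectation <math>nH(n)</math>:
<pre>
import random

def coupon_collector(n):
    # Number of uniform random balls thrown until all n bins are nonempty.
    seen, throws = set(), 0
    while len(seen) < n:
        seen.add(random.randrange(n))
        throws += 1
    return throws

n, trials = 100, 1000
avg = sum(coupon_collector(n) for _ in range(trials)) / trials
H_n = sum(1 / i for i in range(1, n + 1))
print(avg, n * H_n)   # the empirical average should be close to n*H(n), about 518.7
</pre>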
Knowing only the expectation is not good enough. We would also like to know how fast the probability decreases as the random variable deviates from its mean value.
{{Theorem|Theorem|
:Let <math>X</math> be the number of balls thrown uniformly and independently to <math>n</math> bins until no bin is empty. Then <math>\Pr[X\ge n\ln n+cn]<e^{-c}</math> for any <math>c>0</math>.
}}
{{Proof|
For any particular bin <math>i</math>, the probability that bin <math>i</math> is empty after throwing <math>n\ln n+cn</math> balls is
:<math>\left(1-\frac{1}{n}\right)^{n\ln n+cn} < e^{-(\ln n+c)} =\frac{1}{ne^c}.</math>
By the union bound, the probability that there exists an empty bin after throwing <math>n\ln n+cn</math> balls is
:<math>\Pr[X\ge n\ln n+cn] < n\cdot \frac{1}{ne^c} =e^{-c}.</math>
}}
=Stable Marriage=
We now consider the famous '''stable marriage problem''' or '''stable matching problem''' (SMP). This problem captures two aspects: allocations (matchings) and stability, two central topics in economics.
An instance of stable marriage consists of:
* <math>n</math> men and <math>n</math> women;
* each person is associated with a strictly ordered preference list containing all the members of the opposite sex.
Formally, let <math>M</math> be the set of <math>n</math> men and <math>W</math> be the set of <math>n</math> women. Each man <math>m\in M</math> is associated with a permutation <math>p_m</math> of the elements in <math>W</math> and each woman <math>w\in W</math> is associated with a permutation <math>p_w</math> of the elements in <math>M</math>.
A matching is a one-to-one correspondence <math>\phi:M\rightarrow W</math>. We say a man <math>m</math> and a woman <math>w</math> are partners in <math>\phi</math> if <math>w=\phi(m)</math>.
{{Theorem|Definition (stable matching)|
:A pair <math>(m,w)</math> of a man and a woman is a '''blocking pair''' in a matching <math>\phi</math> if <math>m</math> and <math>w</math> are not partners in <math>\phi</math> but
:* <math>m</math> prefers <math>w</math> to <math>\phi(m)</math>, and
:* <math>w</math> prefers <math>m</math> to <math>\phi^{-1}(w)</math>.
:A matching <math>\phi</math> is '''stable''' if there is no blocking pair in it.
}}
It is unclear from the definition itself whether stable matchings always exist, and how to efficiently find a stable matching. Both questions are answered by the following proposal algorithm due to Gale and Shapley.
{{Theorem|The proposal algorithm (Gale-Shapley 1962)|
:Initially, every person is unmarried;
:in each step (called a ''proposal''):
:* an arbitrary unmarried man <math>m</math> proposes to the woman <math>w</math> who is ranked highest in his preference list <math>p_m</math> among all the women who have not yet rejected <math>m</math>;
:* if <math>w</math> is still single then <math>w</math> accepts the proposal and is married to <math>m</math>;
:* if <math>w</math> is married to another man <math>m'</math> who is ranked lower than <math>m</math> in her preference list <math>p_w</math> then <math>w</math> divorces <math>m'</math> (thus <math>m'</math> becomes single again and considers himself rejected by <math>w</math>) and is married to <math>m</math>;
:* otherwise <math>w</math> rejects <math>m</math>.
}}
The algorithm terminates when the last single woman receives a proposal. Since every man <math>m</math> proposes to every woman <math>w</math> at most once, the algorithm terminates after at most <math>n^2</math> proposals in the worst case.
It is easy to see that the algorithm returns a matching, and this matching must be stable. To see this, suppose for contradiction that the algorithm returns a matching <math>\phi</math> in which two men <math>A, B</math> are matched to two women <math>a,b</math> respectively, but <math>A</math> and <math>b</math> prefer each other to their partners <math>a</math> and <math>B</math> respectively. By the definition of the algorithm, <math>A</math> would have proposed to <math>b</math> before proposing to <math>a</math>, by which time <math>b</math> must either have been single or have been matched to a man ranked lower than <math>A</math> in her list (because her final partner <math>B</math> is ranked lower than <math>A</math>), which means <math>b</math> must have accepted <math>A</math>'s proposal, a contradiction.
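The proposal algorithm is straightforward to implement. The Python sketch below is our illustration (the input format, one preference list per person, is an assumption); it follows the propose/accept/divorce/reject rules above.
<pre>
def gale_shapley(men_pref, women_pref):
    """Proposal algorithm. men_pref[m] is m's preference list of women
    (best first); women_pref[w] is w's preference list of men.
    Returns a stable matching as a dict from men to women."""
    # rank[w][m] = position of m in w's list; lower is better.
    rank = {w: {m: r for r, m in enumerate(pref)} for w, pref in women_pref.items()}
    next_choice = {m: 0 for m in men_pref}    # index of the next woman to propose to
    partner = {}                              # current partner of each woman
    free_men = list(men_pref)
    while free_men:
        m = free_men.pop()
        w = men_pref[m][next_choice[m]]       # best woman who has not rejected m yet
        next_choice[m] += 1
        if w not in partner:                  # w is single: she accepts
            partner[w] = m
        elif rank[w][m] < rank[w][partner[w]]:  # w prefers m: divorce and remarry
            free_men.append(partner[w])
            partner[w] = m
        else:                                 # otherwise w rejects m
            free_men.append(m)
    return {m: w for w, m in partner.items()}

men = {"A": ["a", "b"], "B": ["a", "b"]}
women = {"a": ["B", "A"], "b": ["A", "B"]}
print(gale_shapley(men, women))   # {'B': 'a', 'A': 'b'} is a stable matching
</pre>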
We are interested in the average-case performance of this algorithm, that is, the expected number of proposals when everyone's preference list is a uniformly and independently random permutation.
The following '''principle of deferred decisions''' is quite useful in analyzing the performance of algorithms with random input.
{{Theorem|Principle of deferred decisions|
:The decision of a random choice in the random input can be deferred to the running time of the algorithm.
}}
Applying the principle of deferred decisions, the deterministic proposal algorithm with random permutations as input is equivalent to the following random process:
:At each step, a man <math>m</math> chooses a woman <math>w</math> uniformly and independently at random to propose to, among all the women who have not rejected him yet. (sampling without replacement)
We then compare the above process with the following modified process:
:The man <math>m</math> repeatedly samples a uniform and independent woman among all women, until he samples a woman who has not rejected him, and proposes to her. (sampling with replacement)
It is easy to see that the modified process (sampling with replacement) is no more efficient than the original process (sampling without replacement), because it simulates the original process if at each step we only count the last proposal, the one to a woman who has not rejected the man. Such a comparison of two random processes, by forcing them to be related in some way, is called '''coupling'''.
Note that in the modified process (sampling with replacement), each proposal, no matter from which man, goes to a uniformly and independently random woman. And we know that the algorithm terminates once the last single woman receives a proposal, i.e. once all <math>n</math> women have received at least one proposal. This is exactly the coupon collector problem, with proposals as balls (chocolate boxes) and women as bins (coupons). By our analysis of the coupon collector problem, the expected number of proposals is bounded by <math>O(n\ln n)</math>.
=Occupancy Problem=
Now we ask about the loads of the bins. Assuming that <math>m</math> balls are uniformly and independently assigned to <math>n</math> bins, for <math>1\le i\le n</math>, let <math>X_i</math> be the load of the <math>i</math>th bin, i.e. the number of balls in the <math>i</math>th bin.
An easy analysis shows that for every bin <math>i</math>, the expected load <math>\mathbf{E}[X_i]</math> is equal to the average load <math>m/n</math>.
Because there are in total <math>m</math> balls, it is always true that <math>\sum_{i=1}^n X_i=m</math>.
Therefore, due to the linearity of expectations,
:<math>\sum_{i=1}^n\mathbf{E}[X_i] = \mathbf{E}\left[\sum_{i=1}^n X_i\right] = \mathbf{E}\left[m\right] =m.</math>
Because for each ball, the bin to which the ball is assigned is uniformly and independently chosen, the distributions of the loads of the bins are identical. Thus <math>\mathbf{E}[X_i]</math> is the same for every <math>i</math>. Combining this with the above equation, it holds that for every <math>1\le i\le n</math>, <math>\mathbf{E}[X_i]=\frac{m}{n}</math>. So the average is indeed the average!
Next we analyze the distribution of the maximum load. We show that when <math>m=n</math>, i.e. <math>n</math> balls are uniformly and independently thrown into <math>n</math> bins, the maximum load is <math>O\left(\frac{\log n}{\log\log n}\right)</math> with high probability.
{{Theorem|Theorem|
:Suppose that <math>n</math> balls are thrown independently and uniformly at random into <math>n</math> bins. For <math>1\le i\le n</math>, let <math>X_i</math> be the random variable denoting the number of balls in the <math>i</math>th bin. Then
::<math>\Pr\left[\max_{1\le i\le n}X_i \ge\frac{3\ln n}{\ln\ln n}\right] < \frac{1}{n}.</math>
}}
{{Proof|
Let <math>M</math> be an integer. Take bin 1. For any particular <math>M</math> balls, these <math>M</math> balls are all thrown to bin 1 with probability <math>(1/n)^M</math>, and there are in total <math>{n\choose M}</math> distinct sets of <math>M</math> balls. Therefore, applying the union bound,
:<math>
\begin{align}
\Pr\left[X_1\ge M\right]
&\le {n\choose M}\left(\frac{1}{n}\right)^M\\
&= \frac{n!}{M!(n-M)!n^M}\\
&= \frac{1}{M!}\cdot\frac{n(n-1)(n-2)\cdots(n-M+1)}{n^M}\\
&= \frac{1}{M!}\cdot \prod_{i=0}^{M-1}\left(1-\frac{i}{n}\right)\\
&\le \frac{1}{M!}.
\end{align}
</math>
According to Stirling's approximation, <math>M!\approx \sqrt{2\pi M}\left(\frac{M}{e}\right)^M</math>, thus
:<math>\frac{1}{M!}\le\left(\frac{e}{M}\right)^M.</math>
Due to symmetry, all <math>X_i</math> have the same distribution. Applying the union bound again,
:<math>
\begin{align}
\Pr\left[\max_{1\le i\le n}X_i\ge M\right]
&= \Pr\left[(X_1\ge M) \vee (X_2\ge M) \vee\cdots\vee (X_n\ge M)\right]\\
&\le n\Pr[X_1\ge M]\\
&\le n\left(\frac{e}{M}\right)^M.
\end{align}
</math>
When <math>M=3\ln n/\ln\ln n</math>,
:<math>
\begin{align}
\left(\frac{e}{M}\right)^M
&= \left(\frac{e\ln\ln n}{3\ln n}\right)^{3\ln n/\ln\ln n}\\
&< \left(\frac{\ln\ln n}{\ln n}\right)^{3\ln n/\ln\ln n}\\
&= e^{3(\ln\ln\ln n-\ln\ln n)\ln n/\ln\ln n}\\
&= e^{-3\ln n+3\ln\ln\ln n\ln n/\ln\ln n}\\
&\le e^{-2\ln n}\\
&= \frac{1}{n^2}.
\end{align}
</math>
Therefore,
:<math>
\Pr\left[\max_{1\le i\le n}X_i\ge \frac{3\ln n}{\ln\ln n}\right]
\le n\left(\frac{e}{M}\right)^M
< \frac{1}{n}.
</math>
}}
When <math>m>n</math>, Figure 1 illustrates the results of several random experiments, which show that the distribution of the loads of the bins becomes more even as the number of balls grows larger than the number of bins.
Formally, it can be proved that for <math>m=\Omega(n\log n)</math>, with high probability, the maximum load is within <math>O\left(\frac{m}{n}\right)</math>, which is asymptotically equal to the average load.
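A small simulation (our illustration) shows both regimes: for <math>m=n</math> the maximum load is around <math>\ln n/\ln\ln n</math>, well below the bound <math>3\ln n/\ln\ln n</math>, while for <math>m\gg n</math> it stays close to the average load <math>m/n</math>.
<pre>
import math, random
from collections import Counter

def max_load(m, n):
    # Throw m balls into n bins uniformly at random; return the maximum load.
    loads = Counter(random.randrange(n) for _ in range(m))
    return max(loads.values())

n = 10000
print(max_load(n, n), 3 * math.log(n) / math.log(math.log(n)))
# typical max load for m = n is around 5-7, well below the bound of about 12.4
print(max_load(100 * n, n))
# for m = 100n the maximum load stays close to the average load of 100
</pre>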
=Universal Hashing=
Hashing is one of the oldest tools in Computer Science. Knuth's memorandum in 1963 on the analysis of hash tables is now considered to be the birth of the area of analysis of algorithms.
:Knuth. Notes on "open" addressing, July 22 1963. Unpublished memorandum.
The idea of hashing is simple: an unknown set <math>S</math> of <math>n</math> data items (or keys) is drawn from a large universe <math>U=[N]</math> where <math>N\gg n</math>; in order to store <math>S</math> in a table of <math>M</math> entries (slots), we assume a consistent mapping (called a '''hash function''') from the universe <math>U</math> to a small range <math>[M]</math>.
This idea seems clever: we use a consistent mapping to deal with an arbitrary unknown data set. However, there is a fundamental flaw in hashing:
:For a sufficiently large universe (<math>N> M(n-1)</math>), for any fixed function, there exists a bad data set <math>S</math> such that all items in <math>S</math> are mapped to the same entry in the table.
A simple use of the pigeonhole principle proves the above statement.
To overcome this situation, randomization is introduced into hashing. We assume that the hash function is a random mapping from <math>[N]</math> to <math>[M]</math>. In order to ease the analysis, the following ideal assumption is used:
'''Simple Uniform Hash Assumption''' (SUHA or UHA, a.k.a. the random oracle model):
:A ''uniform'' random function <math>h:[N]\rightarrow[M]</math> is available and the computation of <math>h</math> is efficient.
==Families of universal hash functions==
The assumption of a completely random function simplifies the analysis. However, in practice, a truly uniform random hash function is extremely expensive to compute and store. Thus, this simple assumption can hardly represent the reality.
There are two approaches for implementing practical hash functions. One is to use ad hoc implementations and hope that they work. The other approach is to construct classes of hash functions which are efficient to compute and store but have weaker randomness guarantees, and then analyze the applications of hash functions based on this weaker assumption of randomness.
This route was taken by Carter and Wegman in 1977 when they introduced '''universal families of hash functions'''.
{{Theorem|Definition (universal hash families)|
:Let <math>[N]</math> be a universe with <math>N\ge M</math>. A family of hash functions <math>\mathcal{H}</math> from <math>[N]</math> to <math>[M]</math> is said to be '''<math>k</math>-universal''' if, for any distinct items <math>x_1,x_2,\ldots,x_k\in [N]</math> and for a hash function <math>h</math> chosen uniformly at random from <math>\mathcal{H}</math>, we have
::<math>\Pr[h(x_1)=h(x_2)=\cdots=h(x_k)]\le\frac{1}{M^{k-1}}.</math>
:A family of hash functions <math>\mathcal{H}</math> from <math>[N]</math> to <math>[M]</math> is said to be '''strongly <math>k</math>-universal''' if, for any distinct items <math>x_1,x_2,\ldots,x_k\in [N]</math>, any values <math>y_1,y_2,\ldots,y_k\in[M]</math>, and for a hash function <math>h</math> chosen uniformly at random from <math>\mathcal{H}</math>, we have
::<math>\Pr[h(x_1)=y_1\wedge h(x_2)=y_2 \wedge \cdots \wedge h(x_k)=y_k]=\frac{1}{M^{k}}.</math>
}}
In particular, for a 2-universal family <math>\mathcal{H}</math>, for any distinct elements <math>x_1,x_2\in[N]</math>, a uniform random <math>h\in\mathcal{H}</math> has
:<math>\Pr[h(x_1)=h(x_2)]\le\frac{1}{M}.</math>
For a strongly 2-universal family <math>\mathcal{H}</math>, for any distinct elements <math>x_1,x_2\in[N]</math> and any values <math>y_1,y_2\in[M]</math>, a uniform random <math>h\in\mathcal{H}</math> has
:<math>\Pr[h(x_1)=y_1\wedge h(x_2)=y_2]=\frac{1}{M^2}.</math>
This behavior is exactly the same as that of uniform random hash functions on any pair of distinct inputs. For this reason, a strongly 2-universal hash family is also called a family of '''pairwise independent''' hash functions.
==2-universal hash families==
The construction of pairwise independent random variables via modulo a prime introduced in Section 1 already provides a way of constructing a strongly 2-universal hash family.
Let <math>p</math> be a prime. The function <math>h_{a,b}:[p]\rightarrow [p]</math> is defined by
:<math>h_{a,b}(x)=(ax+b)\bmod p,</math>
and the family is
:<math>\mathcal{H}=\{h_{a,b}\mid a,b\in[p]\}.</math>
{{Theorem|Lemma|
:<math>\mathcal{H}</math> is strongly 2-universal.
}}
{{Proof|
In Section 1, we have proved the pairwise independence of the sequence of <math>(a i+b)\bmod p</math>, for <math>i=0,1,\ldots, p-1</math>, which directly implies that <math>\mathcal{H}</math> is strongly 2-universal.
}}
- The original construction of Carter-Wegman
What if we want to have hash functions from [math]\displaystyle{ [N] }[/math] to [math]\displaystyle{ [M] }[/math] for non-prime [math]\displaystyle{ N }[/math] and [math]\displaystyle{ M }[/math]? Carter and Wegman developed the following method.
Suppose that the universe is [math]\displaystyle{ [N] }[/math], and the functions map [math]\displaystyle{ [N] }[/math] to [math]\displaystyle{ [M] }[/math], where [math]\displaystyle{ N\ge M }[/math]. For some prime [math]\displaystyle{ p\ge N }[/math], let
- [math]\displaystyle{ h_{a,b}(x)=((ax+b)\bmod p)\bmod M, }[/math]
and the family
- [math]\displaystyle{ \mathcal{H}=\{h_{a,b}\mid 1\le a\le p-1, b\in[p]\}. }[/math]
Note that unlike the first construction, now [math]\displaystyle{ a\neq 0 }[/math].
Lemma (Carter-Wegman) - [math]\displaystyle{ \mathcal{H} }[/math] is 2-universal.
Proof. Due to the definition of [math]\displaystyle{ \mathcal{H} }[/math], there are [math]\displaystyle{ p(p-1) }[/math] many different hash functions in [math]\displaystyle{ \mathcal{H} }[/math], because each hash function in [math]\displaystyle{ \mathcal{H} }[/math] corresponds to a pair of [math]\displaystyle{ 1\le a\le p-1 }[/math] and [math]\displaystyle{ b\in[p] }[/math]. We only need to count for any particular pair of [math]\displaystyle{ x_1,x_2\in[N] }[/math] that [math]\displaystyle{ x_1\neq x_2 }[/math], the number of hash functions that [math]\displaystyle{ h(x_1)=h(x_2) }[/math]. We first note that for any [math]\displaystyle{ x_1\neq x_2 }[/math], [math]\displaystyle{ a x_1+b\not\equiv a x_2+b \pmod p }[/math]. This is because [math]\displaystyle{ a x_1+b\equiv a x_2+b \pmod p }[/math] would imply that [math]\displaystyle{ a(x_1-x_2)\equiv 0\pmod p }[/math], which can never happen since [math]\displaystyle{ 1\le a\le p-1 }[/math] and [math]\displaystyle{ x_1\neq x_2 }[/math] (note that [math]\displaystyle{ x_1,x_2\in[N] }[/math] for an [math]\displaystyle{ N\le p }[/math]). Therefore, we can assume that [math]\displaystyle{ (a x_1+b)\bmod p=u }[/math] and [math]\displaystyle{ (a x_2+b)\bmod p=v }[/math] for [math]\displaystyle{ u\neq v }[/math].
By linear algebra (over finite field), for any [math]\displaystyle{ x_1,x_2\in[N] }[/math] that [math]\displaystyle{ x_1\neq x_2 }[/math], for any [math]\displaystyle{ u,v\in[p] }[/math] that [math]\displaystyle{ u\neq v }[/math], there is exact one solution to [math]\displaystyle{ (a,b) }[/math] satisfying:
- [math]\displaystyle{ \begin{cases} a x_1+b \equiv u \pmod p\\ a x_2+b \equiv v \pmod p. \end{cases} }[/math]
After modulo [math]\displaystyle{ M }[/math], every [math]\displaystyle{ u\in[p] }[/math] has at most [math]\displaystyle{ \lceil p/M\rceil -1 }[/math] many [math]\displaystyle{ v\in[p] }[/math] that [math]\displaystyle{ v\neq u }[/math] but [math]\displaystyle{ v\equiv u\pmod M }[/math]. Therefore, for every pair of [math]\displaystyle{ x_1,x_2\in[N] }[/math] that [math]\displaystyle{ x_1\neq x_2 }[/math], there exist at most [math]\displaystyle{ p(\lceil p/M\rceil -1)\le p(p-1)/M }[/math] pairs of [math]\displaystyle{ 1\le a\le p-1 }[/math] and [math]\displaystyle{ b\in[p] }[/math] such that [math]\displaystyle{ ((ax_1+b)\bmod p)\bmod M=((ax_2+b)\bmod p)\bmod M }[/math], which means there are at most [math]\displaystyle{ p(p-1)/M }[/math] many hash functions [math]\displaystyle{ h\in\mathcal{H} }[/math] having [math]\displaystyle{ h(x_1)=h(x_2) }[/math] for [math]\displaystyle{ x_1\neq x_2 }[/math]. For [math]\displaystyle{ h }[/math] uniformly chosen from [math]\displaystyle{ \mathcal{H} }[/math], for any [math]\displaystyle{ x_1\neq x_2 }[/math],
- [math]\displaystyle{ \Pr[h(x_1)=h(x_2)]\le \frac{p(p-1)/M}{p(p-1)}=\frac{1}{M}. }[/math]
This proves that [math]\displaystyle{ \mathcal{H} }[/math] is 2-universal.
- [math]\displaystyle{ \square }[/math]
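As a concrete illustration (not part of the original construction's description), here is a minimal C sketch of sampling and evaluating [math]\displaystyle{ h_{a,b} }[/math]. The choice of the prime [math]\displaystyle{ p=2^{31}-1 }[/math] and the use of rand() are our own simplifications: they restrict the universe to [math]\displaystyle{ N\le p }[/math] and give only roughly uniform sampling.

#include <stdlib.h>

static const unsigned long long P = 2147483647ULL;   /* the Mersenne prime 2^31 - 1 */

typedef struct { unsigned long long a, b; } cw_t;     /* represents h_{a,b} */

/* sample h_{a,b} with 1 <= a <= p-1 and b in [p] (rand() is only illustrative) */
cw_t cw_sample(void)
{
    cw_t h;
    h.a = 1 + ((unsigned long long)rand() % (P - 1));
    h.b = (unsigned long long)rand() % P;
    return h;
}

/* h_{a,b}(x) = ((a*x + b) mod p) mod M; since a, b, x < 2^31, a*x + b fits in 64 bits */
unsigned long long cw_eval(cw_t h, unsigned long long x, unsigned long long M)
{
    return ((h.a * x + h.b) % P) % M;
}

Storing such a hash function takes only the two integers a and b, which is exactly the property that perfect hashing below relies on.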
A construction used in practice
The main issue with the Carter-Wegman construction is efficiency: the mod operation is relatively slow, and has remained so for decades of hardware development.
The following construction is due to Dietzfelbinger et al. It was published in 1997 and has been practically used in various applications of universal hashing.
The family of hash functions is from [math]\displaystyle{ [2^u] }[/math] to [math]\displaystyle{ [2^v] }[/math]. With a binary representation, the functions map binary strings of length [math]\displaystyle{ u }[/math] to binary strings of length [math]\displaystyle{ v }[/math]. Let
- [math]\displaystyle{ h_{a}(x)=\left\lfloor\frac{a\cdot x\bmod 2^u}{2^{u-v}}\right\rfloor, }[/math]
and the family
- [math]\displaystyle{ \mathcal{H}=\{h_{a}\mid a\in[2^u]\mbox{ and }a\mbox{ is odd}\}. }[/math]
This family of hash functions does not exactly meet the requirement of 2-universal family. However, Dietzfelbinger et al proved that [math]\displaystyle{ \mathcal{H} }[/math] is close to a 2-universal family. Specifically, for any input values [math]\displaystyle{ x_1,x_2\in[2^u] }[/math], for a uniformly random [math]\displaystyle{ h\in\mathcal{H} }[/math],
- [math]\displaystyle{ \Pr[h(x_1)=h(x_2)]\le\frac{1}{2^{v-1}}. }[/math]
So [math]\displaystyle{ \mathcal{H} }[/math] is within an approximation ratio of 2 to being 2-universal. The proof uses the fact that odd numbers are relatively prime to any power of 2.
The function is extremely simple to compute in the C language. We exploit the fact that C multiplication (*) of unsigned u-bit numbers is done [math]\displaystyle{ \bmod 2^u }[/math], which gives a one-line C code for computing the hash function:
h_a(x) = (a*x)>>(u-v)
Bit-wise shifting is a lot faster than the modulo operation. This explains why this scheme is more popular in practice than the original Carter-Wegman construction.
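For concreteness, here is a hedged C sketch of the scheme for u = 64, so that the machine's native unsigned multiplication performs the reduction mod 2^u. The constant seed in the usage comment is only an example; the guarantee above requires a to be a uniformly random odd 64-bit integer and 1 <= v <= 63.

#include <stdint.h>

/* h_a(x) = floor((a*x mod 2^64) / 2^(64-v)); the multiplication wraps mod 2^64 */
static inline uint64_t multiply_shift(uint64_t a /* odd */, uint64_t x, unsigned v)
{
    return (a * x) >> (64 - v);
}

/* example usage: map 64-bit keys to v = 20 bits with a fixed odd seed        */
/*     uint64_t bucket = multiply_shift(0x9E3779B97F4A7C15ULL, key, 20);      */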
Collision number
Consider a 2-universal family [math]\displaystyle{ \mathcal{H} }[/math] of hash functions from [math]\displaystyle{ [N] }[/math] to [math]\displaystyle{ [M] }[/math]. Let [math]\displaystyle{ h }[/math] be a hash function chosen uniformly from [math]\displaystyle{ \mathcal{H} }[/math]. For a fixed set [math]\displaystyle{ S }[/math] of [math]\displaystyle{ n }[/math] distinct elements from [math]\displaystyle{ [N] }[/math], say [math]\displaystyle{ S=\{x_1,x_2,\ldots,x_n\} }[/math], the elements are mapped to the hash values [math]\displaystyle{ h(x_1), h(x_2), \ldots, h(x_n) }[/math]. This can be seen as throwing [math]\displaystyle{ n }[/math] balls to [math]\displaystyle{ M }[/math] bins, with pairwise independent choices of bins.
As in the balls-into-bins with full independence, we are curious about the questions such as the birthday problem or the maximum load. These questions are interesting not only because they are natural to ask in a balls-into-bins setting, but in the context of hashing, they are closely related to the performance of hash functions.
The old techniques for analyzing balls-into-bins rely too much on the independence of the choice of the bin for each ball, therefore can hardly be extended to the setting of 2-universal hash families. However, it turns out several balls-into-bins questions can somehow be answered by analyzing a very natural quantity: the number of collision pairs.
A collision pair for hashing is a pair of elements [math]\displaystyle{ x_1,x_2\in S }[/math] which are mapped to the same hash value, i.e. [math]\displaystyle{ h(x_1)=h(x_2) }[/math]. Formally, for a fixed set of elements [math]\displaystyle{ S=\{x_1,x_2,\ldots,x_n\} }[/math], for any [math]\displaystyle{ 1\le i,j\le n }[/math], let the random variable
- [math]\displaystyle{ X_{ij} = \begin{cases} 1 & \text{if }h(x_i)=h(x_j),\\ 0 & \text{otherwise.} \end{cases} }[/math]
The total number of collision pairs among the [math]\displaystyle{ n }[/math] items [math]\displaystyle{ x_1,x_2,\ldots,x_n }[/math] is
- [math]\displaystyle{ X=\sum_{i\lt j} X_{ij}.\, }[/math]
Since [math]\displaystyle{ \mathcal{H} }[/math] is 2-universal, for any [math]\displaystyle{ i\neq j }[/math],
- [math]\displaystyle{ \Pr[X_{ij}=1]=\Pr[h(x_i)=h(x_j)]\le\frac{1}{M}. }[/math]
The expected number of collision pairs is
- [math]\displaystyle{ \mathbf{E}[X]=\mathbf{E}\left[\sum_{i\lt j}X_{ij}\right]=\sum_{i\lt j}\mathbf{E}[X_{ij}]=\sum_{i\lt j}\Pr[X_{ij}=1]\le{n\choose 2}\frac{1}{M}\lt \frac{n^2}{2M}. }[/math]
In particular, for [math]\displaystyle{ n=M }[/math], i.e. [math]\displaystyle{ n }[/math] items are mapped to [math]\displaystyle{ n }[/math] hash values by a pairwise independent hash function, the expected collision number is [math]\displaystyle{ \mathbf{E}[X]\lt \frac{n^2}{2M}=\frac{n}{2} }[/math].
Birthday problem
In the context of hash functions, the birthday problem asks for the probability that there is no collision at all. Since collisions are something that we want to avoid in the applications of hash functions, we would like to lower bound the probability of zero collision, i.e. to upper bound the probability that there exists a collision pair.
The above analysis gives us an estimation of the expected number of collision pairs: [math]\displaystyle{ \mathbf{E}[X]\lt \frac{n^2}{2M} }[/math]. Applying Markov's inequality, for [math]\displaystyle{ 0\lt \epsilon\lt 1 }[/math], we have
- [math]\displaystyle{ \Pr\left[X\ge \frac{n^2}{2\epsilon M}\right]\le\Pr\left[X\ge \frac{1}{\epsilon}\mathbf{E}[X]\right]\le\epsilon. }[/math]
When [math]\displaystyle{ n\le\sqrt{2\epsilon M} }[/math], the number of collision pairs is [math]\displaystyle{ X\ge1 }[/math] with probability at most [math]\displaystyle{ \epsilon }[/math], therefore with probability at least [math]\displaystyle{ 1-\epsilon }[/math], there is no collision at all. Therefore, we have the following theorem.
Theorem - If [math]\displaystyle{ h }[/math] is chosen uniformly from a 2-universal family of hash functions mapping the universe [math]\displaystyle{ [N] }[/math] to [math]\displaystyle{ [M] }[/math] where [math]\displaystyle{ N\ge M }[/math], then for any set [math]\displaystyle{ S\subset [N] }[/math] of [math]\displaystyle{ n }[/math] items, where [math]\displaystyle{ n\le\sqrt{2\epsilon M} }[/math], the probability that there exists a collision pair is
- [math]\displaystyle{ \Pr[\mbox{collision occurs}]\le\epsilon. }[/math]
Recall that for mutually independent choices of bins, for some [math]\displaystyle{ n=\sqrt{2M\ln\frac{1}{1-\epsilon}} }[/math], the probability that a collision occurs is about [math]\displaystyle{ \epsilon }[/math]. For constant [math]\displaystyle{ \epsilon }[/math], this gives essentially the same bound as in the pairwise independent setting. Therefore, the behavior of pairwise independent hash functions is essentially the same as that of uniform random hash functions for the birthday problem. This is easy to understand, because the birthday problem is about the behavior of collisions, and the definition of 2-universal hash functions can be interpreted as "functions for which the probability of collision is as low as that of a uniform random function".
Perfect Hashing
Perfect hashing is a data structure for storing a static dictionary. In a static dictionary, a set [math]\displaystyle{ S }[/math] of [math]\displaystyle{ n }[/math] items from the universe [math]\displaystyle{ [N] }[/math] are preprocessed and stored in a table. Once the table is constructed, it will not be changed any more, but will only be used for search operations: a search for an item gives the location of the item in the table or reports that the item is not in the table. You may think of an application where we store an encyclopedia on a DVD, so that searches are very efficient but there will be no updates to the data.
This problem can be solved by binary search on a sorted table or balanced search trees in [math]\displaystyle{ O(\log n) }[/math] time for a set [math]\displaystyle{ S }[/math] of [math]\displaystyle{ n }[/math] elements. We show how to solve this problem with [math]\displaystyle{ O(1) }[/math] time by perfect hashing.
Perfect hashing using quadratic space
The idea of perfect hashing is that we use a hash function [math]\displaystyle{ h }[/math] to map the [math]\displaystyle{ n }[/math] items to distinct entries of the table; store every item [math]\displaystyle{ x\in S }[/math] in the entry [math]\displaystyle{ h(x) }[/math]; and also store the hash function [math]\displaystyle{ h }[/math] in a fixed location in the table (usually the beginning of the table). The algorithm for searching for an item is as follows:
- search for [math]\displaystyle{ x }[/math] in table [math]\displaystyle{ T }[/math]:
- retrieve [math]\displaystyle{ h }[/math] from a fixed location in the table;
- if [math]\displaystyle{ x=T[h(x)] }[/math] return [math]\displaystyle{ h(x) }[/math]; else return NOT_FOUND;
This scheme works as long as the hash function satisfies the following two conditions:
- The description of [math]\displaystyle{ h }[/math] is sufficiently short, so that [math]\displaystyle{ h }[/math] can be stored in one entry (or in constant many entries) of the table.
- [math]\displaystyle{ h }[/math] has no collisions on [math]\displaystyle{ S }[/math], i.e. there is no pair of items [math]\displaystyle{ x_1,x_2\in S }[/math] that are mapped to the same value by [math]\displaystyle{ h }[/math].
The first condition is easy to guarantee for 2-universal hash families. As shown by Carter-Wegman construction, a 2-universal hash function can be uniquely represented by two integers [math]\displaystyle{ a }[/math] and [math]\displaystyle{ b }[/math], which can be stored in two entries (or just one, if the word length is sufficiently large) of the table.
Our discussion is now focused on the second condition. We find that it relies on the perfectness of the hash function for a data set [math]\displaystyle{ S }[/math].
A hash function [math]\displaystyle{ h }[/math] is perfect for a set [math]\displaystyle{ S }[/math] of items if [math]\displaystyle{ h }[/math] maps all items in [math]\displaystyle{ S }[/math] to different values, i.e. there is no collision.
We have shown by the birthday problem for 2-universal hashing that when [math]\displaystyle{ n }[/math] items are mapped to [math]\displaystyle{ n^2 }[/math] values, for an [math]\displaystyle{ h }[/math] chosen uniformly from a 2-universal family of hash functions, the probability that a collision occurs is at most 1/2. Thus
- [math]\displaystyle{ \Pr[h\mbox{ is perfect for }S]\ge\frac{1}{2} }[/math]
for a table of [math]\displaystyle{ n^2 }[/math] entries.
The construction of perfect hashing is straightforward then:
- For a set [math]\displaystyle{ S }[/math] of [math]\displaystyle{ n }[/math] elements:
- uniformly choose an [math]\displaystyle{ h }[/math] from a 2-universal family [math]\displaystyle{ \mathcal{H} }[/math]; (for Carter-Wegman's construction, it means uniformly choosing two integers [math]\displaystyle{ 1\le a\le p-1 }[/math] and [math]\displaystyle{ b\in[p] }[/math] for a sufficiently large prime [math]\displaystyle{ p }[/math].)
- check whether [math]\displaystyle{ h }[/math] is perfect for [math]\displaystyle{ S }[/math];
- if [math]\displaystyle{ h }[/math] is NOT perfect for [math]\displaystyle{ S }[/math], start over again; otherwise, construct the table;
This is a Las Vegas randomized algorithm, which constructs a perfect hashing for a fixed set [math]\displaystyle{ S }[/math] within at most two trials in expectation (due to the geometric distribution). The resulting data structure is an [math]\displaystyle{ O(n^2) }[/math]-size static dictionary of [math]\displaystyle{ n }[/math] elements which answers every search in deterministic [math]\displaystyle{ O(1) }[/math] time.
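The construct-and-check loop can be sketched in a few lines of C. This sketch reuses the cw_sample/cw_eval helpers from the Carter-Wegman sketch above; the EMPTY sentinel and the memory layout are our own illustrative choices (in particular, we assume the sentinel value is never a valid key).

#include <stdlib.h>

#define EMPTY 0xFFFFFFFFFFFFFFFFULL   /* assumed never to be a key */

typedef struct {
    cw_t h;                  /* the stored hash function */
    unsigned long long m;    /* table size, m = n*n      */
    unsigned long long *T;   /* the table itself         */
} perfect_table_t;

perfect_table_t perfect_build(const unsigned long long *S, unsigned long long n)
{
    perfect_table_t pt;
    pt.m = n * n;
    pt.T = malloc(pt.m * sizeof *pt.T);
    for (;;) {                                     /* at most 2 iterations in expectation */
        pt.h = cw_sample();
        for (unsigned long long i = 0; i < pt.m; i++) pt.T[i] = EMPTY;
        int perfect = 1;
        for (unsigned long long i = 0; i < n; i++) {
            unsigned long long pos = cw_eval(pt.h, S[i], pt.m);
            if (pt.T[pos] != EMPTY) { perfect = 0; break; }    /* collision: h not perfect */
            pt.T[pos] = S[i];
        }
        if (perfect) return pt;                    /* h is perfect for S */
    }
}

/* O(1) search: x is in S iff it sits at its own hash position */
int perfect_search(const perfect_table_t *pt, unsigned long long x)
{
    return pt->T[cw_eval(pt->h, x, pt->m)] == x;
}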
FKS perfect hashing
In the last section we saw how to use [math]\displaystyle{ O(n^2) }[/math] space and constant time for answering search in a set. Now we see how to do it with linear space and constant time. This solves the static search problem asymptotically optimally in both time and space.
This was once seemingly impossible, until Yao's seminal paper:
- Yao. Should tables be sorted? Journal of the ACM (JACM), 1981.
Yao's paper shows the possibility of achieving linear space and constant time simultaneously by exploiting the power of hashing, but assumes an unrealistically large universe.
Inspired by Yao's work, Fredman, Komlós, and Szemerédi discovered the first linear-space and constant-time static dictionary in a realistic setting:
- Fredman, Komlós, and Szemerédi. Storing a sparse table with O(1) worst case access time. Journal of the ACM (JACM), 1984.
The idea of FKS hashing is to arrange hash table in two levels:
- In the first level, [math]\displaystyle{ n }[/math] items are hashed to [math]\displaystyle{ n }[/math] buckets by a 2-universal hash function [math]\displaystyle{ h }[/math].
- Let [math]\displaystyle{ B_i }[/math] be the set of items hashed to the [math]\displaystyle{ i }[/math]th bucket.
- In the second level, construct a [math]\displaystyle{ |B_i|^2 }[/math]-size perfect hashing for each bucket [math]\displaystyle{ B_i }[/math].
The data structure can be stored in a table. The first few entries are reserved to store the primary hash function [math]\displaystyle{ h }[/math]. To help the searching algorithm locate a bucket, we use the next [math]\displaystyle{ n }[/math] entries of the table as the "pointers" to the bucket: each entry stores the address of the first entry of the space to store a bucket. In the rest of table, the [math]\displaystyle{ n }[/math] buckets are stored in order, each using a [math]\displaystyle{ |B_i|^2 }[/math] space as required by perfect hashing.
It is easy to see that the search time is constant. To search for an item [math]\displaystyle{ x }[/math], the algorithm does the following:
- Retrieve [math]\displaystyle{ h }[/math].
- Retrieve the address for bucket [math]\displaystyle{ h(x) }[/math].
- Search by perfect hashing within bucket [math]\displaystyle{ h(x) }[/math].
Each line takes constant time. So the worst-case search time is constant.
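To make the three steps explicit, here is a hedged C sketch of the lookup. For readability it keeps the two-level table in structs instead of the flat array layout described above, and it reuses cw_t/cw_eval from the Carter-Wegman sketch; all names are our own.

typedef struct {
    cw_t h;                    /* secondary hash function of this bucket       */
    unsigned long long size;   /* number of slots, |B_i|^2 (0 if bucket empty)  */
    unsigned long long *slot;  /* the bucket's perfect-hash table               */
    unsigned char *occupied;   /* marks slots that actually hold an item        */
} fks_bucket_t;

typedef struct {
    cw_t h;                    /* primary hash function, maps items to [n]      */
    unsigned long long n;      /* number of buckets                             */
    fks_bucket_t *bucket;
} fks_t;

/* constant-time membership query: two hash evaluations and one comparison */
int fks_search(const fks_t *T, unsigned long long x)
{
    const fks_bucket_t *B = &T->bucket[cw_eval(T->h, x, T->n)];  /* locate bucket h(x) */
    if (B->size == 0) return 0;                                  /* empty bucket       */
    unsigned long long j = cw_eval(B->h, x, B->size);            /* slot within bucket */
    return B->occupied[j] && B->slot[j] == x;
}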
We then need to guarantee that the space is linear in [math]\displaystyle{ n }[/math]. At first glance, this seems impossible because each instance of perfect hashing for a bucket costs a square-size of space. We will prove that although the individual buckets use square-sized spaces, their sum is still linear.
For a fixed set [math]\displaystyle{ S }[/math] of [math]\displaystyle{ n }[/math] items, for a hash function [math]\displaystyle{ h }[/math] chosen uniformly from a 2-universal family which maps the items to [math]\displaystyle{ [n] }[/math], the [math]\displaystyle{ n }[/math] buckets, let [math]\displaystyle{ Y_i=|B_i| }[/math] be the number of items in [math]\displaystyle{ S }[/math] mapped to the [math]\displaystyle{ i }[/math]th bucket. We are going to bound the following quantity:
- [math]\displaystyle{ Y=\sum_{i=1}^n Y_i^2. }[/math]
Since each bucket [math]\displaystyle{ B_i }[/math] uses a space of [math]\displaystyle{ Y_i^2 }[/math] for perfect hashing, [math]\displaystyle{ Y }[/math] gives the total size of the space for storing the buckets.
We will show that [math]\displaystyle{ Y }[/math] is related to the total number of collision pairs. (Indeed, the number of collision pairs can be computed by a degree-2 polynomial, just like [math]\displaystyle{ Y }[/math].)
Note that a bucket of [math]\displaystyle{ Y_i }[/math] items contributes [math]\displaystyle{ {Y_i\choose 2} }[/math] collision pairs. Let [math]\displaystyle{ X }[/math] be the total number of collision pairs. [math]\displaystyle{ X }[/math] can be computed by summing over the collision pairs in every bucket:
- [math]\displaystyle{ X=\sum_{i=1}^n{Y_i\choose 2}=\sum_{i=1}^n\frac{Y_i(Y_i-1)}{2}=\frac{1}{2}\left(\sum_{i=1}^nY_i^2-\sum_{i=1}^nY_i\right)=\frac{1}{2}\left(\sum_{i=1}^nY_i^2-n\right). }[/math]
Therefore, the sum of squares of the sizes of buckets is related to collision number by:
- [math]\displaystyle{ \sum_{i=1}^nY_i^2=2X+n. }[/math]
By our analysis of the collision number, we know that for [math]\displaystyle{ n }[/math] items mapped to [math]\displaystyle{ n }[/math] buckets, the expected number of collision pairs is: [math]\displaystyle{ \mathbf{E}[X]\le \frac{n}{2} }[/math]. Thus,
- [math]\displaystyle{ \mathbf{E}\left[\sum_{i=1}^nY_i^2\right]=\mathbf{E}[2X+n]\le 2n. }[/math]
Due to Markov's inequality, [math]\displaystyle{ \sum_{i=1}^nY_i^2=O(n) }[/math] with a constant probability. For any set [math]\displaystyle{ S }[/math], we can find a suitable [math]\displaystyle{ h }[/math] after expected constant number of trials, and FKS can be constructed with guaranteed (instead of expected) linear-size which answers each search in constant time.
Distinct Elements
Consider the following problem of counting distinct elements: Suppose that [math]\displaystyle{ \Omega }[/math] is a sufficiently large universe.
- Input: a sequence of (not necessarily distinct) elements [math]\displaystyle{ x_1,x_2,\ldots,x_n\in\Omega }[/math];
- Output: an estimation of the total number of distinct elements [math]\displaystyle{ z=|\{x_1,x_2,\ldots,x_n\}| }[/math].
A straightforward way of solving this problem is to maintain a dictionary data structure, which costs at least linear ([math]\displaystyle{ O(n) }[/math]) space. For big data, where [math]\displaystyle{ n }[/math] is very large, this is still too expensive. However, due to an information-theoretical argument, linear space is necessary if you want to compute the exact value of [math]\displaystyle{ z }[/math].
Our goal is to relax the problem a little bit to significantly reduce the space cost by tolerating approximate answers. The form of approximation we consider is [math]\displaystyle{ (\epsilon,\delta) }[/math]-estimator.
[math]\displaystyle{ (\epsilon,\delta) }[/math]-estimator - A random variable [math]\displaystyle{ \widehat{Z} }[/math] is an [math]\displaystyle{ (\epsilon,\delta) }[/math]-estimator of a quantity [math]\displaystyle{ z }[/math] if
- [math]\displaystyle{ \Pr[\,(1-\epsilon)z\le \widehat{Z}\le (1+\epsilon)z\,]\ge 1-\delta }[/math].
- [math]\displaystyle{ \widehat{Z} }[/math] is said to be an unbiased estimator of [math]\displaystyle{ z }[/math] if [math]\displaystyle{ \mathbb{E}[\widehat{Z}]=z }[/math].
Usually [math]\displaystyle{ \epsilon }[/math] is called approximation error and [math]\displaystyle{ \delta }[/math] is called confidence error.
We now present an elegant algorithm introduced by Flajolet and Martin in 1984. The algorithm can be implemented in data stream model: The input elements [math]\displaystyle{ x_1,x_2,\ldots,x_n }[/math] are presented to the algorithm one at a time, where the size [math]\displaystyle{ n }[/math] of the data is unknown to the algorithm. The algorithm maintains a value [math]\displaystyle{ \widehat{Z} }[/math] which is an [math]\displaystyle{ (\epsilon,\delta) }[/math]-estimator of the total number of distinct elements [math]\displaystyle{ z=|\{x_1,x_2,\ldots,x_n\}| }[/math], using only a small amount of memory space to memorize (with loss) the data set [math]\displaystyle{ \{x_1,x_2,\ldots,x_n\} }[/math].
A famous quotation of Flajolet describes the performance of this algorithm as:
"Using only memory equivalent to 5 lines of printed text, you can estimate with a typical accuracy of 5% and in a single pass the total vocabulary of Shakespeare."
An estimator by hashing
Suppose that we have access to an idealized random hash function [math]\displaystyle{ h:\Omega\to[0,1] }[/math] which is uniformly distributed over all mappings from the universe [math]\displaystyle{ \Omega }[/math] to the unit interval [math]\displaystyle{ [0,1] }[/math].
Recall that the input sequence [math]\displaystyle{ x_1,x_2,\ldots,x_n\in\Omega }[/math] consists of [math]\displaystyle{ z=|\{x_1,x_2,\ldots,x_n\}| }[/math] distinct elements. These elements are mapped by the random function [math]\displaystyle{ h }[/math] to [math]\displaystyle{ z }[/math] hash values uniformly and independently distributed in [math]\displaystyle{ [0,1] }[/math]. We could maintain these hash values instead of the original elements, but this would still be too expensive because in the worst case we still have up to [math]\displaystyle{ n }[/math] distinct values to maintain. However, due to the idealized random hash function, the unit interval [math]\displaystyle{ [0,1] }[/math] will be partitioned into [math]\displaystyle{ z+1 }[/math] subintervals by these [math]\displaystyle{ z }[/math] uniform and independent hash values. The typical length of the subinterval gives an estimation of the number [math]\displaystyle{ z }[/math].
Proposition - [math]\displaystyle{ \mathbb{E}\left[\min_{1\le i\le n}h(x_i)\right]=\frac{1}{z+1} }[/math].
Proof. The [math]\displaystyle{ z }[/math] distinct elements in the input sequence [math]\displaystyle{ x_1,x_2,\ldots,x_n\in\Omega }[/math] are mapped to [math]\displaystyle{ z }[/math] random hash values uniformly and independently distributed in [math]\displaystyle{ [0,1] }[/math]. These [math]\displaystyle{ z }[/math] hash values partition the unit interval [math]\displaystyle{ [0,1] }[/math] into [math]\displaystyle{ z+1 }[/math] subintervals [math]\displaystyle{ [0,v_1],[v_1,v_2],[v_2,v_3],\ldots,[v_{z-1},v_z],[v_z,1] }[/math], where [math]\displaystyle{ v_i }[/math] denotes the [math]\displaystyle{ i }[/math]-th smallest value among all hash values [math]\displaystyle{ \{h(x_1),h(x_2),\ldots,h(x_n)\} }[/math]. Clearly we have
- [math]\displaystyle{ v_1=\min_{1\le i\le n}h(x_i) }[/math].
Meanwhile, since all hash values are uniformly and independently distributed in [math]\displaystyle{ [0,1] }[/math], the lengths of all subintervals [math]\displaystyle{ v_1, v_2-v_1, v_3-v_2,\ldots, v_z-v_{z-1}, 1-v_z }[/math] are identically distributed. By symmetry, they have the same expectation, therefore
- [math]\displaystyle{ (z+1)\mathbb{E}[v_1]= \mathbb{E}[v_1]+\sum_{i=1}^{z-1}\mathbb{E}[v_{i+1}-v_i]+\mathbb{E}[1-v_z] =\mathbb{E}\left[v_1+(v_2-v_1)+(v_3-v_2)+\cdots+(v_{z}-v_{z-1})+1-v_z\right] =1, }[/math]
which implies that
- [math]\displaystyle{ \mathbb{E}\left[\min_{1\le i\le n}h(x_i)\right]=\mathbb{E}[v_1]=\frac{1}{z+1} }[/math].
- [math]\displaystyle{ \square }[/math]
The quantity [math]\displaystyle{ \min_{1\le i\le n}h(x_i) }[/math] can be computed with small space cost (for storing the current smallest hash value) by scanning the input sequence in a single pass. As we proved, its expectation is [math]\displaystyle{ \frac{1}{z+1} }[/math], so the smallest hash value [math]\displaystyle{ Y=\min_{1\le i\le n}h(x_i) }[/math] gives an unbiased estimator for [math]\displaystyle{ \frac{1}{z+1} }[/math]. However, [math]\displaystyle{ \frac{1}{Y}-1 }[/math] is not necessarily a good estimator for [math]\displaystyle{ z }[/math]. Actually, it is a rather poor estimator. Consider for example the case [math]\displaystyle{ z=1 }[/math], where all input elements are the same. In this case, there is only one hash value and [math]\displaystyle{ Y=\min_{1\le i\le n}h(x_i) }[/math] is distributed uniformly over [math]\displaystyle{ [0,1] }[/math], thus [math]\displaystyle{ \frac{1}{Y}-1 }[/math] fails to be close enough to the correct answer 1 with high probability.
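To see concretely how poor it is (a quick calculation of our own): when [math]\displaystyle{ z=1 }[/math], the value [math]\displaystyle{ Y }[/math] is uniform over [math]\displaystyle{ [0,1] }[/math], so for any [math]\displaystyle{ t\ge 1 }[/math],
- [math]\displaystyle{ \Pr\left[\frac{1}{Y}-1\ge t\right]=\Pr\left[Y\le\frac{1}{t+1}\right]=\frac{1}{t+1}, }[/math]
i.e. the estimator overshoots the correct answer 1 by a factor of [math]\displaystyle{ t }[/math] with probability [math]\displaystyle{ \frac{1}{t+1} }[/math]; its expectation [math]\displaystyle{ \int_0^1\left(\frac{1}{y}-1\right)\,\mathrm{d}y }[/math] even diverges.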
Flajolet-Martin algorithm
The reason that the above estimator of a single hash function performs poorly is that the unbiased estimator [math]\displaystyle{ \min_{1\le i\le n}h(x_i) }[/math] has large variance. So a natural way to reduce this variance is to have multiple independent hash functions and take the average. This is precisely what Flajolet-Martin algorithm does.
Suppose that we have access to [math]\displaystyle{ k }[/math] independent random hash functions [math]\displaystyle{ h_1,h_2,\ldots,h_k }[/math], where each [math]\displaystyle{ h_j:\Omega\to[0,1] }[/math] is uniformly and independently distributed over all functions mapping [math]\displaystyle{ \Omega }[/math] to [math]\displaystyle{ [0,1] }[/math]. Here [math]\displaystyle{ k }[/math] is a parameter to be fixed by the desired approximation error [math]\displaystyle{ \epsilon }[/math] and confidence error [math]\displaystyle{ \delta }[/math]. The Flajolet-Martin algorithm is given by the following pseudocode.
Flajolet-Martin algorithm (Flajolet and Martin 1984) - Suppose that [math]\displaystyle{ h_1,h_2,\ldots,h_k:\Omega\to[0,1] }[/math] are [math]\displaystyle{ k }[/math] uniform and independent random hash functions, where [math]\displaystyle{ k }[/math] is a parameter to be fixed later.
- Scan the input sequence [math]\displaystyle{ x_1,x_2,\ldots,x_n\in\Omega }[/math] in a single pass to compute:
- [math]\displaystyle{ Y_j=\min_{1\le i\le n}h_j(x_i) }[/math] for every [math]\displaystyle{ j=1,2,\ldots,k }[/math];
- average value [math]\displaystyle{ \overline{Y}=\frac{1}{k}\sum_{j=1}^kY_j }[/math];
- return [math]\displaystyle{ \widehat{Z}=\frac{1}{\overline{Y}}-1 }[/math] as the estimator.
The algorithm is easy to implement in data stream model, with a space cost of storing [math]\displaystyle{ k }[/math] hash values. The following theorem guarantees that the algorithm returns an [math]\displaystyle{ (\epsilon,\delta) }[/math]-estimator of the total number of distinct elements for a suitable [math]\displaystyle{ k=O\left(\frac{1}{\epsilon^2\delta}\right) }[/math].
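A hedged C sketch of the single-pass implementation is given below. The idealized hash functions [math]\displaystyle{ h_j:\Omega\to[0,1] }[/math] are simulated by a seeded 64-bit mixer scaled to [0,1); this stand-in, the placeholder value of K, and all names are our own assumptions and are not part of the algorithm's analysis.

#include <stdint.h>
#include <stdlib.h>

#define K 64   /* placeholder; set k according to the theorem below */

/* stand-in for an idealized random hash h_j : Omega -> [0,1), keyed by a seed */
static double fm_hash(uint64_t seed, uint64_t x)
{
    uint64_t z = x + seed + 0x9E3779B97F4A7C15ULL;
    z = (z ^ (z >> 30)) * 0xBF58476D1CE4E5B9ULL;
    z = (z ^ (z >> 27)) * 0x94D049BB133111EBULL;
    z ^= z >> 31;
    return (double)z / 18446744073709551616.0;    /* divide by 2^64 */
}

typedef struct {
    uint64_t seed[K];
    double   Y[K];        /* Y_j = current minimum of h_j over the stream */
} fm_sketch_t;

void fm_init(fm_sketch_t *s)
{
    for (int j = 0; j < K; j++) {
        s->seed[j] = ((uint64_t)rand() << 32) ^ (uint64_t)rand();
        s->Y[j] = 1.0;
    }
}

void fm_process(fm_sketch_t *s, uint64_t x)        /* called once per stream element */
{
    for (int j = 0; j < K; j++) {
        double y = fm_hash(s->seed[j], x);
        if (y < s->Y[j]) s->Y[j] = y;
    }
}

double fm_estimate(const fm_sketch_t *s)           /* returns 1/average(Y_j) - 1 */
{
    double avg = 0.0;
    for (int j = 0; j < K; j++) avg += s->Y[j];
    avg /= K;
    return 1.0 / avg - 1.0;
}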
Theorem - For any [math]\displaystyle{ \epsilon,\delta\lt 1/2 }[/math], if [math]\displaystyle{ k\ge\left\lceil\frac{4}{\epsilon^2\delta}\right\rceil }[/math] then the output [math]\displaystyle{ \widehat{Z} }[/math] always gives an [math]\displaystyle{ (\epsilon,\delta) }[/math]-estimator of the correct answer [math]\displaystyle{ z }[/math].
In the following we prove this main theorem for Flajolet-Martin algorithm.
An obstacle to analyzing the estimator [math]\displaystyle{ \widehat{Z}=\frac{1}{\overline{Y}}-1 }[/math] is that it is a nonlinear function of [math]\displaystyle{ \overline{Y} }[/math]; it is [math]\displaystyle{ \overline{Y} }[/math] itself that is easier to analyze. Nevertheless, we observe that [math]\displaystyle{ \widehat{Z} }[/math] is an [math]\displaystyle{ (\epsilon,\delta) }[/math]-estimator of [math]\displaystyle{ z }[/math] as long as [math]\displaystyle{ \overline{Y} }[/math] is an [math]\displaystyle{ (\epsilon/2,\delta) }[/math]-estimator of [math]\displaystyle{ \frac{1}{z+1} }[/math]. This can be deduced by just verifying the following:
- [math]\displaystyle{ \frac{1-\epsilon/2}{z+1}\le \overline{Y}\le \frac{1+\epsilon/2}{z+1} \implies (1-\epsilon)z\le\frac{1}{\overline{Y}}-1\le (1+\epsilon)z }[/math],
for [math]\displaystyle{ \epsilon\lt \frac{1}{2} }[/math]. Therefore,
- [math]\displaystyle{ \Pr\left[\,(1-\epsilon)z\le \widehat{Z} \le (1+\epsilon)z\,\right]\ge \Pr\left[\,\frac{1-\epsilon/2}{z+1}\le \overline{Y}\le \frac{1+\epsilon/2}{z+1}\,\right] =\Pr\left[\,\left|\overline{Y}-\frac{1}{z+1}\right|\le \frac{\epsilon/2}{z+1}\,\right] }[/math].
It is then sufficient to show that [math]\displaystyle{ \Pr\left[\,\left|\overline{Y}-\frac{1}{z+1}\right|\le \frac{\epsilon/2}{z+1}\,\right]\ge 1-\delta }[/math] for proving the main theorem above. We will see that this is equivalent to show the concentration inequality
- [math]\displaystyle{ \Pr\left[\,\left|\overline{Y}-\mathbb{E}\left[\overline{Y}\right]\right|\le \frac{\epsilon/2}{z+1}\,\right]\ge 1-\delta\quad\qquad({\color{red}*}) }[/math].
Lemma - The following hold for each [math]\displaystyle{ Y_j }[/math], [math]\displaystyle{ j=1,2,\ldots,k }[/math], and [math]\displaystyle{ \overline{Y}=\frac{1}{k}\sum_{j=1}^kY_j }[/math]:
- [math]\displaystyle{ \mathbb{E}\left[\overline{Y}\right]=\mathbb{E}\left[Y_j\right]=\frac{1}{z+1} }[/math];
- [math]\displaystyle{ \mathbf{Var}\left[Y_j\right]\le\frac{1}{(z+1)^2} }[/math], and consequently [math]\displaystyle{ \mathbf{Var}\left[\overline{Y}\right]\le\frac{1}{k(z+1)^2} }[/math].
Proof. As in the case of single hash function, by symmetry it holds that [math]\displaystyle{ \mathbb{E}[Y_j]=\frac{1}{z+1} }[/math] for every [math]\displaystyle{ j=1,2,\ldots,k }[/math]. Therefore,
- [math]\displaystyle{ \mathbb{E}\left[\overline{Y}\right]=\frac{1}{k}\sum_{j=1}^k\mathbb{E}[Y_j]=\frac{1}{z+1} }[/math].
Recall that each [math]\displaystyle{ Y_j }[/math] is the minimum of [math]\displaystyle{ z }[/math] random hash values uniformly and independently distributed over [math]\displaystyle{ [0,1] }[/math]. By geometric probability, it holds that for any [math]\displaystyle{ y\in[0,1] }[/math],
- [math]\displaystyle{ \Pr[Y_j\gt y]=(1-y)^z }[/math],
which means [math]\displaystyle{ \Pr[Y_j\le y]=1-(1-y)^z }[/math]. Taking the derivative with respect to [math]\displaystyle{ y }[/math], we obtain the probability density function of random variable [math]\displaystyle{ Y_j }[/math], which is [math]\displaystyle{ z(1-y)^{z-1} }[/math].
We then compute the second moment.
- [math]\displaystyle{ \mathbb{E}[Y_j^2]=\int^{1}_0y^2z(1-y)^{z-1}\,\mathrm{d}y=\frac{2}{(z+1)(z+2)} }[/math].
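One quick way to verify this integral (our own check) is via the Beta function [math]\displaystyle{ \mathrm{B}(3,z)=\frac{\Gamma(3)\Gamma(z)}{\Gamma(z+3)} }[/math]:
- [math]\displaystyle{ \int^{1}_0y^2z(1-y)^{z-1}\,\mathrm{d}y=z\cdot\mathrm{B}(3,z)=z\cdot\frac{2\,(z-1)!}{(z+2)!}=\frac{2}{(z+1)(z+2)} }[/math].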
The variance is bounded as
- [math]\displaystyle{ \mathbf{Var}\left[Y_j\right]=\mathbb{E}\left[Y_j^2\right]-\mathbb{E}\left[Y_j\right]^2=\frac{2}{(z+1)(z+2)}-\frac{1}{(z+1)^2}\le\frac{1}{(z+1)^2} }[/math].
Due to the (pairwise) independence between [math]\displaystyle{ Y_j }[/math]'s,
- [math]\displaystyle{ \mathbf{Var}\left[\overline{Y}\right]=\mathbf{Var}\left[\frac{1}{k}\sum_{j=1}^kY_j\right]=\frac{1}{k^2}\sum_{j=1}^k\mathbf{Var}\left[Y_j\right]\le \frac{1}{k(z+1)^2} }[/math].
- [math]\displaystyle{ \square }[/math]
We now return to proving the inequality [math]\displaystyle{ ({\color{red}*}) }[/math]. By Chebyshev's inequality, it holds that
- [math]\displaystyle{ \Pr\left[\,\left|\overline{Y}-\mathbb{E}\left[\overline{Y}\right]\right|\gt \frac{\epsilon/2}{z+1}\,\right] \le\frac{4}{\epsilon^2}(z+1)^2\mathbf{Var}\left[\overline{Y}\right] \le\frac{4}{\epsilon^2k} }[/math].
When [math]\displaystyle{ k\ge\left\lceil\frac{4}{\epsilon^2\delta}\right\rceil }[/math], this probability is at most [math]\displaystyle{ \delta }[/math]. The inequality [math]\displaystyle{ ({\color{red}*}) }[/math] is proved. As we discussed above, this proves the above main theorem for Flajolet-Martin algorithm.
Uniform Hash Assumption (UHA)
Above, we assumed access to idealized random hash functions [math]\displaystyle{ h:\Omega\to[0,1] }[/math] with real values. With a more careful calculation, one can show the same performance guarantee for hash functions with discrete values as [math]\displaystyle{ h:\Omega\to[M] }[/math] where [math]\displaystyle{ M=\mathrm{poly}(n) }[/math], that is, the hash values are strings of [math]\displaystyle{ O(\log n) }[/math] bits.
Even with such improved analysis, a uniform random discrete function in the form of [math]\displaystyle{ h:[N]\to[M] }[/math] is not really efficient to store or to compute. By an information-theoretical argument, it takes at least [math]\displaystyle{ \Omega(N\log M) }[/math] bits to represent such a random hash function, because this is the entropy of such a uniform random function.
For the convenience of analysis, it is common to assume the following Uniform Hash Assumption (UHA) also known as Simple Uniform Hash Assumption (SUHA).
Uniform Hash Assumption (UHA) - A uniform random function [math]\displaystyle{ h:[N]\rightarrow[M] }[/math] is available and the computation of [math]\displaystyle{ h }[/math] is efficient.
Set Membership
A basic question in Computer Science is:
- "[math]\displaystyle{ \mbox{Is }x\in S? }[/math]"
for a set [math]\displaystyle{ S }[/math] and an element [math]\displaystyle{ x }[/math]. This is the set membership problem.
Formally, given an arbitrary set [math]\displaystyle{ S }[/math] of [math]\displaystyle{ n }[/math] elements from a universe [math]\displaystyle{ \Omega }[/math], we want to use a succinct data structure to represent this set [math]\displaystyle{ S }[/math], so that upon each query of any element [math]\displaystyle{ x }[/math] from the universe [math]\displaystyle{ \Omega }[/math], the question of whether [math]\displaystyle{ x\in S }[/math] is efficiently answered. The complexity of such a data structure is measured in two aspects:
- space cost: size of the data structure to represent a set [math]\displaystyle{ S }[/math] of size [math]\displaystyle{ n }[/math];
- time cost: time complexity of answering each query by accessing to the data structure.
Suppose that the universe [math]\displaystyle{ \Omega }[/math] is of size [math]\displaystyle{ N }[/math]. Clearly, the membership problem can be solved by a dictionary data structure, e.g.:
- sorted table / balanced search tree: with space cost [math]\displaystyle{ O(n\log N) }[/math] bits and time cost [math]\displaystyle{ O(\log n) }[/math];
- perfect hashing of Fredman, Komlós & Szemerédi: with space cost [math]\displaystyle{ O(n\log N) }[/math] bits and time cost [math]\displaystyle{ O(1) }[/math].
Note that [math]\displaystyle{ \log{N\choose n}=\Theta\left(n\log \frac{N}{n}\right) }[/math] is the entropy of sets [math]\displaystyle{ S }[/math] of [math]\displaystyle{ n }[/math] elements from a universe [math]\displaystyle{ \Omega }[/math] of size [math]\displaystyle{ N }[/math]. Therefore it is necessary to use this many bits to represent a set without losing any information. Nevertheless, we can do better than this if we use a lossy representation of the input set [math]\displaystyle{ S }[/math] and tolerate a bounded error in answering queries. Such a lossy representation of data is sometimes called a sketch.
Bloom filter
The Bloom filter is a space-efficient hash table that solves the approximate membership problem with one-sided error (false positive).
Given a set [math]\displaystyle{ S }[/math] of [math]\displaystyle{ n }[/math] elements from a universe [math]\displaystyle{ \Omega }[/math], a Bloom filter consists of an array [math]\displaystyle{ A }[/math] of [math]\displaystyle{ cn }[/math] bits and [math]\displaystyle{ k }[/math] hash functions [math]\displaystyle{ h_1,h_2,\ldots,h_k }[/math] that map [math]\displaystyle{ \Omega }[/math] to [math]\displaystyle{ [cn] }[/math], where both [math]\displaystyle{ c }[/math] and [math]\displaystyle{ k }[/math] are parameters that we can try to optimize later.
As before, we adopt the Uniform Hash Assumption (UHA): [math]\displaystyle{ h_1,h_2,\ldots,h_k }[/math] are mutually independent hash functions, where each [math]\displaystyle{ h_i }[/math] is a uniform random hash function [math]\displaystyle{ h_i:\Omega\to[cn] }[/math].
The Bloom filter works as follows:
Bloom filter (Bloom 1970) - Suppose [math]\displaystyle{ h_1,h_2,\ldots,h_k:\Omega\to[cn] }[/math] are uniform and independent random hash functions.
- Data structure construction: Given a set [math]\displaystyle{ S\subset\Omega }[/math] of size [math]\displaystyle{ n=|S| }[/math], the data structure is a Boolean array [math]\displaystyle{ A }[/math] of [math]\displaystyle{ cn }[/math] bits constructed as
- initialize all [math]\displaystyle{ cn }[/math] bits of the Boolean array [math]\displaystyle{ A }[/math] to 0;
- for each [math]\displaystyle{ x\in S }[/math], let [math]\displaystyle{ A[h_i(x)]=1 }[/math] for all [math]\displaystyle{ 1\le i\le k }[/math].
- Query resolution: Upon each query of an arbitrary [math]\displaystyle{ x\in\Omega }[/math],
- answer "yes" if [math]\displaystyle{ A[h_i(x)]=1 }[/math] for all [math]\displaystyle{ 1\le i\le k }[/math] and "no" if otherwise.
The Boolean array is our data structure, whose size is [math]\displaystyle{ cn }[/math] bits. With Uniform Hash Assumption (UHA), the time cost of the data structure for answering each query is [math]\displaystyle{ O(k) }[/math].
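A hedged C sketch of the data structure is given below. The k uniform hash functions are simulated by a seeded 64-bit mixer reduced modulo the array length, and the array spends one byte per bit for simplicity; these simplifications and all names are our own, not part of Bloom's description.

#include <stdint.h>
#include <stdlib.h>

typedef struct {
    uint64_t m;          /* number of bits, m = c*n                        */
    int      k;          /* number of hash functions                       */
    uint64_t seed[16];   /* one seed per hash function (k <= 16 assumed)   */
    uint8_t *A;          /* bit array, one byte per bit for simplicity     */
} bloom_t;

/* stand-in for a uniform random hash h_i : Omega -> [m], keyed by a seed */
static uint64_t bloom_hash(uint64_t seed, uint64_t x, uint64_t m)
{
    uint64_t z = x + seed + 0x9E3779B97F4A7C15ULL;
    z = (z ^ (z >> 30)) * 0xBF58476D1CE4E5B9ULL;
    z = (z ^ (z >> 27)) * 0x94D049BB133111EBULL;
    return (z ^ (z >> 31)) % m;
}

bloom_t bloom_create(uint64_t n, int c, int k)
{
    bloom_t b;
    b.m = (uint64_t)c * n;
    b.k = k;
    b.A = calloc(b.m, 1);                           /* all bits initialized to 0 */
    for (int i = 0; i < k; i++) b.seed[i] = ((uint64_t)rand() << 32) ^ (uint64_t)rand();
    return b;
}

void bloom_insert(bloom_t *b, uint64_t x)           /* set A[h_i(x)] = 1 for all i */
{
    for (int i = 0; i < b->k; i++) b->A[bloom_hash(b->seed[i], x, b->m)] = 1;
}

int bloom_query(const bloom_t *b, uint64_t x)       /* "yes" iff all k bits are set */
{
    for (int i = 0; i < b->k; i++)
        if (b->A[bloom_hash(b->seed[i], x, b->m)] == 0) return 0;
    return 1;
}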
When the answer returned by the algorithm is "no", it holds that [math]\displaystyle{ A[h_i(x)]=0 }[/math] for some [math]\displaystyle{ 1\le i\le k }[/math], in which case the query [math]\displaystyle{ x }[/math] must not belong to the set [math]\displaystyle{ S }[/math]. Thus, the Bloom filter has no false negatives.
On the other hand, when the answer returned by the algorithm is "yes", [math]\displaystyle{ A[h_i(x)]=1 }[/math] for all [math]\displaystyle{ 1\le i\le k }[/math]. It is still possible for some [math]\displaystyle{ x\not\in S }[/math] that all bits [math]\displaystyle{ A[h_i(x)] }[/math] are set by elements in [math]\displaystyle{ S }[/math]. We want to bound the probability of such a false positive, that is, the following probability for an [math]\displaystyle{ x\not\in S }[/math]:
- [math]\displaystyle{ \Pr[\,\forall 1\le i\le k, A[h_i(x)]=1\,] }[/math],
which by independence between different hash functions and by symmetry is equal to:
- [math]\displaystyle{ \Pr[\, A[h_1(x)]=1\,]^k=(1-\Pr[\, A[h_1(x)]=0\,])^k }[/math].
For an element [math]\displaystyle{ x\not\in S }[/math], its hash value [math]\displaystyle{ h_1(x) }[/math] is independent of all hash values [math]\displaystyle{ h_i(y) }[/math] for all [math]\displaystyle{ 1\le i\le k }[/math] and all [math]\displaystyle{ y\in S }[/math]. This is due to the Uniform Hash Assumption. The hash value [math]\displaystyle{ h_1(x) }[/math] of [math]\displaystyle{ x\not\in S }[/math] is then independent of the content of the array [math]\displaystyle{ A }[/math]. Therefore, the probability that this position [math]\displaystyle{ A[h_1(x)] }[/math] is missed by all [math]\displaystyle{ kn }[/math] updates to the Boolean array [math]\displaystyle{ A }[/math] caused by all [math]\displaystyle{ n }[/math] elements in [math]\displaystyle{ S }[/math] is:
- [math]\displaystyle{ \Pr[\, A[h_1(x)]=0\,]=\left(1-\frac{1}{cn}\right)^{kn}\approx e^{-k/c}. }[/math]
Putting everything together, for any [math]\displaystyle{ x\not\in S }[/math], the false positive is bounded as:
- [math]\displaystyle{ \begin{align} \Pr[\,\text{wrongly answer ''yes''}\,] &=\Pr[\,\forall 1\le i\le k, A[h_i(x)]=1\,]\\ &=\Pr[\, A[h_1(x)]=1\,]^k=(1-\Pr[\, A[h_1(x)]=0\,])^k\\ &=\left(1-\left(1-\frac{1}{cn}\right)^{kn}\right)^k\\ &\approx \left(1- e^{-k/c}\right)^k \end{align} }[/math]
which is [math]\displaystyle{ (0.6185)^c }[/math] when [math]\displaystyle{ k=c\ln 2 }[/math].
The Bloom filter thus solves the approximate membership problem with a small constant false positive probability, using a data structure of [math]\displaystyle{ O(n) }[/math] bits which answers each query with [math]\displaystyle{ O(1) }[/math] time cost.
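As a concrete instance (our own arithmetic): with [math]\displaystyle{ c=10 }[/math] bits per element and [math]\displaystyle{ k=7\approx 10\ln 2 }[/math] hash functions, the false positive probability is about [math]\displaystyle{ (0.6185)^{10}\approx 0.8\% }[/math], regardless of the size [math]\displaystyle{ n }[/math] of the stored set.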
Frequency Estimation
Suppose that [math]\displaystyle{ \Omega }[/math] is the data universe. The frequency estimation problem is defined as follows.
- Data: a sequence of (not necessarily distinct) elements [math]\displaystyle{ x_1,x_2,\ldots,x_n\in\Omega }[/math];
- Query: an element [math]\displaystyle{ x\in\Omega }[/math];
- Output: an estimation [math]\displaystyle{ \hat{f}_x }[/math] of the frequency [math]\displaystyle{ f_x\triangleq|\{i\mid x_i=x\}| }[/math] of [math]\displaystyle{ x }[/math] in input data.
We still want to give an algorithm in the data stream model: the algorithm scans the input sequence [math]\displaystyle{ x_1,x_2,\ldots,x_n }[/math] to construct a succinct data structure, such that upon each query of [math]\displaystyle{ x\in\Omega }[/math], the algorithm returns an estimation of the frequency [math]\displaystyle{ f_x }[/math].
Clearly this problem can always be solved by storing all appeared distinct elements along with their frequencies. However, the space cost of this straightforward solution is rather high. Instead, we want to use a lossy representation (a sketch) of the input data which uses significantly less space but can still answer queries with tolerable accuracy.
Formally, upon each query of [math]\displaystyle{ x\in\Omega }[/math], the algorithm should return an answer [math]\displaystyle{ \hat{f}_x }[/math] satisfying:
- [math]\displaystyle{ \Pr\left[\,\left|\hat{f}_x-f_x\right|\le \epsilon n\,\right]\ge 1-\delta }[/math].
Note that this notion of approximation is with bounded additive error which is weaker than the notion of [math]\displaystyle{ (\epsilon,\delta) }[/math]-estimator, whose error bound is multiplicative.
With such a weak accuracy guarantee, it is possible to give a succinct data structure whose size is determined only by the error bounds [math]\displaystyle{ \epsilon }[/math] and [math]\displaystyle{ \delta }[/math] but is independent of [math]\displaystyle{ n }[/math], because only the frequencies of those heavy hitters (elements [math]\displaystyle{ x }[/math] with high frequencies [math]\displaystyle{ f_x\gt \epsilon n }[/math]) need to be memorized, and there are at most [math]\displaystyle{ 1/\epsilon }[/math] many such heavy hitters.
Count-min sketch
The count-min sketch given by Cormode and Muthukrishnan is an elegant data structure for frequency estimation.
The data structure is a two-dimensional [math]\displaystyle{ k\times m }[/math] integer array, where [math]\displaystyle{ k }[/math] and [math]\displaystyle{ m }[/math] are two parameters to be determined by the error bounds [math]\displaystyle{ \epsilon }[/math] and [math]\displaystyle{ \delta }[/math]. We still adopt the Uniform Hash Assumption to assume that we have access to [math]\displaystyle{ k }[/math] mutually independent uniform random hash functions [math]\displaystyle{ h_1,h_2,\ldots,h_k:\Omega\to[m] }[/math].
Count-min sketch (Cormode and Muthukrishnan 2003) - Suppose [math]\displaystyle{ h_1,h_2,\ldots,h_k:\Omega\to[m] }[/math] are uniform and independent random hash functions.
- Data structure construction: Given a sequence [math]\displaystyle{ x_1,x_2,\ldots,x_n\in\Omega }[/math], the data structure is a two-dimensional [math]\displaystyle{ k\times m }[/math] integer array [math]\displaystyle{ CMS[k][m] }[/math] constructed as
- initialize all entries of [math]\displaystyle{ CMS[k][m] }[/math] to 0;
- for [math]\displaystyle{ i=1,2,\ldots,n }[/math], upon receiving [math]\displaystyle{ x_i }[/math]:
- for every [math]\displaystyle{ 1\le j\le k }[/math], evaluate [math]\displaystyle{ h_j(x_i) }[/math] and [math]\displaystyle{ CMS[j][h_j(x_i)]++ }[/math].
- Query resolution: Upon each query of an arbitrary [math]\displaystyle{ x\in\Omega }[/math],
- return [math]\displaystyle{ \hat{f}_x=\min_{1\le j\le k}CMS[j][h_j(x)] }[/math].
It is easy to see that the space cost of the count-min sketch is [math]\displaystyle{ O(km) }[/math] memory words, or [math]\displaystyle{ O(km\log n) }[/math] bits. Each query is answered within time cost [math]\displaystyle{ O(k) }[/math], assuming that an evaluation of a hash function can be done in unit or constant time. We then analyze the error bounds.
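A hedged C sketch of the data structure is given below. The sizes CMS_K and CMS_M correspond to the choice [math]\displaystyle{ \epsilon=\delta=0.01 }[/math] discussed at the end of this section, the uniform hash functions are simulated by a seeded 64-bit mixer, and all names are our own assumptions.

#include <stdint.h>
#include <stdlib.h>

#define CMS_K 5      /* k = ceil(ln(1/delta)) for delta = 0.01   */
#define CMS_M 272    /* m = ceil(e/epsilon)   for epsilon = 0.01 */

typedef struct {
    uint64_t seed[CMS_K];
    uint64_t CMS[CMS_K][CMS_M];
} cms_t;

/* stand-in for a uniform random hash h_j : Omega -> [m], keyed by a seed */
static uint64_t cms_hash(uint64_t seed, uint64_t x)
{
    uint64_t z = x + seed + 0x9E3779B97F4A7C15ULL;
    z = (z ^ (z >> 30)) * 0xBF58476D1CE4E5B9ULL;
    z = (z ^ (z >> 27)) * 0x94D049BB133111EBULL;
    return (z ^ (z >> 31)) % CMS_M;
}

void cms_init(cms_t *s)
{
    for (int j = 0; j < CMS_K; j++) {
        s->seed[j] = ((uint64_t)rand() << 32) ^ (uint64_t)rand();
        for (int i = 0; i < CMS_M; i++) s->CMS[j][i] = 0;
    }
}

void cms_update(cms_t *s, uint64_t x)               /* upon receiving element x_i */
{
    for (int j = 0; j < CMS_K; j++) s->CMS[j][cms_hash(s->seed[j], x)]++;
}

uint64_t cms_query(const cms_t *s, uint64_t x)      /* returns min_j CMS[j][h_j(x)] */
{
    uint64_t f = UINT64_MAX;
    for (int j = 0; j < CMS_K; j++) {
        uint64_t v = s->CMS[j][cms_hash(s->seed[j], x)];
        if (v < f) f = v;
    }
    return f;
}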
First, it is easy to observe that for any query [math]\displaystyle{ x\in\Omega }[/math] and every hash function [math]\displaystyle{ 1\le j\le k }[/math], it always holds for the corresponding entry in the count-min sketch
- [math]\displaystyle{ CMS[j][h_j(x)]\ge f_x }[/math],
because the appearances of element [math]\displaystyle{ x }[/math] in the input sequence contribute at least [math]\displaystyle{ f_x }[/math] to the value of [math]\displaystyle{ CMS[j][h_j(x)] }[/math].
Therefore, for any query [math]\displaystyle{ x\in\Omega }[/math] it always holds for the answer [math]\displaystyle{ \hat{f}_x=\min_{1\le j\le k}CMS[j][h_j(x)]\ge f_x }[/math], which means
- [math]\displaystyle{ \Pr\left[\,\left|\hat{f}_x- f_x\right|\ge\epsilon n\,\right]=\Pr\left[\,\hat{f}_x- f_x\ge\epsilon n\,\right]=\prod_{j=1}^k\Pr[\,CMS[j][h_j(x)]-f_x\ge\epsilon n\,],\quad\qquad({\color{red}\diamondsuit}) }[/math]
where the second equation is due to the mutual independence of random hash functions [math]\displaystyle{ h_1,h_2,\ldots,h_k }[/math].
It remains to upper bound the probability [math]\displaystyle{ \Pr[\,CMS[j][h_j(x)]-f_x\ge\epsilon n\,] }[/math], which can be done by calculating the expectation of [math]\displaystyle{ CMS[j][h_j(x)] }[/math].
Proposition - For any [math]\displaystyle{ x\in\Omega }[/math] and every [math]\displaystyle{ 1\le j\le k }[/math], it holds that [math]\displaystyle{ \mathbb{E}\left[CMS[j][h_j(x)]\right]\le f_x+\frac{n}{m} }[/math].
Proof. The value of [math]\displaystyle{ CMS[j][h_j(x)] }[/math] is constituted by the frequency [math]\displaystyle{ f_x }[/math] of [math]\displaystyle{ x }[/math] and the frequencies [math]\displaystyle{ f_y }[/math] of all other elements [math]\displaystyle{ y\neq x }[/math] among [math]\displaystyle{ x_1,x_2,\ldots,x_n }[/math], thus
- [math]\displaystyle{ \begin{align} CMS[j][h_j(x)] &=f_x+\sum_{\scriptstyle y\in\{x_1,\ldots,x_n\}\setminus\{x\}\atop\scriptstyle h_j(y)=h_j(x)} f_y\\ &=f_x+\sum_{y\in\{x_1,\ldots,x_n\}\setminus\{x\}} f_y \cdot I[h_j(y)=h_j(x)] \end{align} }[/math]
where [math]\displaystyle{ I[h_j(y)=h_j(x)] }[/math] denotes the Boolean random variable that indicates the occurrence of event [math]\displaystyle{ h_j(y)=h_j(x) }[/math].
By linearity of expectation,
- [math]\displaystyle{ \mathbb{E}[CMS[j][h_j(x)]]=f_x+\sum_{y\in\{x_1,x_2,\ldots,x_n\}\setminus\{x\}} f_y \cdot \Pr[h_j(y)=h_j(x)] }[/math].
Due to Uniform Hash Assumption (UHA), [math]\displaystyle{ h_j:\Omega\to[m] }[/math] is a uniform random function. For any [math]\displaystyle{ y\neq x }[/math], the probability of hash collision is
- [math]\displaystyle{ \Pr[h_j(y)=h_j(x)]=\frac{1}{m} }[/math].
Therefore,
- [math]\displaystyle{ \begin{align} \mathbb{E}[CMS[j][h_j(x)]] &=f_x+\frac{1}{m}\sum_{y\in\{x_1,\ldots,x_n\}\setminus\{x\}} f_y \\ &\le f_x+\frac{1}{m}\sum_{y\in\{x_1,\ldots,x_n\}} f_y\\ &=f_x+\frac{n}{m}, \end{align} }[/math]
where the last equation is due to the obvious identity [math]\displaystyle{ \sum_{y\in\{x_1,\ldots,x_n\}}f_y=n }[/math].
- [math]\displaystyle{ \square }[/math]
The above proposition shows that for any [math]\displaystyle{ x\in\Omega }[/math] and every [math]\displaystyle{ 1\le j\le k }[/math]
- [math]\displaystyle{ \mathbb{E}\left[CMS[j][h_j(x)]-f_x\right]\le \frac{n}{m} }[/math].
Recall that [math]\displaystyle{ CMS[j][h_j(x)]\ge f_x }[/math] always holds, thus [math]\displaystyle{ CMS[j][h_j(x)]-f_x }[/math] is a nonnegative random variable. By Markov's inequality, we have
- [math]\displaystyle{ \Pr[\,CMS[j][h_j(x)]-f_x\ge\epsilon n\,]\le \frac{1}{\epsilon m} }[/math].
Combining with above equation [math]\displaystyle{ ({\color{red}\diamondsuit}) }[/math], we have
- [math]\displaystyle{ \Pr\left[\,\left|\hat{f}_x- f_x\right|\ge\epsilon n\,\right]=(\Pr[\,CMS[j][h_j(x)]-f_x\ge\epsilon n\,])^k\le \frac{1}{(\epsilon m)^k} }[/math].
By setting [math]\displaystyle{ m=\left\lceil\frac{\mathrm{e}}{\epsilon}\right\rceil }[/math] and [math]\displaystyle{ k=\left\lceil\ln\frac{1}{\delta}\right\rceil }[/math], the above error probability is bounded as [math]\displaystyle{ \frac{1}{(\epsilon m)^k}\le\delta }[/math].
For any positive [math]\displaystyle{ \epsilon }[/math] and [math]\displaystyle{ \delta }[/math], the count-min sketch gives a data structure of size [math]\displaystyle{ O(km)=O\left(\frac{1}{\epsilon}\log\frac{1}{\delta}\right) }[/math] (in memory words) and answers each query [math]\displaystyle{ x\in\Omega }[/math] in time [math]\displaystyle{ O(k)=O\left(\log\frac{1}{\delta}\right) }[/math] with the following accuracy guarantee:
- [math]\displaystyle{ \Pr\left[\,\left|\hat{f}_x- f_x\right|\le\epsilon n\,\right]\ge 1-\delta }[/math].
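As a concrete instance (our own arithmetic): for [math]\displaystyle{ \epsilon=\delta=0.01 }[/math], the parameters are [math]\displaystyle{ m=\lceil \mathrm{e}/\epsilon\rceil=272 }[/math] and [math]\displaystyle{ k=\lceil\ln(1/\delta)\rceil=5 }[/math], so the whole sketch consists of only [math]\displaystyle{ km=1360 }[/math] counters, no matter how long the data stream is.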