Combinatorics (Fall 2017)/Extremal graph theory and Advanced Algorithms (Fall 2018)/Hashing and Sketching

== Forbidden Cliques ==
=Distinct Elements=
Extremal graph theory studies problems like: "how many edges can a graph <math>G</math> have if <math>G</math> has some property?"
Consider the following problem of '''counting distinct elements''': Suppose that <math>\Omega</math> is a sufficiently large universe.
=== Mantel's theorem ===
*'''Input:''' a sequence of (not necessarily distinct) elements <math>x_1,x_2,\ldots,x_n\in\Omega</math>;
We consider a typical extremal problem for graphs: the largest possible number of edges of '''triangle-free''' graphs, i.e. graphs containing no <math>K_3</math>.
*'''Output:''' an estimation of the total number of distinct elements <math>z=|\{x_1,x_2,\ldots,x_n\}|</math>.


{{Theorem|Theorem (Mantel 1907)|
A straightforward way of solving this problem is to maintain a dictionary data structure, which costs at least linear (<math>O(n)</math>) space. For ''big data'', where <math>n</math> is very large, this is still too expensive. However, due to an information-theoretical argument, linear space is necessary if you want to compute the ''exact'' value of <math>z</math>.
:Suppose <math>G(V,E)</math> is a graph on <math>n</math> vertices without triangles. Then <math>|E|\le\frac{n^2}{4}</math>.
 
Our goal is to relax the problem a little bit to significantly reduce the space cost by tolerating ''approximate'' answers. The form of approximation we consider is '''<math>(\epsilon,\delta)</math>-estimator'''.
{{Theorem|<math>(\epsilon,\delta)</math>-estimator|
: A random variable <math>\widehat{Z}</math> is an '''<math>(\epsilon,\delta)</math>-estimator''' of a quantity <math>z</math> if
::<math>\Pr[\,(1-\epsilon)z\le \widehat{Z}\le (1+\epsilon)z\,]\ge 1-\delta</math>.
: <math>\widehat{Z}</math> is said to be an '''unbiased estimator''' of <math>z</math> if <math>\mathbb{E}[\widehat{Z}]=z</math>.
}}
}}
Usually <math>\epsilon</math> is called '''approximation error''' and <math>\delta</math> is called '''confidence error'''.


We give three different proofs of the theorem. The first one uses induction and an argument based on the pigeonhole principle. The second proof uses the famous Cauchy-Schwarz inequality from analysis. And the third proof uses another famous inequality: the inequality of the arithmetic and geometric mean.
We now present an elegant algorithm introduced by [https://en.wikipedia.org/wiki/Flajolet–Martin_algorithm  Flajolet and Martin] in 1984. The algorithm can be implemented in the [https://en.wikipedia.org/wiki/Streaming_algorithm '''data stream model''']: The input elements <math>x_1,x_2,\ldots,x_n</math> are presented to the algorithm one at a time, where the size of data <math>n</math> is unknown to the algorithm. The algorithm maintains a value <math>\widehat{Z}</math> which is an <math>(\epsilon,\delta)</math>-estimator of the total number of distinct elements <math>z=|\{x_1,x_2,\ldots,x_n\}|</math>, using only a small amount of memory space to memorize (with loss) the data set <math>\{x_1,x_2,\ldots,x_n\}</math>.


{{Prooftitle|First proof. (pigeonhole principle)|
A famous quotation of Flajolet describes the performance of this algorithm as:
We prove an equivalent theorem: Any <math>G(V,E)</math> with <math>|V|=n</math> and <math>|E|>\frac{n^2}{4}</math> must have a triangle.


Use induction on <math>n</math>. The theorem holds trivially for <math>n\le 3</math>.
"Using only memory equivalent to 5 lines of printed text, you can estimate with a typical accuracy of 5% and in a single pass the total vocabulary of Shakespeare."


Induction hypothesis: assume the theorem holds for <math>|V|\le n-1</math>.
== An estimator by hashing ==
Suppose that we have access to an idealized random hash function <math>h:\Omega\to[0,1]</math> which is uniformly distributed over all mappings from the universe <math>\Omega</math> to the unit interval <math>[0,1]</math>.


For <math>G</math> with <math>n</math> vertices, without loss of generality assume that <math>|E|=\frac{n^2}{4}+1</math>; we will show that <math>G</math> must contain a triangle. Take an edge <math>uv\in E</math>, and let <math>H</math> be the subgraph of <math>G</math> induced by <math>V\setminus \{u,v\}</math>. Clearly, <math>H</math> has <math>n-2</math> vertices.
Recall that the input sequence <math>x_1,x_2,\ldots,x_n\in\Omega</math> consists of <math>z=|\{x_1,x_2,\ldots,x_n\}|</math> distinct elements. These elements are mapped by the random function <math>h</math> to <math>z</math> hash values uniformly and independently distributed in <math>[0,1]</math>. We could maintain these hash values instead of the original elements, but this would still be too expensive because in the worst case we still have up to <math>n</math> distinct values to maintain. However, due to the idealized random hash function, the unit interval <math>[0,1]</math> will be partitioned into <math>z+1</math> subintervals by these <math>z</math> uniform and independent hash values. The typical length of the subinterval gives an estimation of the number <math>z</math>.
:'''Case.1:''' If <math>H</math> has <math>>\frac{(n-2)^2}{4}</math> edges, then by the induction hypothesis, <math>H</math> has a triangle.
 
:'''Case.2:''' If <math>H</math> has <math>\le\frac{(n-2)^2}{4}</math> edges, then at least <math>\left(\frac{n^2}{4}+1\right)-\frac{(n-2)^2}{4}-1=n-1</math> edges are between <math>H</math> and <math>\{u,v\}</math>. By the pigeonhole principle, there must be a vertex in <math>H</math> that is adjacent to both <math>u</math> and <math>v</math>. Thus, <math>G</math> has a triangle.
{{Theorem|Proposition|
:<math>\mathbb{E}\left[\min_{1\le i\le n}h(x_i)\right]=\frac{1}{z+1}</math>.
}}
}}
 
{{Proof|
{{Prooftitle|Second proof. (Cauchy-Schwarz inequality)|(Mantel's original proof)
The input sequence <math>x_1,x_2,\ldots,x_n\in\Omega</math> consists of <math>z</math> distinct elements, which are mapped to <math>z</math> random hash values uniformly and independently distributed in <math>[0,1]</math>. These <math>z</math> hash values partition the unit interval <math>[0,1]</math> into <math>z+1</math> subintervals <math>[0,v_1],[v_1,v_2],[v_2,v_3],\ldots,[v_{z-1},v_z],[v_z,1]</math>, where <math>v_i</math> denotes the <math>i</math>-th smallest value among all hash values <math>\{h(x_1),h(x_2),\ldots,h(x_n)\}</math>. Clearly we have
For any edge <math>uv\in E</math>, no vertex can be a neighbor of both <math>u</math> and <math>v</math>, or otherwise there will be a triangle. Thus, for any edge <math>uv\in E</math>, <math>d_u+d_v\le n</math>. It follows that
:<math>v_1=\min_{1\le i\le n}h(x_i)</math>.
:<math>\sum_{uv\in E}(d_u+d_v)\le n|E|</math>.
Meanwhile, since all hash values are uniformly and independently distributed in <math>[0,1]</math>, the lengths of all subintervals <math>v_1, v_2-v_1, v_3-v_2,\ldots, v_z-v_{z-1}, 1-v_z</math> are identically distributed. By symmetry, they have the same expectation, therefore
Note that each <math>d_v</math> appears exactly <math>d_v</math> times in the sum, so that
:<math>\sum_{uv\in E}(d_u+d_v)=\sum_{v\in V}d_v^2</math>.
Applying the Cauchy-Schwarz inequality,
:<math>
:<math>
n|E|\ge \sum_{uv\in E}(d_u+d_v)=\sum_{v\in V}d_v^2\ge\frac{\left(\sum_{v\in V}d_v\right)^2}{n}=\frac{4|E|^2}{n},
(z+1)\mathbb{E}[v_1]=
\mathbb{E}[v_1]+\sum_{i=1}^{z-1}\mathbb{E}[v_{i+1}-v_i]+\mathbb{E}[1-v_z]
=\mathbb{E}\left[v_1+(v_2-v_1)+(v_3-v_2)+\cdots+(v_{z}-v_{z-1})+1-v_z\right]
=1,
</math>
</math>
where the last equation is due to Euler's equality <math>\sum_{v\in V}d_v=2|E|</math>. The theorem follows.
which implies that
:<math>\mathbb{E}\left[\min_{1\le i\le n}h(x_i)\right]=\mathbb{E}[v_1]=\frac{1}{z+1}</math>.
}}
}}
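As a quick numerical check of this proposition (not part of the original notes), the following short Python sketch draws <math>z</math> independent uniform hash values, takes their minimum, and averages over many trials; the empirical mean should be close to <math>\frac{1}{z+1}</math>.
<pre>
import random

def empirical_min_mean(z, trials=100000):
    """Average of the minimum of z i.i.d. Uniform[0,1] values over many trials."""
    total = 0.0
    for _ in range(trials):
        total += min(random.random() for _ in range(z))
    return total / trials

# For z = 9 the proposition predicts E[min] = 1/(z+1) = 0.1.
print(empirical_min_mean(9))   # typically prints a value close to 0.1
</pre>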


{{Prooftitle|Third proof. (inequality of the arithmetic and geometric mean)|
The quantity <math>\min_{1\le i\le n}h(x_i)</math> can be computed with small space cost (for storing the current smallest hash value) by scanning the input sequence in a single pass. As we just proved, its expectation is <math>\frac{1}{z+1}</math>, so the smallest hash value <math>Y=\min_{1\le i\le n}h(x_i)</math> gives an unbiased estimator of <math>\frac{1}{z+1}</math>. However, <math>\frac{1}{Y}-1</math> is not necessarily a good estimator for <math>z</math>. Actually, it is a rather poor estimator. Consider for example when <math>z=1</math>, i.e. all input elements are the same. In this case, there is only one hash value and <math>Y=\min_{1\le i\le n}h(x_i)</math> is distributed uniformly over <math>[0,1]</math>, thus <math>\frac{1}{Y}-1</math> fails to be close enough to the correct answer 1 with high probability.
Assume that <math>G(V,E)</math> has <math>|V|=n</math> vertices and is triangle-free.


Let <math>A</math> be the largest independent set in <math>G</math> and let <math>\alpha=|A|</math>.  
==Flajolet-Martin algorithm==
Since <math>G</math> is triangle-free, for every vertex <math>v</math>, all its neighbors must form an independent set, thus <math>d(v)\le \alpha</math> for all <math>v\in V</math>.
The reason that the above estimator of a single hash function performs poorly is that the unbiased estimator <math>\min_{1\le i\le n}h(x_i)</math> has large variance. So a natural way to reduce this variance is to have multiple independent hash functions and take the average. This is precisely what [https://en.wikipedia.org/wiki/Flajolet–Martin_algorithm '''''Flajolet-Martin algorithm'''''] does.


Take <math>B=V\setminus A</math> and let <math>\beta=|B|</math>.
Suppose that we have access to <math>k</math> independent random hash functions <math>h_1,h_2,\ldots,h_k</math>, where each <math>h_j:\Omega\to[0,1]</math> is uniformly and independently distributed over all functions mapping <math>\Omega</math> to <math>[0,1]</math>. Here <math>k</math> is a parameter to be fixed by the desired approximation error <math>\epsilon</math> and confidence error <math>\delta</math>. The ''Flajolet-Martin algorithm'' is given by the following pseudocode.
Since <math>A</math> is an independent set, all edges in <math>E</math> must have at least one endpoint in <math>B</math>. Counting the edges in <math>E</math> according to their endpoints in <math>B</math>, we obtain <math>|E|\le\sum_{v\in B}d_v</math>. By the inequality of the arithmetic and geometric mean,
 
:<math>|E|\le\sum_{v\in B}d_v\le\alpha\beta\le\left(\frac{\alpha+\beta}{2}\right)^2=\frac{n^2}{4}</math>.
{{Theorem|''Flajolet-Martin algorithm'' (Flajolet and Martin 1984)|
:Suppose that <math>h_1,h_2,\ldots,h_k:\Omega\to[0,1]</math> are <math>k</math> uniform and independent random hash functions, where <math>k</math> is a parameter to be fixed later.
-----
:Scan the input sequence <math>x_1,x_2,\ldots,x_n\in\Omega</math> in a single pass to compute:
::* <math>Y_j=\min_{1\le i\le n}h_j(x_i)</math> for every <math>j=1,2,\ldots,k</math>;
::* average value <math>\overline{Y}=\frac{1}{k}\sum_{j=1}^kY_j</math>;
:return <math>\widehat{Z}=\frac{1}{\overline{Y}}-1</math> as the estimator.
}}
}}
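To make the pseudocode concrete, here is a minimal Python sketch of the estimator. It is only an illustration: the <math>k</math> idealized hash functions are simulated by salted SHA-256 digests scaled into <math>[0,1)</math>, which merely stands in for the uniform and independent random hash functions assumed above.
<pre>
import hashlib

def uniform_hash(seed, x):
    """Map element x to a pseudo-uniform value in [0,1), salted by seed.
    This only emulates the idealized random hash functions assumed above."""
    digest = hashlib.sha256(f"{seed}:{x}".encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def flajolet_martin(stream, k):
    """Single-pass estimate of the number of distinct elements,
    keeping only k running minima (one per hash function)."""
    minima = [1.0] * k
    for x in stream:
        for j in range(k):
            minima[j] = min(minima[j], uniform_hash(j, x))
    avg = sum(minima) / k
    return 1.0 / avg - 1.0

# Example: the true number of distinct elements is 1000.
stream = [i % 1000 for i in range(5000)]
print(flajolet_martin(stream, k=400))   # should be roughly 1000
</pre>
The analysis below shows that <math>k\ge \frac{4}{\epsilon^2\delta}</math> suffices for an <math>(\epsilon,\delta)</math>-estimator; for example, <math>\epsilon=\delta=0.1</math> requires <math>k\ge 4000</math>, while the smaller <math>k</math> used here only illustrates the mechanics.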


=== Turán's theorem ===
The algorithm is easy to implement in the data stream model, with a space cost of storing <math>k</math> hash values. The following theorem guarantees that the algorithm returns an <math>(\epsilon,\delta)</math>-estimator of the total number of distinct elements for a suitable <math>k=O\left(\frac{1}{\epsilon^2\delta}\right)</math>.
The famous Turán's theorem generalizes Mantel's theorem for triangles to cliques of any specific size. This theorem is one of the most important results in extremal combinatorics, and it initiated the study of extremal graph theory.
{{Theorem|Theorem|
{{Theorem|Theorem (Turán 1941)|
:For any <math>\epsilon,\delta<1/2</math>, if <math>k\ge\left\lceil\frac{4}{\epsilon^2\delta}\right\rceil</math> then the output <math>\widehat{Z}</math> always gives an <math>(\epsilon,\delta)</math>-estimator of the correct answer <math>z</math>.
:Let <math>G(V,E)</math> be a graph with <math>|V|=n</math>. If <math>G</math> has no <math>r</math>-clique, <math>r\ge 2</math>, then
::<math>|E|\le\frac{r-2}{2(r-1)}n^2</math>.
}}
}}


We give an example of graphs with many edges which do not contain <math>K_r</math>.
In the following we prove this main theorem.  
 
Partition <math>V</math> into <math>r-1</math> disjoint classes <math>V=V_1\cup V_2\cup\cdots\cup V_{r-1}</math>, <math>n_i=|V_i|</math>, <math>n_1+n_2+\cdots+n_{r-1}=n</math>. For every two vertices <math>u,v</math>, <math>uv\in E</math> if and only if <math>u\in V_i</math> and <math>v\in V_j</math> for distinct <math>V_i</math> and <math>V_j</math>. The resulting graph is a '''complete <math>(r-1)</math>-partite graph''', denoted <math>K_{n_1,n_2,\ldots,n_{r-1}}</math>. It is obvious that any <math>(r-1)</math>-partite graph contains no <math>r</math>-clique since only those vertices from different classes can be adjacent.


A <math>K_{n_1,n_2,\ldots,n_{r-1}}</math> has <math>\sum_{i<j}n_i n_j\,</math> edges, which is maximized when the numbers <math>n_i</math> are divided as evenly as possible, that is, if <math>n_i\in\left\{\left\lfloor\frac{n}{r-1}\right\rfloor,\left\lceil\frac{n}{r-1}\right\rceil\right\}</math> for every <math>1\le i\le r-1</math>.  
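For instance, when <math>r-1</math> divides <math>n</math>, all classes have exactly <math>\frac{n}{r-1}</math> vertices, and the edge count of this balanced complete <math>(r-1)</math>-partite graph works out to
:<math>\sum_{i<j}n_in_j={r-1\choose 2}\left(\frac{n}{r-1}\right)^2=\frac{(r-1)(r-2)}{2}\cdot\frac{n^2}{(r-1)^2}=\frac{r-2}{2(r-1)}n^2,</math>
which matches the bound in Turán's theorem, so the bound is attained by these graphs.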
An obstacle to analyzing the estimator <math>\widehat{Z}=\frac{1}{\overline{Y}}-1</math> is that it is a nonlinear function of <math>\overline{Y}</math>, which is the quantity that is easier to analyze. Nevertheless, we observe that <math>\widehat{Z}</math> is an <math>(\epsilon,\delta)</math>-estimator of <math>z</math> as long as  <math>\overline{Y}</math> is an <math>(\epsilon/2,\delta)</math>-estimator of <math>\frac{1}{z+1}</math>. This can be deduced by just verifying the following:
:<math>\frac{1-\epsilon/2}{z+1}\le \overline{Y}\le \frac{1+\epsilon/2}{z+1} \implies (1-\epsilon)z\le\frac{1}{\overline{Y}}-1\le (1+\epsilon)z</math>,
for <math>\epsilon<\frac{1}{2}</math>. Therefore,
:<math>\Pr\left[\,(1-\epsilon)z\le \widehat{Z} \le (1+\epsilon)z\,\right]\ge \Pr\left[\,\frac{1-\epsilon/2}{z+1}\le \overline{Y}\le \frac{1+\epsilon/2}{z+1}\,\right]
=\Pr\left[\,\left|\overline{Y}-\frac{1}{z+1}\right|\le \frac{\epsilon/2}{z+1}\,\right]</math>.
It is then sufficient to show that <math>\Pr\left[\,\left|\overline{Y}-\frac{1}{z+1}\right|\le \frac{\epsilon/2}{z+1}\,\right]\ge 1-\delta</math> for proving the main theorem above. We will see that this is equivalent to show the concentration inequality
:<math>\Pr\left[\,\left|\overline{Y}-\mathbb{E}\left[\overline{Y}\right]\right|\le \frac{\epsilon/2}{z+1}\,\right]\ge 1-\delta\quad\qquad({\color{red}*})</math>.


{{Theorem|Definition|
{{Theorem|Lemma|
:We call a complete multipartite graph <math>K_{n_1,n_2,\ldots,n_{r-1}}</math> with <math>n_i\in\left\{\left\lfloor\frac{n}{r-1}\right\rfloor,\left\lceil\frac{n}{r-1}\right\rceil\right\}</math> for every <math>i</math> a ''' Turán graph''', denoted <math>T(n,r-1)</math>.
:The following holds for each <math>Y_j</math>, <math>j=1,2,\ldots,k</math>, and <math>\overline{Y}=\frac{1}{k}\sum_{j=1}^kY_j</math>:
:*<math>\mathbb{E}\left[\overline{Y}\right]=\mathbb{E}\left[Y_j\right]=\frac{1}{z+1}</math>;
:*<math>\mathbf{Var}\left[Y_j\right]\le\frac{1}{(z+1)^2}</math>, and consequently <math>\mathbf{Var}\left[\overline{Y}\right]\le\frac{1}{k(z+1)^2}</math>.
}}
}}
;Example:Turán graph <math>T(13,4)</math>
{{Proof|
[[File:Turan 13-4.svg|center|260px|Turán graph <math>T(13,4)</math>]]
As in the case of a single hash function, by symmetry it holds that <math>\mathbb{E}[Y_j]=\frac{1}{z+1}</math> for every <math>j=1,2,\ldots,k</math>. Therefore,
:<math>\mathbb{E}\left[\overline{Y}\right]=\frac{1}{k}\sum_{j=1}^k\mathbb{E}[Y_j]=\frac{1}{z+1}</math>.
Recall that each <math>Y_j</math> is the minimum of <math>z</math> random hash values uniformly and independently distributed over <math>[0,1]</math>. By geometric probability, it holds that for any <math>y\in[0,1]</math>,
:<math>\Pr[Y_j>y]=(1-y)^z</math>,
which means <math>\Pr[Y_j\le y]=1-(1-y)^z</math>. Taking the derivative with respect to <math>y</math>, we obtain the probability density function of random variable <math>Y_j</math>, which is <math>z(1-y)^{z-1}</math>.


Turán's theorem has been proved many times by different mathematicians, with different tools. We show just a few.
We then compute the second moment.
 
:<math>\mathbb{E}[Y_j^2]=\int^{1}_0y^2z(1-y)^{z-1}\,\mathrm{d}y=\frac{2}{(z+1)(z+2)}</math>.
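To verify this integral, substitute <math>t=1-y</math> and expand:
:<math>\int^{1}_0y^2z(1-y)^{z-1}\,\mathrm{d}y=\int^{1}_0(1-t)^2zt^{z-1}\,\mathrm{d}t=z\left(\frac{1}{z}-\frac{2}{z+1}+\frac{1}{z+2}\right)=\frac{2}{(z+1)(z+2)}</math>.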
The first proof uses induction;  the second proof uses a technique called "weight shifting"; and the third proof uses the probabilistic method. All of them are very powerful and frequently used proof techniques.
The variance is bounded as
 
:<math>\mathbf{Var}\left[Y_j\right]=\mathbb{E}\left[Y_j^2\right]-\mathbb{E}\left[Y_j\right]^2=\frac{2}{(z+1)(z+2)}-\frac{1}{(z+1)^2}\le\frac{1}{(z+1)^2}</math>.
{{Prooftitle|First proof. (induction)|(Turán's original proof)
Due to the (pairwise) independence between <math>Y_j</math>'s,
 
::<math>\mathbf{Var}\left[\overline{Y}\right]=\mathbf{Var}\left[\frac{1}{k}\sum_{j=1}^kY_j\right]=\frac{1}{k^2}\sum_{j=1}^k\mathbf{Var}\left[Y_j\right]\le \frac{1}{k(z+1)^2}</math>.
Induction on <math>n</math>. It is easy to verify that the theorem holds for <math>n<r</math>.
 
Let <math>G</math> be a graph on <math>n</math> vertices without <math>r</math>-cliques where <math>n\ge r</math>. Suppose that <math>G</math> has a maximum number of edges among such graphs. <math>G</math> certainly has <math>(r-1)</math>-cliques, since otherwise we could add edges to <math>G</math>. Let <math>A</math> be an <math>(r-1)</math>-clique and let <math>B=V\setminus A</math>. Clearly <math>|A|=r-1</math> and <math>|B|=n-r+1</math>.
 
By the induction hypothesis, since <math>B</math> has no <math>r</math>-cliques, <math>|E(B)|\le\frac{r-2}{2(r-1)}(n-r+1)^2</math>. And <math>|E(A)|={r-1\choose 2}</math>. Since <math>G</math> has no <math>r</math>-clique, every <math>v\in B</math> is adjacent to at most <math>r-2</math> vertices in <math>A</math>, since otherwise <math>A</math> and <math>v</math> would form an <math>r</math>-clique. We obtain that the number of edges crossing between <math>A</math> and <math>B</math> is <math>|E(A,B)|\le (r-2)|B|=(r-2)(n-r+1)</math>. Combining everything together,
:<math>|E|=|E(A)|+|E(B)|+|E(A,B)|\le {r-1\choose 2}+\frac{r-2}{2(r-1)}(n-r+1)^2+(r-2)(n-r+1)=\frac{r-2}{2(r-1)}n^2</math>.
}}
}}


{{Prooftitle|Second proof. (weight shifting)|(due to Motzkin and Straus)
We now return to prove the inequality <math>({\color{red}*})</math>. By [[高级算法_(Fall_2018)/Basic_tail_inequalities#Chebyshev.27s_inequality|Chebyshev's inequality]], it holds that
 
:<math>\Pr\left[\,\left|\overline{Y}-\mathbb{E}\left[\overline{Y}\right]\right|> \frac{\epsilon/2}{z+1}\,\right]
Assign each vertex <math>v\in V</math> a nonnegative weight <math>w_v\ge 0</math>, and assume that <math>\sum_{v\in V}w_v=1</math>. We try to maximize the quantity
\le\frac{4}{\epsilon^2}(z+1)^2\mathbf{Var}\left[\overline{Y}\right]
:<math>S=\sum_{uv\in E}w_uw_v</math>.
\le\frac{4}{\epsilon^2k}</math>.
Let <math>W_u=\sum_{v:v\sim u}w_v\,</math> be the sum of the weights of <math>u</math>'s neighbors.
When <math>k\ge\left\lceil\frac{4}{\epsilon^2\delta}\right\rceil</math>, this probability is at most <math>\delta</math>. The inequality <math>({\color{red}*})</math> is proved. As we discussed above, this proves the main theorem.
Note that <math>S</math> can also be computed as <math>S=\frac{1}{2}\sum_{u\in V}w_uW_u</math>.
For any nonadjacent pair of vertices <math>u\not\sim v</math>, suppose that <math>W_u\ge W_v</math>; then for any <math>\epsilon\ge 0</math>,
:<math>(w_u+\epsilon)W_u+(w_v-\epsilon)W_v\ge w_uW_u+w_vW_v</math>.
This means that we do not decrease <math>S</math> by shifting all of the weight of the vertex <math>v</math> to the vertex <math>u</math>. It follows that <math>S</math> is maximized when all of the weight is concentrated on a complete subgraph, i.e., a clique.


Now if <math>w_u>w_v>0</math>, then choose <math>\epsilon</math> with <math>0<\epsilon<w_u-w_v</math> and change <math>w_u'=w_u-\epsilon</math> and <math>w_v'=w_v+\epsilon</math>. This changes <math>S</math> to <math>S'=S+\epsilon(w_u-w_v)-\epsilon^2>S</math>. Thus, the maximal value of <math>S</math> is attained when all nonzero weights are equal and concentrated on a clique.
==Uniform Hash Assumption (UHA)==
Above, we assumed access to idealized random hash functions <math>h:\Omega\to[0,1]</math> with real values. With a more careful calculation, one can show the same performance guarantee for hash functions with discrete values <math>h:\Omega\to[M]</math> where <math>M=\mathrm{poly}(n)</math>, that is, the hash values are strings of <math>O(\log n)</math> bits.


Since <math>G</math> contains no <math>r</math>-clique, the weight concentrates on a clique of at most <math>r-1</math> vertices, thus <math>S\le{r-1\choose 2}\frac{1}{(r-1)^2}=\frac{r-2}{2(r-1)}</math>.
Even with such improved analysis, a uniform random discrete function in the form of <math>h:[N]\to[M]</math> is not really efficient to store or to compute. By an information-theoretical argument, it takes at least <math>\Omega(N\log M)</math> bits to represent such a random hash function, because this is the entropy of such a uniform random function.


As we argued above, this inequality holds for any nonnegative weight assignment with <math>\sum_{v\in V}w_v=1</math>. In particular, for the case that all <math>w_v=\frac{1}{n}</math>,
For the convenience of analysis, it is common to assume the following '''Uniform Hash Assumption (UHA)''' also known as '''Simple Uniform Hash Assumption (SUHA)'''.
:<math>S=\sum_{uv\in E}w_uw_v=\frac{|E|}{n^2}</math>.
{{Theorem|Uniform Hash Assumption (UHA)|
Thus,
:A ''uniform'' random function <math>h:[N]\rightarrow[M]</math> is available and the computation of <math>h</math> is efficient.
:<math>\frac{|E|}{n^2}\le \frac{r-2}{2(r-1)}</math>,
which implies the theorem.
}}
}}


{{Prooftitle|Third proof. (the probabilistic method)|(due to Alon and Spencer)
= Set  Membership=
A basic question in Computer Science is:
:"<math>\mbox{Is }x\in S?</math>"
for a set <math>S</math> and an element <math>x</math>. This is the '''set membership''' problem.


Write <math>\omega(G)</math> for the number of vertices in a largest clique, called the '''clique number''' of <math>G</math>.
Formally, given an arbitrary set <math>S</math> of <math>n</math> elements from a universe <math>\Omega</math>, we want to use a succinct '''data structure''' to represent this set <math>S</math>, so that upon each '''query''' of any element <math>x</math> from the universe <math>\Omega</math>, the question of whether <math>x\in S</math> is efficiently answered. The complexity of such a data structure is measured in two ways:
:'''Claim:''' <math>\omega(G)\ge\sum_{v\in V}\frac{1}{n-d_v}</math>.
* '''space cost''': size of the data structure to represent a set <math>S</math> of size <math>n</math>;
We prove this by the probabilistic method. Fix a random ordering of vertices in <math>V</math>, say <math>v_1,v_2,\ldots,v_n</math>. We construct a clique as follows:
* '''time cost''': time complexity of answering each query by accessing to the data structure.
*for <math>i=1,2,\ldots, n</math>, add <math>v_i</math> to <math>S</math> iff all vertices in current <math>S</math> are adjacent to <math>v_i</math>.
It is obvious that an <math>S</math> constructed in this way is a clique. We now show that <math>\mathbf{E}[|S|]\ge\sum_{v\in V}\frac{1}{n-d_v}</math>.


Let <math>X_v</math> be the random variable that indicates whether <math>v\in S</math>, i.e.,
Suppose that the universe <math>\Omega</math> is of size <math>N</math>. Clearly, the membership problem can be solved by a '''dictionary data structure''', e.g.:
:<math>
* '''sorted table / balanced search tree''': with space cost <math>O(n\log N)</math> bits and time cost <math>O(\log n)</math>;
X_v=\begin{cases}
* '''perfect hashing''' of ''Fredman, Komlós & Szemerédi'': with space cost <math>O(n\log N)</math> bits and time cost <math>O(1)</math>.
1 & v\in S,\\
0 & \mbox{otherwise.}
\end{cases}
</math>
Note that a vertex <math>v\in S</math> if <math>v</math> is ranked before all its <math>n-d_v-1</math> non-neighbors in the random ordering. The probability that this event occurs is <math>\frac{1}{n-d_v}</math>. Thus,
:<math>\mathbf{E}[X_v]=\Pr[v\in S]\ge\frac{1}{n-d_v}.</math>
Observe that <math>|S|=\sum_{v\in V}X_v</math>. Due to linearity of expectation,
:<math>\mathbf{E}[|S|]=\sum_{v\in V}\mathbf{E}[X_v]\ge\sum_{v\in V}\frac{1}{n-d_v}</math>.
There must exist a clique of at least such size, so that <math>\omega(G)\ge\sum_{v\in V}\frac{1}{n-d_v}</math>. The claim is proved.


Apply the Cauchy-Schwarz inequality
Note that <math>\log{N\choose n}=\Theta\left(n\log \frac{N}{n}\right)</math> is the entropy of sets <math>S</math> of <math>n</math> elements from a universe <math>\Omega</math> of size <math>N</math>. Therefore it is necessary to use this many bits to represent a set without losing any information. Nevertheless, we can do better than this if we use a lossy representation of the input set <math>S</math> and tolerate a bounded error in answering queries. Such a lossy representation of data is sometimes called a '''''sketch'''''.
:<math>\left(\sum_{v\in V}a_vb_v\right)^2\le\left(\sum_{v\in V}a_v^2\right)\left(\sum_{v\in V}b_v^2\right)</math>.
Set <math>a_v=\sqrt{n-d_v}</math> and <math>b_v=\frac{1}{\sqrt{n-d_v}}</math>, then <math>a_vb_v=1</math> and so
:<math>n^2\le\sum_{v\in V}(n-d_v)\sum_{v\in V}\frac{1}{n-d_v}\le\omega(G)\sum_{v\in V}(n-d_v).</math>
By the assumption of Turán's theorem, <math>\omega(G)\le r-1</math>. Recall the handshaking lemma <math>2|E|=\sum_{v\in V}d_v</math>. The above inequality gives us
:<math>n^2\le (r-1)(n^2-2|E|)</math>,
which implies the theorem.
}}
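The randomized construction used in the claim above is itself a simple algorithm. The following Python sketch (an illustration only; the graph is assumed to be given as a dictionary mapping each vertex to the set of its neighbors) builds the clique from a uniformly random ordering exactly as described.
<pre>
import random

def random_greedy_clique(adj):
    """adj: dict mapping each vertex to the set of its neighbors.
    Scan the vertices in a uniformly random order and keep v iff it is
    adjacent to every vertex already kept; the result is always a clique."""
    order = list(adj)
    random.shuffle(order)
    clique = []
    for v in order:
        if all(u in adj[v] for u in clique):
            clique.append(v)
    return clique

# Example: the 5-cycle is triangle-free, so every run returns a clique
# of size 2, i.e. a single edge.
cycle5 = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
print(random_greedy_clique(cycle5))
</pre>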


Our last proof uses the idea of vertex duplication. It does not only prove the edge bound of Turán's theorem, but also shows that Turán graphs are the <font color=red>only</font> possible extremal graphs.
== Bloom filter ==
{{Prooftitle|Fourth proof.|
The Bloom filter is a space-efficient hash table that solves the '''approximate membership''' problem with one-sided error (''false positive'').
Let <math>G(V,E)</math> be an <math>r</math>-clique-free graph on <math>n</math> vertices with a maximum number of edges.
:'''Claim:''' <math>G</math> does not contain three vertices <math>u,v,w</math> such that <math>uv\in E</math> but <math>uw\not\in E, vw\not\in E</math>.
Suppose otherwise. There are two cases.
* '''Case.1:''' <math>d(w)<d(u)</math> or <math>d(w)<d(v)</math>. Without loss of generality, suppose that <math>d(w)<d(u)</math>. We duplicate <math>u</math> by creating a new vertex <math>u'</math> which has exactly the same neighbors as <math>u</math> (but <math>uu'</math> is not an edge). Such duplication will not increase the clique size. We then remove <math>w</math>. The resulting graph <math>G'</math> is still <math>r</math>-clique-free, and has <math>n</math> vertices. The number of edges in <math>G'</math> is
::<math>|E(G')|=|E(G)|+d(u)-d(w)>|E(G)|\,</math>,
:which contradicts the assumption that <math>|E(G)|</math> is maximal.
* '''Case.2:''' <math>d(w)\ge d(u)</math> and <math>d(w)\ge d(v)</math>. Duplicate <math>w</math> twice and delete <math>u</math> and <math>v</math>. The new graph <math>G'</math> has no <math>r</math>-clique, and the number of edges is
::<math>|E(G')|=|E(G)|+2d(w)-(d(u)+d(v)+1)>|E(G)|\,</math>.
:Contradiction again.


The claim implies that <math>uv\not\in E</math> defines an equivalence relation on vertices (to be more precise, it guarantees the transitivity of the relation, while the reflexivity and symmetry hold directly). Graph <math>G</math> must be a complete multipartite graph <math>K_{n_1,n_2,\ldots,n_{r-1}}</math> with <math>n_1+n_2+\cdots +n_{r-1}=n</math>. Optimizing the edge number, we obtain the Turán graph.
Given a set <math>S</math> of <math>n</math> elements from a universe <math>\Omega</math>, a Bloom filter consists of an array <math>A</math> of <math>cn</math> bits, and <math>k</math> hash functions <math>h_1,h_2,\ldots,h_k</math> mapping <math>\Omega</math> to <math>[cn]</math>, where both <math>c</math> and <math>k</math> are parameters that we can try to optimize later.
}}


== Forbidden Cycles ==
As before, we assume the '''Uniform Hash Assumption (UHA)''': <math>h_1,h_2,\ldots,h_k</math> are mutually independent hash functions where each <math>h_i</math> is a uniform random hash function <math>h_i:\Omega\to[cn]</math>.
Another direction to generalize Mantel's theorem other than Turán's theorem is to see a triangle as a 3-cycle rather than 3-clique. We then ask for the extremal bound for graphs without certain cycle structures.
Recall that the '''girth''' of a graph <math>G</math> is the length of the shortest cycle in <math>G</math>. A graph is triangle-free if and only if its girth <math>g(G)\ge 4</math>.
Mantel's theorem can be seen as a bound on the edge number of graphs with girth <math>g(G)\ge 4</math>. The next theorem extends this bound to the graphs with <math>g(G)\ge 5</math>, i.e., graphs without triangles and quadrilaterals ("squares").


{{Theorem|Theorem|
The Bloom filter works as follows:
:Let <math>G(V,E)</math> be a graph on <math>n</math> vertices. If girth <math>g(G)\ge 5</math> then <math>|E|\le\frac{1}{2}n\sqrt{n-1}</math>.
{{Theorem|Bloom filter|
:Suppose <math>h_1,h_2,\ldots,h_k:\Omega\to[cn]</math> are uniform and independent random hash functions.
-----
:'''Data structure construction:''' Given a set <math>S\subset\Omega</math> of size <math>n=|S|</math>, the data structure is a Boolean array <math>A</math> of <math>cn</math> bits constructed as
:* initialize all <math>cn</math> bits of the Boolean array <math>A</math> to 0;
:* for each <math>x\in S</math>, let <math>A[h_i(x)]=1</math> for all <math>1\le i\le k</math>.
----
:'''Query resolution:''' Upon each query of an arbitrary <math>x\in\Omega</math>,
:* answer "yes" if <math>A[h_i(x)]=1</math> for all <math>1\le i\le k</math> and "no" if otherwise.
}}
}}
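To make the construction and the query procedure concrete, here is a minimal Python sketch of a Bloom filter. It is an illustration only: the <math>k</math> idealized hash functions are simulated by salted SHA-256 digests reduced modulo <math>cn</math>, standing in for the uniform and independent hash functions assumed above, and the parameters <math>c</math> and <math>k</math> are picked just for the example.
<pre>
import hashlib

class BloomFilter:
    """Bit array of size c*n probed through k salted hash functions (a sketch)."""
    def __init__(self, n, c=8, k=6):
        self.m = c * n                 # number of bits
        self.k = k                     # number of hash functions
        self.bits = [0] * self.m

    def _positions(self, x):
        # Simulate k independent uniform hash functions by salting the digest.
        for j in range(self.k):
            digest = hashlib.sha256(f"{j}:{x}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, x):
        for p in self._positions(x):
            self.bits[p] = 1

    def query(self, x):
        # Answer "yes" iff every probed bit is set; errs only with false positives.
        return all(self.bits[p] for p in self._positions(x))

bf = BloomFilter(n=1000)
for x in range(1000):
    bf.add(x)
print(bf.query(42), bf.query(123456))   # True, and almost surely False
</pre>
With <math>c=8</math> and <math>k=c\ln 2\approx 6</math> as here, the false positive bound <math>(0.6185)^c</math> derived in the analysis below is about <math>2\%</math>.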
{{Proof|
The Boolean array is our data structure, whose size is <math>cn</math> bits. With Uniform Hash Assumption (UHA), the time cost of the data structure for answering each query is <math>O(k)</math>.
Suppose <math>g(G)\ge 5</math>. Let <math>v_1,v_2,\ldots,v_d</math> be the neighbors of a vertex <math>u</math>, where <math>d=d(u)</math>. Let <math>S_i=\{v\in V\mid v\sim v_i\wedge v\neq u\}</math> be the set of neighbors of <math>v_i</math> other than <math>u</math>.


* For any <math>v_i,v_j</math>, <math>v_iv_j\not\in E</math> since <math>G</math> has no triangle. Thus, <math>S_i\cap\{u,v_1,v_2,\ldots,v_d\}=\emptyset</math> for every <math>i</math>.
When the answer returned by the algorithm is "no", it holds that <math>A[h_i(x)]=0</math> for some <math>1\le i\le k</math>, in which case the query <math>x</math> must not belong to the set <math>S</math>. Thus, the Bloom filter has no false negatives.
* No vertex other than <math>u</math> can be adjacent to more than one vertex in <math>v_1,v_2,\ldots,v_d</math> since there is no <math>C_4</math> in <math>G</math>. Thus, <math>S_i\cap S_j=\emptyset</math> for any distinct <math>i</math> and <math>j</math>.


Therefore, <math>\{u,v_1,v_2,\ldots,v_d\}\cup S_1\cup S_2\cup\cdots\cup S_d\subseteq V</math> implies that
On the other hand, when the answer returned by the algorithm is "yes", <math>A[h_i(x)]=1</math> for all <math>1\le i\le k</math>. It is still possible for some  <math>x\not\in S</math> that all bits  <math>A[h_i(x)]</math> are set by elements in <math>S</math>. We want to bound the probability of such false positives, that is, the following probability for an  <math>x\not\in S</math>:
:<math>(d+1)+|S_1|+|S_2|+\cdots+|S_d|=(d+1)+(d(v_1)-1)+(d(v_2)-1)+\cdots+(d(v_d)-1)\le n</math>,
:<math>\Pr[\,\forall 1\le i\le k, A[h_i(x)]=1\,]</math>,
so that <math>\sum_{v:v\sim u}d(v)\le n-1</math>.
which by independence between different hash functions and by symmetry is equal to:
:<math>\Pr[\, A[h_1(x)]=1\,]^k=(1-\Pr[\, A[h_1(x)]=0\,])^k</math>.
For an element <math>x\not\in S</math>, its hash value <math>h_1(x)</math> is independent of all hash values <math>h_i(y)</math> for all <math>1\le i\le k</math> and all <math>y\in S</math>. This is due to the Uniform Hash Assumption. The hash value <math>h_1(x)</math> of <math>x\not\in S</math> is then independent of the content of the array <math>A</math>. Therefore, the probability that this position <math>A[h_1(x)]</math> is missed by all <math>kn</math> updates to the Boolean array <math>A</math> caused by all <math>n</math> elements in <math>S</math> is:
:<math>
\Pr[\, A[h_1(x)]=0\,]=\left(1-\frac{1}{cn}\right)^{kn}\approx e^{-k/c}.
</math>


By Cauchy-Schwarz inequality,
Putting everything together, for any <math>x\not\in S</math>, the false positive is bounded as:
:<math>n(n-1)\ge \sum_{u\in V}\sum_{v:v\sim u}d(v)=\sum_{v\in V}d(v)^2\ge\frac{\left(\sum_{v\in V}d(v)\right)^2}{n}=\frac{4|E|^2}{n}</math>,
:<math>
which implies that <math>|E|\le\frac{1}{2}n\sqrt{n-1}</math>.
\begin{align}
}}
\Pr[\,\text{wrongly answer ''yes''}\,]
 
&=\Pr[\,\forall 1\le i\le k, A[h_i(x)]=1\,]\\
== Erdős–Stone theorem ==
&=\Pr[\, A[h_1(x)]=1\,]^k=(1-\Pr[\, A[h_1(x)]=0\,])^k\\
We introduce a notation for the number of edges in extremal graphs with a specific forbidden substructure.
&=\left(1-\left(1-\frac{1}{cn}\right)^{kn}\right)^k\\
{{Theorem|Definition|
&\approx \left(1- e^{-k/c}\right)^k
:Let <math>\mathrm{ex}(n,H)</math> denote the largest number of edges that a graph <math>G\not\supseteq H</math> on <math>n</math> vertices can have.
\end{align}
}}
</math>
With this notation, Turán's theorem can be restated as
which is <math>(0.6185)^c</math> when <math>k=c\ln 2</math>.
{{Theorem|Turán's theorem (restated)|
:<math>\mathrm{ex}(n,K_r)\le\frac{r-2}{2(r-1)}n^2</math>.
}}


Let <math>K_s^r=K_{\underbrace{s,s,\cdots,s}_{r}}</math> be the complete <math>r</math>-partite graph with <math>s</math> vertices in each class, i.e., the Turán graph <math>T(rs,r)</math>.
The Bloom filter solves the membership query with a small constant probability of false positives, using a data structure of <math>O(n)</math> bits which answers each query with <math>O(1)</math> time cost.
The Erdős–Stone theorem (also referred as the '''fundamental theorem of extremal graph theory''') gives an asymptotic bound on <math>\mathrm{ex}(n,K_s^r)</math>, i.e., the largest number of edges that an <math>n</math>-vertex graph can have to not contain <math>K_s^r</math>.


{{Theorem|Fundamental theorem of extremal graph theory (Erdős–Stone 1946)|
= Frequency Estimation=
:For any integers <math>r\ge 2</math> and <math>s\ge 1</math>, and any <math>\epsilon>0</math>, if <math>n</math> is sufficiently large then every graph on <math>n</math> vertices and with at least <math>\left(\frac{r-2}{2(r-1)}+\epsilon\right)n^2</math> edges contains <math>K_s^r</math> as a subgraph, i.e.,
Suppose that <math>\Omega</math> is the data universe. The '''frequency estimation''' problem is defined as follows.
:::<math>\mathrm{ex}(n,K_s^r)= \left(\frac{r-2}{2(r-1)}+o(1)\right)n^2</math>.
*'''Data:''' a sequence of (not necessarily distinct) elements <math>x_1,x_2,\ldots,x_n\in\Omega</math>;
}}
*'''Query:''' an element <math>x\in\Omega</math>;
*'''Output:''' an estimation <math>\hat{f}_x</math> of the frequency <math>f_x\triangleq|\{i\mid x_i=x\}|</math> of <math>x</math> in input data.


The theorem is called fundamental because of its single most important corollary: it relates the extremal bound for an arbitrary subgraph <math>H</math> to a very natural parameter of <math>H</math>, its chromatic number.
We still want to give an algorithm in the data stream model: the algorithm scans the input sequence <math>x_1,x_2,\ldots,x_n</math> to construct a succinct data structure, such that upon each query of <math>x\in\Omega</math>, the algorithm returns an estimation of the frequency <math>f_x</math>.


Recall that <math>\chi(G)</math> is the '''chromatic number''' of <math>G</math>, the smallest number of colors that one can use to color the vertices so that no adjacent vertices have the same color.
Clearly this problem can always be solved by storing all distinct elements that have appeared, along with their frequencies. However, the space cost of this straightforward solution is rather high. Instead, we want to use a lossy representation (a ''sketch'') of the input data which uses significantly less space but can still answer queries with tolerable accuracy.


{{Theorem|Corollary|
Formally, upon each query of <math>x\in\Omega</math>, the algorithm should return an answer <math>\hat{f}_x</math> satisfying:
:For every nonempty graph <math>H</math>,
:<math>\Pr\left[\,\left|\hat{f}_x-f_x\right|\le \epsilon n\,\right]\ge 1-\delta</math>.
::<math>\lim_{n\rightarrow\infty}\frac{\mathrm{ex}(n,H)}{{n\choose 2}}=\frac{\chi(H)-2}{\chi(H)-1}</math>.
Note that this notion of approximation is with bounded ''additive'' error which is weaker than the notion of <math>(\epsilon,\delta)</math>-estimator, whose error bound is ''multiplicative''.  
}}
{{Prooftitle|Proof of corollary|
Let <math>r=\chi(H)</math>.  


Note that <math>T(n,r-1)</math> can be colored with <math>r-1</math> colors, one color for each part. Thus, <math>H\not\subseteq T(n,r-1)</math>, since otherwise <math>H</math> could also be colored with <math>r-1</math> colors, contradicting that <math>\chi(H)=r</math>. By definition, <math>\mathrm{ex}(n,H)</math> is the maximum number of edges that an <math>n</math>-vertex graph <math>G\not\supseteq H</math> can have. Thus,
With such a weak accuracy guarantee, it is possible to give a succinct data structure whose size is determined only by the error bounds <math>\epsilon</math> and <math>\delta</math> but independent of <math>n</math>, because only the frequencies of those '''heavy hitters''' (elements <math>x</math> with high frequencies <math>f_x>\epsilon n</math>) need to be memorized, and there are at most <math>1/\epsilon</math> many such heavy hitters.
:<math>|T(n,r-1)|\le\mathrm{ex}(n,H)</math>.
It is not hard to see that
:<math>|T(n,r-1)|\ge {r-1\choose 2}\left\lfloor\frac{n}{r-1}\right\rfloor^2\ge{r-1\choose 2}\left(\frac{n}{r-1}-1\right)^2=\left(\frac{r-2}{2(r-1)}-o(1)\right)n^2</math>.


On the other hand, any finite graph <math>H</math> with chromatic number <math>r</math> has that <math>H\subseteq K_s^r</math> for all sufficiently large <math>s</math>. We just connect all pairs of vertices from different color classes. Thus,
== Count-min sketch==
:<math>\mathrm{ex}(n,H)\le\mathrm{ex}(n,K_s^r)</math>.
{{Theorem|Count-min sketch|
Due to the Erdős–Stone theorem,
:Suppose <math>h_1,h_2,\ldots,h_k:\Omega\to[m]</math> are uniform and independent random hash functions.
:<math>\mathrm{ex}(n,K_s^r)=\left(\frac{r-2}{2(r-1)}+o(1)\right)n^2</math>.
-----
Altogether, we have
:'''Data structure construction:''' Given a sequence <math>x_1,x_2,\ldots,x_n\in\Omega</math>, the data structure is a two-dimensional <math>k\times m</math> integer array <math>CMS[k][m]</math> constructed as
:<math>
:*initialize all entries of <math>CMS[k][m]</math> to 0.
\frac{r-2}{r-1}-o(1)\le\frac{|T(n,r-1)|}{{n\choose 2}}\le \frac{\mathrm{ex}(n,H)}{{n\choose 2}} \le \frac{\mathrm{ex}(n,K_s^r)}{{n\choose 2}}=\frac{r-2}{r-1}+o(1)
:*for <math>i=1,2,\ldots,n</math>, upon receiving <math>x_i</math>:
</math>
::: for every <math>1\le j\le k</math>, evaluate <math>h_j(x_i)</math> and <math>CMS[j][h_j(x_i)]++</math>.
The theorem follows.
----
:'''Query resolution:''' Upon each query of an arbitrary <math>x\in\Omega</math>,
:* return <math>\hat{f}=\min_{1\le j\le k}CMS[j][h_j(x)]</math>.
}}
}}
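A minimal Python sketch of the count-min sketch follows. Again this is only an illustration: the hash functions are simulated by salted SHA-256 digests, standing in for the uniform and independent hash functions assumed in the pseudocode, and the table sizes <math>k</math> and <math>m</math> are chosen arbitrarily for the example.
<pre>
import hashlib

class CountMinSketch:
    """k x m table of counters updated through k salted hash functions."""
    def __init__(self, k, m):
        self.k, self.m = k, m
        self.table = [[0] * m for _ in range(k)]

    def _h(self, j, x):
        digest = hashlib.sha256(f"{j}:{x}".encode()).digest()
        return int.from_bytes(digest[:8], "big") % self.m

    def update(self, x):
        # Process one stream element: increment one counter in every row.
        for j in range(self.k):
            self.table[j][self._h(j, x)] += 1

    def estimate(self, x):
        # Return the minimum counter over the k rows.
        return min(self.table[j][self._h(j, x)] for j in range(self.k))

cms = CountMinSketch(k=5, m=2000)
for i in range(100000):
    cms.update(i % 100)          # every element 0..99 appears 1000 times
print(cms.estimate(7))           # at least 1000, and typically close to it
</pre>
Since counters only increase and every occurrence of <math>x</math> increments the counter <math>CMS[j][h_j(x)]</math> in every row, each of the <math>k</math> counters probed for <math>x</math> is at least <math>f_x</math>, so the returned estimate never underestimates the true frequency; the error is the additive overcount caused by other elements hashed into the same cells.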
== References ==
* van Lint and Wilson. ''A course in combinatorics.'' Cambridge Press. Chapter 4.
* Aigner and Ziegler. ''Proofs from THE BOOK, 4th Edition.'' Springer-Verlag. [[media:PFTB_chap36.pdf| Chapter 36]].
* Diestel. ''Graph Theory, 3rd Edition''. Springer-Verlag 2000. [[media:Diestel2ed_chap7.pdf|Chapter 7]].

Revision as of 09:01, 19 September 2018

Distinct Elements

Consider the following problem of counting distinct elements: Suppose that [math]\displaystyle{ \Omega }[/math] is a sufficiently large universe.

  • Input: a sequence of (not necessarily distinct) elements [math]\displaystyle{ x_1,x_2,\ldots,x_n\in\Omega }[/math];
  • Output: an estimation of the total number of distinct elements [math]\displaystyle{ z=|\{x_1,x_2,\ldots,x_n\}| }[/math].

A straightforward way of solving this problem is to maintain a dictionary data structure, which costs at least linear ([math]\displaystyle{ O(n) }[/math]) space. For big data, where [math]\displaystyle{ n }[/math] is very large, this is still too expensive. However, due to an information-theoretical argument, linear space is necessary if you want to compute the exact value of [math]\displaystyle{ z }[/math].

Our goal is to relax the problem a little bit to significantly reduce the space cost by tolerating approximate answers. The form of approximation we consider is [math]\displaystyle{ (\epsilon,\delta) }[/math]-estimator.

[math]\displaystyle{ (\epsilon,\delta) }[/math]-estimator
A random variable [math]\displaystyle{ \widehat{Z} }[/math] is an [math]\displaystyle{ (\epsilon,\delta) }[/math]-estimator of a quantity [math]\displaystyle{ z }[/math] if
[math]\displaystyle{ \Pr[\,(1-\epsilon)z\le \widehat{Z}\le (1+\epsilon)z\,]\ge 1-\delta }[/math].
[math]\displaystyle{ \widehat{Z} }[/math] is said to be an unbiased estimator of [math]\displaystyle{ z }[/math] if [math]\displaystyle{ \mathbb{E}[\widehat{Z}]=z }[/math].

Usually [math]\displaystyle{ \epsilon }[/math] is called approximation error and [math]\displaystyle{ \delta }[/math] is called confidence error.

We now present an elegant algorithm introduced by Flajolet and Martin in 1984. The algorithm can be implemented in data stream model: The input elements [math]\displaystyle{ x_1,x_2,\ldots,x_n }[/math] is presented to the algorithm one at a time, where the size of data [math]\displaystyle{ n }[/math] is unknown to the algorithm. The algorithm maintains a value [math]\displaystyle{ \widehat{Z} }[/math] which is an [math]\displaystyle{ (\epsilon,\delta) }[/math]-estimator of the total number of distinct elements [math]\displaystyle{ z=|\{x_1,x_2,\ldots,x_n\}| }[/math], using only a small amount of memory space to memorize (with loss) the data set [math]\displaystyle{ \{x_1,x_2,\ldots,x_n\} }[/math].

A famous quotation of Flajolet describes the performance of this algorithm as:

"Using only memory equivalent to 5 lines of printed text, you can estimate with a typical accuracy of 5% and in a single pass the total vocabulary of Shakespeare."

An estimator by hashing

Suppose that we can access to an idealized random hash function [math]\displaystyle{ h:\Omega\to[0,1] }[/math] which is uniformly distributed over all mappings from the universe [math]\displaystyle{ \Omega }[/math] to unit interval [math]\displaystyle{ [0,1] }[/math].

Recall that the input sequence [math]\displaystyle{ x_1,x_2,\ldots,x_n\in\Omega }[/math] consists of [math]\displaystyle{ z=|\{x_1,x_2,\ldots,x_n\}| }[/math] distinct elements. These elements are mapped by the random function [math]\displaystyle{ h }[/math] to [math]\displaystyle{ z }[/math] hash values uniformly and independently distributed in [math]\displaystyle{ [0,1] }[/math]. We could maintain these hash values instead of the original elements, but this would still be too expensive because in the worst case we still have up to [math]\displaystyle{ n }[/math] distinct values to maintain. However, due to the idealized random hash function, the unit interval [math]\displaystyle{ [0,1] }[/math] will be partitioned into [math]\displaystyle{ z+1 }[/math] subintervals by these [math]\displaystyle{ z }[/math] uniform and independent hash values. The typical length of the subinterval gives an estimation of the number [math]\displaystyle{ z }[/math].

Proposition
[math]\displaystyle{ \mathbb{E}\left[\min_{1\le i\le n}h(x_i)\right]=\frac{1}{z+1} }[/math].
Proof.

The input sequence [math]\displaystyle{ x_1,x_2,\ldots,x_n\in\Omega }[/math] consisting of [math]\displaystyle{ z }[/math] distinct elements are mapped to [math]\displaystyle{ z }[/math] random hash values uniformly and independently distributed in [math]\displaystyle{ [0,1] }[/math]. These [math]\displaystyle{ z }[/math] hash values partition the unit interval [math]\displaystyle{ [0,1] }[/math] into [math]\displaystyle{ z+1 }[/math] subintervals [math]\displaystyle{ [0,v_1],[v_1,v_2],[v_2,v_3]\ldots,[v_{z-1},v_z],[v_z,1] }[/math], where [math]\displaystyle{ v_i }[/math] denotes the [math]\displaystyle{ i }[/math]-th smallest value among all hash values [math]\displaystyle{ \{h(x_1),h(x_2),\ldots,h(x_n)\} }[/math]. Clearly we have

[math]\displaystyle{ v_1=\min_{1\le i\le n}h(x_i) }[/math].

Meanwhile, since all hash values are uniformly and independently distributed in [math]\displaystyle{ [0,1] }[/math], the lengths of all subintervals [math]\displaystyle{ v_1, v_2-v_1, v_3-v_2,\ldots, v_z-v_{z-1}, 1-v_z }[/math] are identically distributed. By symmetry, they have the same expectation, therefore

[math]\displaystyle{ (z+1)\mathbb{E}[v_1]= \mathbb{E}[v_1]+\sum_{i=1}^{z-1}\mathbb{E}[v_{i+1}-v_i]+\mathbb{E}[1-v_z] =\mathbb{E}\left[v_1+(v_2-v_1)+(v_3-v_2)+\cdots+(v_{z}-v_{z-1})+1-v_z\right] =1, }[/math]

which implies that

[math]\displaystyle{ \mathbb{E}\left[\min_{1\le i\le n}h(x_i)\right]=\mathbb{E}[v_1]=\frac{1}{z+1} }[/math].
[math]\displaystyle{ \square }[/math]

The quantity [math]\displaystyle{ \min_{1\le i\le n}h(x_i) }[/math] can be computed with small space cost (for storing the current smallest hash value) by scan the input sequence in a single pass. Because as we proved its expectation is [math]\displaystyle{ \frac{1}{z+1} }[/math], the smallest hash value [math]\displaystyle{ Y=\min_{1\le i\le n}h(x_i) }[/math] gives an unbiased estimator for [math]\displaystyle{ \frac{1}{z+1} }[/math]. However, [math]\displaystyle{ \frac{1}{Y}-1 }[/math] is not necessarily a good estimator for [math]\displaystyle{ z }[/math]. Actually, it is a rather poor estimator. Consider for example when [math]\displaystyle{ z=1 }[/math], all input elements are the same. In this case, there is only one hash value and [math]\displaystyle{ Y=\min_{1\le i\le n}h(x_i) }[/math] is distributed uniformly over [math]\displaystyle{ [0,1] }[/math], thus [math]\displaystyle{ \frac{1}{Y}-1 }[/math] fails to be close enough to the correct answer 1 with high probability.

Flajolet-Martin algorithm

The reason that the above estimator of a single hash function performs poorly is that the unbiased estimator [math]\displaystyle{ \min_{1\le i\le n}h(x_i) }[/math] has large variance. So a natural way to reduce this variance is to have multiple independent hash functions and take the average. This is precisely what Flajolet-Martin algorithm does.

Suppose that we can access to [math]\displaystyle{ k }[/math] independent random hash functions [math]\displaystyle{ h_1,h_2,\ldots,h_k }[/math], where each [math]\displaystyle{ h_j:\Omega\to[0,1] }[/math] is uniformly and independently distributed over all functions mapping [math]\displaystyle{ \Omega }[/math] to [math]\displaystyle{ [0,1] }[/math]. Here [math]\displaystyle{ k }[/math] is a parameter to be fixed by the desired approximation error [math]\displaystyle{ \epsilon }[/math] and confidence error [math]\displaystyle{ \delta }[/math]. The Flajolet-Martin algorithm is given by the following pseudocode.

Flajolet-Martin algorithm (Flajolet and Martin 1984)
Suppose that [math]\displaystyle{ h_1,h_2,\ldots,h_k:\Omega\to[0,1] }[/math] are [math]\displaystyle{ k }[/math] uniform and independent random hash functions, where [math]\displaystyle{ k }[/math] is a parameter to be fixed later.

Scan the input sequence [math]\displaystyle{ x_1,x_2,\ldots,x_n\in\Omega }[/math] in a single pass to compute:
  • [math]\displaystyle{ Y_j=\min_{1\le i\le n}h_j(x_i) }[/math] for every [math]\displaystyle{ j=1,2,\ldots,k }[/math];
  • average value [math]\displaystyle{ \overline{Y}=\frac{1}{k}\sum_{j=1}^kY_j }[/math];
return [math]\displaystyle{ \widehat{Z}=\frac{1}{\overline{Y}}-1 }[/math] as the estimator.

The algorithm is easy to implement in data stream model, with a space cost of storing [math]\displaystyle{ k }[/math] hash values. The following theorem guarantees that the algorithm returns an [math]\displaystyle{ (\epsilon,\delta) }[/math]-estimator of the total number of distinct elements for a suitable [math]\displaystyle{ k=O\left(\frac{1}{\epsilon^2\delta}\right) }[/math].

Theorem
For any [math]\displaystyle{ \epsilon,\delta\lt 1/2 }[/math], if [math]\displaystyle{ k\ge\left\lceil\frac{4}{\epsilon^2\delta}\right\rceil }[/math] then the output [math]\displaystyle{ \widehat{Z} }[/math] always gives an [math]\displaystyle{ (\epsilon,\delta) }[/math]-estimator of the correct answer [math]\displaystyle{ z }[/math].

In the following we prove this main theorem.

An obstacle to analyze the estimator [math]\displaystyle{ \widehat{Z}=\frac{1}{\overline{Y}}-1 }[/math] is that it is a nonlinear function of [math]\displaystyle{ \overline{Y} }[/math] who is easier to analyze. Nevertheless, we observe that [math]\displaystyle{ \widehat{Z} }[/math] is an [math]\displaystyle{ (\epsilon,\delta) }[/math]-estimator of [math]\displaystyle{ z }[/math] as long as [math]\displaystyle{ \overline{Y} }[/math] is an [math]\displaystyle{ (\epsilon/2,\delta) }[/math]-estimator of [math]\displaystyle{ \frac{1}{z+1} }[/math]. This can be deduced by just verifying the following:

[math]\displaystyle{ \frac{1-\epsilon/2}{z+1}\le \overline{Y}\le \frac{1+\epsilon/2}{z+1} \implies (1-\epsilon)z\le\frac{1}{\overline{Y}}-1\le (1+\epsilon)z }[/math],

for [math]\displaystyle{ \epsilon\lt \frac{1}{2} }[/math]. Therefore,

[math]\displaystyle{ \Pr\left[\,(1-\epsilon)z\le \widehat{Z} \le (1+\epsilon)z\,\right]\ge \Pr\left[\,\frac{1-\epsilon/2}{z+1}\le \overline{Y}\le \frac{1+\epsilon/2}{z+1}\,\right] =\Pr\left[\,\left|\overline{Y}-\frac{1}{z+1}\right|\le \frac{\epsilon/2}{z+1}\,\right] }[/math].

It is then sufficient to show that [math]\displaystyle{ \Pr\left[\,\left|\overline{Y}-\frac{1}{z+1}\right|\le \frac{\epsilon/2}{z+1}\,\right]\ge 1-\delta }[/math] for proving the main theorem above. We will see that this is equivalent to show the concentration inequality

[math]\displaystyle{ \Pr\left[\,\left|\overline{Y}-\mathbb{E}\left[\overline{Y}\right]\right|\le \frac{\epsilon/2}{z+1}\,\right]\ge 1-\delta\quad\qquad({\color{red}*}) }[/math].
Lemma
The followings hold for each [math]\displaystyle{ Y_j }[/math], [math]\displaystyle{ j=1,2\ldots,k }[/math], and [math]\displaystyle{ \overline{Y}=\frac{1}{k}\sum_{j=1}^kY_j }[/math]:
  • [math]\displaystyle{ \mathbb{E}\left[\overline{Y}\right]=\mathbb{E}\left[Y_j\right]=\frac{1}{z+1} }[/math];
  • [math]\displaystyle{ \mathbf{Var}\left[Y_j\right]\le\frac{1}{(z+1)^2} }[/math], and consequently [math]\displaystyle{ \mathbf{Var}\left[\overline{Y}\right]\le\frac{1}{k(z+1)^2} }[/math].
Proof.

As in the case of single hash function, by symmetry it holds that [math]\displaystyle{ \mathbb{E}[Y_j]=\frac{1}{z+1} }[/math] for every [math]\displaystyle{ j=1,2,\ldots,k }[/math]. Therefore,

[math]\displaystyle{ \mathbb{E}\left[\overline{Y}\right]=\frac{1}{k}\sum_{j=1}^k\mathbb{E}[Y_j]=\frac{1}{z+1} }[/math].

Recall that each [math]\displaystyle{ Y_j }[/math] is the minimum of [math]\displaystyle{ z }[/math] random hash values uniformly and independently distributed over [math]\displaystyle{ [0,1] }[/math]. By geometry probability, it holds that for any [math]\displaystyle{ y\in[0,1] }[/math],

[math]\displaystyle{ \Pr[Y_j\gt y]=(1-y)^z }[/math],

which means [math]\displaystyle{ \Pr[Y_j\le y]=1-(1-y)^z }[/math]. Taking the derivative with respect to [math]\displaystyle{ y }[/math], we obtain the probability density function of random variable [math]\displaystyle{ Y_j }[/math], which is [math]\displaystyle{ z(1-y)^{z-1} }[/math].

We then compute the second moment.

[math]\displaystyle{ \mathbb{E}[Y_j^2]=\int^{1}_0y^2z(1-y)^{z-1}\,\mathrm{d}y=\frac{2}{(z+1)(z+2)} }[/math].

The variance is bounded as

[math]\displaystyle{ \mathbf{Var}\left[Y_j\right]=\mathbb{E}\left[Y_j^2\right]-\mathbb{E}\left[Y_j\right]^2=\frac{2}{(z+1)(z+2)}-\frac{1}{(z+1)^2}\le\frac{1}{(z+1)^2} }[/math].

Due to the (pairwise) independence between [math]\displaystyle{ Y_j }[/math]'s,

[math]\displaystyle{ \mathbf{Var}\left[\overline{Y}\right]=\mathbf{Var}\left[\frac{1}{k}\sum_{j=1}^kY_j\right]=\frac{1}{k^2}\sum_{j=1}^k\mathbf{Var}\left[Y_j\right]\le \frac{1}{k(z+1)^2} }[/math].
[math]\displaystyle{ \square }[/math]

We resume to prove the inequality [math]\displaystyle{ ({\color{red}*}) }[/math]. By Chebyshev's inequality, it holds that

[math]\displaystyle{ \Pr\left[\,\left|\overline{Y}-\mathbb{E}\left[\overline{Y}\right]\right|\gt \frac{\epsilon/2}{z+1}\,\right] \le\frac{4}{\epsilon^2}(z+1)^2\mathbf{Var}\left[\overline{Y}\right] \le\frac{4}{\epsilon^2k} }[/math].

When [math]\displaystyle{ k\ge\left\lceil\frac{4}{\epsilon^2\delta}\right\rceil }[/math], this probability is at most [math]\displaystyle{ \delta }[/math]. The inequality [math]\displaystyle{ ({\color{red}*}) }[/math] is proved. As we discussed above, this proves the main theorem.

Uniform Hash Assumption (UHA)

In above we assume we can access to idealized random hash functions [math]\displaystyle{ h:\Omega\to[0,1] }[/math] with real values. With a more careful calculation, one can show the same performance guarantee for hash functions with discrete values as [math]\displaystyle{ h:\Omega\to[M] }[/math] where [math]\displaystyle{ M=\mathrm{poly}(n) }[/math], that is, the hash values are strings of [math]\displaystyle{ O(\log n) }[/math] bits.

Even with such improved analysis, a uniform random discrete function in form of [math]\displaystyle{ h:[N]\to[M] }[/math] is not really efficient to store or to compute. By an information-theretical argument, it takes at least [math]\displaystyle{ \Omega(N\log M) }[/math] bits to represent such a random hash function because this is the entropy of such uniform random function.

For the convenience of analysis, it is common to assume the following Uniform Hash Assumption (UHA) also known as Simple Uniform Hash Assumption (SUHA).

Uniform Hash Assumption (UHA)
A uniform random function [math]\displaystyle{ h:[N]\rightarrow[M] }[/math] is available and the computation of [math]\displaystyle{ h }[/math] is efficient.

Set Membership

A basic question in Computer Science is:

"[math]\displaystyle{ \mbox{Is }x\in S? }[/math]"

for a set [math]\displaystyle{ S }[/math] and an element [math]\displaystyle{ x }[/math]. This is the set membership problem.

Formally, given an arbitrary set [math]\displaystyle{ S }[/math] of [math]\displaystyle{ n }[/math] elements from a universe [math]\displaystyle{ \Omega }[/math], we want to use a succinct data structure to represent this set [math]\displaystyle{ S }[/math], so that upon each query of any element [math]\displaystyle{ x }[/math] from the universe [math]\displaystyle{ [N] }[/math], the question of whether [math]\displaystyle{ x\in S }[/math] is efficiently answered. The complexity of such data structure is measured in two-fold:

  • space cost: size of the data structure to represent a set [math]\displaystyle{ S }[/math] of size [math]\displaystyle{ n }[/math];
  • time cost: time complexity of answering each query by accessing to the data structure.

Suppose that the universe [math]\displaystyle{ \Omega }[/math] is of size [math]\displaystyle{ N }[/math]. Clearly, the membership problem can be solved by a dictionary data structure, e.g.:

  • sorted table / balanced search tree: with space cost [math]\displaystyle{ O(n\log N) }[/math] bits and time cost [math]\displaystyle{ O(\log n) }[/math];
  • perfect hashing of Fredman, Komlós & Szemerédi: with space cost [math]\displaystyle{ O(n\log N) }[/math] bits and time cost [math]\displaystyle{ O(1) }[/math].

Note that [math]\displaystyle{ \log{N\choose n}=\Theta\left(n\log \frac{N}{n}\right) }[/math] is the entropy of sets [math]\displaystyle{ S }[/math] of [math]\displaystyle{ n }[/math] elements from a universe [math]\displaystyle{ \Omega }[/math] of size [math]\displaystyle{ N }[/math]. Therefore it is necessary to use so many bits to represent a set without losing any information. Nevertheless, we can do better than this if we use a loss representation of the input set [math]\displaystyle{ S }[/math] and tolerate a bounded error in answering queries. Such lossy representation of data is sometimes called a sketch.

Bloom filter

The Bloom filter is a space-efficient hash table that solves the approximate membership problem with one-sided error (false positive).

Given a set [math]\displaystyle{ S }[/math] of [math]\displaystyle{ n }[/math] elements from a universe [math]\displaystyle{ \Omega }[/math], a Bloom filter consists of an array [math]\displaystyle{ A }[/math] of [math]\displaystyle{ cn }[/math] bits and [math]\displaystyle{ k }[/math] hash functions [math]\displaystyle{ h_1,h_2,\ldots,h_k }[/math] mapping [math]\displaystyle{ \Omega }[/math] to [math]\displaystyle{ [cn] }[/math], where both [math]\displaystyle{ c }[/math] and [math]\displaystyle{ k }[/math] are parameters to be optimized later.

As before, we assume the Uniform Hash Assumption (UHA): [math]\displaystyle{ h_1,h_2,\ldots,h_k }[/math] are mutually independent hash functions, where each [math]\displaystyle{ h_i }[/math] is a uniform random hash function [math]\displaystyle{ h_i:\Omega\to[cn] }[/math].

The Bloom filter works as follows:

Bloom filter
Suppose [math]\displaystyle{ h_1,h_2,\ldots,h_k:\Omega\to[cn] }[/math] are uniform and independent random hash functions.

Data structure construction: Given a set [math]\displaystyle{ S\subset\Omega }[/math] of size [math]\displaystyle{ n=|S| }[/math], the data structure is a Boolean array [math]\displaystyle{ A }[/math] of [math]\displaystyle{ cn }[/math] bits constructed as
  • initialize all [math]\displaystyle{ cn }[/math] bits of the Boolean array [math]\displaystyle{ A }[/math] to 0;
  • for each [math]\displaystyle{ x\in S }[/math], let [math]\displaystyle{ A[h_i(x)]=1 }[/math] for all [math]\displaystyle{ 1\le i\le k }[/math].

Query resolution: Upon each query of an arbitrary [math]\displaystyle{ x\in\Omega }[/math],
  • answer "yes" if [math]\displaystyle{ A[h_i(x)]=1 }[/math] for all [math]\displaystyle{ 1\le i\le k }[/math] and "no" if otherwise.

The Boolean array is our data structure, whose size is [math]\displaystyle{ cn }[/math] bits. With Uniform Hash Assumption (UHA), the time cost of the data structure for answering each query is [math]\displaystyle{ O(k) }[/math].
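
Under the UHA, the construction and query procedures above translate directly into code. The following is a minimal Python sketch (the class BloomFilter and its default parameters are illustrative, not a reference implementation), reusing the LazyUniformHash sketch from the UHA section to simulate the [math]\displaystyle{ k }[/math] independent hash functions.

 import math
 
 class BloomFilter:
     """A minimal sketch of the Bloom filter under the UHA (illustrative only)."""
     def __init__(self, n, c=10, k=None):
         self.m = c * n                                            # the array A has cn bits
         self.k = k if k is not None else round(c * math.log(2))   # k = c ln 2 by default
         self.A = [0] * self.m                                     # initialize all bits to 0
         # simulate k mutually independent uniform hash functions h_i : Omega -> [cn]
         self.hashes = [LazyUniformHash(self.m) for _ in range(self.k)]
 
     def insert(self, x):
         # for each x in S, set A[h_i(x)] = 1 for all 1 <= i <= k
         for h in self.hashes:
             self.A[h(x)] = 1
 
     def query(self, x):
         # answer "yes" iff A[h_i(x)] = 1 for all 1 <= i <= k
         return all(self.A[h(x)] == 1 for h in self.hashes)
 
 # usage: represent S = {"alice", "bob", "carol"} and query membership
 bf = BloomFilter(n=3)
 for x in ["alice", "bob", "carol"]:
     bf.insert(x)
 print(bf.query("alice"), bf.query("dave"))  # True, and False except for a rare false positive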

When the answer returned by the algorithm is "no", it holds that [math]\displaystyle{ A[h_i(x)]=0 }[/math] for some [math]\displaystyle{ 1\le i\le k }[/math], in which case the query [math]\displaystyle{ x }[/math] must not belong to the set [math]\displaystyle{ S }[/math]. Thus, the Bloom filter has no false negatives.

On the other hand, when the answer returned by the algorithm is "yes", [math]\displaystyle{ A[h_i(x)]=1 }[/math] for all [math]\displaystyle{ 1\le i\le k }[/math]. It is still possible for some [math]\displaystyle{ x\not\in S }[/math] that all bits [math]\displaystyle{ A[h_i(x)] }[/math] are set by elements in [math]\displaystyle{ S }[/math]. We want to bound the probability of such a false positive, that is, the following probability for an [math]\displaystyle{ x\not\in S }[/math]:

[math]\displaystyle{ \Pr[\,\forall 1\le i\le k, A[h_i(x)]=1\,] }[/math],

which by independence between different hash functions and by symmetry is equal to:

[math]\displaystyle{ \Pr[\, A[h_1(x)]=1\,]^k=(1-\Pr[\, A[h_1(x)]=0\,])^k }[/math].

For an element [math]\displaystyle{ x\not\in S }[/math], its hash value [math]\displaystyle{ h_1(x) }[/math] is independent of all hash values [math]\displaystyle{ h_i(y) }[/math] for all [math]\displaystyle{ 1\le i\le k }[/math] and all [math]\displaystyle{ y\in S }[/math]. This is due to the Uniform Hash Assumption. The hash value [math]\displaystyle{ h_1(x) }[/math] of [math]\displaystyle{ x\not\in S }[/math] is then independent of the content of the array [math]\displaystyle{ A }[/math]. Therefore, the probability that the position [math]\displaystyle{ A[h_1(x)] }[/math] is missed by all [math]\displaystyle{ kn }[/math] updates to the Boolean array [math]\displaystyle{ A }[/math] caused by the [math]\displaystyle{ n }[/math] elements in [math]\displaystyle{ S }[/math] is:

[math]\displaystyle{ \Pr[\, A[h_1(x)]=0\,]=\left(1-\frac{1}{cn}\right)^{kn}\approx e^{-k/c}. }[/math]

Putting everything together, for any [math]\displaystyle{ x\not\in S }[/math], the false positive is bounded as:

[math]\displaystyle{ \begin{align} \Pr[\,\text{wrongly answer ''yes''}\,] &=\Pr[\,\forall 1\le i\le k, A[h_i(x)]=1\,]\\ &=\Pr[\, A[h_1(x)]=1\,]^k=(1-\Pr[\, A[h_1(x)]=0\,])^k\\ &=\left(1-\left(1-\frac{1}{cn}\right)^{kn}\right)^k\\ &\approx \left(1- e^{-k/c}\right)^k \end{align} }[/math]

which is minimized to [math]\displaystyle{ (0.6185)^c }[/math] when [math]\displaystyle{ k=c\ln 2 }[/math].
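
As a quick numeric sanity check (a hypothetical example, with parameters chosen only for illustration): with [math]\displaystyle{ c=10 }[/math] bits per element and the optimal [math]\displaystyle{ k=c\ln 2\approx 7 }[/math] hash functions, the false positive probability is below one percent.

 import math
 
 c = 10                            # bits per element
 k = round(c * math.log(2))        # optimal choice k = c ln 2, here k = 7
 fp = (1 - math.exp(-k / c))**k    # false positive probability (1 - e^{-k/c})^k
 print(k, fp)                      # 7  ~0.0082, consistent with (0.6185)^10 ~ 0.0082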

The Bloom filter thus solves the membership query with a small constant false positive error, using a data structure of [math]\displaystyle{ O(n) }[/math] bits which answers each query with [math]\displaystyle{ O(1) }[/math] time cost.

Frequency Estimation

Suppose that [math]\displaystyle{ \Omega }[/math] is the data universe. The frequency estimation problem is defined as follows.

  • Data: a sequence of (not necessarily distinct) elements [math]\displaystyle{ x_1,x_2,\ldots,x_n\in\Omega }[/math];
  • Query: an element [math]\displaystyle{ x\in\Omega }[/math];
  • Output: an estimation [math]\displaystyle{ \hat{f}_x }[/math] of the frequency [math]\displaystyle{ f_x\triangleq|\{i\mid x_i=x\}| }[/math] of [math]\displaystyle{ x }[/math] in input data.

We still want to give an algorithm in the data stream model: the algorithm scans the input sequence [math]\displaystyle{ x_1,x_2,\ldots,x_n }[/math] to construct a succinct data structure, such that upon each query of [math]\displaystyle{ x\in\Omega }[/math], the algorithm returns an estimation of the frequency [math]\displaystyle{ f_x }[/math].

Clearly this problem can always be solved by storing all distinct elements that have appeared along with their frequencies. However, the space cost of this straightforward solution is rather high. Instead, we want to use a lossy representation (a sketch) of the input data which uses significantly less space but can still answer queries with tolerable accuracy.

Formally, upon each query of [math]\displaystyle{ x\in\Omega }[/math], the algorithm should return an answer [math]\displaystyle{ \hat{f}_x }[/math] satisfying:

[math]\displaystyle{ \Pr\left[\,\left|\hat{f}_x-f_x\right|\le \epsilon n\,\right]\ge 1-\delta }[/math].

Note that this notion of approximation is with bounded additive error which is weaker than the notion of [math]\displaystyle{ (\epsilon,\delta) }[/math]-estimator, whose error bound is multiplicative.

With such a weak accuracy guarantee, it is possible to give a succinct data structure whose size is determined only by the error bounds [math]\displaystyle{ \epsilon }[/math] and [math]\displaystyle{ \delta }[/math] and is independent of [math]\displaystyle{ n }[/math], because only the frequencies of the heavy hitters (elements [math]\displaystyle{ x }[/math] with high frequencies [math]\displaystyle{ f_x\gt \epsilon n }[/math]) need to be memorized, and there are at most [math]\displaystyle{ 1/\epsilon }[/math] such heavy hitters.

Count-min sketch

Count-min sketch
Suppose [math]\displaystyle{ h_1,h_2,\ldots,h_k:\Omega\to[m] }[/math] are uniform and independent random hash functions.

Data structure construction: Given a sequence [math]\displaystyle{ x_1,x_2,\ldots,x_n\in\Omega }[/math], the data structure is a two-dimensional [math]\displaystyle{ k\times m }[/math] integer array [math]\displaystyle{ CMS[k][m] }[/math] constructed as
  • initialize all entries of [math]\displaystyle{ CMS[k][m] }[/math] to 0.
  • for [math]\displaystyle{ i=1,2,\ldots,n }[/math], upon receiving [math]\displaystyle{ x_i }[/math]:
for every [math]\displaystyle{ 1\le j\le k }[/math], evaluate [math]\displaystyle{ h_j(x_i) }[/math] and [math]\displaystyle{ CMS[j][h_j(x_i)]++ }[/math].

Query resolution: Upon each query of an arbitrary [math]\displaystyle{ x\in\Omega }[/math],
  • return [math]\displaystyle{ \hat{f}=\min_{1\le j\le k}CMS[j][h_j(x)] }[/math].
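
The construction and query procedures above can be sketched in Python as follows (again an illustrative sketch under the UHA, reusing the LazyUniformHash class from the UHA section; how the parameters [math]\displaystyle{ m }[/math] and [math]\displaystyle{ k }[/math] should be chosen in terms of [math]\displaystyle{ \epsilon }[/math] and [math]\displaystyle{ \delta }[/math] is determined by the analysis of the additive error, with typical choices of order [math]\displaystyle{ m=O(1/\epsilon) }[/math] and [math]\displaystyle{ k=O(\log(1/\delta)) }[/math]).

 class CountMinSketch:
     """A minimal sketch of the count-min sketch under the UHA (illustrative only)."""
     def __init__(self, k, m):
         self.k, self.m = k, m
         self.CMS = [[0] * m for _ in range(k)]            # k x m integer array, all zeros
         # simulate k mutually independent uniform hash functions h_j : Omega -> [m]
         self.hashes = [LazyUniformHash(m) for _ in range(k)]
 
     def update(self, x):
         # upon receiving x_i, increment CMS[j][h_j(x_i)] for every 1 <= j <= k
         for j, h in enumerate(self.hashes):
             self.CMS[j][h(x)] += 1
 
     def query(self, x):
         # return the minimum counter min_{1 <= j <= k} CMS[j][h_j(x)]
         return min(self.CMS[j][h(x)] for j, h in enumerate(self.hashes))
 
 # usage: feed a stream and estimate frequencies
 cms = CountMinSketch(k=5, m=200)
 for x in ["a", "b", "a", "c", "a", "b"]:
     cms.update(x)
 print(cms.query("a"), cms.query("d"))  # 3, and most likely 0; the estimate never underestimates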