Advanced Algorithms (高级算法, Fall 2018) / Hashing and Sketching
Distinct Elements
Consider the following problem of counting distinct elements: Suppose that $U$ is a sufficiently large universe.
- Input: a sequence of (not necessarily distinct) elements $x_1, x_2, \ldots, x_n \in U$;
- Output: an estimation of the total number of distinct elements $z = |\{x_1, x_2, \ldots, x_n\}|$.
A straightforward way of solving this problem is to maintain a dictionary data structure, which costs at least linear ($\Omega(z)$) space. For big data, where $z$ is very large, this is still too expensive. However, due to an information-theoretical argument, linear space is necessary if you want to compute the exact value of $z$.
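The straightforward dictionary solution can be sketched as follows (a minimal Python illustration; the set `seen` plays the role of the dictionary and grows linearly with the number of distinct elements):

```python
def count_distinct_exact(stream):
    """Exact distinct count: stores every distinct element seen so far,
    so the space cost grows linearly with the number of distinct elements."""
    seen = set()
    for x in stream:
        seen.add(x)
    return len(seen)

print(count_distinct_exact([1, 2, 2, 3, 1]))  # → 3
```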
Our goal is to relax the problem a little bit to significantly reduce the space cost by tolerating approximate answers. The form of approximation we consider is the $(\epsilon,\delta)$-estimator.
$(\epsilon,\delta)$-estimator
- A random variable $\hat{Z}$ is an $(\epsilon,\delta)$-estimator of a quantity $z$ if
- $\Pr\left[(1-\epsilon)z \le \hat{Z} \le (1+\epsilon)z\right] \ge 1-\delta$.
- $\hat{Z}$ is said to be an unbiased estimator of $z$ if $\mathbb{E}[\hat{Z}] = z$.
Usually $\epsilon$ is called the approximation error and $\delta$ is called the confidence error.
We now present an elegant algorithm introduced by Flajolet and Martin in 1984. The algorithm can be implemented in the data stream model: the input elements $x_1, x_2, \ldots, x_n$ are presented to the algorithm one at a time, where the size of data $n$ is unknown to the algorithm. The algorithm maintains a value $\hat{Z}$ which is an $(\epsilon,\delta)$-estimator of the total number of distinct elements $z = |\{x_1, x_2, \ldots, x_n\}|$, using only a small amount of memory space to memorize (with loss) the data set $\{x_1, x_2, \ldots, x_n\}$.
A famous quotation of Flajolet describes the performance of this algorithm as:
"Using only memory equivalent to 5 lines of printed text, you can estimate with a typical accuracy of 5% and in a single pass the total vocabulary of Shakespeare."
An estimator by hashing
Suppose that we have access to an idealized random hash function $h: U \to [0,1]$ which is uniformly distributed over all mappings from the universe $U$ to the unit interval $[0,1]$.
Recall that the input sequence $x_1, x_2, \ldots, x_n \in U$ consists of $z$ distinct elements. These elements are mapped by the random function $h$ to $z$ hash values uniformly and independently distributed in $[0,1]$. We could maintain these hash values instead of the original elements, but this would still be too expensive because in the worst case we still have up to $n$ distinct values to maintain. However, due to the idealized random hash function, the unit interval $[0,1]$ will be partitioned into $z+1$ subintervals by these $z$ uniform and independent hash values. The typical length of a subinterval, $\frac{1}{z+1}$, gives an estimation of the number $z$.
Proposition - $\mathbb{E}\left[\min_{1\le i\le n} h(x_i)\right] = \frac{1}{z+1}$.
Proof. The input sequence $x_1, x_2, \ldots, x_n$ consisting of $z$ distinct elements is mapped to $z$ random hash values uniformly and independently distributed in $[0,1]$. These $z$ hash values partition the unit interval $[0,1]$ into $z+1$ subintervals $[0, v_1], [v_1, v_2], \ldots, [v_{z-1}, v_z], [v_z, 1]$, where $v_i$ denotes the $i$-th smallest value among all $z$ hash values. Clearly we have
- $v_1 = \min_{1\le i\le n} h(x_i)$.
Meanwhile, since all $z$ hash values are uniformly and independently distributed in $[0,1]$, the lengths of all $z+1$ subintervals are identically distributed. By symmetry, they have the same expectation, therefore
- $(z+1)\mathbb{E}[v_1] = \mathbb{E}[v_1] + \sum_{i=1}^{z-1}\mathbb{E}[v_{i+1}-v_i] + \mathbb{E}[1-v_z] = \mathbb{E}\left[v_1 + (v_2-v_1) + \cdots + (1-v_z)\right] = 1$,
which implies that
- $\mathbb{E}\left[\min_{1\le i\le n} h(x_i)\right] = \mathbb{E}[v_1] = \frac{1}{z+1}$.
The quantity $\min_{1\le i\le n} h(x_i)$ can be computed with small space cost (for storing the current smallest hash value) by scanning the input sequence in a single pass. Because, as we proved, its expectation is $\frac{1}{z+1}$, the smallest hash value $Y = \min_{1\le i\le n} h(x_i)$ gives an unbiased estimator for $\frac{1}{z+1}$. However, $\frac{1}{Y}-1$ is not necessarily a good estimator for $z$. Actually, it is a rather poor estimator. Consider for example the case when $z = 1$, that is, all input elements are the same. In this case, there is only one hash value and $Y$ is distributed uniformly over $[0,1]$, thus $\frac{1}{Y}-1$ fails to be close enough to the correct answer $1$ with high probability.
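This poor behavior is easy to observe experimentally. Below is a minimal sketch of the single-hash estimator, where the idealized hash $h$ is simulated by memoizing a fresh uniform value for each distinct element (an assumption of this illustration, not a real hash function):

```python
import random

def min_hash_estimate(stream):
    """Single-hash estimator: track Y = min h(x_i) and return 1/Y - 1.

    The idealized hash h: U -> [0,1] is simulated by memoizing a uniform
    random value for each distinct element."""
    h = {}
    y = 1.0  # current smallest hash value
    for x in stream:
        if x not in h:
            h[x] = random.random()
        y = min(y, h[x])
    return 1.0 / y - 1

random.seed(1)
# z = 1: Y is uniform on [0,1], so 1/Y - 1 is wildly spread around 1
print([round(min_hash_estimate(["a"] * 10), 2) for _ in range(5)])
```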
Flajolet-Martin algorithm
The reason that the above estimator with a single hash function performs poorly is that the unbiased estimator $Y$ has a large variance. So a natural way to reduce this variance is to have multiple independent hash functions and take the average. This is precisely what the Flajolet-Martin algorithm does.
Suppose that we have access to $k$ independent random hash functions $h_1, h_2, \ldots, h_k$, where each $h_j: U \to [0,1]$ is uniformly and independently distributed over all functions mapping $U$ to $[0,1]$. Here $k$ is a parameter to be fixed by the desired approximation error $\epsilon$ and confidence error $\delta$. The Flajolet-Martin algorithm is given by the following pseudocode.
Flajolet-Martin algorithm (Flajolet and Martin 1984) - Suppose that $h_1, h_2, \ldots, h_k: U \to [0,1]$ are uniform and independent random hash functions, where $k$ is a parameter to be fixed later.
- Scan the input sequence $x_1, x_2, \ldots, x_n$ in a single pass to compute:
- $Y_j = \min_{1\le i\le n} h_j(x_i)$ for every $j = 1, 2, \ldots, k$;
- average value $\overline{Y} = \frac{1}{k}\sum_{j=1}^k Y_j$;
- return $\hat{Z} = \frac{1}{\overline{Y}} - 1$ as the estimator.
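The pseudocode above can be sketched in Python as follows (the idealized hash functions are simulated by memoized tables of uniform values, so this illustrates the algorithm's logic rather than its space efficiency):

```python
import random

def flajolet_martin(stream, k):
    """Flajolet-Martin estimator with k simulated idealized hash functions.

    Maintains Y_j = min_i h_j(x_i) for each j in a single pass, then
    returns 1/Y_bar - 1 where Y_bar is the average of the Y_j."""
    tables = [{} for _ in range(k)]   # memoized h_1, ..., h_k
    mins = [1.0] * k                  # running minima Y_1, ..., Y_k
    for x in stream:
        for j in range(k):
            if x not in tables[j]:
                tables[j][x] = random.random()
            mins[j] = min(mins[j], tables[j][x])
    avg = sum(mins) / k
    return 1.0 / avg - 1

random.seed(7)
stream = list(range(100)) * 5  # z = 100 distinct elements
# with k = 1000, the estimate typically lands within a few percent of z
print(round(flajolet_martin(stream, k=1000)))
```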
The algorithm is easy to implement in the data stream model, with a space cost of storing $k$ hash values. The following theorem guarantees that the algorithm returns an $(\epsilon,\delta)$-estimator of the total number of distinct elements $z$ for a suitable $k$.
Theorem - For any $0 < \epsilon, \delta < \frac{1}{2}$, if $k \ge \left\lceil \frac{4}{\epsilon^2 \delta} \right\rceil$ then the output $\hat{Z}$ always gives an $(\epsilon,\delta)$-estimator of the correct answer $z$.
In the following we prove this main theorem for the Flajolet-Martin algorithm.
An obstacle to analyzing the estimator $\hat{Z} = \frac{1}{\overline{Y}} - 1$ is that it is a nonlinear function of $\overline{Y}$, which is the quantity that is easier to analyze. Nevertheless, we observe that $\hat{Z}$ is an $(\epsilon,\delta)$-estimator of $z$ as long as $\overline{Y}$ is an $(\epsilon/2,\delta)$-estimator of $\frac{1}{z+1}$. This can be deduced by just verifying the following implication:
- $\frac{1-\epsilon/2}{z+1} \le \overline{Y} \le \frac{1+\epsilon/2}{z+1} \implies (1-\epsilon)z \le \frac{1}{\overline{Y}} - 1 \le (1+\epsilon)z$,
for $0 < \epsilon < \frac{1}{2}$. Therefore,
- $\Pr\left[(1-\epsilon)z \le \hat{Z} \le (1+\epsilon)z\right] \ge \Pr\left[\left|\overline{Y} - \frac{1}{z+1}\right| \le \frac{\epsilon/2}{z+1}\right]$.
It is then sufficient to show that $\Pr\left[\left|\overline{Y} - \frac{1}{z+1}\right| \le \frac{\epsilon/2}{z+1}\right] \ge 1-\delta$ for proving the main theorem above. We will see that this is equivalent to showing the concentration inequality
- $\Pr\left[\left|\overline{Y} - \frac{1}{z+1}\right| \ge \frac{\epsilon/2}{z+1}\right] \le \delta$.
Lemma - The following hold for each $Y_j$, $1\le j\le k$, and $\overline{Y} = \frac{1}{k}\sum_{j=1}^k Y_j$:
- $\mathbb{E}[Y_j] = \frac{1}{z+1}$, and consequently $\mathbb{E}[\overline{Y}] = \frac{1}{z+1}$;
- $\mathbb{E}[Y_j^2] = \frac{2}{(z+1)(z+2)}$, and consequently $\mathbf{Var}[Y_j] \le \frac{1}{(z+1)^2}$ and $\mathbf{Var}[\overline{Y}] \le \frac{1}{k(z+1)^2}$.
Proof. As in the case of a single hash function, by symmetry it holds that $\mathbb{E}[Y_j] = \frac{1}{z+1}$ for every $1\le j\le k$. Therefore,
- $\mathbb{E}[\overline{Y}] = \frac{1}{k}\sum_{j=1}^k \mathbb{E}[Y_j] = \frac{1}{z+1}$.
Recall that each $Y_j$ is the minimum of $z$ random hash values uniformly and independently distributed over $[0,1]$. By geometric probability, it holds that for any $y \in [0,1]$,
- $\Pr[Y_j > y] = (1-y)^z$,
which means $\Pr[Y_j \le y] = 1 - (1-y)^z$. Taking the derivative with respect to $y$, we obtain the probability density function of the random variable $Y_j$, which is $z(1-y)^{z-1}$.
We then compute the second moment.
- $\mathbb{E}[Y_j^2] = \int_0^1 y^2 z(1-y)^{z-1} \,\mathrm{d}y = \frac{2}{(z+1)(z+2)}$.
The variance is bounded as
- $\mathbf{Var}[Y_j] = \mathbb{E}[Y_j^2] - \mathbb{E}[Y_j]^2 = \frac{2}{(z+1)(z+2)} - \frac{1}{(z+1)^2} \le \frac{1}{(z+1)^2}$.
Due to the (pairwise) independence between the $Y_j$'s,
- $\mathbf{Var}[\overline{Y}] = \frac{1}{k^2}\sum_{j=1}^k \mathbf{Var}[Y_j] \le \frac{1}{k(z+1)^2}$.
We resume to prove the concentration inequality. By Chebyshev's inequality, it holds that
- $\Pr\left[\left|\overline{Y} - \frac{1}{z+1}\right| \ge \frac{\epsilon/2}{z+1}\right] \le \frac{4(z+1)^2}{\epsilon^2}\mathbf{Var}[\overline{Y}] \le \frac{4}{\epsilon^2 k}$.
When $k \ge \left\lceil \frac{4}{\epsilon^2 \delta} \right\rceil$, this probability is at most $\delta$. The inequality is proved. As we discussed above, this proves the main theorem for the Flajolet-Martin algorithm.
Uniform Hash Assumption (UHA)
In the above we assumed access to idealized random hash functions with real values in $[0,1]$. With a more careful calculation, one can show the same performance guarantee for hash functions with discrete values in $[M] = \{0, 1, \ldots, M-1\}$ where $M = \mathrm{poly}(n)$, that is, the hash values are strings of $O(\log n)$ bits.
Even with such improved analysis, a uniform random discrete function in the form of $h: [N] \to [M]$ is not really efficient to store or to compute. By an information-theoretical argument, it takes at least $N \log M$ bits to represent such a random hash function, because this is the entropy of a uniform random function of this form.
For the convenience of analysis, it is common to assume the following Uniform Hash Assumption (UHA), also known as the Simple Uniform Hash Assumption (SUHA).
Uniform Hash Assumption (UHA) - A uniform random hash function $h: U \to V$ is available and the computation of $h$ is efficient.
Set Membership
A basic question in Computer Science is:
- "Is $x \in S$?"
for a set $S$ and an element $x$. This is the set membership problem.
Formally, given an arbitrary set $S$ of $n$ elements from a universe $U$, we want to use a succinct data structure to represent this set $S$, so that upon each query of any element $x$ from the universe $U$, the question of whether $x \in S$ is efficiently answered. The complexity of such a data structure is measured in two ways:
- space cost: size of the data structure to represent a set $S$ of size $n$;
- time cost: time complexity of answering each query by accessing the data structure.
Suppose that the universe $U$ is of size $N$. Clearly, the membership problem can be solved by a dictionary data structure, e.g.:
- sorted table / balanced search tree: with space cost $O(n \log N)$ bits and time cost $O(\log n)$;
- perfect hashing of Fredman, Komlós & Szemerédi: with space cost $O(n \log N)$ bits and time cost $O(1)$.
Note that $\log {N \choose n} = O\left(n \log \frac{N}{n}\right)$ is the entropy of sets $S$ of $n$ elements from a universe of size $N$. Therefore it is necessary to use so many bits to represent a set without losing any information. Nevertheless, we can do better than this if we use a lossy representation of the input set $S$ and tolerate a bounded error in answering queries. Such a lossy representation of data is sometimes called a sketch.
Bloom filter
The Bloom filter is a space-efficient hash table that solves the approximate membership problem with one-sided error (false positives).
Given a set $S$ of $n$ elements from a universe $U$, a Bloom filter consists of an array $A$ of $cn$ bits and $k$ hash functions $h_1, h_2, \ldots, h_k$ mapping $U$ to $[cn]$, where both $c$ and $k$ are parameters that we can optimize later.
As before, we assume the Uniform Hash Assumption (UHA): $h_1, h_2, \ldots, h_k$ are mutually independent hash functions where each $h_i$ is a uniform random hash function $h_i: U \to [cn]$.
The Bloom filter works as follows:
Bloom filter (Bloom 1970) - Suppose $h_1, h_2, \ldots, h_k: U \to [cn]$ are uniform and independent random hash functions.
- Data structure construction: Given a set $S \subseteq U$ of size $n = |S|$, the data structure is a Boolean array $A$ of $cn$ bits constructed as
- initialize all $cn$ bits of the Boolean array $A$ to 0;
- for each $x \in S$, let $A[h_i(x)] = 1$ for all $1\le i\le k$.
- Query resolution: Upon each query of an arbitrary $x \in U$,
- answer "yes" if $A[h_i(x)] = 1$ for all $1\le i\le k$ and "no" otherwise.
The Boolean array $A$ is our data structure, whose size is $cn$ bits. With the Uniform Hash Assumption (UHA), the time cost of the data structure for answering each query is $O(k)$.
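A compact Python sketch of the construction and query procedures. The uniform hash functions are simulated by seeding Python's `random.Random` with a per-function seed and the queried element, which is only a stand-in for the UHA, not a production-quality hash:

```python
import random

class BloomFilter:
    """Bloom filter: a cn-bit array with k simulated uniform hash functions."""

    def __init__(self, n, c=8):
        self.m = c * n                     # cn bits
        self.k = max(1, round(c * 0.693))  # k = c ln 2 minimizes false positives
        self.bits = [False] * self.m
        self.seeds = [random.randrange(2**32) for _ in range(self.k)]

    def _positions(self, x):
        # h_i(x): seed a PRNG with (seed_i, x) to simulate a uniform hash
        return (random.Random(f"{s}:{x}").randrange(self.m) for s in self.seeds)

    def add(self, x):
        for p in self._positions(x):
            self.bits[p] = True

    def query(self, x):
        # "yes" iff all k bits are set; "no" answers are always correct
        return all(self.bits[p] for p in self._positions(x))
```

Queries for members always answer "yes"; for a non-member, the chance of a false positive is roughly $(1-e^{-k/c})^k$, as the analysis in this section shows.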
When the answer returned by the algorithm is "no", it holds that $A[h_i(x)] = 0$ for some $1\le i\le k$, in which case the query $x$ must not belong to the set $S$. Thus, the Bloom filter has no false negatives.
On the other hand, when the answer returned by the algorithm is "yes", $A[h_i(x)] = 1$ for all $1\le i\le k$. It is still possible for some $x \notin S$ that all these bits are set by elements in $S$. We want to bound such false positives, that is, the following probability for an $x \notin S$:
- $\Pr[\,\forall 1\le i\le k, A[h_i(x)] = 1\,]$,
which by independence between different hash functions and by symmetry is equal to:
- $\Pr[\,A[h_1(x)] = 1\,]^k$.
For an element $x \notin S$, its hash value $h_1(x)$ is independent of all hash values $h_i(y)$ for all $y \in S$ and all $1\le i\le k$. This is due to the Uniform Hash Assumption. The hash value $h_1(x)$ is then independent of the content of the array $A$. Therefore, the probability that this position $A[h_1(x)]$ is missed by all $kn$ updates to the Boolean array $A$ caused by all elements in $S$ is:
- $\Pr[\,A[h_1(x)] = 0\,] = \left(1 - \frac{1}{cn}\right)^{kn} \approx e^{-k/c}$.
Putting everything together, for any $x \notin S$, the false positive probability is bounded as:
- $\Pr[\,\forall 1\le i\le k, A[h_i(x)] = 1\,] = \left(1 - \left(1 - \frac{1}{cn}\right)^{kn}\right)^k \approx \left(1 - e^{-k/c}\right)^k$,
which is $\left(\frac{1}{2}\right)^{c\ln 2} \approx (0.6185)^c$ when $k = c\ln 2$.
The Bloom filter solves the membership query with a small constant error of false positives, using a data structure of $cn = O(n)$ bits which answers each query with $O(k) = O(c)$ time cost.
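The choice $k = c\ln 2$ can be checked numerically by scanning integer values of $k$ against the approximate false positive rate $(1 - e^{-k/c})^k$; for example with $c = 8$ (so $c\ln 2 \approx 5.5$):

```python
import math

c = 8
rates = {k: (1 - math.exp(-k / c)) ** k for k in range(1, 17)}
best_k = min(rates, key=rates.get)
# the minimum sits at an integer next to c*ln 2, with rate close to 0.6185^c
print(best_k, round(rates[best_k], 4), round(0.6185 ** c, 4))
```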
Frequency Estimation
Suppose that $U$ is the data universe. The frequency estimation problem is defined as follows.
- Data: a sequence of (not necessarily distinct) elements $x_1, x_2, \ldots, x_n \in U$;
- Query: an element $x \in U$;
- Output: an estimation $\hat{f}_x$ of the frequency $f_x \triangleq |\{i \mid x_i = x\}|$ of $x$ in the input data.
We still want to give an algorithm in the data stream model: the algorithm scans the input sequence $x_1, x_2, \ldots, x_n$ to construct a succinct data structure, such that upon each query of an element $x \in U$, the algorithm returns an estimation $\hat{f}_x$ of the frequency $f_x$.
Clearly this problem can always be solved by storing all appeared distinct elements along with their frequencies. However, the space cost of this straightforward solution is rather high. Instead, we want to use a lossy representation (a sketch) of the input data which uses significantly less space but can still answer queries with tolerable accuracy.
Formally, upon each query of an element $x \in U$, the algorithm should return an answer $\hat{f}_x$ satisfying:
- $\Pr\left[\,\left|\hat{f}_x - f_x\right| \le \epsilon n\,\right] \ge 1-\delta$.
Note that this notion of approximation is with bounded additive error, which is weaker than the notion of $(\epsilon,\delta)$-estimator, whose error bound is multiplicative.
With such a weak accuracy guarantee, it is possible to give a succinct data structure whose size is determined only by the error bounds $\epsilon$ and $\delta$ but independent of $n$, because only the frequencies of those heavy hitters (elements with high frequencies $f_x \ge \epsilon n$) need to be memorized, and there are at most $\frac{1}{\epsilon}$ many such heavy hitters.
Count-min sketch
The count-min sketch given by Cormode and Muthukrishnan is an elegant data structure for frequency estimation.
The data structure is a two-dimensional $k \times m$ integer array $CM[k][m]$, where $k$ and $m$ are two parameters to be determined by the error bounds $\epsilon$ and $\delta$. We still adopt the Uniform Hash Assumption to assume that we have access to $k$ mutually independent uniform random hash functions $h_1, h_2, \ldots, h_k: U \to [m]$.
Count-min sketch (Cormode and Muthukrishnan 2003) - Suppose $h_1, h_2, \ldots, h_k: U \to [m]$ are uniform and independent random hash functions.
- Data structure construction: Given a sequence $x_1, x_2, \ldots, x_n \in U$, the data structure is a two-dimensional $k \times m$ integer array $CM[k][m]$ constructed as
- initialize all entries of $CM[k][m]$ to 0;
- for $i = 1, 2, \ldots, n$, upon receiving $x_i$:
- for every $1\le j\le k$, evaluate $h_j(x_i)$ and $CM[j][h_j(x_i)]{+}{+}$.
- Query resolution: Upon each query of an arbitrary $x \in U$,
- return $\hat{f}_x = \min_{1\le j\le k} CM[j][h_j(x)]$.
It is easy to see that the space cost of the count-min sketch is $O(km)$ memory words, or $O(km \log n)$ bits. Each query is answered within time cost $O(k)$, assuming that an evaluation of a hash function can be done in unit or constant time. We then analyze the error bounds.
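The construction and query procedures can be sketched in Python as follows. The uniform hash functions are again simulated by seeded `random.Random` instances as a stand-in for the UHA, and the constructor uses the parameter setting $m = \lceil e/\epsilon \rceil$, $k = \lceil \ln(1/\delta) \rceil$ derived at the end of this section:

```python
import math
import random

class CountMinSketch:
    """Count-min sketch CM[k][m]: additive error eps*n with prob. >= 1-delta."""

    def __init__(self, eps, delta):
        self.m = math.ceil(math.e / eps)                  # counters per row
        self.k = max(1, math.ceil(math.log(1 / delta)))   # number of rows
        self.cm = [[0] * self.m for _ in range(self.k)]
        self.seeds = [random.randrange(2**32) for _ in range(self.k)]

    def _h(self, j, x):
        # h_j(x): simulated uniform hash into [m]
        return random.Random(f"{self.seeds[j]}:{x}").randrange(self.m)

    def insert(self, x):
        for j in range(self.k):
            self.cm[j][self._h(j, x)] += 1

    def query(self, x):
        # min over rows; each row only ever overcounts, so this never
        # underestimates the true frequency
        return min(self.cm[j][self._h(j, x)] for j in range(self.k))

cms = CountMinSketch(eps=0.01, delta=0.01)
for _ in range(50):
    cms.insert("hot")
print(cms.query("hot"))  # → 50: only one element inserted, so no collisions
```

Since every row only overcounts, `query` never underestimates; the analysis below bounds how much it can overestimate.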
First, it is easy to observe that for any query $x$ and every hash function $h_j$, it always holds for the corresponding entry in the count-min sketch that
- $CM[j][h_j(x)] \ge f_x$,
because the $f_x$ appearances of the element $x$ in the input sequence contribute at least $f_x$ to the value of $CM[j][h_j(x)]$.
Therefore, for any query $x$ it always holds that $\hat{f}_x \ge f_x$ for the answer $\hat{f}_x = \min_{1\le j\le k} CM[j][h_j(x)]$, which means
- $\Pr\left[\,\left|\hat{f}_x - f_x\right| \ge \epsilon n\,\right] = \Pr\left[\,\hat{f}_x - f_x \ge \epsilon n\,\right] = \prod_{j=1}^k \Pr\left[\,CM[j][h_j(x)] - f_x \ge \epsilon n\,\right]$,
where the second equation is due to the mutual independence of the random hash functions $h_1, h_2, \ldots, h_k$.
It remains to upper bound the probability $\Pr\left[\,CM[j][h_j(x)] - f_x \ge \epsilon n\,\right]$, which can be done by calculating the expectation of $CM[j][h_j(x)]$.
Proposition - For any $x \in U$ and every $1\le j\le k$, it holds that $\mathbb{E}\left[CM[j][h_j(x)]\right] \le f_x + \frac{n}{m}$.
Proof. The value of $CM[j][h_j(x)]$ is constituted by the frequency $f_x$ of $x$ and the frequencies of all other elements among $x_1, x_2, \ldots, x_n$, thus
- $CM[j][h_j(x)] = f_x + \sum_{y \ne x} f_y \cdot I[h_j(y) = h_j(x)]$,
where $I[\mathcal{E}]$ denotes the Boolean random variable that indicates the occurrence of the event $\mathcal{E}$.
By linearity of expectation,
- $\mathbb{E}\left[CM[j][h_j(x)]\right] = f_x + \sum_{y \ne x} f_y \Pr[h_j(y) = h_j(x)]$.
Due to the Uniform Hash Assumption (UHA), $h_j$ is a uniform random function $h_j: U \to [m]$. For any $y \ne x$, the probability of a hash collision is
- $\Pr[h_j(y) = h_j(x)] = \frac{1}{m}$.
Therefore,
- $\mathbb{E}\left[CM[j][h_j(x)]\right] = f_x + \frac{1}{m}\sum_{y \ne x} f_y \le f_x + \frac{n}{m}$,
where the last inequality is due to the obvious identity $\sum_{y \in U} f_y = n$.
The above proposition shows that for any $x \in U$ and every $1\le j\le k$,
- $\mathbb{E}\left[CM[j][h_j(x)] - f_x\right] \le \frac{n}{m}$.
Recall that $CM[j][h_j(x)] \ge f_x$ always holds, thus $CM[j][h_j(x)] - f_x$ is a nonnegative random variable. By Markov's inequality, we have
- $\Pr\left[\,CM[j][h_j(x)] - f_x \ge \epsilon n\,\right] \le \frac{n/m}{\epsilon n} = \frac{1}{\epsilon m}$.
Combining with the above equation, we have
- $\Pr\left[\,\left|\hat{f}_x - f_x\right| \ge \epsilon n\,\right] = \prod_{j=1}^k \Pr\left[\,CM[j][h_j(x)] - f_x \ge \epsilon n\,\right] \le \left(\frac{1}{\epsilon m}\right)^k$.
By setting $m = \left\lceil \frac{\mathrm{e}}{\epsilon} \right\rceil$ and $k = \left\lceil \ln \frac{1}{\delta} \right\rceil$, the above error probability is bounded as $\left(\frac{1}{\epsilon m}\right)^k \le \mathrm{e}^{-k} \le \delta$.
For any positive $\epsilon$ and $\delta$, the count-min sketch gives a data structure of size $O\left(\frac{1}{\epsilon}\log\frac{1}{\delta}\right)$ (in memory words) answering each query $x \in U$ in time $O\left(\log\frac{1}{\delta}\right)$, with the following accuracy guarantee:
- $\Pr\left[\,f_x \le \hat{f}_x \le f_x + \epsilon n\,\right] \ge 1-\delta$.