随机算法 (Spring 2013)/Universal Hashing


k-wise independence

Recall the definition of independence between events:

Definition (Independent events)
Events $\mathcal{E}_1,\mathcal{E}_2,\ldots,\mathcal{E}_n$ are mutually independent if, for any subset $I\subseteq\{1,2,\ldots,n\}$,
$\Pr\left[\bigwedge_{i\in I}\mathcal{E}_i\right]=\prod_{i\in I}\Pr[\mathcal{E}_i]$.

Similarly, we can define independence between random variables:

Definition (Independent variables)
Random variables $X_1,X_2,\ldots,X_n$ are mutually independent if, for any subset $I\subseteq\{1,2,\ldots,n\}$ and any values $x_i$, where $i\in I$,
$\Pr\left[\bigwedge_{i\in I}(X_i=x_i)\right]=\prod_{i\in I}\Pr[X_i=x_i]$.

Mutual independence is an ideal condition. A limited notion of independence is usually captured by k-wise independence.

Definition (k-wise Independence)
1. Events $\mathcal{E}_1,\mathcal{E}_2,\ldots,\mathcal{E}_n$ are k-wise independent if, for any subset $I\subseteq\{1,2,\ldots,n\}$ with $|I|\le k$,
$\Pr\left[\bigwedge_{i\in I}\mathcal{E}_i\right]=\prod_{i\in I}\Pr[\mathcal{E}_i]$.
2. Random variables $X_1,X_2,\ldots,X_n$ are k-wise independent if, for any subset $I\subseteq\{1,2,\ldots,n\}$ with $|I|\le k$ and any values $x_i$, where $i\in I$,
$\Pr\left[\bigwedge_{i\in I}(X_i=x_i)\right]=\prod_{i\in I}\Pr[X_i=x_i]$.

A very common case is pairwise independence, i.e. the 2-wise independence.

Definition (pairwise independent random variables)
Random variables $X_1,X_2,\ldots,X_n$ are pairwise independent if, for any $X_i,X_j$ where $i\neq j$ and any values $a,b$,
$\Pr[X_i=a\wedge X_j=b]=\Pr[X_i=a]\cdot\Pr[X_j=b]$.

Note that the definition of k-wise independence is hereditary:

  • If $X_1,X_2,\ldots,X_n$ are k-wise independent, then they are also $\ell$-wise independent for any $\ell<k$.
  • If $X_1,X_2,\ldots,X_n$ are NOT k-wise independent, then they cannot be $\ell$-wise independent for any $\ell>k$.

Pairwise Independent Bits

Suppose we have $m$ mutually independent and uniform random bits $X_1,X_2,\ldots,X_m$. We are going to extract $n=2^m-1$ pairwise independent bits from these $m$ mutually independent bits.

Enumerate all the nonempty subsets of $\{1,2,\ldots,m\}$ in some order. Let $S_j$ be the $j$-th subset. Let
$Y_j=\bigoplus_{i\in S_j}X_i$,

where $\oplus$ is the exclusive-or, whose truth table is as follows.

$a$   $b$   $a\oplus b$
0   0   0
0   1   1
1   0   1
1   1   0

There are $2^m-1$ such bits $Y_j$, because there are $2^m-1$ nonempty subsets of $\{1,2,\ldots,m\}$. An equivalent definition of $Y_j$ is

$Y_j=\left(\sum_{i\in S_j}X_i\right)\bmod 2$.

Sometimes, $Y_j$ is called the parity of the bits in $S_j$.

We claim that the bits $Y_1,Y_2,\ldots,Y_{2^m-1}$ are pairwise independent and uniform.

Theorem
For any $Y_j$ and any $b\in\{0,1\}$,
$\Pr[Y_j=b]=\frac{1}{2}$.
For any $j\neq\ell$ and any $b,c\in\{0,1\}$,
$\Pr[Y_j=b\wedge Y_\ell=c]=\frac{1}{4}$.

The proof is left as an exercise.

Therefore, we extract exponentially many ($n=2^m-1$) pairwise independent uniform random bits from a sequence of $m$ mutually independent uniform random bits.

Note that the $Y_j$'s are not 3-wise independent. For example, consider the subsets $S_1=\{1\}$, $S_2=\{2\}$, $S_3=\{1,2\}$ and the corresponding random bits $Y_1,Y_2,Y_3$. Any two of $Y_1,Y_2,Y_3$ determine the value of the third one, since $Y_3=Y_1\oplus Y_2$.
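
To make the construction concrete, here is a minimal C sketch (the helper names are illustrative) that derives all $2^m-1$ pairwise independent bits from $m$ independent seed bits, encoding each nonempty subset $S_j$ as the bitmask $j$:

#include <stdio.h>

/* XOR of all bits of x, i.e., the parity of the set of 1-positions */
static int parity(unsigned x) {
    int p = 0;
    while (x) { p ^= (int)(x & 1u); x >>= 1; }
    return p;
}

/* Y_j = XOR of the seed bits X_i with i in S_j.  Encoding the subset
   S_j of {1,...,m} as the m-bit mask j (1 <= j <= 2^m - 1), Y_j is
   simply the parity of (seed & j). */
static int Y(unsigned seed, unsigned j) {
    return parity(seed & j);
}

int main(void) {
    unsigned m = 3;
    unsigned seed = 0x5u;  /* stand-in for m = 3 independent uniform bits */
    for (unsigned j = 1; j < (1u << m); j++)
        printf("Y_%u = %d\n", j, Y(seed, j));
    return 0;
}

With $m=3$ seed bits this produces $2^3-1=7$ output bits; any two of them are independent, but, as noted above, triples such as $Y_1,Y_2,Y_3$ are not.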

Pairwise Independent Variables

We now consider constructing pairwise independent random variables ranging over $[p]=\{0,1,2,\ldots,p-1\}$ for some prime $p$. Unlike the above construction, now we only need two independent random sources $X_0,X_1$, which are uniformly and independently distributed over $[p]$.

Let $Y_1,Y_2,\ldots,Y_p$ be defined as:

$Y_i=(X_0+i\cdot X_1)\bmod p,\quad 1\le i\le p$.

Theorem
The random variables $Y_1,Y_2,\ldots,Y_p$ are pairwise independent uniform random variables over $[p]$.
Proof.
We first show that every $Y_i$ is uniform. That is, we will show that for any $i$ and any $a\in[p]$,

$\Pr[(X_0+i\cdot X_1)\bmod p=a]=\frac{1}{p}$.

Due to the law of total probability,

$\Pr[(X_0+i\cdot X_1)\bmod p=a]=\sum_{j\in[p]}\Pr[X_1=j]\cdot\Pr[(X_0+i\cdot j)\bmod p=a]=\frac{1}{p}\sum_{j\in[p]}\Pr[X_0\equiv a-i\cdot j\pmod p]$.

For prime $p$, for any fixed $i,j,a\in[p]$, there is exactly one value of $X_0$ in $[p]$ satisfying $X_0\equiv a-i\cdot j\pmod p$. Thus $\Pr[X_0\equiv a-i\cdot j\pmod p]=\frac{1}{p}$, and the above probability is $\frac{1}{p}$.

We then show that the $Y_i$'s are pairwise independent, i.e. we will show that for any $Y_i,Y_j$ with $i\neq j$ and any $a,b\in[p]$,

$\Pr[Y_i=a\wedge Y_j=b]=\frac{1}{p^2}$.

The event $Y_i=a\wedge Y_j=b$ is equivalent to the linear system

$\begin{cases}X_0+i\cdot X_1\equiv a\pmod p,\\X_0+j\cdot X_1\equiv b\pmod p.\end{cases}$

Since $p$ is prime and $i\neq j$, this system over the field $\mathbb{Z}_p$ has exactly one solution $(X_0,X_1)\in[p]\times[p]$: subtracting the two congruences determines $X_1$ (because $i-j$ is invertible modulo $p$), and then $X_0$. Thus the probability of the event is $\frac{1}{p^2}$.
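
The construction is easy to put into code. Below is a minimal C sketch; the prime and the use of rand() are illustrative stand-ins, not part of the construction:

#include <stdio.h>
#include <stdlib.h>

/* Y_i = (x0 + i*x1) mod p for two independent uniform seeds x0, x1 in [p].
   The resulting Y_1, ..., Y_p are uniform over [p] and pairwise independent. */
static unsigned Y(unsigned x0, unsigned x1, unsigned i, unsigned p) {
    /* 64-bit arithmetic so the product i*x1 cannot overflow */
    return (unsigned)(((unsigned long long)i * x1 + x0) % p);
}

int main(void) {
    const unsigned p = 1000003u;         /* an example prime */
    unsigned x0 = (unsigned)rand() % p;  /* stand-ins: rand()%p is not exactly */
    unsigned x1 = (unsigned)rand() % p;  /* uniform over [p], but shows the idea */
    for (unsigned i = 1; i <= 5; i++)
        printf("Y_%u = %u\n", i, Y(x0, x1, i, p));
    return 0;
}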

Tools for limited independence

Let $X_1,X_2,\ldots,X_n$ be random variables. The variance of their sum is

$\mathbf{Var}\left[\sum_{i=1}^n X_i\right]=\sum_{i=1}^n\mathbf{Var}[X_i]+\sum_{i\neq j}\mathbf{Cov}(X_i,X_j)$.

If $X_1,X_2,\ldots,X_n$ are pairwise independent, then $\mathbf{Cov}(X_i,X_j)=0$ for any $i\neq j$, since the covariance of a pair of independent random variables is 0. This gives us the following theorem of linearity of variance for pairwise independent random variables.

Theorem
For pairwise independent random variables $X_1,X_2,\ldots,X_n$,
$\mathbf{Var}\left[\sum_{i=1}^n X_i\right]=\sum_{i=1}^n\mathbf{Var}[X_i]$.

The theorem relies on the fact that the covariances of pairwise independent random variables are 0, which in turn is a consequence of a more general theorem.

Theorem ($k$-wise independence fools degree-$k$ polynomials)
Let $X_1,X_2,\ldots,X_n$ be mutually independent random variables and $Y_1,Y_2,\ldots,Y_n$ be $k$-wise independent random variables, such that the marginal distribution of $Y_i$ is identical to the marginal distribution of $X_i$ for $1\le i\le n$; that is, for any value $x$, $\Pr[X_i=x]=\Pr[Y_i=x]$.
Let $f$ be a multivariate polynomial of degree at most $k$. Then
$\mathbf{E}[f(X_1,X_2,\ldots,X_n)]=\mathbf{E}[f(Y_1,Y_2,\ldots,Y_n)]$.

This phenomenon is sometimes described by saying that degree-$k$ polynomials are fooled by $k$-wise independence. In other words, a degree-$k$ polynomial has the same expected value on $k$-wise independent random variables as on mutually independent random variables.

This theorem is implied by the following lemma.

Lemma
Let $X_1,X_2,\ldots,X_k$ be mutually independent random variables. Then
$\mathbf{E}\left[\prod_{i=1}^k X_i\right]=\prod_{i=1}^k\mathbf{E}[X_i]$.

The lemma can be proved by directly computing the expectation. We omit the detailed proof.

By the linearity of expectation, the expectation of a polynomial reduces to the sum of the expectations of its terms. For a degree-$k$ polynomial, each term involves at most $k$ variables. Due to the above lemma, with $k$-wise independence the expectation of each term behaves exactly the same as under mutual independence.

Since the $k$th moment is the expectation of a degree-$k$ polynomial of the random variables, tools based on the $k$th moment can be safely used under $k$-wise independence. In particular, Chebyshev's inequality for pairwise independent random variables:

Chebyshev's inequality
Let $X=\sum_{i=1}^n X_i$, where $X_1,X_2,\ldots,X_n$ are pairwise independent Poisson trials. Let $\mu=\mathbf{E}[X]$.
Then
$\Pr[|X-\mu|\ge t]\le\frac{\mathbf{Var}[X]}{t^2}\le\frac{\mu}{t^2}$.

Application: Derandomizing MAX-CUT

Let $G(V,E)$ be an undirected graph, and $S\subseteq V$ be a vertex set. The cut defined by $S$ is $C(S)=|\{uv\in E\mid u\in S, v\notin S\}|$.

Given as input an undirected graph $G(V,E)$, find an $S\subseteq V$ whose cut value $C(S)$ is maximized. This problem is called the maximum cut (MAX-CUT) problem, which is NP-hard. The decision version of the weighted version of this problem is one of Karp's 21 NP-complete problems. The problem has a 0.878-approximation algorithm obtained by rounding a semidefinite programming relaxation, and assuming the unique games conjecture (UGC), no poly-time algorithm achieves a better approximation ratio unless P=NP.

Here we give a very simple 0.5-approximation algorithm. The "algorithm" has a one-line description:

  • Put each $v\in V$ into $S$ independently with probability 1/2.

We then analyze the approximation ratio of this algorithm.

For each $v\in V$, let $Y_v$ indicate whether $v\in S$, that is

$Y_v=\begin{cases}1 & v\in S,\\0 & v\notin S.\end{cases}$

For each edge $uv\in E$, let $Y_{uv}$ indicate whether $uv$ contributes to the cut $C(S)$, i.e. whether $u\in S,v\notin S$ or $u\notin S,v\in S$; that is

$Y_{uv}=\begin{cases}1 & Y_u\neq Y_v,\\0 & \text{otherwise}.\end{cases}$

Then $C(S)=\sum_{uv\in E}Y_{uv}$. Due to the linearity of expectation,

$\mathbf{E}[C(S)]=\sum_{uv\in E}\mathbf{E}[Y_{uv}]=\sum_{uv\in E}\Pr[Y_u\neq Y_v]=\frac{|E|}{2}$.

The maximum cut of a graph is at most $|E|$. Thus, the algorithm returns in expectation a cut with size at least half of the maximum cut.

We then show how to derandomize this algorithm using pairwise independent bits.

Suppose that $|V|=n$ and enumerate the vertices as $v_1,v_2,\ldots,v_n$ in an arbitrary order. Let $m=\lceil\log_2(n+1)\rceil$. Sample $m$ bits $X_1,X_2,\ldots,X_m\in\{0,1\}$ uniformly and independently at random. Enumerate all nonempty subsets of $\{1,2,\ldots,m\}$ as $S_1,S_2,\ldots,S_{2^m-1}$. For each vertex $v_j$, let $Y_j=\bigoplus_{i\in S_j}X_i$. The MAX-CUT algorithm uses these bits to construct the solution $S$:

  • For $j=1,2,\ldots,n$, put $v_j$ into $S$ if $Y_j=1$.

We have shown that the $Y_j$, $1\le j\le n$, are uniform and pairwise independent, so for every edge $v_iv_j$ we still have $\Pr[Y_i\neq Y_j]=\frac{1}{2}$. The above analysis still holds, and the algorithm returns in expectation a cut of size at least $\frac{|E|}{2}$.

Finally, we notice that the new algorithm uses only $m=O(\log n)$ random bits in total. We can enumerate all $2^m\le 2(n+1)$ possible strings of $m$ bits, run the above algorithm with each bit string as the "random source", and output the maximum cut returned. There must exist a bit string on which the algorithm returns a cut of size $\ge\frac{|E|}{2}$ (why?). This gives us a deterministic polynomial time (in fact $O(n(n+|E|))$ time) 0.5-approximation algorithm.
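
The following is a minimal C sketch of the derandomized algorithm; the graph is a hard-coded example, and vertex $v_j$ uses the subset encoded by the bitmask $j$:

#include <stdio.h>

static int parity(unsigned x) { int p = 0; while (x) { p ^= (int)(x & 1u); x >>= 1; } return p; }

int main(void) {
    /* example graph: 4 vertices (1..4), edges as index pairs */
    int edges[][2] = { {1,2}, {2,3}, {3,4}, {4,1}, {1,3} };
    int nedges = 5, n = 4;

    unsigned m = 0;                     /* m = ceil(log2(n+1)) */
    while ((1u << m) < (unsigned)(n + 1)) m++;

    int best = -1; unsigned bestbits = 0;
    for (unsigned bits = 0; bits < (1u << m); bits++) {  /* enumerate all seeds */
        int cut = 0;
        for (int e = 0; e < nedges; e++) {
            /* vertex v_j goes into S iff Y_j = parity(bits & j) = 1 */
            int yu = parity(bits & (unsigned)edges[e][0]);
            int yv = parity(bits & (unsigned)edges[e][1]);
            cut += (yu != yv);
        }
        if (cut > best) { best = cut; bestbits = bits; }
    }
    printf("best cut = %d (seed bits = %u)\n", best, bestbits);
    return 0;
}

Since the expectation over seed strings is $\frac{|E|}{2}$, the maximum over the enumeration is guaranteed to reach at least that value.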

Application: Two-point sampling

Consider a Monte Carlo randomized algorithm $A$ with one-sided error for a decision problem $f$. We formulate the algorithm as a deterministic algorithm $A(x,r)$ that takes as input an instance $x$ and a uniform random number $r\in[p]$, where $p$ is a prime, such that for any input $x$:

  • If $f(x)=1$, then $\Pr[A(x,r)=1]\ge\frac{1}{2}$, where the probability is taken over the random choice of $r$.
  • If $f(x)=0$, then $A(x,r)=0$ for any $r$.

We call $r$ the random source for the algorithm.

For an $x$ with $f(x)=1$, we call an $r$ that makes $A(x,r)=1$ a witness for $x$. For a positive $x$, at least half of $[p]$ are witnesses. The random source $r$ has a polynomial number of bits, which means that $p$ is exponentially large, so it is infeasible to find a witness for an input $x$ by exhaustive search. Deterministic algorithms overcome this by using sophisticated deterministic rules for efficiently searching for a witness. Randomization, on the other hand, reduces this to a bit of luck: randomly choose an $r$ and win with probability at least 1/2.

We can boost the accuracy (equivalently, reduce the error) of any Monte Carlo randomized algorithm with one-sided error by running the algorithm for a number of times.

Suppose that we sample $t$ values $r_1,r_2,\ldots,r_t$ uniformly and independently from $[p]$, and run the following scheme:

$B(x)$: return $\bigvee_{i=1}^t A(x,r_i)$;

That is, $B(x)$ returns 1 if $A(x,r_i)=1$ for any instance $i$. For any $x$ with $f(x)=1$, due to the independence of $r_1,r_2,\ldots,r_t$, the probability that $B(x)$ returns an incorrect result is at most $2^{-t}$. On the other hand, $B(x)$ never makes mistakes for an $x$ with $f(x)=0$, since $A(x,r)$ has no false positives. Thus, the error of the Monte Carlo algorithm is reduced to $2^{-t}$.

Sampling $t$ mutually independent random numbers from $[p]$ can be quite expensive, since it requires $\Omega(t\log p)$ random bits. Suppose that we can only afford $O(\log p)$ random bits. In particular, we sample two independent uniform random numbers $a$ and $b$ from $[p]$. If we use $a$ and $b$ directly by running two independent instances $A(x,a)$ and $A(x,b)$, we only get an error upper bound of 1/4.

The following scheme reduces the error significantly with the same number of random bits:

Algorithm

Choose two independent uniform random numbers $a$ and $b$ from $[p]$. Construct $t$ random numbers $r_1,r_2,\ldots,r_t$ by:

$r_i=(a+i\cdot b)\bmod p,\quad 1\le i\le t$.

Run $B(x)$: return $\bigvee_{i=1}^t A(x,r_i)$.

Due to the discussion in the last section, we know that for $t\le p$, the $r_1,r_2,\ldots,r_t$ are pairwise independent and uniform over $[p]$. Let $X_i=A(x,r_i)$ and $X=\sum_{i=1}^t X_i$. Due to the uniformity of the $r_i$'s and our definition of $A$, for any $x$ with $f(x)=1$, it holds that

$\Pr[X_i=1]=\Pr[A(x,r_i)=1]\ge\frac{1}{2}$.

By the linearity of expectation,

$\mathbf{E}[X]=\sum_{i=1}^t\mathbf{E}[X_i]\ge\frac{t}{2}$.

Since each $X_i$ is a Bernoulli trial with probability of success at least $\frac{1}{2}$, we can bound the variance of each $X_i$ as follows:

$\mathbf{Var}[X_i]=\Pr[X_i=1]\cdot(1-\Pr[X_i=1])\le\frac{1}{4}$.

Applying Chebyshev's inequality for pairwise independent random variables, we have that for any $x$ with $f(x)=1$,

$\Pr[B(x)\text{ is wrong}]=\Pr[X=0]\le\Pr\left[|X-\mathbf{E}[X]|\ge\frac{t}{2}\right]\le\frac{\sum_{i=1}^t\mathbf{Var}[X_i]}{(t/2)^2}\le\frac{t/4}{t^2/4}=\frac{1}{t}$.

The error is reduced to $\frac{1}{t}$ with only two random numbers. This scheme works as long as $t\le p$.
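
A minimal C sketch of the scheme follows. A() below is a hypothetical stand-in for the actual one-sided-error algorithm (here exactly half of all r are witnesses for a positive input), and rand() stands in for uniform sampling over $[p]$:

#include <stdio.h>
#include <stdlib.h>

/* hypothetical one-sided-error test: never accepts when f(x) = 0,
   and for f(x) = 1 exactly half of all r in [p] are witnesses */
static int A(int x, unsigned long long r) {
    if (x == 0) return 0;       /* f(x) = 0: no false positives */
    return (r & 1ull) == 0;     /* f(x) = 1: even r's are witnesses */
}

/* two-point sampling: error at most 1/t using only two random numbers */
static int B(int x, unsigned long long p, unsigned t) {
    unsigned long long a = (unsigned long long)rand() % p;  /* stand-in */
    unsigned long long b = (unsigned long long)rand() % p;  /* stand-in */
    for (unsigned long long i = 1; i <= t; i++) {
        unsigned long long r = (a + i * b) % p;  /* r_i = (a + i*b) mod p */
        if (A(x, r)) return 1;                   /* witness found */
    }
    return 0;
}

int main(void) {
    printf("%d\n", B(1, 1000003ull, 100));  /* wrong with prob. at most 1/100 */
    return 0;
}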


Universal Hashing

Hashing is one of the oldest tools in Computer Science. Knuth's memorandum in 1963 on analysis of hash tables is now considered to be the birth of the area of analysis of algorithms.

  • Knuth. Notes on "open" addressing, July 22 1963. Unpublished memorandum.

The idea of hashing is simple: an unknown set $S$ of $n$ data items (or keys) is drawn from a large universe $[N]$, where $N\gg n$; in order to store $S$ in a table of $M$ entries (slots), we assume a consistent mapping $h:[N]\rightarrow[M]$ (called a hash function) from the universe to a small range.

This idea seems clever: we use a consistent mapping to deal with an arbitrary unknown data set. However, hashing has a fundamental flaw:

  • For a sufficiently large universe ($N>M(n-1)$), for any fixed function $h:[N]\rightarrow[M]$, there exists a bad data set $S$ of $n$ items such that all items in $S$ are mapped to the same entry in the table.

A simple application of the pigeonhole principle proves the above statement.

To overcome this situation, randomization is introduced into hashing. We assume that the hash function is a random mapping from $[N]$ to $[M]$. In order to ease the analysis, the following ideal assumption is used:

Simple Uniform Hash Assumption (SUHA or UHA, a.k.a. the random oracle model):

A uniform random function $h:[N]\rightarrow[M]$ is available, and the computation of $h$ is efficient.

Families of universal hash functions

The assumption of a completely random function simplifies the analysis. However, in practice a truly uniform random hash function is extremely expensive to compute and store, so this simple assumption hardly represents reality.

There are two approaches to implementing practical hash functions. One is to use ad hoc implementations and hope they work. The other approach is to construct classes of hash functions which are efficient to compute and store but have weaker randomness guarantees, and then analyze the applications of hash functions based on this weaker assumption of randomness.

This route was taken by Carter and Wegman in 1977, when they introduced universal families of hash functions.

Definition (universal hash families)
Let $[N]$ be a universe with $N\ge M$. A family of hash functions $\mathcal{H}$ from $[N]$ to $[M]$ is said to be $k$-universal if, for any distinct items $x_1,x_2,\ldots,x_k\in[N]$ and for a hash function $h$ chosen uniformly at random from $\mathcal{H}$, we have
$\Pr[h(x_1)=h(x_2)=\cdots=h(x_k)]\le\frac{1}{M^{k-1}}$.
A family of hash functions $\mathcal{H}$ from $[N]$ to $[M]$ is said to be strongly $k$-universal if, for any distinct items $x_1,x_2,\ldots,x_k\in[N]$, any values $y_1,y_2,\ldots,y_k\in[M]$, and a hash function $h$ chosen uniformly at random from $\mathcal{H}$, we have
$\Pr[h(x_1)=y_1\wedge h(x_2)=y_2\wedge\cdots\wedge h(x_k)=y_k]=\frac{1}{M^k}$.

In particular, for a 2-universal family $\mathcal{H}$, for any distinct elements $x_1,x_2$, a uniform random $h\in\mathcal{H}$ has
$\Pr[h(x_1)=h(x_2)]\le\frac{1}{M}$.

For a strongly 2-universal family $\mathcal{H}$, for any distinct elements $x_1,x_2$ and any values $y_1,y_2$, a uniform random $h\in\mathcal{H}$ has
$\Pr[h(x_1)=y_1\wedge h(x_2)=y_2]=\frac{1}{M^2}$.

This behavior is exactly the same as that of a uniform random hash function on any pair of inputs. For this reason, a strongly 2-universal hash family is also called a family of pairwise independent hash functions.

2-universal hash families

The construction of pairwise independent random variables modulo a prime, introduced in Section 1, already provides a way of constructing a strongly 2-universal hash family.

Let $p$ be a prime. The function $h_{a,b}:[p]\rightarrow[p]$ is defined by

$h_{a,b}(x)=(ax+b)\bmod p$,

and the family is

$\mathcal{H}=\{h_{a,b}\mid a,b\in[p]\}$.

Lemma
$\mathcal{H}$ is strongly 2-universal.
Proof.
In Section 1, we proved the uniformity and pairwise independence of the sequence $(a+i\cdot b)\bmod p$, $i=1,2,\ldots,p$, for uniform and independent $a,b\in[p]$, which directly implies that $\mathcal{H}$ is strongly 2-universal.
The original construction of Carter-Wegman

What if we want hash functions from $[N]$ to $[M]$ for non-prime $N$ and $M$? Carter and Wegman developed the following method.

Suppose that the universe is $[N]$, and the functions map $[N]$ to $[M]$, where $N\ge M$. For some prime $p\ge N$, let

$h_{a,b}(x)=((ax+b)\bmod p)\bmod M$,

and the family

$\mathcal{H}=\{h_{a,b}\mid 1\le a\le p-1, b\in[p]\}$.

Note that unlike the first construction, now $a\neq 0$.

Lemma (Carter-Wegman)
$\mathcal{H}$ is 2-universal.
Proof.
Due to the definition of $\mathcal{H}$, there are $p(p-1)$ distinct hash functions in $\mathcal{H}$, because each hash function corresponds to a pair $1\le a\le p-1$ and $b\in[p]$. We only need to count, for any particular pair $x_1,x_2\in[N]$ with $x_1\neq x_2$, the number of hash functions $h$ such that $h(x_1)=h(x_2)$.

We first note that for any $x_1\neq x_2$, $ax_1+b\not\equiv ax_2+b\pmod p$. This is because $ax_1+b\equiv ax_2+b\pmod p$ would imply that $a(x_1-x_2)\equiv 0\pmod p$, which can never happen since $1\le a\le p-1$ and $x_1\neq x_2$ (note that $x_1,x_2\in[N]$ for an $N\le p$). Therefore, we can assume that $(ax_1+b)\bmod p=u$ and $(ax_2+b)\bmod p=v$ for some $u\neq v$.

Since $p$ is prime and $x_1\neq x_2$, for any $u,v\in[p]$ with $u\neq v$ there is exactly one solution $(a,b)$ to the linear system

$\begin{cases}ax_1+b\equiv u\pmod p,\\ax_2+b\equiv v\pmod p\end{cases}$

(subtracting the congruences determines $a$, since $x_1-x_2$ is invertible modulo $p$, and then $b$).

After taking the modulo $M$, every $u\in[p]$ has at most $\lceil p/M\rceil-1\le\frac{p-1}{M}$ many $v\in[p]$ such that $v\neq u$ but $u\equiv v\pmod M$. Therefore, for every pair $x_1\neq x_2$, there exist at most $\frac{p(p-1)}{M}$ pairs $(a,b)$ such that $((ax_1+b)\bmod p)\bmod M=((ax_2+b)\bmod p)\bmod M$, which means there are at most $\frac{p(p-1)}{M}$ hash functions $h\in\mathcal{H}$ with $h(x_1)=h(x_2)$. For an $h$ uniformly chosen from $\mathcal{H}$, for any $x_1\neq x_2$,

$\Pr[h(x_1)=h(x_2)]\le\frac{p(p-1)/M}{p(p-1)}=\frac{1}{M}$.

This proves that $\mathcal{H}$ is 2-universal.
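
A minimal C sketch of the Carter-Wegman family; the Mersenne prime $2^{31}-1$ is an example choice of $p$, and keys are assumed to be below $p$:

#include <stdint.h>

#define P ((uint64_t)2147483647)   /* example prime p = 2^31 - 1, with p >= N */

/* h_{a,b}(x) = ((a*x + b) mod p) mod M, with 1 <= a <= p-1 and b in [p].
   Since a, b, x < 2^31, the product a*x + b fits in 64 bits. */
static uint32_t cw_hash(uint64_t a, uint64_t b, uint64_t x, uint32_t M) {
    return (uint32_t)(((a * x + b) % P) % M);
}

Choosing $a$ uniformly from $\{1,\ldots,p-1\}$ and $b$ uniformly from $[p]$ selects a uniform member of $\mathcal{H}$.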

A construction used in practice

The main issue with the Carter-Wegman construction is efficiency: the modulo operation is slow, and has been so for more than 30 years.

The following construction is due to Dietzfelbinger et al. It was published in 1997 and has since been used in practice in various applications of universal hashing.

The family of hash functions is from $[2^u]$ to $[2^v]$. In binary representation, the functions map binary strings of length $u$ to binary strings of length $v$. Let

$h_a(x)=\left\lfloor\frac{a\cdot x\bmod 2^u}{2^{u-v}}\right\rfloor$,

and the family

$\mathcal{H}=\{h_a\mid a\in[2^u]\text{ and }a\text{ is odd}\}$.

This family of hash functions does not exactly meet the requirement of a 2-universal family. However, Dietzfelbinger et al. proved that $\mathcal{H}$ is close to a 2-universal family. Specifically, for any distinct input values $x_1,x_2\in[2^u]$, for a uniformly random $h\in\mathcal{H}$,

$\Pr[h(x_1)=h(x_2)]\le\frac{2}{2^v}$.

So $\mathcal{H}$ is within an approximation ratio of 2 of being 2-universal. The proof uses the fact that odd numbers are relatively prime to any power of 2.

The function is extremely simple to compute in the C language. We exploit the fact that C multiplication (*) of unsigned $u$-bit numbers is done modulo $2^u$, and obtain a one-line C expression for computing the hash function:

h_a(x) = (a*x)>>(u-v)

The bit-wise shift is a lot faster than the modulo operation, which explains the popularity of this scheme in practice over the original Carter-Wegman construction.
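For concreteness, here is a self-contained C version for $u=64$, so the modulo $2^u$ is exactly what unsigned 64-bit multiplication does; the parameter names follow the construction above:

#include <stdint.h>

/* Multiply-shift hashing of Dietzfelbinger et al. for u = 64:
   h_a(x) = floor((a*x mod 2^64) / 2^(64-v)), i.e., the top v bits
   of the 64-bit product.  a must be a uniformly random odd number;
   valid for 1 <= v <= 64. */
static inline uint64_t hash_ms(uint64_t a, uint64_t x, unsigned v) {
    return (a * x) >> (64 - v);   /* unsigned overflow is the mod 2^64 */
}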

Collision number

Consider a 2-universal family $\mathcal{H}$ of hash functions from $[N]$ to $[M]$. Let $h$ be a hash function chosen uniformly from $\mathcal{H}$. For a fixed set $S$ of $n$ distinct elements from $[N]$, say $S=\{x_1,x_2,\ldots,x_n\}$, the elements are mapped to the hash values $h(x_1),h(x_2),\ldots,h(x_n)$. This can be seen as throwing $n$ balls into $M$ bins, with pairwise independent choices of bins.

As in the balls-into-bins with full independence, we are curious about the questions such as the birthday problem or the maximum load. These questions are interesting not only because they are natural to ask in a balls-into-bins setting, but in the context of hashing, they are closely related to the performance of hash functions.

The old techniques for analyzing balls-into-bins rely heavily on the independence of the choice of bin for each ball, and therefore can hardly be extended to the setting of 2-universal hash families. However, it turns out that several balls-into-bins questions can be answered by analyzing a very natural quantity: the number of collision pairs.

A collision pair for hashing is a pair of elements $x_1,x_2\in S$ which are mapped to the same hash value, i.e. $h(x_1)=h(x_2)$. Formally, for a fixed set of elements $S=\{x_1,x_2,\ldots,x_n\}$, for any $1\le i<j\le n$, let the random variable

$X_{ij}=\begin{cases}1 & \text{if }h(x_i)=h(x_j),\\0 & \text{otherwise}.\end{cases}$

The total number of collision pairs among the $n$ items is

$X=\sum_{i<j}X_{ij}$.

Since $\mathcal{H}$ is 2-universal, for any $i\neq j$,

$\Pr[X_{ij}=1]=\Pr[h(x_i)=h(x_j)]\le\frac{1}{M}$.

The expected number of collision pairs is

$\mathbf{E}[X]=\sum_{i<j}\mathbf{E}[X_{ij}]=\sum_{i<j}\Pr[X_{ij}=1]\le\binom{n}{2}\frac{1}{M}<\frac{n^2}{2M}$.

In particular, for $M=n$, i.e. when $n$ items are mapped to $n$ hash values by a pairwise independent hash function, the expected collision number is $\mathbf{E}[X]<\frac{n^2}{2n}=\frac{n}{2}$.

Birthday problem

In the context of hash functions, the birthday problem asks for the probability that there is no collision at all. Since collisions are something we want to avoid in applications of hash functions, we would like to lower bound the probability of zero collision, i.e. to upper bound the probability that there exists a collision pair.

The above analysis gives us an estimate of the expected number of collision pairs: $\mathbf{E}[X]<\frac{n^2}{2M}$. Applying Markov's inequality, for $0<\epsilon<1$, we have

$\Pr[X\ge 1]\le\mathbf{E}[X]<\frac{n^2}{2M}$.

When $n\le\sqrt{2\epsilon M}$, this probability is at most $\epsilon$, so the number of collision pairs is $X\ge 1$ with probability at most $\epsilon$; therefore, with probability at least $1-\epsilon$, there is no collision at all. Therefore, we have the following theorem.

Theorem
If $h$ is chosen uniformly from a 2-universal family of hash functions mapping the universe $[N]$ to $[M]$, where $N\ge M$, then for any set $S$ of $n$ items, where $n\le\sqrt{2\epsilon M}$, the probability that there exists a collision pair is
$\Pr[\text{a collision occurs}]\le\epsilon$.

Recall that for mutually independent choices of bins, the probability that no collision occurs among $n$ balls in $M$ bins is approximately $e^{-n^2/2M}$, so for small $\epsilon$ a collision occurs with probability about $\epsilon$ when $n\approx\sqrt{2\epsilon M}$. For constant $\epsilon$, this gives essentially the same bound as the pairwise independent setting. Therefore, the behavior of pairwise independent hash functions is essentially the same as that of uniform random hash functions for the birthday problem. This is easy to understand: the birthday problem is about the behavior of collisions, and the definition of 2-universal hash functions can be interpreted as "functions whose probability of collision is as low as that of a uniform random function".

Perfect Hashing

Perfect hashing is a data structure for storing a static dictionary. In a static dictionary, a set $S$ of $n$ items from the universe $[N]$ is preprocessed and stored in a table. Once the table is constructed, it will not be changed any more, but will only be used for search operations: a search for an item gives the location of the item in the table, or reports that the item is not in the table. You may think of an application where we store an encyclopedia on a DVD, so that searches are very efficient but there are no updates to the data.

This problem can be solved by binary search on a sorted table or by balanced search trees in $O(\log n)$ time for a set $S$ of $n$ elements. We show how to solve this problem in $O(1)$ time by perfect hashing.

Perfect hashing using quadratic space

The idea of perfect hashing is to use a hash function $h$ to map the $n$ items to distinct entries of the table; store every item $x\in S$ in the entry $h(x)$; and also store the hash function $h$ itself in a fixed location in the table (usually the beginning of the table). The algorithm for searching for an item is as follows:

search for $x$ in table $T$:
  1. retrieve $h$ from the fixed location in the table;
  2. if $T[h(x)]=x$ return $h(x)$; else return NOT_FOUND;

This scheme works as long as the hash function $h$ satisfies the following two conditions:

  • The description of $h$ is sufficiently short, so that $h$ can be stored in one entry (or constantly many entries) of the table.
  • $h$ has no collisions on $S$, i.e. there is no pair of items $x_1,x_2\in S$ that are mapped to the same value by $h$.

The first condition is easy to guarantee for 2-universal hash families. As shown by the Carter-Wegman construction, a 2-universal hash function can be uniquely represented by two integers $a$ and $b$, which can be stored in two entries (or just one, if the word length is sufficiently large) of the table.

Our discussion now focuses on the second condition, which relies on the perfectness of the hash function for the data set $S$.

A hash function $h$ is perfect for a set $S$ of items if $h$ maps all items in $S$ to different values, i.e. there is no collision.

We have shown by the birthday problem for 2-universal hashing that when $n$ items are mapped to $n^2$ values, for an $h$ chosen uniformly from a 2-universal family of hash functions, the probability that a collision occurs is at most 1/2. Thus

$\Pr[h\text{ is perfect for }S]\ge\frac{1}{2}$

for a table of $M=n^2$ entries.

The construction of a perfect hashing is then straightforward:

For a set $S$ of $n$ elements:
  1. uniformly choose an $h$ from a 2-universal family $\mathcal{H}$; (for the Carter-Wegman construction, this means uniformly choosing two integers $1\le a\le p-1$ and $b\in[p]$, for a sufficiently large prime $p$.)
  2. check whether $h$ is perfect for $S$;
  3. if $h$ is NOT perfect for $S$, start over again; otherwise, construct the table.

This is a Las Vegas randomized algorithm, which constructs a perfect hashing for a fixed set $S$ using, in expectation, at most two trials (due to the geometric distribution). The resulting data structure is an $O(n^2)$-size static dictionary of $n$ elements which answers every search in deterministic $O(1)$ time.
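
A minimal C sketch of the construction, reusing the hypothetical cw_hash() and prime P from the Carter-Wegman sketch above; rand() again stands in for uniform sampling:

#include <stdint.h>
#include <stdlib.h>

#define EMPTY UINT64_MAX   /* sentinel; assumes no key equals UINT64_MAX */

/* one trial: map S into a table of M = n^2 slots, fail on any collision */
static int try_build(const uint64_t *S, uint32_t n, uint64_t a, uint64_t b,
                     uint64_t *table, uint32_t M) {
    for (uint32_t i = 0; i < M; i++) table[i] = EMPTY;
    for (uint32_t i = 0; i < n; i++) {
        uint32_t slot = cw_hash(a, b, S[i], M);
        if (table[slot] != EMPTY) return 0;  /* h is not perfect for S */
        table[slot] = S[i];
    }
    return 1;
}

/* Las Vegas: since Pr[h is perfect] >= 1/2 for M = n^2, the expected
   number of trials is at most 2.  Assumes n^2 fits in 32 bits. */
static void build_perfect(const uint64_t *S, uint32_t n,
                          uint64_t *a, uint64_t *b, uint64_t *table) {
    uint32_t M = n * n;
    do {
        *a = 1 + (uint64_t)rand() % (P - 1);  /* stand-in for uniform a in [1, p-1] */
        *b = (uint64_t)rand() % P;            /* stand-in for uniform b in [p] */
    } while (!try_build(S, n, *a, *b, table, M));
}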

FKS perfect hashing

In the last section we saw how to use $O(n^2)$ space and constant time to answer searches in a set. Now we see how to do it with linear space and constant time. This solves the search problem asymptotically optimally in both time and space.

This was once seemingly impossible, until Yao's seminal paper:

  • Yao. Should tables be sorted? Journal of the ACM (JACM), 1981.

Yao's paper shows the possibility of achieving linear space and constant time at the same time by exploiting the power of hashing, but assumes an unrealistically large universe.

Inspired by Yao's work, Fredman, Komlós, and Szemerédi discovered the first linear-space and constant-time static dictionary in a realistic setting:

  • Fredman, Komlós, and Szemerédi. Storing a sparse table with O(1) worst case access time. Journal of the ACM (JACM), 1984.

The idea of FKS hashing is to arrange the hash table in two levels:

  • In the first level, $n$ items are hashed into $n$ buckets by a 2-universal hash function $h$.
    Let $B_i$ be the set of items hashed to the $i$-th bucket.
  • In the second level, construct a $|B_i|^2$-size perfect hashing for each bucket $B_i$.

The data structure can be stored in a table. The first few entries are reserved to store the primary hash function $h$. To help the searching algorithm locate a bucket, we use the next $n$ entries of the table as "pointers" to the buckets: each entry stores the address of the first entry of the space storing the corresponding bucket. In the rest of the table, the buckets are stored in order, each using $|B_i|^2$ space as required by perfect hashing.

[Figure: the two-level layout of the FKS hash table]

It is easy to see that the search time is constant. To search for an item $x$, the algorithm does the following:

  • Retrieve $h$.
  • Retrieve the address of bucket $B_{h(x)}$.
  • Search by perfect hashing within bucket $B_{h(x)}$.

Each step takes constant time, so the worst-case search time is constant.

We then need to guarantee that the space is linear in $n$. At first glance this seems impossible, because each instance of perfect hashing for a bucket costs square-sized space. We will prove that although the individual buckets use square-sized spaces, their sum is still linear.

For a fixed set $S$ of $n$ items, for a hash function $h$ chosen uniformly from a 2-universal family which maps the items to $[n]$, whose values we call buckets, let $Y_i=|B_i|$ be the number of items in $S$ mapped to the $i$-th bucket. We are going to bound the following quantity:

$Y=\sum_{i=0}^{n-1}Y_i^2$.

Since each bucket $B_i$ uses a space of $Y_i^2$ for perfect hashing, $Y$ gives the size of the space for storing the buckets.

We will show that $Y$ is related to the total number of collision pairs. (Indeed, the number of collision pairs can be computed by a degree-2 polynomial, just like $Y$.)

Note that a bucket of $Y_i$ items contributes $\binom{Y_i}{2}$ collision pairs. Let $X$ be the total number of collision pairs. $X$ can be computed by summing over the collision pairs in every bucket:

$X=\sum_{i=0}^{n-1}\binom{Y_i}{2}=\sum_{i=0}^{n-1}\frac{Y_i(Y_i-1)}{2}=\frac{1}{2}\left(\sum_{i=0}^{n-1}Y_i^2-\sum_{i=0}^{n-1}Y_i\right)=\frac{1}{2}\left(\sum_{i=0}^{n-1}Y_i^2-n\right)$.

Therefore, the sum of squares of the sizes of the buckets is related to the collision number by

$\sum_{i=0}^{n-1}Y_i^2=2X+n$.

By our analysis of the collision number, we know that for $n$ items mapped to $n$ buckets, the expected number of collision pairs is $\mathbf{E}[X]<\frac{n}{2}$. Thus,

$\mathbf{E}\left[\sum_{i=0}^{n-1}Y_i^2\right]=2\mathbf{E}[X]+n<2n$.

Due to Markov's inequality, $\sum_{i=0}^{n-1}Y_i^2=O(n)$ with constant probability; for instance, $\Pr\left[\sum_{i=0}^{n-1}Y_i^2\ge 4n\right]\le\frac{1}{2}$. For any set $S$, we can find a suitable $h$ after an expected constant number of trials, so FKS can be constructed with guaranteed (instead of expected) linear size, answering each search in constant time.
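
A minimal C sketch of the first-level selection, again reusing the hypothetical cw_hash() and P from the Carter-Wegman sketch; it retries until $\sum_i Y_i^2<4n$, which Markov's inequality grants with probability more than 1/2 per trial:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* pick a primary hash h = (a, b) for which the total second-level
   space, sum of Y_i^2, is linear; each bucket then gets its own
   quadratic-space perfect hashing as in the previous section */
static void fks_first_level(const uint64_t *S, uint32_t n,
                            uint64_t *a, uint64_t *b, uint32_t *bucket_size) {
    for (;;) {
        *a = 1 + (uint64_t)rand() % (P - 1);  /* stand-ins for uniform sampling */
        *b = (uint64_t)rand() % P;
        memset(bucket_size, 0, n * sizeof *bucket_size);
        for (uint32_t i = 0; i < n; i++)
            bucket_size[cw_hash(*a, *b, S[i], n)]++;
        uint64_t sumsq = 0;
        for (uint32_t i = 0; i < n; i++)
            sumsq += (uint64_t)bucket_size[i] * bucket_size[i];
        if (sumsq < 4ull * n) return;  /* E[sumsq] < 2n, so each trial succeeds
                                          with probability > 1/2 */
    }
}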