Randomized Algorithms (Spring 2013)/Universal Hashing


k-wise independence

Recall the definition of independence between events:

Definition (Independent events)
Events $E_1,E_2,\ldots,E_n$ are mutually independent if, for any subset $I\subseteq\{1,2,\ldots,n\}$,
$\Pr\left[\bigwedge_{i\in I}E_i\right]=\prod_{i\in I}\Pr[E_i]$.

Similarly, we can define independence between random variables:

Definition (Independent variables)
Random variables $X_1,X_2,\ldots,X_n$ are mutually independent if, for any subset $I\subseteq\{1,2,\ldots,n\}$ and any values $x_i$, where $i\in I$,
$\Pr\left[\bigwedge_{i\in I}(X_i=x_i)\right]=\prod_{i\in I}\Pr[X_i=x_i]$.

Mutual independence is an ideal condition. A limited notion of independence is usually defined by $k$-wise independence.

Definition ($k$-wise independence)
1. Events $E_1,E_2,\ldots,E_n$ are $k$-wise independent if, for any subset $I\subseteq\{1,2,\ldots,n\}$ with $|I|\le k$,
$\Pr\left[\bigwedge_{i\in I}E_i\right]=\prod_{i\in I}\Pr[E_i]$.
2. Random variables $X_1,X_2,\ldots,X_n$ are $k$-wise independent if, for any subset $I\subseteq\{1,2,\ldots,n\}$ with $|I|\le k$ and any values $x_i$, where $i\in I$,
$\Pr\left[\bigwedge_{i\in I}(X_i=x_i)\right]=\prod_{i\in I}\Pr[X_i=x_i]$.

A very common case is pairwise independence, i.e. 2-wise independence.

Definition (pairwise independent random variables)
Random variables $X_1,X_2,\ldots,X_n$ are pairwise independent if, for any $X_i,X_j$ where $i\neq j$ and any values $a,b$,
$\Pr[X_i=a\wedge X_j=b]=\Pr[X_i=a]\cdot\Pr[X_j=b]$.

Note that the definition of k-wise independence is hereditary:

  • If $X_1,X_2,\ldots,X_n$ are $k$-wise independent, then they are also $\ell$-wise independent for any $\ell<k$.
  • If $X_1,X_2,\ldots,X_n$ are NOT $k$-wise independent, then they cannot be $\ell$-wise independent for any $\ell>k$.

Pairwise Independent Bits

Suppose we have $m$ mutually independent and uniform random bits $X_1,\ldots,X_m$. We are going to extract $n=2^m-1$ pairwise independent bits from these $m$ mutually independent bits.

Enumerate all the nonempty subsets of $\{1,2,\ldots,m\}$ in some order. Let $S_j$ be the $j$-th subset. Let

$Y_j=\bigoplus_{i\in S_j}X_i$,

where $\oplus$ is the exclusive-or, whose truth table is as follows.

a b a⊕b
0 0 0
0 1 1
1 0 1
1 1 0

There are $n=2^m-1$ such $Y_j$, because there are $2^m-1$ nonempty subsets of $\{1,2,\ldots,m\}$. An equivalent definition of $Y_j$ is

$Y_j=\left(\sum_{i\in S_j}X_i\right)\bmod 2$.

Sometimes, $Y_j$ is called the parity of the bits in $S_j$.

We claim that the $Y_j$ are pairwise independent and uniform.

Theorem
For any $Y_j$ and any $b\in\{0,1\}$,
$\Pr[Y_j=b]=\frac{1}{2}$.
For any $Y_j,Y_\ell$ with $j\neq\ell$ and any $a,b\in\{0,1\}$,
$\Pr[Y_j=a\wedge Y_\ell=b]=\frac{1}{4}$.

The proof is left as an exercise.

Therefore, we can extract exponentially many pairwise independent uniform random bits from a sequence of mutually independent uniform random bits.

Note that the $Y_j$ are not 3-wise independent. For example, consider the subsets $S_1=\{1\}, S_2=\{2\}, S_3=\{1,2\}$ and the corresponding random bits $Y_1,Y_2,Y_3$. Any two of $Y_1,Y_2,Y_3$ determine the value of the third one.
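To make the construction concrete, here is a minimal C sketch (the packing of the source bits into a machine word and the function names are our own illustrative choices): the $m$ bits are stored in the low bits of an unsigned integer, the subset $S_j$ is encoded by the bitmask $j$, and $Y_j$ is the parity of the selected bits.

#include <stdio.h>

/* Parity (XOR) of all bits of s, via successive folding. */
static unsigned parity(unsigned s) {
    s ^= s >> 16; s ^= s >> 8; s ^= s >> 4; s ^= s >> 2; s ^= s >> 1;
    return s & 1u;
}

/* Y_j: the subset S_j is encoded by the bitmask j (1 <= j <= 2^m - 1);
   x holds the m independent random bits X_1..X_m in its low bits. */
static unsigned Y(unsigned x, unsigned j) {
    return parity(x & j);
}

int main(void) {
    unsigned m = 3, x = 5u;   /* X_1 = 1, X_2 = 0, X_3 = 1 (low bit first) */
    for (unsigned j = 1; j < (1u << m); j++)
        printf("Y_%u = %u\n", j, Y(x, j));
    return 0;
}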

Pairwise Independent Variables

We now consider constructing pairwise independent random variables ranging over $[p]=\{0,1,2,\ldots,p-1\}$ for some prime $p$. Unlike the above construction, now we only need two independent random sources $X_0,X_1$, which are uniformly and independently distributed over $[p]$.

Let $Y_0,Y_1,\ldots,Y_{p-1}$ be defined as:

$Y_i=(X_0+i\cdot X_1)\bmod p\quad\text{for }i\in[p]$.
Theorem
The random variables $Y_0,Y_1,\ldots,Y_{p-1}$ are pairwise independent uniform random variables over $[p]$.
Proof.
We first show that the $Y_i$ are uniform. That is, we will show that for any $i,a\in[p]$,
$\Pr[(X_0+i\cdot X_1)\bmod p=a]=\frac{1}{p}$.

Due to the law of total probability,

$\Pr[(X_0+i\cdot X_1)\bmod p=a]=\sum_{j\in[p]}\Pr[X_1=j]\cdot\Pr[(X_0+i\cdot j)\bmod p=a]=\frac{1}{p}\sum_{j\in[p]}\Pr[X_0\equiv(a-i\cdot j)\pmod p]$.

For prime $p$ and any $i,j,a\in[p]$, there is exactly one value of $X_0$ in $[p]$ satisfying $X_0\equiv(a-i\cdot j)\pmod p$. Thus, $\Pr[X_0\equiv(a-i\cdot j)\pmod p]=1/p$, and the above probability is $\frac{1}{p}$.

We then show that the $Y_i$ are pairwise independent, i.e. we will show that for any $Y_i,Y_j$ with $i\neq j$ and any $a,b\in[p]$,

$\Pr[Y_i=a\wedge Y_j=b]=\frac{1}{p^2}$.

The event $Y_i=a\wedge Y_j=b$ is equivalent to the system

$\begin{cases}(X_0+i\cdot X_1)\equiv a\pmod p\\(X_0+j\cdot X_1)\equiv b\pmod p.\end{cases}$

Due to the Chinese remainder theorem, there exists a unique solution of $X_0$ and $X_1$ in $[p]$ to the above linear congruential system. Thus the probability of the event is $\frac{1}{p^2}$.
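A minimal C sketch of this construction (the prime and the sampled values below are illustrative choices of our own); the 64-bit intermediate product guards against overflow for primes of up to 32 bits:

#include <stdio.h>

/* Y_i = (X_0 + i * X_1) mod p for a prime p; the 64-bit intermediate
   product avoids overflow for primes of up to 32 bits. */
static unsigned pairwise(unsigned x0, unsigned x1, unsigned i, unsigned p) {
    return (unsigned)((x0 + (unsigned long long)i * x1) % p);
}

int main(void) {
    unsigned p = 17, x0 = 3, x1 = 11;   /* two uniform samples from [p] */
    for (unsigned i = 0; i < p; i++)
        printf("Y_%u = %u\n", i, pairwise(x0, x1, i, p));
    return 0;
}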

Tools for limited independence

Let $X_1,X_2,\ldots,X_n$ be random variables. The variance of their sum is

$\mathbf{Var}\left[\sum_{i=1}^n X_i\right]=\sum_{i=1}^n\mathbf{Var}[X_i]+\sum_{i\neq j}\mathbf{cov}(X_i,X_j)$.

If $X_1,X_2,\ldots,X_n$ are pairwise independent, then $\mathbf{cov}(X_i,X_j)=0$ for any $i\neq j$, since the covariance of a pair of independent random variables is 0. This gives us the following theorem on the linearity of variance for pairwise independent random variables.

Theorem
For pairwise independent random variables $X_1,X_2,\ldots,X_n$,
$\mathbf{Var}\left[\sum_{i=1}^n X_i\right]=\sum_{i=1}^n\mathbf{Var}[X_i]$.

The theorem relies on the fact that the covariances of pairwise independent random variables are 0, which is in turn a consequence of a more general theorem.

Theorem ($k$-wise independence fools degree-$k$ polynomials)
Let $X_1,X_2,\ldots,X_n$ be mutually independent random variables and $Y_1,Y_2,\ldots,Y_n$ be $k$-wise independent random variables such that the marginal distribution of $Y_i$ is identical to the marginal distribution of $X_i$ for $1\le i\le n$; that is, $\Pr[X_i=z]=\Pr[Y_i=z]$ for any $z$ and $1\le i\le n$.
Let $f:\mathbb{R}^n\to\mathbb{R}$ be a multivariate polynomial of degree at most $k$. Then
$\mathbf{E}[f(X_1,X_2,\ldots,X_n)]=\mathbf{E}[f(Y_1,Y_2,\ldots,Y_n)]$.

This phenomenon is sometimes described by saying that degree-$k$ polynomials are fooled by $k$-wise independence. In other words, a degree-$k$ polynomial behaves the same on $k$-wise independent random variables as on mutually independent random variables.

This theorem is implied by the following lemma.

Lemma
Let $X_1,X_2,\ldots,X_k$ be $k$ mutually independent random variables. Then
$\mathbf{E}\left[\prod_{i=1}^k X_i\right]=\prod_{i=1}^k\mathbf{E}[X_i]$.

The lemma can be proved by directly computing the expectation. We omit the detailed proof.

By the linearity of expectation, the expectation of a polynomial reduces to the sum of the expectations of its terms. For a degree-$k$ polynomial, each term involves at most $k$ variables. Due to the above lemma, under $k$-wise independence the expectation of each term behaves exactly the same as under mutual independence.

Since the $k$th moment is the expectation of a degree-$k$ polynomial of the random variables, tools based on the $k$th moment can be safely used under $k$-wise independence. In particular, Chebyshev's inequality holds for pairwise independent random variables:

Chebyshev's inequality
Let $X=\sum_{i=1}^n X_i$, where $X_1,X_2,\ldots,X_n$ are pairwise independent Poisson trials. Let $\mu=\mathbf{E}[X]$.
Then
$\Pr[|X-\mu|\ge t]\le\frac{\mathbf{Var}[X]}{t^2}=\frac{\sum_{i=1}^n\mathbf{Var}[X_i]}{t^2}$.

Application: Derandomizing MAX-CUT

Let $G(V,E)$ be an undirected graph and $S\subseteq V$ a vertex set. The cut defined by $S$ is $C(S,\bar{S})=|\{uv\in E\mid u\in S, v\notin S\}|$.

Given as input an undirected graph $G(V,E)$, find an $S\subseteq V$ whose cut value $C(S,\bar{S})$ is maximized. This problem is called the maximum cut (MAX-CUT) problem, and it is NP-hard. The decision version of a weighted version of the problem is one of Karp's 21 NP-complete problems. The problem has a 0.878-approximation algorithm obtained by rounding a semidefinite program. Assuming the unique games conjecture (UGC), there does not exist a polynomial-time algorithm with a better approximation ratio unless P=NP.

Here we give a very simple 0.5-approximation algorithm. The "algorithm" has a one-line description:

  • Put each $v\in V$ into $S$ independently with probability 1/2.

We then analyze the approximation ratio of this algorithm.

For each $v\in V$, let $Y_v$ indicate whether $v\in S$, that is,

$Y_v=\begin{cases}1 & v\in S,\\ 0 & v\notin S.\end{cases}$

For each edge $uv\in E$, let $Y_{uv}$ indicate whether $uv$ contributes to the cut $C(S,\bar{S})$, i.e. whether $u\in S, v\notin S$ or $u\notin S, v\in S$; that is,

$Y_{uv}=\begin{cases}1 & Y_u\neq Y_v,\\ 0 & \text{otherwise}.\end{cases}$

Then $C(S,\bar{S})=\sum_{uv\in E}Y_{uv}$. Due to the linearity of expectation,

$\mathbf{E}[C(S,\bar{S})]=\sum_{uv\in E}\mathbf{E}[Y_{uv}]=\sum_{uv\in E}\Pr[Y_u\neq Y_v]=\frac{|E|}{2}$.

The maximum cut of a graph is at most $|E|$. Thus, the algorithm returns in expectation a cut of size at least half of the maximum cut.

We then show how to derandomize this algorithm using pairwise independent bits.

Suppose that $|V|=n$, and enumerate the $n$ vertices as $v_1,v_2,\ldots,v_n$ in an arbitrary order. Let $m=\lceil\log_2(n+1)\rceil$. Sample $m$ bits $X_1,\ldots,X_m\in\{0,1\}$ uniformly and independently at random. Enumerate all nonempty subsets of $\{1,2,\ldots,m\}$ as $S_1,S_2,\ldots,S_{2^m-1}$. For each vertex $v_j$, let $Y_{v_j}=\bigoplus_{i\in S_j}X_i$. The MAX-CUT algorithm uses these bits to construct the solution $S$:

  • For $j=1,2,\ldots,n$, put $v_j$ into $S$ if $Y_{v_j}=1$.

We have shown that the $Y_{v_j}$, $j=1,2,\ldots,n$, are uniform and pairwise independent. Thus we still have $\Pr[Y_u\neq Y_v]=\frac{1}{2}$ for every edge $uv\in E$. The above analysis still holds, so the algorithm returns in expectation a cut of size at least $\frac{|E|}{2}$.

Finally, we notice that there are only $m=\lceil\log_2(n+1)\rceil$ random bits in total in the new algorithm. We can enumerate all $2^m\le 2(n+1)$ possible strings of $m$ bits, run the above algorithm with each bit string as the "random source", and output the maximum cut returned. There must exist a bit string $X_1,\ldots,X_m\in\{0,1\}$ on which the algorithm returns a cut of size at least $\frac{|E|}{2}$ (why?). This gives us a deterministic polynomial-time (actually $O(n^2)$-time) 1/2-approximation algorithm.
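The following C sketch carries out this enumeration on a small hard-coded graph (the graph and all names are illustrative choices of our own): it tries all $2^m$ seeds, puts vertex $v_j$ into $S$ according to the parity of the seed bits selected by the mask $j$, and reports the best cut found.

#include <stdio.h>

/* Parity (XOR) of all bits of s. */
static unsigned parity(unsigned s) {
    s ^= s >> 16; s ^= s >> 8; s ^= s >> 4; s ^= s >> 2; s ^= s >> 1;
    return s & 1u;
}

int main(void) {
    /* Illustrative graph: a 4-cycle with edges (0,1),(1,2),(2,3),(3,0). */
    int n = 4, num_edges = 4;
    int edges[4][2] = {{0,1},{1,2},{2,3},{3,0}};

    unsigned m = 0;
    while ((1u << m) < (unsigned)(n + 1)) m++;   /* m = ceil(log2(n+1)) */

    int best = 0;
    for (unsigned seed = 0; seed < (1u << m); seed++) {   /* all 2^m seeds */
        int cut = 0;
        for (int e = 0; e < num_edges; e++) {
            /* vertex v_j (mask j = vertex index + 1) joins S iff Y_{v_j} = 1 */
            unsigned yu = parity(seed & (unsigned)(edges[e][0] + 1));
            unsigned yv = parity(seed & (unsigned)(edges[e][1] + 1));
            cut += (yu != yv);
        }
        if (cut > best) best = cut;
    }
    printf("best cut: %d (guaranteed at least |E|/2 = %d)\n", best, num_edges / 2);
    return 0;
}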

Application: Two-point sampling

Consider a Monte Carlo randomized algorithm with one-sided error for a decision problem $f$. We formulate the algorithm as a deterministic algorithm $A$ that takes as input $x$ and a uniform random number $r\in[p]$, where $p$ is a prime, such that for any input $x$:

  • If $f(x)=1$, then $\Pr[A(x,r)=1]\ge\frac{1}{2}$, where the probability is taken over the random choice of $r$.
  • If $f(x)=0$, then $A(x,r)=0$ for any $r$.

We call $r$ the random source for the algorithm.

For an $x$ with $f(x)=1$, we call an $r$ that makes $A(x,r)=1$ a witness for $x$. For a positive $x$, at least half of $[p]$ are witnesses. The random source $r$ has a polynomial number of bits, which means that $p$ is exponentially large, so it is infeasible to find a witness for an input $x$ by exhaustive search. Determinism overcomes this by using sophisticated deterministic rules to search efficiently for a witness. Randomization, on the other hand, reduces this to a bit of luck: randomly choose an $r$ and win with probability at least 1/2.

We can boost the accuracy (equivalently, reduce the error) of any Monte Carlo randomized algorithm with one-sided error by running the algorithm multiple times.

Suppose that we sample $t$ values $r_1,r_2,\ldots,r_t$ uniformly and independently from $[p]$, and run the following scheme:

$B(x,r_1,r_2,\ldots,r_t)$:
return $\bigvee_{i=1}^t A(x,r_i)$;

That is, return 1 if any instance $A(x,r_i)=1$. For any $x$ with $f(x)=1$, due to the independence of $r_1,r_2,\ldots,r_t$, the probability that $B(x,r_1,r_2,\ldots,r_t)$ returns an incorrect result is at most $2^{-t}$. On the other hand, $B$ never makes mistakes for $x$ with $f(x)=0$, since $A$ has no false positives. Thus, the error of the Monte Carlo algorithm is reduced to $2^{-t}$.

Sampling $t$ mutually independent random numbers from $[p]$ can be quite expensive, since it requires $\Omega(t\log p)$ random bits. Suppose that we can only afford $O(\log p)$ random bits. In particular, we sample two independent uniform random numbers $a$ and $b$ from $[p]$. If we use $a$ and $b$ directly by running two independent instances $A(x,a)$ and $A(x,b)$, we only get an error upper bound of 1/4.

The following scheme reduces the error significantly with the same number of random bits:

Algorithm

Choose two independent uniform random numbers $a$ and $b$ from $[p]$. Construct $t$ random numbers $r_1,r_2,\ldots,r_t$ by:

$\forall 1\le i\le t$, let $r_i=(a\cdot i+b)\bmod p$.

Run $B(x,r_1,r_2,\ldots,r_t)$.

Due to the discussion in the last section, we know that for $t\le p$, the values $r_1,r_2,\ldots,r_t$ are pairwise independent and uniform over $[p]$. Let $X_i=A(x,r_i)$ and $X=\sum_{i=1}^t X_i$. Due to the uniformity of the $r_i$ and our definition of $A$, for any $x$ with $f(x)=1$, it holds that

$\Pr[X_i=1]=\Pr[A(x,r_i)=1]\ge\frac{1}{2}$.

By the linearity of expectation,

$\mathbf{E}[X]=\sum_{i=1}^t\mathbf{E}[X_i]=\sum_{i=1}^t\Pr[X_i=1]\ge\frac{t}{2}$.

Each $X_i$ is a Bernoulli trial with success probability $q\ge 1/2$ (we write $q$ to avoid a clash with the prime $p$). We can bound the variance of each $X_i$ as follows:

$\mathbf{Var}[X_i]=q(1-q)\le\frac{1}{4}$.

Applying Chebyshev's inequality, we have that for any $x$ with $f(x)=1$,

$\Pr\left[\bigvee_{i=1}^t A(x,r_i)=0\right]=\Pr[X=0]\le\Pr[|X-\mathbf{E}[X]|\ge\mathbf{E}[X]]\le\Pr\left[|X-\mathbf{E}[X]|\ge\frac{t}{2}\right]\le\frac{4}{t^2}\sum_{i=1}^t\mathbf{Var}[X_i]\le\frac{1}{t}$.

The error is reduced to $1/t$ with only two random numbers. This scheme works as long as $t\le p$.
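A C sketch of the scheme (the function pointer A stands in for the abstract one-sided-error algorithm and is a placeholder of our own):

/* Two-point sampling: stretch two uniform samples a, b from [p] into t
   pairwise independent values r_i = (a*i + b) mod p and OR the results.
   A is a placeholder for the one-sided-error algorithm A(x, r). */
int B(int x, unsigned a, unsigned b, unsigned p, unsigned t,
      int (*A)(int x, unsigned r)) {
    for (unsigned i = 1; i <= t; i++) {
        unsigned r = (unsigned)(((unsigned long long)a * i + b) % p);
        if (A(x, r)) return 1;   /* a witness was found: f(x) = 1 for sure */
    }
    return 0;   /* wrong with probability at most 1/t when f(x) = 1 */
}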


Universal Hashing

Hashing is one of the oldest tools in Computer Science. Knuth's memorandum in 1963 on analysis of hash tables is now considered to be the birth of the area of analysis of algorithms.

  • Knuth. Notes on "open" addressing, July 22 1963. Unpublished memorandum.

The idea of hashing is simple: an unknown set $S$ of $n$ data items (or keys) is drawn from a large universe $U=[N]$, where $N\gg n$; in order to store $S$ in a table of $M$ entries (slots), we assume a consistent mapping (called a hash function) from the universe $U$ to a small range $[M]$.

This idea seems clever: we use a consistent mapping to deal with an arbitrary unknown data set. However, there is a fundamental flaw in hashing.

  • For a sufficiently large universe ($N>M(n-1)$), for any hash function, there exists a bad data set $S$ of $n$ items such that all items in $S$ are mapped to the same entry in the table.

A simple use of the pigeonhole principle proves the above statement: if each of the $M$ table entries received at most $n-1$ items of the universe, then $N\le M(n-1)$; so when $N>M(n-1)$, some entry receives at least $n$ items of the universe, and these items form a bad data set $S$.

To overcome this situation, randomization is introduced into hashing. We assume that the hash function is a random mapping from $[N]$ to $[M]$. In order to ease the analysis, the following ideal assumption is used:

Simple Uniform Hash Assumption (SUHA or UHA, a.k.a. the random oracle model):

A uniform random function $h:[N]\to[M]$ is available and the computation of $h$ is efficient.

Families of universal hash functions

The assumption of a completely random function simplifies the analysis. However, in practice a truly uniform random hash function is extremely expensive to compute and store, so this simple assumption can hardly represent reality.

There are two approaches to implementing practical hash functions. One is to use ad hoc implementations and hope they work. The other is to construct classes of hash functions that are efficient to compute and store but come with weaker randomness guarantees, and then analyze the applications of hash functions based on this weaker assumption of randomness.

This route was taken by Carter and Wegman in 1977 when they introduced universal families of hash functions.

Definition (universal hash families)
Let $[N]$ be a universe with $N\ge M$. A family of hash functions $\mathcal{H}$ from $[N]$ to $[M]$ is said to be $k$-universal if, for any distinct items $x_1,x_2,\ldots,x_k\in[N]$ and for a hash function $h$ chosen uniformly at random from $\mathcal{H}$, we have
$\Pr[h(x_1)=h(x_2)=\cdots=h(x_k)]\le\frac{1}{M^{k-1}}$.
A family of hash functions $\mathcal{H}$ from $[N]$ to $[M]$ is said to be strongly $k$-universal if, for any distinct items $x_1,x_2,\ldots,x_k\in[N]$, any values $y_1,y_2,\ldots,y_k\in[M]$, and for a hash function $h$ chosen uniformly at random from $\mathcal{H}$, we have
$\Pr[h(x_1)=y_1\wedge h(x_2)=y_2\wedge\cdots\wedge h(x_k)=y_k]=\frac{1}{M^k}$.

In particular, for a 2-universal family $\mathcal{H}$, for any distinct elements $x_1,x_2\in[N]$, a uniformly random $h\in\mathcal{H}$ has

$\Pr[h(x_1)=h(x_2)]\le\frac{1}{M}$.

For a strongly 2-universal family $\mathcal{H}$, for any distinct elements $x_1,x_2\in[N]$ and any values $y_1,y_2\in[M]$, a uniformly random $h\in\mathcal{H}$ has

$\Pr[h(x_1)=y_1\wedge h(x_2)=y_2]=\frac{1}{M^2}$.

This behavior is exactly the same as that of a uniform random hash function on any pair of inputs. For this reason, a strongly 2-universal hash family is also called a family of pairwise independent hash functions.

2-universal hash families

The construction of pairwise independent random variables via arithmetic modulo a prime introduced in Section 1 already provides a way of constructing a strongly 2-universal hash family.

Let $p$ be a prime. The function $h_{a,b}:[p]\to[p]$ is defined by

$h_{a,b}(x)=(a\cdot x+b)\bmod p$,

and the family is

$\mathcal{H}=\{h_{a,b}\mid a,b\in[p]\}$.
Lemma
$\mathcal{H}$ is strongly 2-universal.
Proof.
In Section 1, we proved the pairwise independence of the sequence $(a\cdot i+b)\bmod p$ for $i=0,1,\ldots,p-1$, which directly implies that $\mathcal{H}$ is strongly 2-universal.
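In code, drawing a uniform $h\in\mathcal{H}$ amounts to drawing the pair $(a,b)$ uniformly from $[p]^2$; a minimal C sketch (names illustrative):

/* Strongly 2-universal family: h_{a,b}(x) = (a*x + b) mod p for prime p.
   Choosing h uniformly from H means drawing a and b uniformly from [p]. */
unsigned hash_ab(unsigned x, unsigned a, unsigned b, unsigned p) {
    return (unsigned)(((unsigned long long)a * x + b) % p);
}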
The original construction of Carter-Wegman

What if we want to have hash functions from $[N]$ to $[M]$ for non-prime $N$ and $M$? Carter and Wegman developed the following method.

Suppose that the universe is $[N]$, and the functions map $[N]$ to $[M]$, where $N\ge M$. For some prime $p\ge N$, let

$h_{a,b}(x)=((a\cdot x+b)\bmod p)\bmod M$,

and the family be

$\mathcal{H}=\{h_{a,b}\mid 1\le a\le p-1, b\in[p]\}$.

Note that unlike the first construction, now $a\neq 0$.

Lemma (Carter-Wegman)
$\mathcal{H}$ is 2-universal.
Proof.
Due to the definition of $\mathcal{H}$, there are $p(p-1)$ distinct hash functions in $\mathcal{H}$, because each hash function in $\mathcal{H}$ corresponds to a pair of $1\le a\le p-1$ and $b\in[p]$. We only need to count, for any particular pair $x_1,x_2\in[N]$ with $x_1\neq x_2$, the number of hash functions $h$ such that $h(x_1)=h(x_2)$.

We first note that for any $x_1\neq x_2$, $a\cdot x_1+b\not\equiv a\cdot x_2+b\pmod p$. This is because $a\cdot x_1+b\equiv a\cdot x_2+b\pmod p$ would imply that $a(x_1-x_2)\equiv 0\pmod p$, which can never happen, since $1\le a\le p-1$ and $x_1\neq x_2$ (note that $x_1,x_2\in[N]$ for $N\le p$). Therefore, we can assume that $(a\cdot x_1+b)\bmod p=u$ and $(a\cdot x_2+b)\bmod p=v$ for some $u\neq v$.

Due to the Chinese remainder theorem, for any $x_1,x_2\in[N]$ with $x_1\neq x_2$ and any $u,v\in[p]$ with $u\neq v$, there is exactly one solution $(a,b)$ satisfying:

$\begin{cases}a\cdot x_1+b\equiv u\pmod p\\ a\cdot x_2+b\equiv v\pmod p.\end{cases}$

After taking modulo $M$, every $u\in[p]$ has at most $\lceil p/M\rceil-1$ values $v\in[p]$ such that $v\neq u$ but $v\equiv u\pmod M$. Therefore, for every pair $x_1,x_2\in[N]$ with $x_1\neq x_2$, there exist at most $p(\lceil p/M\rceil-1)\le p(p-1)/M$ pairs of $1\le a\le p-1$ and $b\in[p]$ such that $((a\cdot x_1+b)\bmod p)\bmod M=((a\cdot x_2+b)\bmod p)\bmod M$, which means there are at most $p(p-1)/M$ hash functions $h\in\mathcal{H}$ with $h(x_1)=h(x_2)$ for $x_1\neq x_2$. For $h$ uniformly chosen from $\mathcal{H}$ and any $x_1\neq x_2$,

$\Pr[h(x_1)=h(x_2)]\le\frac{p(p-1)/M}{p(p-1)}=\frac{1}{M}$.

This proves that $\mathcal{H}$ is 2-universal.
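A minimal C sketch of the Carter-Wegman family (names illustrative; $p$ must be a prime with $p\ge N$, $1\le a\le p-1$, and $b\in[p]$):

/* Carter-Wegman 2-universal family: ((a*x + b) mod p) mod M,
   for a prime p >= N, with 1 <= a <= p-1 and b in [p]. */
unsigned hash_cw(unsigned x, unsigned a, unsigned b, unsigned p, unsigned M) {
    return (unsigned)((((unsigned long long)a * x + b) % p) % M);
}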

A construction used in practice

The main issue with the Carter-Wegman construction is efficiency: the mod operation is very slow, and has been so for more than 30 years.

The following construction is due to Dietzfelbinger et al. It was published in 1997 and has been used in practice in various applications of universal hashing.

The family of hash functions is from $[2^u]$ to $[2^v]$. In binary representation, the functions map binary strings of length $u$ to binary strings of length $v$. Let

$h_a(x)=\left\lfloor\frac{(a\cdot x)\bmod 2^u}{2^{u-v}}\right\rfloor$,

and the family

$\mathcal{H}=\{h_a\mid a\in[2^u]\text{ and }a\text{ is odd}\}$.

This family of hash functions does not exactly meet the requirement of a 2-universal family. However, Dietzfelbinger et al. proved that $\mathcal{H}$ is close to 2-universal. Specifically, for any distinct input values $x_1,x_2\in[2^u]$ and a uniformly random $h\in\mathcal{H}$,

$\Pr[h(x_1)=h(x_2)]\le\frac{1}{2^{v-1}}$.

So $\mathcal{H}$ is within a factor of 2 of being 2-universal. The proof uses the fact that odd numbers are relatively prime to any power of 2.

The function is extremely simple to compute in the C language. We exploit the fact that C multiplication (*) of unsigned $u$-bit integers is done $\bmod\ 2^u$, giving one-line C code for computing the hash function:

h_a(x) = (a*x)>>(u-v)

Bit-wise shifting is a lot faster than modular arithmetic, which explains why this scheme is more popular in practice than the original Carter-Wegman construction.
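For concreteness, here is a runnable version of the one-liner for the illustrative choice $u=64$ and $v=16$ (these concrete parameters and the function name are our own):

#include <stdint.h>

/* Multiply-shift hashing with u = 64, v = 16: multiplication of uint64_t
   values is implicitly mod 2^64, and the shift keeps the top 16 bits.
   The multiplier a must be a uniformly random odd 64-bit number. */
static inline uint16_t hash_ms(uint64_t x, uint64_t a) {
    return (uint16_t)((a * x) >> (64 - 16));
}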

Collision number

Consider a 2-universal family $\mathcal{H}$ of hash functions from $[N]$ to $[M]$. Let $h$ be a hash function chosen uniformly from $\mathcal{H}$. For a fixed set $S$ of $n$ distinct elements from $[N]$, say $S=\{x_1,x_2,\ldots,x_n\}$, the elements are mapped to the hash values $h(x_1),h(x_2),\ldots,h(x_n)$. This can be seen as throwing $n$ balls into $M$ bins, with pairwise independent choices of bins.

As in balls-into-bins with full independence, we are curious about questions such as the birthday problem or the maximum load. These questions are interesting not only because they are natural to ask in a balls-into-bins setting, but also because, in the context of hashing, they are closely related to the performance of hash functions.

The old techniques for analyzing balls-into-bins rely heavily on the independence of the choice of bin for each ball, and therefore can hardly be extended to the setting of 2-universal hash families. However, it turns out that several balls-into-bins questions can be answered by analyzing a very natural quantity: the number of collision pairs.

A collision pair for hashing is a pair of elements $x_1,x_2\in S$ that are mapped to the same hash value, i.e. $h(x_1)=h(x_2)$. Formally, for a fixed set of elements $S=\{x_1,x_2,\ldots,x_n\}$ and any $1\le i<j\le n$, let the random variable

$X_{ij}=\begin{cases}1 & \text{if }h(x_i)=h(x_j),\\ 0 & \text{otherwise}.\end{cases}$

The total number of collision pairs among the $n$ items $x_1,x_2,\ldots,x_n$ is

$X=\sum_{i<j}X_{ij}$.

Since $\mathcal{H}$ is 2-universal, for any $i\neq j$,

$\Pr[X_{ij}=1]=\Pr[h(x_i)=h(x_j)]\le\frac{1}{M}$.

The expected number of collision pairs is

$\mathbf{E}[X]=\mathbf{E}\left[\sum_{i<j}X_{ij}\right]=\sum_{i<j}\mathbf{E}[X_{ij}]=\sum_{i<j}\Pr[X_{ij}=1]\le\binom{n}{2}\frac{1}{M}<\frac{n^2}{2M}$.

In particular, for $n=M$, i.e. when $n$ items are mapped to $n$ hash values by a pairwise independent hash function, the expected number of collision pairs is $\mathbf{E}[X]<\frac{n^2}{2M}=\frac{n}{2}$.
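As a sanity check, the collision count is easy to compute directly. The following C sketch counts collision pairs for one function $h_{a,b}$ drawn from the Carter-Wegman family above (all names and parameters are illustrative); averaged over random choices of $(a,b)$, the returned count stays below $n^2/(2M)$.

/* Count collision pairs among keys xs[0..n-1] under one Carter-Wegman
   function h_{a,b}(x) = ((a*x + b) mod p) mod M.  Over a random choice
   of (a, b), the expectation of the returned count is below n^2/(2M). */
unsigned collision_pairs(const unsigned *xs, unsigned n,
                         unsigned a, unsigned b, unsigned p, unsigned M) {
    unsigned count = 0;
    for (unsigned i = 0; i < n; i++)
        for (unsigned j = i + 1; j < n; j++) {
            unsigned hi = (unsigned)((((unsigned long long)a * xs[i] + b) % p) % M);
            unsigned hj = (unsigned)((((unsigned long long)a * xs[j] + b) % p) % M);
            count += (hi == hj);
        }
    return count;
}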

Birthday problem

In the context of hash functions, the birthday problem asks for the probability that there is no collision at all. Since collisions are something we want to avoid in applications of hash functions, we would like to lower bound the probability of zero collisions, i.e. to upper bound the probability that a collision pair exists.

The above analysis gives us an estimate of the expected number of collision pairs: $\mathbf{E}[X]<\frac{n^2}{2M}$. Applying Markov's inequality, for $0<\epsilon<1$, we have

$\Pr\left[X\ge\frac{n^2}{2\epsilon M}\right]\le\Pr\left[X\ge\frac{1}{\epsilon}\mathbf{E}[X]\right]\le\epsilon$.

When $n\le\sqrt{2\epsilon M}$, the number of collision pairs satisfies $X\ge 1$ with probability at most $\epsilon$; therefore, with probability at least $1-\epsilon$, there is no collision at all. Thus we have the following theorem.

Theorem
If $h$ is chosen uniformly from a 2-universal family of hash functions mapping the universe $[N]$ to $[M]$, where $N\ge M$, then for any set $S\subseteq[N]$ of $n$ items, where $n\le\sqrt{2\epsilon M}$, the probability that there exists a collision pair is
$\Pr[\text{collision occurs}]\le\epsilon$.

Recall that for mutually independent choices of bins, for $n=\sqrt{2M\ln(1/\epsilon)}$, the probability that a collision occurs is about $\epsilon$. For constant $\epsilon$, this gives essentially the same bound as the pairwise independent setting. Therefore, for the birthday problem, the behavior of pairwise independent hash functions is essentially the same as that of uniform random hash functions. This is easy to understand: the birthday problem is about the behavior of collisions, and the definition of 2-universal hash functions can be interpreted as "functions whose collision probability is as low as that of a uniform random function".