Randomized Algorithms (Spring 2010)/Hashing, limited independence
Limited Independence
k-wise independence
Recall the definition of independence between events:
Definition (Independent events):
Events [math]\displaystyle{ \mathcal{E}_1, \mathcal{E}_2, \ldots, \mathcal{E}_k }[/math] are mutually independent if, for any subset [math]\displaystyle{ I\subseteq\{1,2,\ldots,k\} }[/math],
- [math]\displaystyle{ \Pr\left[\bigwedge_{i\in I}\mathcal{E}_i\right]=\prod_{i\in I}\Pr[\mathcal{E}_i]. }[/math]
Similarly, we can define independence between random variables:
Definition (Independent variables):
Random variables [math]\displaystyle{ X_1, X_2, \ldots, X_n }[/math] are mutually independent if, for any subset [math]\displaystyle{ I\subseteq\{1,2,\ldots,n\} }[/math] and any values [math]\displaystyle{ x_i }[/math], where [math]\displaystyle{ i\in I }[/math],
- [math]\displaystyle{ \Pr\left[\bigwedge_{i\in I}(X_i=x_i)\right]=\prod_{i\in I}\Pr[X_i=x_i]. }[/math]
Mutual independence is an idealized requirement. A limited notion of independence is usually formalized as k-wise independence.
Definition (k-wise Independence):
Random variables [math]\displaystyle{ X_1, X_2, \ldots, X_n }[/math] are k-wise independent if, for any subset [math]\displaystyle{ I\subseteq\{1,2,\ldots,n\} }[/math] with [math]\displaystyle{ |I|\le k }[/math] and any values [math]\displaystyle{ x_i }[/math], where [math]\displaystyle{ i\in I }[/math],
- [math]\displaystyle{ \Pr\left[\bigwedge_{i\in I}(X_i=x_i)\right]=\prod_{i\in I}\Pr[X_i=x_i]. }[/math]
A very common case is pairwise independence, i.e. 2-wise independence.
Definition (pairwise independent random variables):
Random variables [math]\displaystyle{ X_1, X_2, \ldots, X_n }[/math] are pairwise independent if, for any two distinct indices [math]\displaystyle{ i\neq j }[/math] and any values [math]\displaystyle{ a,b }[/math],
- [math]\displaystyle{ \Pr\left[X_i=a\wedge X_j=b\right]=\Pr[X_i=a]\cdot\Pr[X_j=b]. }[/math]
Construction via XOR
Suppose we have [math]\displaystyle{ m }[/math] mutually independent and uniform random bits [math]\displaystyle{ X_1,\ldots, X_m }[/math]. We are going to extract [math]\displaystyle{ n=2^m-1 }[/math] pairwise independent bits from these [math]\displaystyle{ m }[/math] mutually independent bits.
Enumerate all the nonempty subsets of [math]\displaystyle{ \{1,2,\ldots,m\} }[/math] in some order. Let [math]\displaystyle{ S_j }[/math] be the [math]\displaystyle{ j }[/math]th subset. Let
- [math]\displaystyle{ Y_j=\bigoplus_{i\in S_j} X_i, }[/math]
where [math]\displaystyle{ \bigoplus }[/math] is the exclusive-or, whose truth table is as follows.
[math]\displaystyle{ a }[/math] | [math]\displaystyle{ b }[/math] | [math]\displaystyle{ a\oplus b }[/math]
0 | 0 | 0
0 | 1 | 1
1 | 0 | 1
1 | 1 | 0
There are [math]\displaystyle{ n=2^m-1 }[/math] such [math]\displaystyle{ Y_j }[/math], because there are [math]\displaystyle{ 2^m-1 }[/math] nonempty subsets of [math]\displaystyle{ \{1,2,\ldots,m\} }[/math].
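To make the construction concrete, here is a minimal sketch in Python (the function name and parameters are ours, for illustration only):

```python
import itertools
import random

def pairwise_independent_bits(m):
    """Extract n = 2^m - 1 pairwise independent bits from m independent uniform bits."""
    # m mutually independent, uniform random bits X_1, ..., X_m
    X = [random.randrange(2) for _ in range(m)]
    Y = []
    # enumerate all nonempty subsets S_j of {1, 2, ..., m}
    for size in range(1, m + 1):
        for S in itertools.combinations(range(m), size):
            y = 0
            for i in S:          # Y_j is the XOR of the bits indexed by S_j
                y ^= X[i]
            Y.append(y)
    return Y

print(len(pairwise_independent_bits(4)))  # 2^4 - 1 = 15 bits
```

Each [math]\displaystyle{ Y_j }[/math] is a uniform bit since [math]\displaystyle{ S_j\neq\emptyset }[/math], and for [math]\displaystyle{ j\neq j' }[/math] the symmetric difference [math]\displaystyle{ S_j\triangle S_{j'} }[/math] is nonempty, which is what makes [math]\displaystyle{ Y_j }[/math] and [math]\displaystyle{ Y_{j'} }[/math] pairwise independent.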
Construction via modulo a prime
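A commonly used construction of this kind takes a prime [math]\displaystyle{ p }[/math] and two mutually independent values [math]\displaystyle{ X_1, X_2 }[/math] that are uniform over [math]\displaystyle{ \mathbb{Z}_p=\{0,1,\ldots,p-1\} }[/math], and defines [math]\displaystyle{ Y_j=(X_1+j\cdot X_2)\bmod p }[/math] for [math]\displaystyle{ j=0,1,\ldots,p-1 }[/math]. For any [math]\displaystyle{ j\neq j' }[/math], the map [math]\displaystyle{ (X_1,X_2)\mapsto(Y_j,Y_{j'}) }[/math] is a bijection on [math]\displaystyle{ \mathbb{Z}_p^2 }[/math], so the [math]\displaystyle{ Y_j }[/math] are pairwise independent and each is uniform over [math]\displaystyle{ \mathbb{Z}_p }[/math]. A minimal Python sketch (the names below are ours):

```python
import random

def pairwise_independent_values(p):
    """From two independent uniform values over Z_p (p prime), derive p pairwise
    independent values Y_0, ..., Y_{p-1}, each uniform over Z_p."""
    X1 = random.randrange(p)   # X_1 uniform over {0, ..., p-1}
    X2 = random.randrange(p)   # X_2 uniform over {0, ..., p-1}, independent of X_1
    return [(X1 + j * X2) % p for j in range(p)]   # Y_j = (X_1 + j*X_2) mod p

print(pairwise_independent_values(7))
```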
Collision number
Viewing the loads of the bins as a vector of random variables, called the load vector, the expected maximum load is the expected [math]\displaystyle{ L_\infty }[/math]-norm of this load vector. Since there are [math]\displaystyle{ m }[/math] balls, the [math]\displaystyle{ L_1 }[/math]-norm of the load vector is always [math]\displaystyle{ m }[/math]. We ask about something between these two extremes, specifically, the sum of the squares of the loads. We will see that this quantity essentially gives the expected number of collision pairs in the balls-into-bins game.
For any two balls, we say that there is a collision between them if they are thrown into the same bin. Let [math]\displaystyle{ Y_{ij} }[/math] indicate whether ball [math]\displaystyle{ i }[/math] and ball [math]\displaystyle{ j }[/math] collide, i.e.
- [math]\displaystyle{ Y_{ij} = \begin{cases} 1 & \text{if ball }i\text{ and ball }j\text{ are thrown into the same bin},\\ 0 & \text{otherwise.} \end{cases} }[/math]
The total number of collision pairs is [math]\displaystyle{ Y=\sum_{i\lt j} Y_{ij} }[/math]. Since each ball is thrown uniformly and independently into one of the [math]\displaystyle{ n }[/math] bins, for any particular [math]\displaystyle{ i\neq j }[/math], the probability that ball [math]\displaystyle{ i }[/math] and ball [math]\displaystyle{ j }[/math] are thrown into the same bin is
- [math]\displaystyle{ \begin{align} \Pr[Y_{ij}=1] &= \Pr[\text{ball }i\text{ and ball }j\text{ are in the same bin}]\\ &= \sum_{k=1}^n\Pr[\text{ball }i\text{ is in bin }k]\cdot\Pr[\text{ball }i\text{ and ball }j\text{ are in the same bin}\mid \text{ball }i\text{ is in bin }k]\\ &=n\cdot\frac{1}{n}\cdot\frac{1}{n}\\ &= \frac{1}{n}. \end{align} }[/math]
Therefore,
- [math]\displaystyle{ \mathbf{E}[Y]=\mathbf{E}\left[\sum_{i\lt j}Y_{ij}\right]=\sum_{i\lt j}\mathbf{E}[Y_{ij}]=\sum_{i\lt j}\Pr[Y_{ij}=1]={m\choose 2}\frac{1}{n}. }[/math]
For [math]\displaystyle{ 1\le k\le n }[/math], let [math]\displaystyle{ X_k }[/math] be the load of the [math]\displaystyle{ k }[/math]-th bin. The number of collision pairs within the [math]\displaystyle{ k }[/math]-th bin is [math]\displaystyle{ {X_k\choose 2} }[/math], so the total number of collision pairs is also given by
- [math]\displaystyle{ Y=\sum_{i=1}^n {X_i\choose 2}. }[/math]
The sum of the squares of the loads is
- [math]\displaystyle{ \sum_{i=1}^n X_{i}^2 =\sum_{i=1}^n \left(X_i+X_i(X_i-1)\right) =m+2\sum_{i=1}^n{X_i\choose 2} =m+2Y. }[/math]
Its expectation is
- [math]\displaystyle{ \mathbf{E}\left[\sum_{i=1}^nX_i^2\right]=m+2\mathbf{E}[Y]=m+2{m\choose 2}\frac{1}{n}=m+\frac{m(m-1)}{n}. }[/math]
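A quick empirical sanity check of this identity (a Python sketch; the parameter choices are arbitrary):

```python
import random

def average_squared_load(m, n, trials=100000):
    """Estimate E[sum_k X_k^2] for m balls thrown uniformly and independently into n bins."""
    total = 0
    for _ in range(trials):
        loads = [0] * n
        for _ in range(m):
            loads[random.randrange(n)] += 1   # throw one ball into a uniformly random bin
        total += sum(x * x for x in loads)    # sum of squared loads for this trial
    return total / trials

m, n = 20, 10
print(average_squared_load(m, n))   # should be close to m + m*(m-1)/n = 58
```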
Bucket sorting
Let [math]\displaystyle{ N=2^\ell }[/math] be a power of 2, and let [math]\displaystyle{ [N]=\{0,1,\ldots,N-1\} }[/math] be the domain of integers, so each number from the domain can be represented as a binary number of [math]\displaystyle{ \ell }[/math] bits. Suppose that we are sorting [math]\displaystyle{ n }[/math] integers from the domain [math]\displaystyle{ [N] }[/math]. For convenience, we assume that [math]\displaystyle{ n }[/math] is also a power of 2, say [math]\displaystyle{ n=2^k }[/math].
The bucket sort algorithm is described as follows:
- Assign the [math]\displaystyle{ n }[/math] integers into [math]\displaystyle{ n }[/math] buckets [math]\displaystyle{ 0,1,2,\ldots,n-1 }[/math], where integer [math]\displaystyle{ x }[/math] is assigned to bucket [math]\displaystyle{ \left\lfloor\frac{x}{N/n}\right\rfloor }[/math].
- Sort the integers in each bucket with a quadratic-time comparison-based sorting algorithm (e.g. insertion sort).
- Concatenate the sorted buckets.
Note that if [math]\displaystyle{ i\lt j }[/math], then any integer in bucket [math]\displaystyle{ i }[/math] is smaller than any integer in bucket [math]\displaystyle{ j }[/math], thus the algorithm is correct.
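A direct rendering of these three steps (a Python sketch, assuming [math]\displaystyle{ N }[/math] and [math]\displaystyle{ n }[/math] are powers of 2 as above):

```python
def bucket_sort(A, N):
    """Sort a list A of n integers from the domain [N] = {0, ..., N-1}.
    Assumes n = len(A) and N are powers of 2."""
    n = len(A)
    buckets = [[] for _ in range(n)]
    width = N // n
    for x in A:
        buckets[x // width].append(x)   # integer x goes to bucket floor(x / (N/n))
    for b in buckets:
        # insertion sort within each bucket (quadratic-time comparison sort)
        for i in range(1, len(b)):
            key, j = b[i], i - 1
            while j >= 0 and b[j] > key:
                b[j + 1] = b[j]
                j -= 1
            b[j + 1] = key
    return [x for b in buckets for x in b]  # concatenate the sorted buckets

print(bucket_sort([13, 2, 7, 11, 0, 5, 9, 15], 16))  # [0, 2, 5, 7, 9, 11, 13, 15]
```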
The bucket sort algorithm described above is a deterministic algorithm. We apply probabilistic analysis to its expected running time on an average-case input. The running times of the first and the last steps are both linear in [math]\displaystyle{ n }[/math]. Since both the domain size [math]\displaystyle{ N }[/math] and the number of integers [math]\displaystyle{ n }[/math] are assumed to be powers of 2, computing the bucket of an integer takes only a bit-shift operation (>>). The remaining problem is to bound the running time of the second step.
For a uniformly random tuple of [math]\displaystyle{ n }[/math] integers from [math]\displaystyle{ [N]^n }[/math], each integer is assigned to a uniformly and independently random bucket. Let [math]\displaystyle{ X_i }[/math] be the random variable representing the number of integers in bucket [math]\displaystyle{ i }[/math]. The total running time of the second step is [math]\displaystyle{ O\left(\sum_{i\in[n]}X_i^2\right) }[/math]. The distribution of the [math]\displaystyle{ X_i }[/math]'s is exactly the distribution of the loads of the bins in a balls-into-bins game with [math]\displaystyle{ n }[/math] balls and [math]\displaystyle{ n }[/math] bins. By our analysis of the collision number of the balls-into-bins game,
- [math]\displaystyle{ \mathbf{E}\left[\sum_{i\in[n]}X_i^2\right]=n+\frac{n(n-1)}{n}=2n-1. }[/math]
Therefore, the expected running time of bucket sort on a uniformly random input is [math]\displaystyle{ O(n) }[/math].
Hashing
Basic ideas
Universal hash families
Perfect hashing
In a hash table, [math]\displaystyle{ m }[/math] keys are stored in [math]\displaystyle{ n }[/math] slots, and the keys are mapped to slots by a hash function. A collision occurs if two keys are mapped to the same slot. There are various strategies for resolving collisions, such as chaining, or "open addressing" techniques like linear probing and double hashing. However, we could also simply hope that there are no collisions at all.
A hash function [math]\displaystyle{ h:U\rightarrow[n] }[/math] is perfect for a set [math]\displaystyle{ S\subseteq U }[/math] of keys if [math]\displaystyle{ h }[/math] maps all keys in [math]\displaystyle{ S }[/math] to different values, i.e. there is no collision.
For hash tables, the hash function is a random mapping from keys to values. To simplify the analysis, we assume that the hash function is a uniformly random function [math]\displaystyle{ h:U\rightarrow[n] }[/math]. This assumption is called the Simple Uniform Hash Assumption (SUHA or UHA), a standard assumption used in the analysis of hashing.
With UHA, the [math]\displaystyle{ m }[/math] distinct keys are uniformly and independently assigned to the [math]\displaystyle{ n }[/math] slots. By our analysis of the birthday problem, for [math]\displaystyle{ m=O(\sqrt{n}) }[/math], the hash function is perfect with good chance (a constant probability). However, as [math]\displaystyle{ m }[/math] grows larger, it becomes quite unlikely that the hash function is perfect.
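Recall the standard birthday estimate (sketched here under UHA):
- [math]\displaystyle{ \Pr[h\text{ is perfect for }S]=\prod_{i=1}^{m-1}\left(1-\frac{i}{n}\right)\approx e^{-m(m-1)/(2n)}, }[/math]
which stays bounded away from [math]\displaystyle{ 0 }[/math] when [math]\displaystyle{ m=O(\sqrt{n}) }[/math] but tends to [math]\displaystyle{ 0 }[/math] rapidly once [math]\displaystyle{ m=\omega(\sqrt{n}) }[/math].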