Randomized Algorithms (Spring 2010)/Introduction
Randomized Quicksort
For an input set [math]\displaystyle{ S }[/math] presented in an arbitrary order, the Quicksort algorithm sorts [math]\displaystyle{ S }[/math]. The algorithm is described as follows:
- if [math]\displaystyle{ |S|\gt 1 }[/math] do:
- pick an element [math]\displaystyle{ x }[/math] from [math]\displaystyle{ S }[/math] as the pivot;
- partition [math]\displaystyle{ S }[/math] into [math]\displaystyle{ S_1 }[/math], [math]\displaystyle{ \{x\} }[/math], and [math]\displaystyle{ S_2 }[/math], where all elements in [math]\displaystyle{ S_1 }[/math] are smaller than [math]\displaystyle{ x }[/math] and all elements in [math]\displaystyle{ S_2 }[/math] are larger than [math]\displaystyle{ x }[/math];
- recursively sort [math]\displaystyle{ S_1 }[/math] and [math]\displaystyle{ S_2 }[/math];
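For concreteness, the scheme above can be written as a minimal Python sketch (our own illustration, not code from the course). The function name quicksort and the comparison counter are ours; each partitioning step is charged [math]\displaystyle{ |S|-1 }[/math] comparisons, one for every non-pivot element, and the pivot is taken to be the first element, anticipating the deterministic variant discussed next.

```python
def quicksort(S):
    """Sort a list S of distinct elements.
    Returns (sorted list, number of comparisons charged)."""
    if len(S) <= 1:
        return list(S), 0
    x = S[0]                            # pick an element x from S as the pivot
    S1 = [y for y in S if y < x]        # elements smaller than the pivot
    S2 = [y for y in S if y > x]        # elements larger than the pivot
    sorted1, c1 = quicksort(S1)         # recursively sort S1
    sorted2, c2 = quicksort(S2)         # recursively sort S2
    # Partitioning is charged |S| - 1 comparisons: the pivot against every
    # other element (the two comprehensions are just a compact way to do it).
    return sorted1 + [x] + sorted2, (len(S) - 1) + c1 + c2
```

An in-place partition would perform exactly the [math]\displaystyle{ |S|-1 }[/math] comparisons charged here; the comprehension form is just shorter.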
The time complexity of this sorting algorithm is measured by the number of comparisons.
For the deterministic quicksort algorithm, the pivot element is chosen deterministically (usually the first one in the sequence [math]\displaystyle{ S }[/math]). This makes the worst-case time complexity [math]\displaystyle{ \Omega(n^2) }[/math]: there exists a bad input [math]\displaystyle{ S }[/math] whose sorting costs us [math]\displaystyle{ \Omega(n^2) }[/math] comparisons, every time!
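For example, if [math]\displaystyle{ S }[/math] is already sorted and the pivot is always the first element, then every partitioning step puts all remaining elements into [math]\displaystyle{ S_2 }[/math], so the total number of comparisons is
[math]\displaystyle{ (n-1)+(n-2)+\cdots+1=\frac{n(n-1)}{2}=\Omega(n^2) }[/math].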
It seems so unfair that such a brilliant algorithm has an unavoidable bad case. So we tweak the algorithm a little bit:
Algorithm: RandQSort
- if [math]\displaystyle{ |S|\gt 1 }[/math] do:
- uniformly pick a random element [math]\displaystyle{ x }[/math] from [math]\displaystyle{ S }[/math] as the pivot;
- partition [math]\displaystyle{ S }[/math] into [math]\displaystyle{ S_1 }[/math], [math]\displaystyle{ \{x\} }[/math], and [math]\displaystyle{ S_2 }[/math], where all elements in [math]\displaystyle{ S_1 }[/math] are smaller than [math]\displaystyle{ x }[/math] and all elements in [math]\displaystyle{ S_2 }[/math] are larger than [math]\displaystyle{ x }[/math];
- recursively sort [math]\displaystyle{ S_1 }[/math] and [math]\displaystyle{ S_2 }[/math];
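In code, the change is a single line: pick the pivot uniformly at random (for instance with random.choice) instead of taking the first element. A self-contained sketch, again our own illustration with the same comparison-counting convention as before:

```python
import random

def rand_qsort(S):
    """Sort a list S of distinct elements.
    Returns (sorted list, number of comparisons charged)."""
    if len(S) <= 1:
        return list(S), 0
    x = random.choice(S)                # uniformly pick a random pivot from S
    S1 = [y for y in S if y < x]
    S2 = [y for y in S if y > x]
    sorted1, c1 = rand_qsort(S1)
    sorted2, c2 = rand_qsort(S2)
    return sorted1 + [x] + sorted2, (len(S) - 1) + c1 + c2

# On the deterministic worst case (an already sorted input), the number of
# comparisons is now a random variable rather than n(n-1)/2 every time.
print(rand_qsort(list(range(200)))[1])
```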
Analysis of RandQSort
Our goal is to analyze the expected number of comparisons during an execution of RandQSort with an arbitrary input [math]\displaystyle{ S }[/math]. We achieve this by computing the probability that each pair of elements is compared, and then summing over all pairs by Linearity of Expectation.
Let [math]\displaystyle{ a_i }[/math] denote the [math]\displaystyle{ i }[/math]th smallest element in [math]\displaystyle{ S }[/math]. Let [math]\displaystyle{ X_{ij}\in\{0,1\} }[/math] be the random variable which indicates whether [math]\displaystyle{ a_i }[/math] and [math]\displaystyle{ a_j }[/math] are compared during the execution of RandQSort. That is:
[math]\displaystyle{ \begin{align} X_{ij} &= \begin{cases} 1 & a_i\mbox{ and }a_j\mbox{ are compared}\\ 0 & \mbox{otherwise} \end{cases}. \end{align} }[/math]
Elements [math]\displaystyle{ a_i }[/math] and [math]\displaystyle{ a_j }[/math] are compared only if one of them is chosen as the pivot. After that comparison, the pivot is set aside and takes part in no further recursive call, so the two are never compared again. So we have the following observation:
Claim 1: Every pair [math]\displaystyle{ a_i }[/math] and [math]\displaystyle{ a_j }[/math] is compared at most once.
Therefore the sum of [math]\displaystyle{ X_{ij} }[/math] over all pairs [math]\displaystyle{ \{i,j\} }[/math] gives the total number of comparisons. The expected number of comparisons is [math]\displaystyle{ \mathbb{E}\left[\sum_{i=1}^n\sum_{j\gt i}X_{ij}\right] }[/math]. Due to Linearity of Expectation, [math]\displaystyle{ \mathbb{E}\left[\sum_{i=1}^n\sum_{j\gt i}X_{ij}\right] = \sum_{i=1}^n\sum_{j\gt i}\mathbb{E}\left[X_{ij}\right] }[/math]. Our next step is to analyze [math]\displaystyle{ \mathbb{E}\left[X_{ij}\right] }[/math] for each pair [math]\displaystyle{ \{i,j\} }[/math].
By the definition of expectation and [math]\displaystyle{ X_{ij} }[/math], it holds that
[math]\displaystyle{ \begin{align} \mathbb{E}\left[X_{ij}\right] &= 1\cdot \Pr[a_i\mbox{ and }a_j\mbox{ are compared}] + 0\cdot \Pr[a_i\mbox{ and }a_j\mbox{ are not compared}]\\ &= \Pr[a_i\mbox{ and }a_j\mbox{ are compared}]. \end{align} }[/math]
We are going to bound this probability.
Claim 2: [math]\displaystyle{ a_i }[/math] and [math]\displaystyle{ a_j }[/math] are compared if and only if one of them is chosen as pivot when they are still in the same subset.
This is easy to verify: just check the algorithm. The next one is a bit complicated.
Claim 3: If [math]\displaystyle{ a_i }[/math] and [math]\displaystyle{ a_j }[/math] are still in the same subset then all [math]\displaystyle{ \{a_i,a_{i+1},\ldots,a_{j-1},a_{j}\} }[/math] are in the same subset.
We can verify this by induction. Initially, [math]\displaystyle{ S }[/math] itself contains all of [math]\displaystyle{ \{a_i,a_{i+1},\ldots,a_{j-1},a_{j}\} }[/math]. Now consider any subset that contains all of them and is partitioned by a pivot [math]\displaystyle{ x }[/math]: if [math]\displaystyle{ x\lt a_i }[/math] or [math]\displaystyle{ x\gt a_j }[/math], the whole block [math]\displaystyle{ \{a_i,a_{i+1},\ldots,a_{j-1},a_{j}\} }[/math] goes into the same part; if [math]\displaystyle{ x\in\{a_i,a_{i+1},\ldots,a_{j-1},a_{j}\} }[/math], then [math]\displaystyle{ a_i }[/math] and [math]\displaystyle{ a_j }[/math] are no longer in the same subset, so the statement holds vacuously from then on. Therefore Claim 3 holds.
Combining Claim 2 and 3, we have:
Claim 4: [math]\displaystyle{ a_i }[/math] and [math]\displaystyle{ a_j }[/math] are compared only if the first element of [math]\displaystyle{ \{a_i,a_{i+1},\ldots,a_{j-1},a_{j}\} }[/math] to be chosen as a pivot is one of [math]\displaystyle{ \{a_i, a_j\} }[/math].
And clearly,
Claim 5: Every one of [math]\displaystyle{ \{a_i,a_{i+1},\ldots,a_{j-1},a_{j}\} }[/math] is equally likely to be that first chosen pivot.
This is because our RandQSort chooses the pivot uniformly at random.
Claim 4 and Claim 5 together give:
[math]\displaystyle{ \begin{align} \Pr[a_i\mbox{ and }a_j\mbox{ are compared}] &\le \frac{2}{j-i+1}. \end{align} }[/math]
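As a quick sanity check: for adjacent elements ([math]\displaystyle{ j=i+1 }[/math]) the bound is [math]\displaystyle{ \frac{2}{2}=1 }[/math], and indeed adjacent elements are always compared, since no pivot can separate them before one of them is itself chosen as the pivot; for the two extremes [math]\displaystyle{ a_1 }[/math] and [math]\displaystyle{ a_n }[/math] the bound is only [math]\displaystyle{ \frac{2}{n} }[/math].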
Perhaps you feel confused about the above argument. You may ask: "The algorithm chooses pivots many times during the execution. Why does the above argument look as if the pivot is chosen only once?" Good question! Let's see what really happens by looking more closely.
For any pair [math]\displaystyle{ a_i }[/math] and [math]\displaystyle{ a_j }[/math], initially [math]\displaystyle{ \{a_i,a_{i+1},\ldots,a_{j-1},a_{j}\} }[/math] are all in the same set [math]\displaystyle{ S }[/math] (obviously!). During the execution of the algorithm, the set containing [math]\displaystyle{ \{a_i,a_{i+1},\ldots,a_{j-1},a_{j}\} }[/math] keeps shrinking (due to the pivoting), until one of [math]\displaystyle{ \{a_i,a_{i+1},\ldots,a_{j-1},a_{j}\} }[/math] is chosen as the pivot and the set is split into different subsets. We ask for the probability that the chosen one is among [math]\displaystyle{ \{a_i,a_j\} }[/math]. So we really care about "the last" pivoting before [math]\displaystyle{ \{a_i,a_{i+1},\ldots,a_{j-1},a_{j}\} }[/math] is split.
Formally, consider any round in which the current subset still contains all of [math]\displaystyle{ \{a_i,a_{i+1},\ldots,a_{j-1},a_{j}\} }[/math], and let [math]\displaystyle{ Y }[/math] denote the pivot chosen in that round. For each [math]\displaystyle{ a_k\in\{a_i,a_{i+1},\ldots,a_{j-1},a_{j}\} }[/math], [math]\displaystyle{ Y=a_k }[/math] with the same probability, and [math]\displaystyle{ Y\not\in\{a_i,a_{i+1},\ldots,a_{j-1},a_{j}\} }[/math] with some unknown probability (remember that there might be other elements in the same subset as [math]\displaystyle{ \{a_i,a_{i+1},\ldots,a_{j-1},a_{j}\} }[/math]). The "last" pivoting is exactly the round in which [math]\displaystyle{ Y\in\{a_i,a_{i+1},\ldots,a_{j-1},a_{j}\} }[/math], so the probability we are looking for is [math]\displaystyle{ \Pr[Y\in \{a_i,a_j\}\mid Y\in\{a_i,a_{i+1},\ldots,a_{j-1},a_{j}\}] }[/math], which is always [math]\displaystyle{ \frac{2}{j-i+1} }[/math].
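If the conditioning argument still feels slippery, it is easy to check empirically. The following sketch is our own illustration (the function compared, the input [math]\displaystyle{ 1,2,\ldots,n }[/math] so that [math]\displaystyle{ a_k=k }[/math], and the parameters [math]\displaystyle{ n=100 }[/math], [math]\displaystyle{ i=20 }[/math], [math]\displaystyle{ j=50 }[/math] are arbitrary choices): it runs RandQSort many times and estimates the probability that [math]\displaystyle{ a_i }[/math] and [math]\displaystyle{ a_j }[/math] are compared.

```python
import random

def compared(S, a, b):
    """One run of RandQSort on the list S; return True iff the elements
    a and b are compared with each other during this run."""
    if len(S) <= 1:
        return False
    x = random.choice(S)
    if x == a or x == b:
        # The pivot is compared with every other element of S, so a and b
        # are compared iff the other one is still in this subset.
        return a in S and b in S
    return compared([y for y in S if y < x], a, b) or \
           compared([y for y in S if y > x], a, b)

n, i, j, trials = 100, 20, 50, 10000
freq = sum(compared(list(range(1, n + 1)), i, j) for _ in range(trials)) / trials
print("empirical:", freq, "   claim 2/(j-i+1):", 2 / (j - i + 1))
```

The empirical frequency should be close to [math]\displaystyle{ 2/31\approx 0.065 }[/math].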
Summing up over all pairs:
[math]\displaystyle{ \begin{align} \mathbb{E}\left[\sum_{i=1}^n\sum_{j\gt i}X_{ij}\right] &= \sum_{i=1}^n\sum_{j\gt i}\mathbb{E}\left[X_{ij}\right]\\ &\le \sum_{i=1}^n\sum_{j\gt i}\frac{2}{j-i+1}\\ &= \sum_{i=1}^n\sum_{k=2}^{n-i+1}\frac{2}{k} & & (\mbox{Let }k=j-i+1)\\ &\le \sum_{i=1}^n\sum_{k=1}^{n}\frac{2}{k}\\ &= 2n\sum_{k=1}^{n}\frac{1}{k}\\ &= 2n H(n). \end{align} }[/math]
[math]\displaystyle{ H(n) }[/math] is the [math]\displaystyle{ n }[/math]th Harmonic number. It holds that
[math]\displaystyle{ \begin{align}H(n) = \ln n+O(1)\end{align} }[/math].
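More concretely, a standard comparison of the sum with the integral [math]\displaystyle{ \int\frac{\mathrm{d}x}{x} }[/math] gives [math]\displaystyle{ \ln(n+1)\le H(n)\le 1+\ln n }[/math], so [math]\displaystyle{ 2nH(n)\le 2n\ln n+2n }[/math].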
Therefore, for an arbitrary input [math]\displaystyle{ S }[/math] of [math]\displaystyle{ n }[/math] numbers, the expected number of comparisons taken by RandQSort to sort [math]\displaystyle{ S }[/math] is [math]\displaystyle{ O(n\log n) }[/math].
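As a final sanity check, one can compare the average comparison count over many random runs with the bound [math]\displaystyle{ 2nH(n) }[/math]. The experiment below is our own illustration; the function name count_comparisons and the parameters [math]\displaystyle{ n=500 }[/math] and 200 trials are arbitrary choices.

```python
import random

def count_comparisons(S):
    """Number of comparisons charged to one run of RandQSort on the list S."""
    if len(S) <= 1:
        return 0
    x = random.choice(S)
    S1 = [y for y in S if y < x]
    S2 = [y for y in S if y > x]
    return (len(S) - 1) + count_comparisons(S1) + count_comparisons(S2)

n, trials = 500, 200
avg = sum(count_comparisons(list(range(n))) for _ in range(trials)) / trials
harmonic = sum(1.0 / k for k in range(1, n + 1))        # H(n)
print(f"average comparisons: {avg:.0f}   bound 2nH(n): {2 * n * harmonic:.0f}")
```

The printed average stays below [math]\displaystyle{ 2nH(n)\approx 2n\ln n }[/math], as the analysis predicts.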