概率论与数理统计 (Spring 2023)/Average-case analysis of QuickSort


Quicksort is a comparison-based sorting algorithm discovered by Tony Hoare. Pseudocode for the algorithm is given below (for convenience, we assume that the array elements are pairwise distinct; the analysis extends easily to the general case):

 QSort(A): the input A[1...n] is an array storing n distinct numbers
 if n>1 then
      pivot = A[1];
      store the elements of A that are <pivot in array L, and the elements >pivot in array R; \\ preserving the relative order of the elements within each
      recursively call QSort(L) and QSort(R);

This pseudocode omits implementation details such as how the arrays are stored and concatenated. One may simply regard [math]\displaystyle{ L }[/math] and [math]\displaystyle{ R }[/math] as the front part [math]\displaystyle{ A[1...i-1] }[/math] and the back part [math]\displaystyle{ A[i+1...n] }[/math] of the original array [math]\displaystyle{ A }[/math], with the pivot stored at position [math]\displaystyle{ i }[/math], where [math]\displaystyle{ i }[/math] denotes the rank of the pivot element in [math]\displaystyle{ A }[/math], i.e., the pivot is the [math]\displaystyle{ i }[/math]-th smallest number in [math]\displaystyle{ A }[/math].
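
The following is a minimal, runnable Python sketch of the pseudocode above (our own illustration; the function name qsort and the comparison counter it returns are not part of the original notes). It uses the first element as the pivot and partitions stably, as described:

 # A sketch of QSort, assuming the cost model used in the analysis below:
 # one comparison per non-pivot element during each partition step.
 def qsort(A):
     """Return (sorted copy of A, number of element comparisons used)."""
     if len(A) <= 1:
         return list(A), 0
     pivot = A[0]
     # Stable partition: the relative order inside L and inside R is preserved.
     L = [x for x in A[1:] if x < pivot]
     R = [x for x in A[1:] if x > pivot]
     comparisons = len(A) - 1   # each of the other n-1 elements is compared with the pivot
     sorted_L, cL = qsort(L)
     sorted_R, cR = qsort(R)
     return sorted_L + [pivot] + sorted_R, comparisons + cL + cR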

As a comparison-based sorting algorithm, its complexity is measured by the number of comparisons between elements that the algorithm performs. On a worst-case input, the quicksort algorithm described above may use as many as [math]\displaystyle{ \Theta(n^2) }[/math] comparisons. Interestingly, this [math]\displaystyle{ \Omega(n^2) }[/math] lower bound is attained when the input array is already sorted: in that case the pivot splits the array [math]\displaystyle{ A }[/math] into extremely unbalanced subarrays [math]\displaystyle{ L }[/math] and [math]\displaystyle{ R }[/math], the recursion tree is likewise extremely unbalanced, and the number of comparisons used by the algorithm reaches [math]\displaystyle{ (n-1)+(n-2)+(n-3)+\cdots+1=\Omega(n^2) }[/math].
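
As a concrete check of this worst case (using the hypothetical qsort sketch above), running it on an already sorted array yields exactly [math]\displaystyle{ (n-1)+(n-2)+\cdots+1=\frac{n(n-1)}{2} }[/math] comparisons:

 # Worst case: already sorted input (uses the qsort sketch above).
 for n in (10, 100, 500):
     _, comparisons = qsort(list(range(n)))
     assert comparisons == n * (n - 1) // 2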

We now analyze the average-case complexity of this algorithm.

Average-case analysis of quicksort I (via the law of total expectation)

Let the input array [math]\displaystyle{ A }[/math] be a uniformly random permutation of [math]\displaystyle{ n }[/math] elements. It is easy to verify that the number of comparisons used by the quicksort algorithm above depends only on the relative order of the elements of [math]\displaystyle{ A }[/math], not on their actual values. Therefore, we may as well assume that [math]\displaystyle{ A }[/math] is a uniformly random [math]\displaystyle{ n }[/math]-element permutation drawn from the symmetric group [math]\displaystyle{ \mathcal{S}_n }[/math] of degree [math]\displaystyle{ n }[/math].

Let the random variable [math]\displaystyle{ X_n }[/math] denote the total number of comparisons used by quicksort on this random input [math]\displaystyle{ A }[/math], and let [math]\displaystyle{ t(n)=\mathbb{E}[X_n] }[/math] denote its expectation. As just explained, [math]\displaystyle{ t(n) }[/math] is well defined, since the expectation [math]\displaystyle{ \mathbb{E}[X_n] }[/math] depends only on [math]\displaystyle{ n }[/math].

For [math]\displaystyle{ 1\le i\le n }[/math], define [math]\displaystyle{ B_i }[/math] to be the following event:

  • the pivot is the [math]\displaystyle{ i }[/math]-th smallest element in the array [math]\displaystyle{ A }[/math].

The events [math]\displaystyle{ B_1,B_2,\ldots,B_n }[/math] form a partition of the sample space, and each event [math]\displaystyle{ B_i }[/math] occurs with probability:

[math]\displaystyle{ \Pr(B_i)=\frac{(n-1)!}{n!}=\frac{1}{n} }[/math].

This is because the event [math]\displaystyle{ B_i }[/math] is equivalent to a uniformly random permutation [math]\displaystyle{ \pi:[n]\xrightarrow[\text{onto}]{\text{1-1}}[n] }[/math] satisfying [math]\displaystyle{ \pi(1)=i }[/math], which happens with probability [math]\displaystyle{ \frac{1}{n} }[/math].
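
As a small sanity check (our own addition, not part of the original argument), one can enumerate all [math]\displaystyle{ n! }[/math] permutations for a small [math]\displaystyle{ n }[/math] and confirm that the rank of the pivot [math]\displaystyle{ A[1] }[/math] is uniform over [math]\displaystyle{ \{1,\ldots,n\} }[/math]:

 # Exact check of Pr(B_i) = 1/n for a small n by enumerating all permutations.
 from fractions import Fraction
 from itertools import permutations

 n = 5
 counts = [0] * (n + 1)
 for perm in permutations(range(1, n + 1)):
     counts[perm[0]] += 1      # the elements are 1..n, so the rank of A[1] equals its value
 for i in range(1, n + 1):
     assert Fraction(counts[i], sum(counts[1:])) == Fraction(1, n)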

Therefore, by the law of total expectation, we have:

[math]\displaystyle{ \begin{align} t(n) = \mathbb{E}[X_n] &= \sum_{i=1}^n\mathbb{E}[X_n\mid B_i]\Pr(B_i) = \frac{1}{n}\sum_{i=1}^n\mathbb{E}[X_n\mid B_i] \end{align} }[/math].

When the event [math]\displaystyle{ B_i }[/math] occurs, i.e., when the pivot happens to be the [math]\displaystyle{ i }[/math]-th smallest element of [math]\displaystyle{ A }[/math], the algorithm puts the [math]\displaystyle{ (i-1) }[/math] elements of [math]\displaystyle{ A }[/math] smaller than the pivot into the subarray [math]\displaystyle{ L }[/math] and the [math]\displaystyle{ (n-i) }[/math] elements larger than the pivot into the subarray [math]\displaystyle{ R }[/math], at a cost of [math]\displaystyle{ n-1 }[/math] comparisons in total. Moreover, since the elements within [math]\displaystyle{ L }[/math] and within [math]\displaystyle{ R }[/math] keep their original relative order, it is easy to verify that the relative orders of the elements inside [math]\displaystyle{ L }[/math] and inside [math]\displaystyle{ R }[/math] are distributed, respectively, as a uniformly random permutation of [math]\displaystyle{ (i-1) }[/math] elements and a uniformly random permutation of [math]\displaystyle{ (n-i) }[/math] elements. Therefore, conditioned on [math]\displaystyle{ B_i }[/math], the numbers of comparisons used by the recursive calls QSort[math]\displaystyle{ (L) }[/math] and QSort[math]\displaystyle{ (R) }[/math] are distributed as [math]\displaystyle{ X_{i-1} }[/math] and [math]\displaystyle{ X_{n-i} }[/math], respectively. Altogether, we have the following identity:

[math]\displaystyle{ \mathbb{E}[X_n\mid B_i] = \mathbb{E}[n-1+X_{i-1}+X_{n-i}] }[/math].

Hence the total expectation above can be computed as follows:

[math]\displaystyle{ \begin{align} t(n) = \mathbb{E}[X_n] &= \frac{1}{n}\sum_{i=1}^n\mathbb{E}[X_n\mid B_i]\\ &= \frac{1}{n}\sum_{i=1}^n\mathbb{E}[n-1+X_{i-1}+X_{n-i}]\\ &= n-1+\frac{2}{n}\sum_{i=0}^{n-1}\mathbb{E}[X_{i}] &\text{(linearity of expectation)}\\ &= n-1+\frac{2}{n}\sum_{i=0}^{n-1}t(i). \end{align} }[/math]

We have thus established the following recursion for the average complexity [math]\displaystyle{ t(n)=\mathbb{E}[X_n] }[/math]:

[math]\displaystyle{ \begin{align} t(n) &= n-1+\frac{2}{n}\sum_{i=0}^{n-1}t(i) && \text{if }n\gt 1;\\ t(n) &= 0 && \text{if }0\le n\le 1. \end{align} }[/math]
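
As a numerical sanity check (our own sketch, reusing the hypothetical qsort function from the pseudocode section), the recursion agrees with the exact average of the comparison counts over all [math]\displaystyle{ n! }[/math] inputs for small [math]\displaystyle{ n }[/math]:

 # Evaluate t(n) from the recursion and compare with the exact average over all inputs.
 from fractions import Fraction
 from itertools import permutations
 from math import factorial

 def t(n, _memo={0: Fraction(0), 1: Fraction(0)}):
     if n not in _memo:
         _memo[n] = n - 1 + Fraction(2, n) * sum(t(i) for i in range(n))
     return _memo[n]

 for n in range(2, 8):
     total = sum(qsort(list(p))[1] for p in permutations(range(n)))
     assert Fraction(total, factorial(n)) == t(n)   # e.g. t(3) = 8/3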

There are several ways to extract information about [math]\displaystyle{ t(n)=\mathbb{E}[X_n] }[/math] from this recursion. One is to solve such recurrence equations with tools such as generating functions, for example the analysis given in this semester's Combinatorics course; another is to verify a conjectured bound by mathematical induction: starting from the induction hypothesis [math]\displaystyle{ t(n)\le c n \ln n }[/math] and taking [math]\displaystyle{ c }[/math] as small as possible, it is not hard to verify that [math]\displaystyle{ t(n)\le 4 n \ln n }[/math] (a sketch of this induction step is given below).
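
A sketch of that induction step (our own calculation, using the standard comparison of the sum with an integral): assume [math]\displaystyle{ t(i)\le c\, i\ln i }[/math] for all [math]\displaystyle{ 2\le i\lt n }[/math]. Then

[math]\displaystyle{ \begin{align} t(n) &= n-1+\frac{2}{n}\sum_{i=0}^{n-1}t(i) \le n-1+\frac{2c}{n}\sum_{i=2}^{n-1}i\ln i \le n-1+\frac{2c}{n}\int_{1}^{n}x\ln x\,\mathrm{d}x\\ &= n-1+\frac{2c}{n}\left(\frac{n^2\ln n}{2}-\frac{n^2}{4}+\frac{1}{4}\right) = c\, n\ln n-\left(\frac{c}{2}-1\right)n-1+\frac{c}{2n}, \end{align} }[/math]

which is at most [math]\displaystyle{ c\, n\ln n }[/math] for every [math]\displaystyle{ c\ge 2 }[/math] and [math]\displaystyle{ n\ge 2 }[/math]; in particular [math]\displaystyle{ t(n)\le 4 n \ln n }[/math].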

Average-case analysis of quicksort II (via linearity of expectation)

Our goal is to analyze the expected number of comparisons during an execution of RandQSort (the randomized variant of the algorithm above, which chooses the pivot uniformly at random from the current subarray) on an arbitrary input [math]\displaystyle{ S }[/math]. We achieve this by measuring the chance that each pair of elements is compared, and summing all of them up due to Linearity of Expectation.

Let [math]\displaystyle{ a_i }[/math] denote the [math]\displaystyle{ i }[/math]th smallest element in [math]\displaystyle{ S }[/math]. Let [math]\displaystyle{ X_{ij}\in\{0,1\} }[/math] be the random variable which indicates whether [math]\displaystyle{ a_i }[/math] and [math]\displaystyle{ a_j }[/math] are compared during the execution of RandQSort. That is:

[math]\displaystyle{ \begin{align} X_{ij} &= \begin{cases} 1 & a_i\mbox{ and }a_j\mbox{ are compared}\\ 0 & \mbox{otherwise} \end{cases}. \end{align} }[/math]

Elements [math]\displaystyle{ a_i }[/math] and [math]\displaystyle{ a_j }[/math] are compared only if one of them is chosen as the pivot. After that comparison they are separated into different subsets (and thus are never compared again). So we have the following observation:

Claim 1: Every pair [math]\displaystyle{ a_i }[/math] and [math]\displaystyle{ a_j }[/math] is compared at most once.

Therefore the sum of [math]\displaystyle{ X_{ij} }[/math] over all pairs [math]\displaystyle{ \{i, j\} }[/math] gives the total number of comparisons. The expected number of comparisons is [math]\displaystyle{ \mathbf{E}\left[\sum_{i=1}^n\sum_{j\gt i}X_{ij}\right] }[/math]. Due to Linearity of Expectation, [math]\displaystyle{ \mathbf{E}\left[\sum_{i=1}^n\sum_{j\gt i}X_{ij}\right] = \sum_{i=1}^n\sum_{j\gt i}\mathbf{E}\left[X_{ij}\right] }[/math]. Our next step is to analyze [math]\displaystyle{ \mathbf{E}\left[X_{ij}\right] }[/math] for each pair [math]\displaystyle{ \{i, j\} }[/math].

By the definition of expectation and [math]\displaystyle{ X_{ij} }[/math],

[math]\displaystyle{ \begin{align} \mathbf{E}\left[X_{ij}\right] &= 1\cdot \Pr[a_i\mbox{ and }a_j\mbox{ are compared}] + 0\cdot \Pr[a_i\mbox{ and }a_j\mbox{ are not compared}]\\ &= \Pr[a_i\mbox{ and }a_j\mbox{ are compared}]. \end{align} }[/math]

We are going to bound this probability.

Claim 2: [math]\displaystyle{ a_i }[/math] and [math]\displaystyle{ a_j }[/math] are compared if and only if one of them is chosen as pivot when they are still in the same subset.

This is easy to verify: just check the algorithm. The next one is a bit complicated.

Claim 3: If [math]\displaystyle{ a_i }[/math] and [math]\displaystyle{ a_j }[/math] are still in the same subset then all [math]\displaystyle{ \{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\} }[/math] are in the same subset.

We can verify this by induction. Initially, [math]\displaystyle{ S }[/math] itself has the property described above; and partitioning any [math]\displaystyle{ S }[/math] with the property into [math]\displaystyle{ S_1 }[/math] and [math]\displaystyle{ S_2 }[/math] will preserve the property for both [math]\displaystyle{ S_1 }[/math] and [math]\displaystyle{ S_2 }[/math]. Therefore Claim 3 holds.

Combining Claim 2 and 3, we have:

Claim 4: [math]\displaystyle{ a_i }[/math] and [math]\displaystyle{ a_j }[/math] are compared only if one of [math]\displaystyle{ \{a_i, a_j\} }[/math] is chosen from [math]\displaystyle{ \{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\} }[/math].

And apparently,

Claim 5: Each element of [math]\displaystyle{ \{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\} }[/math] is chosen with equal probability.

This is because our RandQSort chooses the pivot uniformly at random.

Claim 4 and Claim 5 together imply:

[math]\displaystyle{ \begin{align} \Pr[a_i\mbox{ and }a_j\mbox{ are compared}] &\le \frac{2}{j-i+1}. \end{align} }[/math]
Remark: Perhaps you feel confused about the above argument. You may ask: "The algorithm chooses pivots many times during the execution. Why does the above argument look as if the pivot is chosen only once?" Good question! Let's see what really happens by looking more closely.

For any pair [math]\displaystyle{ a_i }[/math] and [math]\displaystyle{ a_j }[/math], initially [math]\displaystyle{ \{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\} }[/math] are all in the same set [math]\displaystyle{ S }[/math] (obviously!). During the execution of the algorithm, the set containing [math]\displaystyle{ \{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\} }[/math] keeps shrinking (due to the pivoting), until one of [math]\displaystyle{ \{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\} }[/math] is chosen as the pivot and the set is partitioned into different subsets. We ask for the probability that the chosen one is among [math]\displaystyle{ \{a_i, a_j\} }[/math]. So we really care about "the last" pivoting before [math]\displaystyle{ \{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\} }[/math] is split.

Formally, let [math]\displaystyle{ Y }[/math] be the random variable denoting the pivot element. We know that for each [math]\displaystyle{ a_k\in\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\} }[/math], [math]\displaystyle{ Y=a_k }[/math] with the same probability, and [math]\displaystyle{ Y\not\in\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\} }[/math] with some unknown probability (remember that there might be other elements in the same subset as [math]\displaystyle{ \{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\} }[/math]). The probability we are looking for is actually [math]\displaystyle{ \Pr[Y\in \{a_i, a_j\}\mid Y\in\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}] }[/math], which is always [math]\displaystyle{ \frac{2}{j-i+1} }[/math], provided that [math]\displaystyle{ Y }[/math] is uniform over [math]\displaystyle{ \{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\} }[/math].

The conditional probability rules out the irrelevant events in a probabilistic argument.
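
If you still find the argument slippery, a quick Monte Carlo experiment can be convincing (a sketch of our own; randqsort_pairs is a hypothetical helper, not part of the original notes). For RandQSort with a uniformly random pivot, the empirical frequency with which the elements of ranks [math]\displaystyle{ i }[/math] and [math]\displaystyle{ j }[/math] are compared should be close to [math]\displaystyle{ \frac{2}{j-i+1} }[/math]:

 # Estimate Pr[a_i and a_j are compared] for RandQSort and compare with 2/(j-i+1).
 import random

 def randqsort_pairs(S, compared):
     """Sort S with a uniformly random pivot, recording every compared pair."""
     if len(S) <= 1:
         return list(S)
     pivot = random.choice(S)
     rest = [x for x in S if x != pivot]
     for x in rest:
         compared.add((min(x, pivot), max(x, pivot)))
     L = [x for x in rest if x < pivot]
     R = [x for x in rest if x > pivot]
     return randqsort_pairs(L, compared) + [pivot] + randqsort_pairs(R, compared)

 n, trials = 20, 20000
 hits = {}
 for _ in range(trials):
     compared = set()
     randqsort_pairs(list(range(1, n + 1)), compared)
     for pair in compared:
         hits[pair] = hits.get(pair, 0) + 1
 # The elements of ranks i and j are simply the values i and j here.
 for i, j in [(1, 2), (1, 10), (5, 15), (1, 20)]:
     print((i, j), hits.get((i, j), 0) / trials, 2 / (j - i + 1))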

Summing all up:

[math]\displaystyle{ \begin{align} \mathbf{E}\left[\sum_{i=1}^n\sum_{j\gt i}X_{ij}\right] &= \sum_{i=1}^n\sum_{j\gt i}\mathbf{E}\left[X_{ij}\right]\\ &\le \sum_{i=1}^n\sum_{j\gt i}\frac{2}{j-i+1}\\ &= \sum_{i=1}^n\sum_{k=2}^{n-i+1}\frac{2}{k} & & (\mbox{Let }k=j-i+1)\\ &\le \sum_{i=1}^n\sum_{k=1}^{n}\frac{2}{k}\\ &= 2n\sum_{k=1}^{n}\frac{1}{k}\\ &= 2n H(n). \end{align} }[/math]

[math]\displaystyle{ H(n) }[/math] is the [math]\displaystyle{ n }[/math]th Harmonic number. It holds that

[math]\displaystyle{ \begin{align}H(n) = \ln n+O(1)\end{align} }[/math].
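
This estimate follows from the standard comparison of the harmonic sum with an integral (our own one-line justification):

[math]\displaystyle{ \ln(n+1)=\int_1^{n+1}\frac{\mathrm{d}x}{x}\le H(n)=\sum_{k=1}^n\frac{1}{k}\le 1+\int_1^{n}\frac{\mathrm{d}x}{x}=1+\ln n }[/math].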

Therefore, for an arbitrary input [math]\displaystyle{ S }[/math] of [math]\displaystyle{ n }[/math] numbers, the expected number of comparisons taken by RandQSort to sort [math]\displaystyle{ S }[/math] is [math]\displaystyle{ \mathrm{O}(n\log n) }[/math].