<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://tcs.nju.edu.cn/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=114.212.208.2</id>
	<title>TCS Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://tcs.nju.edu.cn/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=114.212.208.2"/>
	<link rel="alternate" type="text/html" href="https://tcs.nju.edu.cn/wiki/index.php?title=Special:Contributions/114.212.208.2"/>
	<updated>2026-04-30T10:57:40Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.45.3</generator>
	<entry>
		<id>https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)/Random_Variables_and_Expectations&amp;diff=4664</id>
		<title>随机算法 (Fall 2011)/Random Variables and Expectations</title>
		<link rel="alternate" type="text/html" href="https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)/Random_Variables_and_Expectations&amp;diff=4664"/>
		<updated>2011-07-23T03:28:42Z</updated>

		<summary type="html">&lt;p&gt;114.212.208.2: /* Independent Random Variables */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Random Variable=&lt;br /&gt;
{{Theorem|Definition (random variable)|&lt;br /&gt;
:A random variable &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; on a sample space &amp;lt;math&amp;gt;\Omega&amp;lt;/math&amp;gt; is a real-valued function &amp;lt;math&amp;gt;X:\Omega\rightarrow\mathbb{R}&amp;lt;/math&amp;gt;. A random variable X is called a &#039;&#039;&#039;discrete&#039;&#039;&#039; random variable if its range is finite or countably infinite.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
For a random variable &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; and a real value &amp;lt;math&amp;gt;x\in\mathbb{R}&amp;lt;/math&amp;gt;, we write &amp;quot;&amp;lt;math&amp;gt;X=x&amp;lt;/math&amp;gt;&amp;quot; for the event &amp;lt;math&amp;gt;\{a\in\Omega\mid X(a)=x\}&amp;lt;/math&amp;gt;, and denote the probability of the event by&lt;br /&gt;
:&amp;lt;math&amp;gt;\Pr[X=x]=\Pr(\{a\in\Omega\mid X(a)=x\})&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=Independent Random Variables=&lt;br /&gt;
Independence can also be defined for random variables:&lt;br /&gt;
{{Theorem&lt;br /&gt;
|Definition (Independent variables)|&lt;br /&gt;
:Two random variables &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Y&amp;lt;/math&amp;gt; are &#039;&#039;&#039;independent&#039;&#039;&#039; if and only if&lt;br /&gt;
::&amp;lt;math&amp;gt;&lt;br /&gt;
\Pr[(X=x)\wedge(Y=y)]=\Pr[X=x]\cdot\Pr[Y=y]&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
:for all values &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y&amp;lt;/math&amp;gt;. Random variables &amp;lt;math&amp;gt;X_1, X_2, \ldots, X_n&amp;lt;/math&amp;gt; are &#039;&#039;&#039;mutually independent&#039;&#039;&#039; if and only if, for any subset &amp;lt;math&amp;gt;I\subseteq\{1,2,\ldots,n\}&amp;lt;/math&amp;gt; and any values &amp;lt;math&amp;gt;x_i&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;i\in I&amp;lt;/math&amp;gt;,&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\Pr\left[\bigwedge_{i\in I}(X_i=x_i)\right]&lt;br /&gt;
&amp;amp;=&lt;br /&gt;
\prod_{i\in I}\Pr[X_i=x_i].&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Note that in probability theory, &amp;quot;mutual independence&amp;quot; is &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;not&amp;lt;/font&amp;gt; equivalent to &amp;quot;pairwise independence&amp;quot;, which we will study later.&lt;br /&gt;
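As intuition for this distinction (an illustrative sketch, not part of the notes), consider the classic example: two independent fair bits together with their XOR are pairwise independent but not mutually independent. The exhaustive check below assumes Python with exact rational arithmetic.

```python
from itertools import product
from fractions import Fraction

# Sample space: two independent fair bits; the third variable is their XOR.
outcomes = [(x1, x2, x1 ^ x2) for x1, x2 in product([0, 1], repeat=2)]
p = Fraction(1, 4)  # each of the four outcomes is equally likely

def pr(pred):
    """Probability of the event {o in Omega : pred(o)}."""
    return sum(p for o in outcomes if pred(o))

# Pairwise independence: every pair of variables factorizes.
for i, j in [(0, 1), (0, 2), (1, 2)]:
    for a, b in product([0, 1], repeat=2):
        joint = pr(lambda o: o[i] == a and o[j] == b)
        assert joint == pr(lambda o: o[i] == a) * pr(lambda o: o[j] == b)

# But not mutual independence: Pr[X1=X2=X3=0] = 1/4, while the
# product of the three marginals is (1/2)^3 = 1/8.
assert pr(lambda o: o == (0, 0, 0)) == Fraction(1, 4)
```

Any two of the three variables look like independent fair coins, yet the triple is fully determined by any two of its entries.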
&lt;br /&gt;
= Expectation =&lt;br /&gt;
Let &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; be a discrete &#039;&#039;&#039;random variable&#039;&#039;&#039;.  The expectation of &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; is defined as follows.&lt;br /&gt;
{{Theorem&lt;br /&gt;
|Definition (Expectation)|&lt;br /&gt;
:The &#039;&#039;&#039;expectation&#039;&#039;&#039; of a discrete random variable &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt;, denoted by &amp;lt;math&amp;gt;\mathbf{E}[X]&amp;lt;/math&amp;gt;, is given by&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mathbf{E}[X] &amp;amp;= \sum_{x}x\Pr[X=x],&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;&lt;br /&gt;
:where the summation is over all values &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; in the range of &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt;.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
=== Linearity of Expectation ===&lt;br /&gt;
Perhaps the most useful property of expectation is its &#039;&#039;&#039;linearity&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
{{Theorem&lt;br /&gt;
|Theorem (Linearity of Expectation)|&lt;br /&gt;
:For any discrete random variables &amp;lt;math&amp;gt;X_1, X_2, \ldots, X_n&amp;lt;/math&amp;gt;, and any real constants &amp;lt;math&amp;gt;a_1, a_2, \ldots, a_n&amp;lt;/math&amp;gt;,&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mathbf{E}\left[\sum_{i=1}^n a_iX_i\right] &amp;amp;= \sum_{i=1}^n a_i\cdot\mathbf{E}[X_i].&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
{{Proof| By the definition of expectation, it is easy to verify (try to prove it yourself) that:&lt;br /&gt;
for any discrete random variables &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Y&amp;lt;/math&amp;gt;, and any real constant &amp;lt;math&amp;gt;c&amp;lt;/math&amp;gt;,&lt;br /&gt;
* &amp;lt;math&amp;gt;\mathbf{E}[X+Y]=\mathbf{E}[X]+\mathbf{E}[Y]&amp;lt;/math&amp;gt;;&lt;br /&gt;
* &amp;lt;math&amp;gt;\mathbf{E}[cX]=c\mathbf{E}[X]&amp;lt;/math&amp;gt;.&lt;br /&gt;
The theorem follows by induction.&lt;br /&gt;
}}&lt;br /&gt;
The linearity of expectation gives an easy way to compute the expectation of a random variable if the variable can be written as a sum.&lt;br /&gt;
&lt;br /&gt;
;Example&lt;br /&gt;
: Suppose that we have a biased coin whose probability of HEADS is &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt;. Flipping the coin &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; times, what is the expected number of HEADS?&lt;br /&gt;
: It seems obvious that the answer must be &amp;lt;math&amp;gt;np&amp;lt;/math&amp;gt;, but how can we prove it? We could of course apply the definition of expectation and compute the answer by brute force. A more convenient way is to use the linearity of expectation: let &amp;lt;math&amp;gt;X_i&amp;lt;/math&amp;gt; indicate whether the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;-th flip is HEADS. Then &amp;lt;math&amp;gt;\mathbf{E}[X_i]=1\cdot p+0\cdot(1-p)=p&amp;lt;/math&amp;gt;, and the total number of HEADS in &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; flips is &amp;lt;math&amp;gt;X=\sum_{i=1}^{n}X_i&amp;lt;/math&amp;gt;. Applying the linearity of expectation, the expected number of HEADS is:&lt;br /&gt;
::&amp;lt;math&amp;gt;\mathbf{E}[X]=\mathbf{E}\left[\sum_{i=1}^{n}X_i\right]=\sum_{i=1}^{n}\mathbf{E}[X_i]=np&amp;lt;/math&amp;gt;.&lt;br /&gt;
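The brute-force computation mentioned above can be carried out exactly for small &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; and checked against &amp;lt;math&amp;gt;np&amp;lt;/math&amp;gt;. A minimal sketch (not from the notes; the values of n and p are arbitrary choices for illustration):

```python
from fractions import Fraction
from itertools import product

# Exact expected number of HEADS in n flips of a p-biased coin,
# computed by summing over all 2^n outcomes, then compared with n*p.
p = Fraction(1, 3)   # assumed bias, for illustration
n = 5

expectation = Fraction(0)
for flips in product([0, 1], repeat=n):   # 1 encodes HEADS
    prob = Fraction(1)
    for f in flips:
        prob *= p if f == 1 else 1 - p
    expectation += sum(flips) * prob      # (number of HEADS) * Pr[outcome]

assert expectation == n * p   # agrees with linearity of expectation
```

The brute force touches all &amp;lt;math&amp;gt;2^n&amp;lt;/math&amp;gt; outcomes, while linearity of expectation gives the same answer in one line.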
&lt;br /&gt;
The real power of the linearity of expectation is that it does not require the random variables to be independent, so it can be applied to any set of random variables. For example:&lt;br /&gt;
:&amp;lt;math&amp;gt;\mathbf{E}\left[\alpha X+\beta X^2+\gamma X^3\right] = \alpha\cdot\mathbf{E}[X]+\beta\cdot\mathbf{E}\left[X^2\right]+\gamma\cdot\mathbf{E}\left[X^3\right].&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
However, this power should not be overstated!&lt;br /&gt;
* For an arbitrary function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; (not necessarily linear), the equation &amp;lt;math&amp;gt;\mathbf{E}[f(X)]=f(\mathbf{E}[X])&amp;lt;/math&amp;gt; does &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;not&amp;lt;/font&amp;gt; hold generally.&lt;br /&gt;
* For variances, the equation &amp;lt;math&amp;gt;var(X+Y)=var(X)+var(Y)&amp;lt;/math&amp;gt; does &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;not&amp;lt;/font&amp;gt; hold without further assumption of the independence of &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Y&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=Conditional Expectation =&lt;br /&gt;
&lt;br /&gt;
Conditional expectation is defined accordingly:&lt;br /&gt;
{{Theorem&lt;br /&gt;
|Definition (conditional expectation)|&lt;br /&gt;
:For random variables &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Y&amp;lt;/math&amp;gt;,&lt;br /&gt;
::&amp;lt;math&amp;gt;&lt;br /&gt;
\mathbf{E}[X\mid Y=y]=\sum_{x}x\Pr[X=x\mid Y=y],&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
:where the summation is taken over the range of &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt;.&lt;br /&gt;
}}&lt;br /&gt;
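A concrete instance of this definition (an illustrative sketch, not from the notes): let X be the value of a fair six-sided die and Y its parity, and compute E[X | Y=y] exactly.

```python
from fractions import Fraction

# Fair six-sided die: X is the face value, Y = X mod 2 is its parity.
omega = range(1, 7)
p = Fraction(1, 6)   # probability of each face

def cond_exp(y):
    """E[X | Y = y] = sum_x x * Pr[X = x | Y = y]."""
    pr_y = sum(p for x in omega if x % 2 == y)              # Pr[Y = y]
    return sum(x * p for x in omega if x % 2 == y) / pr_y   # Bayes on each x

assert cond_exp(0) == 4   # E[X | even] = (2 + 4 + 6) / 3
assert cond_exp(1) == 3   # E[X | odd]  = (1 + 3 + 5) / 3

# Recombining with the weights Pr[Y = y] recovers E[X] = 7/2,
# which is exactly the law of total expectation stated below.
assert sum(cond_exp(y) * Fraction(1, 2) for y in (0, 1)) == Fraction(7, 2)
```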
&lt;br /&gt;
=== The Law of Total Expectation ===&lt;br /&gt;
There is also a &#039;&#039;&#039;law of total expectation&#039;&#039;&#039;.&lt;br /&gt;
{{Theorem&lt;br /&gt;
|Theorem (law of total expectation)|&lt;br /&gt;
:Let &amp;lt;math&amp;gt;X&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Y&amp;lt;/math&amp;gt; be two random variables. Then&lt;br /&gt;
::&amp;lt;math&amp;gt;&lt;br /&gt;
\mathbf{E}[X]=\sum_{y}\mathbf{E}[X\mid Y=y]\cdot\Pr[Y=y].&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>114.212.208.2</name></author>
	</entry>
	<entry>
		<id>https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)/Verifying_Matrix_Multiplication&amp;diff=4595</id>
		<title>随机算法 (Fall 2011)/Verifying Matrix Multiplication</title>
		<link rel="alternate" type="text/html" href="https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)/Verifying_Matrix_Multiplication&amp;diff=4595"/>
		<updated>2011-07-23T03:25:34Z</updated>

		<summary type="html">&lt;p&gt;114.212.208.2: /* Freivalds Algorithm */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=  Verifying Matrix Multiplication=&lt;br /&gt;
Consider the following problem:&lt;br /&gt;
* &#039;&#039;&#039;Input&#039;&#039;&#039;: Three &amp;lt;math&amp;gt;n\times n&amp;lt;/math&amp;gt; matrices &amp;lt;math&amp;gt;A,B&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt;.&lt;br /&gt;
* &#039;&#039;&#039;Output&#039;&#039;&#039;: return &amp;quot;yes&amp;quot; if &amp;lt;math&amp;gt;C=AB&amp;lt;/math&amp;gt; and &amp;quot;no&amp;quot; otherwise.&lt;br /&gt;
&lt;br /&gt;
A naive way of checking the equality is first computing &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; and then comparing the result with &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt;. The (asymptotically) fastest matrix multiplication algorithm known today runs in time &amp;lt;math&amp;gt;O(n^{2.376})&amp;lt;/math&amp;gt;. The naive algorithm will take asymptotically the same amount of time.&lt;br /&gt;
&lt;br /&gt;
= Freivalds Algorithm =&lt;br /&gt;
The following is a very simple randomized algorithm, due to Freivalds, running in only &amp;lt;math&amp;gt;O(n^2)&amp;lt;/math&amp;gt; time:&lt;br /&gt;
&lt;br /&gt;
{{Theorem|Algorithm (Freivalds)|&lt;br /&gt;
*pick a vector &amp;lt;math&amp;gt;r \in\{0, 1\}^n&amp;lt;/math&amp;gt; uniformly at random;&lt;br /&gt;
*if &amp;lt;math&amp;gt;A(Br) = Cr&amp;lt;/math&amp;gt; then return &amp;quot;yes&amp;quot; else return &amp;quot;no&amp;quot;;&lt;br /&gt;
}}&lt;br /&gt;
The product &amp;lt;math&amp;gt;A(Br)&amp;lt;/math&amp;gt; is computed by first computing the vector &amp;lt;math&amp;gt;Br&amp;lt;/math&amp;gt; and then multiplying it by &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt;.&lt;br /&gt;
The running time is &amp;lt;math&amp;gt;O(n^2)&amp;lt;/math&amp;gt; because the algorithm does 3 matrix-vector multiplications in total. &lt;br /&gt;
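The algorithm and its multi-round repetition can be sketched as follows (an illustrative implementation, not part of the notes; the matrices used in the demo are arbitrary):

```python
import random

def freivalds(A, B, C, k=1):
    """One-sided randomized check of C == AB using k random 0/1 vectors.
    Matrices are n-by-n lists of lists; total time O(k * n^2)."""
    n = len(A)
    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    for _ in range(k):
        r = [random.randint(0, 1) for _ in range(n)]
        # Three matrix-vector products per round: Br, A(Br), and Cr.
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False   # definitely AB != C
    return True            # AB == C, or a false positive with prob. <= 2^-k

random.seed(42)            # for a reproducible demo
A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
AB = [[2, 1], [4, 3]]      # the true product A @ B
wrong = [[2, 1], [4, 4]]   # differs from AB in one entry

assert freivalds(A, B, AB, k=20)         # true instance: always accepted
assert not freivalds(A, B, wrong, k=64)  # caught except with prob. 2^-64
```

Each round errs with probability at most 1/2 on a false instance, so independent repetition drives the one-sided error down exponentially in k.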
&lt;br /&gt;
If &amp;lt;math&amp;gt;AB=C&amp;lt;/math&amp;gt; then &amp;lt;math&amp;gt;A(Br) = Cr&amp;lt;/math&amp;gt; for any &amp;lt;math&amp;gt;r \in\{0, 1\}^n&amp;lt;/math&amp;gt;, thus the algorithm will return a &amp;quot;yes&amp;quot; for any positive instance (&amp;lt;math&amp;gt;AB=C&amp;lt;/math&amp;gt;). &lt;br /&gt;
But if &amp;lt;math&amp;gt;AB \neq C&amp;lt;/math&amp;gt; then the algorithm makes a mistake if it happens to choose an &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;ABr = Cr&amp;lt;/math&amp;gt;. However, the following lemma states that the probability of this event is bounded.&lt;br /&gt;
&lt;br /&gt;
{{Theorem|Lemma|&lt;br /&gt;
:If &amp;lt;math&amp;gt;AB\neq C&amp;lt;/math&amp;gt; then for a uniformly random &amp;lt;math&amp;gt;r \in\{0, 1\}^n&amp;lt;/math&amp;gt;,&lt;br /&gt;
::&amp;lt;math&amp;gt;\Pr[ABr = Cr]\le \frac{1}{2}&amp;lt;/math&amp;gt;.&lt;br /&gt;
}}&lt;br /&gt;
{{Proof| Let &amp;lt;math&amp;gt;D=AB-C&amp;lt;/math&amp;gt;. The event &amp;lt;math&amp;gt;ABr=Cr&amp;lt;/math&amp;gt; is equivalent to the event &amp;lt;math&amp;gt;Dr=\boldsymbol{0}&amp;lt;/math&amp;gt;. It is then sufficient to show that for any &amp;lt;math&amp;gt;D\neq \boldsymbol{0}&amp;lt;/math&amp;gt;, it holds that &amp;lt;math&amp;gt;\Pr[Dr = \boldsymbol{0}]\le \frac{1}{2}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;D\neq \boldsymbol{0}&amp;lt;/math&amp;gt;, it must have at least one non-zero entry. Suppose that &amp;lt;math&amp;gt;D(i,j)\neq 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Suppose that the event &amp;lt;math&amp;gt;Dr=\boldsymbol{0}&amp;lt;/math&amp;gt; occurs. In particular, the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;-th entry of &amp;lt;math&amp;gt;Dr&amp;lt;/math&amp;gt; is &lt;br /&gt;
:&amp;lt;math&amp;gt;(Dr)_{i}=\sum_{k=1}^nD(i,k)r_k=0.&amp;lt;/math&amp;gt; &lt;br /&gt;
Then &amp;lt;math&amp;gt;r_j&amp;lt;/math&amp;gt; is determined by the other entries:&lt;br /&gt;
:&amp;lt;math&amp;gt;r_j=-\frac{1}{D(i,j)}\sum_{k\neq j}D(i,k)r_k.&amp;lt;/math&amp;gt;&lt;br /&gt;
Once all &amp;lt;math&amp;gt;r_k&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;k\neq j&amp;lt;/math&amp;gt; are fixed, there is a unique solution of &amp;lt;math&amp;gt;r_j&amp;lt;/math&amp;gt;. That is to say, conditioning on any &amp;lt;math&amp;gt;r_k, k\neq j&amp;lt;/math&amp;gt;, there is at most &#039;&#039;&#039;one&#039;&#039;&#039; value of &amp;lt;math&amp;gt;r_j\in\{0,1\}&amp;lt;/math&amp;gt; satisfying &amp;lt;math&amp;gt;Dr=0&amp;lt;/math&amp;gt;. On the other hand, observe that &amp;lt;math&amp;gt;r_j&amp;lt;/math&amp;gt; is chosen from &#039;&#039;&#039;two&#039;&#039;&#039; values &amp;lt;math&amp;gt;\{0,1\}&amp;lt;/math&amp;gt; uniformly and independently at random. Therefore, with at least &amp;lt;math&amp;gt;\frac{1}{2}&amp;lt;/math&amp;gt; probability, the choice of &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; fails to give us a zero &amp;lt;math&amp;gt;Dr&amp;lt;/math&amp;gt;. That is, &amp;lt;math&amp;gt;\Pr[ABr=Cr]=\Pr[Dr=0]\le\frac{1}{2}&amp;lt;/math&amp;gt;.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
When &amp;lt;math&amp;gt;AB=C&amp;lt;/math&amp;gt;, Freivalds algorithm always returns &amp;quot;yes&amp;quot;; and when &amp;lt;math&amp;gt;AB\neq C&amp;lt;/math&amp;gt;, it returns &amp;quot;no&amp;quot; with probability at least 1/2.&lt;br /&gt;
&lt;br /&gt;
To improve its accuracy, we can run Freivalds algorithm &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt; times, each time with an &#039;&#039;independent&#039;&#039; uniformly random &amp;lt;math&amp;gt;r\in\{0,1\}^n&amp;lt;/math&amp;gt;, and return &amp;quot;yes&amp;quot; iff all runs pass the test.&lt;br /&gt;
&lt;br /&gt;
{{Theorem|Freivalds&#039; Algorithm (multi-round)|&lt;br /&gt;
*pick &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt; vectors &amp;lt;math&amp;gt;r_1,r_2,\ldots,r_k \in\{0, 1\}^n&amp;lt;/math&amp;gt; uniformly and independently at random;&lt;br /&gt;
*if &amp;lt;math&amp;gt;A(Br_i) = Cr_i&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;i=1,\ldots,k&amp;lt;/math&amp;gt; then return &amp;quot;yes&amp;quot; else return &amp;quot;no&amp;quot;;&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;AB=C&amp;lt;/math&amp;gt;, then the algorithm returns a &amp;quot;yes&amp;quot; with probability 1. If &amp;lt;math&amp;gt;AB\neq C&amp;lt;/math&amp;gt;, then by independence, the probability that all &amp;lt;math&amp;gt;r_i&amp;lt;/math&amp;gt; satisfy &amp;lt;math&amp;gt;ABr_i=Cr_i&amp;lt;/math&amp;gt; is at most &amp;lt;math&amp;gt;2^{-k}&amp;lt;/math&amp;gt;, so the algorithm returns &amp;quot;no&amp;quot; with probability at least &amp;lt;math&amp;gt;1-2^{-k}&amp;lt;/math&amp;gt;. Choosing &amp;lt;math&amp;gt;k=O(\log n)&amp;lt;/math&amp;gt;, the algorithm runs in time &amp;lt;math&amp;gt;O(n^2\log n)&amp;lt;/math&amp;gt; and has a one-sided error (false positive) bounded by &amp;lt;math&amp;gt;\frac{1}{\mathrm{poly}(n)}&amp;lt;/math&amp;gt;.&lt;/div&gt;</summary>
		<author><name>114.212.208.2</name></author>
	</entry>
	<entry>
		<id>https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)/Verifying_Matrix_Multiplication&amp;diff=4594</id>
		<title>随机算法 (Fall 2011)/Verifying Matrix Multiplication</title>
		<link rel="alternate" type="text/html" href="https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)/Verifying_Matrix_Multiplication&amp;diff=4594"/>
		<updated>2011-07-23T03:25:01Z</updated>

		<summary type="html">&lt;p&gt;114.212.208.2: /* Freivalds Algorithm */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=  Verifying Matrix Multiplication=&lt;br /&gt;
Consider the following problem:&lt;br /&gt;
* &#039;&#039;&#039;Input&#039;&#039;&#039;: Three &amp;lt;math&amp;gt;n\times n&amp;lt;/math&amp;gt; matrices &amp;lt;math&amp;gt;A,B&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt;.&lt;br /&gt;
* &#039;&#039;&#039;Output&#039;&#039;&#039;: return &amp;quot;yes&amp;quot; if &amp;lt;math&amp;gt;C=AB&amp;lt;/math&amp;gt; and &amp;quot;no&amp;quot; otherwise.&lt;br /&gt;
&lt;br /&gt;
A naive way of checking the equality is first computing &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; and then comparing the result with &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt;. The (asymptotically) fastest matrix multiplication algorithm known today runs in time &amp;lt;math&amp;gt;O(n^{2.376})&amp;lt;/math&amp;gt;. The naive algorithm will take asymptotically the same amount of time.&lt;br /&gt;
&lt;br /&gt;
= Freivalds Algorithm =&lt;br /&gt;
The following is a very simple randomized algorithm, due to Freivalds, running in only &amp;lt;math&amp;gt;O(n^2)&amp;lt;/math&amp;gt; time:&lt;br /&gt;
&lt;br /&gt;
{{Theorem|Algorithm (Freivalds)|&lt;br /&gt;
*pick a vector &amp;lt;math&amp;gt;r \in\{0, 1\}^n&amp;lt;/math&amp;gt; uniformly at random;&lt;br /&gt;
*if &amp;lt;math&amp;gt;A(Br) = Cr&amp;lt;/math&amp;gt; then return &amp;quot;yes&amp;quot; else return &amp;quot;no&amp;quot;;&lt;br /&gt;
}}&lt;br /&gt;
The product &amp;lt;math&amp;gt;A(Br)&amp;lt;/math&amp;gt; is computed by first computing the vector &amp;lt;math&amp;gt;Br&amp;lt;/math&amp;gt; and then multiplying it by &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt;.&lt;br /&gt;
The running time is &amp;lt;math&amp;gt;O(n^2)&amp;lt;/math&amp;gt; because the algorithm does 3 matrix-vector multiplications in total. &lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;AB=C&amp;lt;/math&amp;gt; then &amp;lt;math&amp;gt;A(Br) = Cr&amp;lt;/math&amp;gt; for any &amp;lt;math&amp;gt;r \in\{0, 1\}^n&amp;lt;/math&amp;gt;, thus the algorithm will return a &amp;quot;yes&amp;quot; for any positive instance (&amp;lt;math&amp;gt;AB=C&amp;lt;/math&amp;gt;). &lt;br /&gt;
But if &amp;lt;math&amp;gt;AB \neq C&amp;lt;/math&amp;gt; then the algorithm makes a mistake if it happens to choose an &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;ABr = Cr&amp;lt;/math&amp;gt;. However, the following lemma states that the probability of this event is bounded.&lt;br /&gt;
&lt;br /&gt;
{{Theorem|Lemma|&lt;br /&gt;
:If &amp;lt;math&amp;gt;AB\neq C&amp;lt;/math&amp;gt; then for a uniformly random &amp;lt;math&amp;gt;r \in\{0, 1\}^n&amp;lt;/math&amp;gt;,&lt;br /&gt;
::&amp;lt;math&amp;gt;\Pr[ABr = Cr]\le \frac{1}{2}&amp;lt;/math&amp;gt;.&lt;br /&gt;
}}&lt;br /&gt;
{{Proof| Let &amp;lt;math&amp;gt;D=AB-C&amp;lt;/math&amp;gt;. The event &amp;lt;math&amp;gt;ABr=Cr&amp;lt;/math&amp;gt; is equivalent to the event &amp;lt;math&amp;gt;Dr=\boldsymbol{0}&amp;lt;/math&amp;gt;. It is then sufficient to show that for any &amp;lt;math&amp;gt;D\neq \boldsymbol{0}&amp;lt;/math&amp;gt;, it holds that &amp;lt;math&amp;gt;\Pr[Dr = \boldsymbol{0}]\le \frac{1}{2}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;D\neq \boldsymbol{0}&amp;lt;/math&amp;gt;, it must have at least one non-zero entry. Suppose that &amp;lt;math&amp;gt;D(i,j)\neq 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Suppose that the event &amp;lt;math&amp;gt;Dr=\boldsymbol{0}&amp;lt;/math&amp;gt; occurs. In particular, the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;-th entry of &amp;lt;math&amp;gt;Dr&amp;lt;/math&amp;gt; is &lt;br /&gt;
:&amp;lt;math&amp;gt;(Dr)_{i}=\sum_{k=1}^nD(i,k)r_k=0.&amp;lt;/math&amp;gt; &lt;br /&gt;
Then &amp;lt;math&amp;gt;r_j&amp;lt;/math&amp;gt; is determined by the other entries:&lt;br /&gt;
:&amp;lt;math&amp;gt;r_j=-\frac{1}{D(i,j)}\sum_{k\neq j}D(i,k)r_k.&amp;lt;/math&amp;gt;&lt;br /&gt;
Once all &amp;lt;math&amp;gt;r_k&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;k\neq j&amp;lt;/math&amp;gt; are fixed, there is a unique solution of &amp;lt;math&amp;gt;r_j&amp;lt;/math&amp;gt;. That is to say, conditioning on any &amp;lt;math&amp;gt;r_k, k\neq j&amp;lt;/math&amp;gt;, there is at most &#039;&#039;&#039;one&#039;&#039;&#039; value of &amp;lt;math&amp;gt;r_j\in\{0,1\}&amp;lt;/math&amp;gt; satisfying &amp;lt;math&amp;gt;Dr=0&amp;lt;/math&amp;gt;. On the other hand, observe that &amp;lt;math&amp;gt;r_j&amp;lt;/math&amp;gt; is chosen from &#039;&#039;&#039;two&#039;&#039;&#039; values &amp;lt;math&amp;gt;\{0,1\}&amp;lt;/math&amp;gt; uniformly and independently at random. Therefore, with at least &amp;lt;math&amp;gt;\frac{1}{2}&amp;lt;/math&amp;gt; probability, the choice of &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; fails to give us a zero &amp;lt;math&amp;gt;Dr&amp;lt;/math&amp;gt;. That is, &amp;lt;math&amp;gt;\Pr[ABr=Cr]=\Pr[Dr=0]\le\frac{1}{2}&amp;lt;/math&amp;gt;.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
When &amp;lt;math&amp;gt;AB=C&amp;lt;/math&amp;gt;, Freivalds algorithm always returns &amp;quot;yes&amp;quot;; and when &amp;lt;math&amp;gt;AB\neq C&amp;lt;/math&amp;gt;, Freivalds algorithm returns &amp;quot;no&amp;quot; with probability at least 1/2.&lt;br /&gt;
&lt;br /&gt;
To improve its accuracy, we can run Freivalds algorithm for &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt; times, each time with an &#039;&#039;independent&#039;&#039; random &amp;lt;math&amp;gt;r\in\{0,1\}^n&amp;lt;/math&amp;gt;, and return &amp;quot;yes&amp;quot; iff all running instances pass the test.&lt;br /&gt;
&lt;br /&gt;
{{Theorem|Freivalds&#039; Algorithm (multi-round)|&lt;br /&gt;
*pick &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt; vectors &amp;lt;math&amp;gt;r_1,r_2,\ldots,r_k \in\{0, 1\}^n&amp;lt;/math&amp;gt; uniformly and independently at random;&lt;br /&gt;
*if &amp;lt;math&amp;gt;A(Br_i) = Cr_i&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;i=1,\ldots,k&amp;lt;/math&amp;gt; then return &amp;quot;yes&amp;quot; else return &amp;quot;no&amp;quot;;&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;AB=C&amp;lt;/math&amp;gt;, then the algorithm returns a &amp;quot;yes&amp;quot; with probability 1. If &amp;lt;math&amp;gt;AB\neq C&amp;lt;/math&amp;gt;, then by independence, the probability that all &amp;lt;math&amp;gt;r_i&amp;lt;/math&amp;gt; satisfy &amp;lt;math&amp;gt;ABr_i=Cr_i&amp;lt;/math&amp;gt; is at most &amp;lt;math&amp;gt;2^{-k}&amp;lt;/math&amp;gt;, so the algorithm returns &amp;quot;no&amp;quot; with probability at least &amp;lt;math&amp;gt;1-2^{-k}&amp;lt;/math&amp;gt;. Choosing &amp;lt;math&amp;gt;k=O(\log n)&amp;lt;/math&amp;gt;, the algorithm runs in time &amp;lt;math&amp;gt;O(n^2\log n)&amp;lt;/math&amp;gt; and has a one-sided error (false positive) bounded by &amp;lt;math&amp;gt;\frac{1}{\mathrm{poly}(n)}&amp;lt;/math&amp;gt;.&lt;/div&gt;</summary>
		<author><name>114.212.208.2</name></author>
	</entry>
	<entry>
		<id>https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)/Verifying_Matrix_Multiplication&amp;diff=4593</id>
		<title>随机算法 (Fall 2011)/Verifying Matrix Multiplication</title>
		<link rel="alternate" type="text/html" href="https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)/Verifying_Matrix_Multiplication&amp;diff=4593"/>
		<updated>2011-07-23T03:10:01Z</updated>

		<summary type="html">&lt;p&gt;114.212.208.2: /* Freivalds Algorithm */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=  Verifying Matrix Multiplication=&lt;br /&gt;
Consider the following problem:&lt;br /&gt;
* &#039;&#039;&#039;Input&#039;&#039;&#039;: Three &amp;lt;math&amp;gt;n\times n&amp;lt;/math&amp;gt; matrices &amp;lt;math&amp;gt;A,B&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt;.&lt;br /&gt;
* &#039;&#039;&#039;Output&#039;&#039;&#039;: return &amp;quot;yes&amp;quot; if &amp;lt;math&amp;gt;C=AB&amp;lt;/math&amp;gt; and &amp;quot;no&amp;quot; otherwise.&lt;br /&gt;
&lt;br /&gt;
A naive way of checking the equality is first computing &amp;lt;math&amp;gt;AB&amp;lt;/math&amp;gt; and then comparing the result with &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt;. The (asymptotically) fastest matrix multiplication algorithm known today runs in time &amp;lt;math&amp;gt;O(n^{2.376})&amp;lt;/math&amp;gt;. The naive algorithm will take asymptotically the same amount of time.&lt;br /&gt;
&lt;br /&gt;
= Freivalds Algorithm =&lt;br /&gt;
The following is a very simple randomized algorithm, due to Freivalds, running in only &amp;lt;math&amp;gt;O(n^2)&amp;lt;/math&amp;gt; time:&lt;br /&gt;
&lt;br /&gt;
{{Theorem|Algorithm (Freivalds)|&lt;br /&gt;
*pick a vector &amp;lt;math&amp;gt;r \in\{0, 1\}^n&amp;lt;/math&amp;gt; uniformly at random;&lt;br /&gt;
*if &amp;lt;math&amp;gt;A(Br) = Cr&amp;lt;/math&amp;gt; then return &amp;quot;yes&amp;quot; else return &amp;quot;no&amp;quot;;&lt;br /&gt;
}}&lt;br /&gt;
The product &amp;lt;math&amp;gt;A(Br)&amp;lt;/math&amp;gt; is computed by first computing the vector &amp;lt;math&amp;gt;Br&amp;lt;/math&amp;gt; and then multiplying it by &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt;.&lt;br /&gt;
The running time is &amp;lt;math&amp;gt;O(n^2)&amp;lt;/math&amp;gt; because the algorithm does 3 matrix-vector multiplications in total. &lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;AB=C&amp;lt;/math&amp;gt; then &amp;lt;math&amp;gt;A(Br) = Cr&amp;lt;/math&amp;gt; for any &amp;lt;math&amp;gt;r \in\{0, 1\}^n&amp;lt;/math&amp;gt;, thus the algorithm will return a &amp;quot;yes&amp;quot; for any positive instance (&amp;lt;math&amp;gt;AB=C&amp;lt;/math&amp;gt;). &lt;br /&gt;
But if &amp;lt;math&amp;gt;AB \neq C&amp;lt;/math&amp;gt; then the algorithm makes a mistake if it happens to choose an &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;ABr = Cr&amp;lt;/math&amp;gt;. However, the following lemma states that the probability of this event is bounded.&lt;br /&gt;
&lt;br /&gt;
{{Theorem|Lemma|&lt;br /&gt;
:If &amp;lt;math&amp;gt;AB\neq C&amp;lt;/math&amp;gt; then for a uniformly random &amp;lt;math&amp;gt;r \in\{0, 1\}^n&amp;lt;/math&amp;gt;,&lt;br /&gt;
::&amp;lt;math&amp;gt;\Pr[ABr = Cr]\le \frac{1}{2}&amp;lt;/math&amp;gt;.&lt;br /&gt;
}}&lt;br /&gt;
{{Proof| Let &amp;lt;math&amp;gt;D=AB-C&amp;lt;/math&amp;gt;. The event &amp;lt;math&amp;gt;ABr=Cr&amp;lt;/math&amp;gt; is equivalent to the event &amp;lt;math&amp;gt;Dr=\boldsymbol{0}&amp;lt;/math&amp;gt;. It is then sufficient to show that for any &amp;lt;math&amp;gt;D\neq \boldsymbol{0}&amp;lt;/math&amp;gt;, it holds that &amp;lt;math&amp;gt;\Pr[Dr = \boldsymbol{0}]\le \frac{1}{2}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Since &amp;lt;math&amp;gt;D\neq \boldsymbol{0}&amp;lt;/math&amp;gt;, it must have at least one non-zero entry. Suppose that &amp;lt;math&amp;gt;D(i,j)\neq 0&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Suppose that the event &amp;lt;math&amp;gt;Dr=\boldsymbol{0}&amp;lt;/math&amp;gt; occurs. In particular, the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;-th entry of &amp;lt;math&amp;gt;Dr&amp;lt;/math&amp;gt; is &lt;br /&gt;
:&amp;lt;math&amp;gt;(Dr)_{i}=\sum_{k=1}^nD(i,k)r_k=0.&amp;lt;/math&amp;gt; &lt;br /&gt;
Then &amp;lt;math&amp;gt;r_j&amp;lt;/math&amp;gt; is determined by the other entries:&lt;br /&gt;
:&amp;lt;math&amp;gt;r_j=-\frac{1}{D(i,j)}\sum_{k\neq j}D(i,k)r_k.&amp;lt;/math&amp;gt;&lt;br /&gt;
Once all &amp;lt;math&amp;gt;r_k&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;k\neq j&amp;lt;/math&amp;gt; are fixed, there is a unique solution of &amp;lt;math&amp;gt;r_j&amp;lt;/math&amp;gt;. That is to say, conditioning on any &amp;lt;math&amp;gt;r_k, k\neq j&amp;lt;/math&amp;gt;, there is at most &#039;&#039;&#039;one&#039;&#039;&#039; value of &amp;lt;math&amp;gt;r_j\in\{0,1\}&amp;lt;/math&amp;gt; satisfying &amp;lt;math&amp;gt;Dr=0&amp;lt;/math&amp;gt;. On the other hand, observe that &amp;lt;math&amp;gt;r_j&amp;lt;/math&amp;gt; is chosen from &#039;&#039;&#039;two&#039;&#039;&#039; values &amp;lt;math&amp;gt;\{0,1\}&amp;lt;/math&amp;gt; uniformly and independently at random. Therefore, with at least &amp;lt;math&amp;gt;\frac{1}{2}&amp;lt;/math&amp;gt; probability, the choice of &amp;lt;math&amp;gt;r&amp;lt;/math&amp;gt; fails to give us a zero &amp;lt;math&amp;gt;Dr&amp;lt;/math&amp;gt;. That is, &amp;lt;math&amp;gt;\Pr[ABr=Cr]=\Pr[Dr=0]\le\frac{1}{2}&amp;lt;/math&amp;gt;.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
When &amp;lt;math&amp;gt;AB=C&amp;lt;/math&amp;gt;, Freivalds algorithm always returns &amp;quot;yes&amp;quot;; and when &amp;lt;math&amp;gt;AB\neq C&amp;lt;/math&amp;gt;, Freivalds algorithm returns &amp;quot;no&amp;quot; with probability at least 1/2.&lt;br /&gt;
&lt;br /&gt;
To improve its accuracy, we can run Freivalds algorithm for &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt; times, each time with an &#039;&#039;independent&#039;&#039; random &amp;lt;math&amp;gt;r\in\{0,1\}^n&amp;lt;/math&amp;gt;, and return &amp;quot;yes&amp;quot; iff all running instances pass the test.&lt;br /&gt;
&lt;br /&gt;
{{Theorem|Freivalds&#039; Algorithm (multi-round)|&lt;br /&gt;
*pick &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt; vectors &amp;lt;math&amp;gt;r_1,r_2,\ldots,r_k \in\{0, 1\}^n&amp;lt;/math&amp;gt; uniformly and independently at random;&lt;br /&gt;
*if &amp;lt;math&amp;gt;A(Br_i) = Cr_i&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;i=1,\ldots,k&amp;lt;/math&amp;gt; then return &amp;quot;yes&amp;quot; else return &amp;quot;no&amp;quot;;&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
If &amp;lt;math&amp;gt;AB=C&amp;lt;/math&amp;gt;, then the algorithm returns a &amp;quot;yes&amp;quot; with probability 1. If &amp;lt;math&amp;gt;AB\neq C&amp;lt;/math&amp;gt;, then by independence, the probability that all &amp;lt;math&amp;gt;r_i&amp;lt;/math&amp;gt; satisfy &amp;lt;math&amp;gt;ABr_i=Cr_i&amp;lt;/math&amp;gt; is at most &amp;lt;math&amp;gt;2^{-k}&amp;lt;/math&amp;gt;.&lt;/div&gt;</summary>
		<author><name>114.212.208.2</name></author>
	</entry>
	<entry>
		<id>https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)/Probability_Space&amp;diff=4660</id>
		<title>随机算法 (Fall 2011)/Probability Space</title>
		<link rel="alternate" type="text/html" href="https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)/Probability_Space&amp;diff=4660"/>
		<updated>2011-07-22T15:24:22Z</updated>

		<summary type="html">&lt;p&gt;114.212.208.2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Axioms of Probability=&lt;br /&gt;
The axiomatic foundation of probability theory was laid by [http://en.wikipedia.org/wiki/Andrey_Kolmogorov Kolmogorov], one of the greatest mathematicians of the 20th century, who advanced many very different fields of mathematics.&lt;br /&gt;
&lt;br /&gt;
{{Theorem|Definition (Probability Space)|&lt;br /&gt;
A &#039;&#039;&#039;probability space&#039;&#039;&#039; is a triple &amp;lt;math&amp;gt;(\Omega,\Sigma,\Pr)&amp;lt;/math&amp;gt;. &lt;br /&gt;
*&amp;lt;math&amp;gt;\Omega&amp;lt;/math&amp;gt; is a set, called the &#039;&#039;&#039;sample space&#039;&#039;&#039;. &lt;br /&gt;
*&amp;lt;math&amp;gt;\Sigma\subseteq 2^{\Omega}&amp;lt;/math&amp;gt; is the set of all &#039;&#039;&#039;events&#039;&#039;&#039;, satisfying:&lt;br /&gt;
*:(A1). &amp;lt;math&amp;gt;\Omega\in\Sigma&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\empty\in\Sigma&amp;lt;/math&amp;gt;. (The &#039;&#039;certain&#039;&#039; event and the &#039;&#039;impossible&#039;&#039; event.)&lt;br /&gt;
*:(A2). If &amp;lt;math&amp;gt;A,B\in\Sigma&amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt;A\cap B, A\cup B, A-B\in\Sigma&amp;lt;/math&amp;gt;. (The intersection, union, and difference of two events are events.)&lt;br /&gt;
* A &#039;&#039;&#039;probability measure&#039;&#039;&#039; &amp;lt;math&amp;gt;\Pr:\Sigma\rightarrow\mathbb{R}&amp;lt;/math&amp;gt; is a function that maps each event to a nonnegative real number, satisfying&lt;br /&gt;
*:(A3). &amp;lt;math&amp;gt;\Pr(\Omega)=1&amp;lt;/math&amp;gt;.&lt;br /&gt;
*:(A4). If &amp;lt;math&amp;gt;A\cap B=\emptyset&amp;lt;/math&amp;gt; (such events are called &#039;&#039;disjoint&#039;&#039; events), then &amp;lt;math&amp;gt;\Pr(A\cup B)=\Pr(A)+\Pr(B)&amp;lt;/math&amp;gt;. &lt;br /&gt;
*:(A5*). For a decreasing sequence of events &amp;lt;math&amp;gt;A_1\supset A_2\supset \cdots\supset A_n\supset\cdots&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\bigcap_n A_n=\emptyset&amp;lt;/math&amp;gt;, it holds that &amp;lt;math&amp;gt;\lim_{n\rightarrow \infty}\Pr(A_n)=0&amp;lt;/math&amp;gt;.&lt;br /&gt;
}}&lt;br /&gt;
The sample space &amp;lt;math&amp;gt;\Omega&amp;lt;/math&amp;gt; is the set of all possible outcomes of the random process modeled by the probability space. An event is a subset of &amp;lt;math&amp;gt;\Omega&amp;lt;/math&amp;gt;. The statements (A1)--(A5) are axioms of probability. A probability space is well defined as long as these axioms are satisfied.&lt;br /&gt;
;Example&lt;br /&gt;
:Consider the probability space defined by rolling a die with six faces. The sample space is &amp;lt;math&amp;gt;\Omega=\{1,2,3,4,5,6\}&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\Sigma&amp;lt;/math&amp;gt; is the power set &amp;lt;math&amp;gt;2^{\Omega}&amp;lt;/math&amp;gt;. For any event &amp;lt;math&amp;gt;A\in\Sigma&amp;lt;/math&amp;gt;, its probability is given by &amp;lt;math&amp;gt;\Pr(A)=\frac{|A|}{6}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
;Remark&lt;br /&gt;
* In general, the set &amp;lt;math&amp;gt;\Omega&amp;lt;/math&amp;gt; may be continuous, but we only consider &#039;&#039;&#039;discrete&#039;&#039;&#039; probability in this lecture; thus we assume that &amp;lt;math&amp;gt;\Omega&amp;lt;/math&amp;gt; is either finite or countably infinite.&lt;br /&gt;
* In many cases (such as the above example), &amp;lt;math&amp;gt;\Sigma=2^{\Omega}&amp;lt;/math&amp;gt;, i.e. the events enumerate all subsets of &amp;lt;math&amp;gt;\Omega&amp;lt;/math&amp;gt;. But in general, a probability space is well-defined by any &amp;lt;math&amp;gt;\Sigma&amp;lt;/math&amp;gt; satisfying (A1) and (A2). Such a &amp;lt;math&amp;gt;\Sigma&amp;lt;/math&amp;gt; is called a &amp;lt;math&amp;gt;\sigma&amp;lt;/math&amp;gt;-algebra defined on &amp;lt;math&amp;gt;\Omega&amp;lt;/math&amp;gt;.&lt;br /&gt;
* The last axiom (A5*) is redundant if &amp;lt;math&amp;gt;\Sigma&amp;lt;/math&amp;gt; is finite, thus it is only essential when there are infinitely many events. The role of axiom (A5*) in probability theory is like [http://en.wikipedia.org/wiki/Zorn&#039;s_lemma Zorn&#039;s Lemma] (or equivalently the [http://en.wikipedia.org/wiki/Axiom_of_choice Axiom of Choice]) in axiomatic set theory.&lt;br /&gt;
&lt;br /&gt;
Laws for probability can be deduced from the above axiom system. Denote &amp;lt;math&amp;gt;\bar{A}=\Omega-A&amp;lt;/math&amp;gt;.&lt;br /&gt;
{{Theorem|Proposition|&lt;br /&gt;
:&amp;lt;math&amp;gt;\Pr(\bar{A})=1-\Pr(A)&amp;lt;/math&amp;gt;.&lt;br /&gt;
}}&lt;br /&gt;
{{Proof|&lt;br /&gt;
Due to Axiom (A4), &amp;lt;math&amp;gt;\Pr(\bar{A})+\Pr(A)=\Pr(\Omega)&amp;lt;/math&amp;gt;, which equals 1 according to Axiom (A3); thus &amp;lt;math&amp;gt;\Pr(\bar{A})+\Pr(A)=1&amp;lt;/math&amp;gt;. The proposition follows.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Exercise: Deduce other useful laws for probability from the axioms. For example, &amp;lt;math&amp;gt;A\subseteq B\Longrightarrow\Pr(A)\le\Pr(B)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Notation =&lt;br /&gt;
An event &amp;lt;math&amp;gt;A\subseteq\Omega&amp;lt;/math&amp;gt; can be represented as &amp;lt;math&amp;gt;A=\{a\in\Omega\mid \mathcal{E}(a)\}&amp;lt;/math&amp;gt; with a predicate &amp;lt;math&amp;gt;\mathcal{E}&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
The predicate notation of probability is &lt;br /&gt;
:&amp;lt;math&amp;gt;\Pr[\mathcal{E}]=\Pr(\{a\in\Omega\mid \mathcal{E}(a)\})&amp;lt;/math&amp;gt;.&lt;br /&gt;
;Example&lt;br /&gt;
: We still consider the probability space defined by rolling a six-sided die. The sample space is &amp;lt;math&amp;gt;\Omega=\{1,2,3,4,5,6\}&amp;lt;/math&amp;gt;. Consider the event that the outcome is odd.&lt;br /&gt;
:: &amp;lt;math&amp;gt;\Pr[\text{ the outcome is odd }]=\Pr(\{1,3,5\})&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
During the lecture, we mostly use the predicate notation instead of subset notation.&lt;br /&gt;
&lt;br /&gt;
= The Union Bound =&lt;br /&gt;
We are familiar with the [http://en.wikipedia.org/wiki/Inclusion–exclusion_principle principle of inclusion-exclusion] for finite sets.&lt;br /&gt;
{{Theorem&lt;br /&gt;
|Principle of Inclusion-Exclusion|&lt;br /&gt;
:Let &amp;lt;math&amp;gt;S_1, S_2, \ldots, S_n&amp;lt;/math&amp;gt; be &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; finite sets. Then&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\left|\bigcup_{1\le i\le n}S_i\right|&lt;br /&gt;
&amp;amp;=&lt;br /&gt;
\sum_{i=1}^n|S_i|&lt;br /&gt;
-\sum_{i&amp;lt;j}|S_i\cap S_j|&lt;br /&gt;
+\sum_{i&amp;lt;j&amp;lt;k}|S_i\cap S_j\cap S_k|\\&lt;br /&gt;
&amp;amp; \quad -\cdots&lt;br /&gt;
+(-1)^{\ell-1}\sum_{i_1&amp;lt;i_2&amp;lt;\cdots&amp;lt;i_\ell}\left|\bigcap_{r=1}^\ell S_{i_r}\right|&lt;br /&gt;
+\cdots &lt;br /&gt;
+(-1)^{n-1} \left|\bigcap_{i=1}^n S_i\right|.&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
The principle can be generalized to probability events.&lt;br /&gt;
{{Theorem&lt;br /&gt;
|Principle of Inclusion-Exclusion for Probability|&lt;br /&gt;
:Let &amp;lt;math&amp;gt;\mathcal{E}_1, \mathcal{E}_2, \ldots, \mathcal{E}_n&amp;lt;/math&amp;gt; be &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; events. Then&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\Pr\left[\bigvee_{1\le i\le n}\mathcal{E}_i\right]&lt;br /&gt;
&amp;amp;=&lt;br /&gt;
\sum_{i=1}^n\Pr[\mathcal{E}_i]&lt;br /&gt;
-\sum_{i&amp;lt;j}\Pr[\mathcal{E}_i\wedge \mathcal{E}_j]&lt;br /&gt;
+\sum_{i&amp;lt;j&amp;lt;k}\Pr[\mathcal{E}_i\wedge \mathcal{E}_j\wedge \mathcal{E}_k]\\&lt;br /&gt;
&amp;amp; \quad -\cdots&lt;br /&gt;
+(-1)^{\ell-1}\sum_{i_1&amp;lt;i_2&amp;lt;\cdots&amp;lt;i_\ell}\Pr\left[\bigwedge_{r=1}^\ell \mathcal{E}_{i_r}\right]&lt;br /&gt;
+\cdots &lt;br /&gt;
+(-1)^{n-1}\Pr\left[\bigwedge_{i=1}^n \mathcal{E}_{i}\right].&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
We only prove the basic case for two events.&lt;br /&gt;
{{Theorem|Lemma|&lt;br /&gt;
:For any two events &amp;lt;math&amp;gt;\mathcal{E}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathcal{E}_2&amp;lt;/math&amp;gt;,&lt;br /&gt;
::&amp;lt;math&amp;gt;\Pr[\mathcal{E}_1\vee\mathcal{E}_2]=\Pr[\mathcal{E}_1]+\Pr[\mathcal{E}_2]-\Pr[\mathcal{E}_1\wedge\mathcal{E}_2]&amp;lt;/math&amp;gt;.&lt;br /&gt;
}}&lt;br /&gt;
{{Proof| The following identities are due to Axiom (A4).&lt;br /&gt;
:&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\Pr[\mathcal{E}_1]&lt;br /&gt;
&amp;amp;=\Pr[\mathcal{E}_1\wedge\neg(\mathcal{E}_1\wedge\mathcal{E}_2)]+\Pr[\mathcal{E}_1\wedge\mathcal{E}_2];\\&lt;br /&gt;
\Pr[\mathcal{E}_2]&lt;br /&gt;
&amp;amp;=\Pr[\mathcal{E}_2\wedge\neg(\mathcal{E}_1\wedge\mathcal{E}_2)]+\Pr[\mathcal{E}_1\wedge\mathcal{E}_2];\\&lt;br /&gt;
\Pr[\mathcal{E}_1\vee\mathcal{E}_2]&lt;br /&gt;
&amp;amp;=\Pr[\mathcal{E}_1\wedge\neg(\mathcal{E}_1\wedge\mathcal{E}_2)]+\Pr[\mathcal{E}_2\wedge\neg(\mathcal{E}_1\wedge\mathcal{E}_2)]+\Pr[\mathcal{E}_1\wedge\mathcal{E}_2].&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;&lt;br /&gt;
The lemma follows directly.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
A direct consequence of the lemma is the following theorem, the &#039;&#039;&#039;union bound&#039;&#039;&#039;.&lt;br /&gt;
{{Theorem&lt;br /&gt;
|Theorem (Union Bound)|&lt;br /&gt;
:Let &amp;lt;math&amp;gt;\mathcal{E}_1, \mathcal{E}_2, \ldots, \mathcal{E}_n&amp;lt;/math&amp;gt; be &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; events. Then&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\Pr\left[\bigvee_{1\le i\le n}\mathcal{E}_i\right]&lt;br /&gt;
&amp;amp;\le&lt;br /&gt;
\sum_{i=1}^n\Pr[\mathcal{E}_i].&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
The name of this inequality is [http://en.wikipedia.org/wiki/Boole&#039;s_inequality Boole&#039;s inequality]. It is usually referred to by its nickname, the &amp;quot;union bound&amp;quot;. The bound holds for arbitrary events, even if they are dependent. Due to this generality, the union bound is extremely useful in probabilistic analysis.&lt;br /&gt;
&lt;br /&gt;
= Independence =&lt;br /&gt;
{{Theorem&lt;br /&gt;
|Definition (Independent events)|&lt;br /&gt;
:Two events &amp;lt;math&amp;gt;\mathcal{E}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathcal{E}_2&amp;lt;/math&amp;gt; are &#039;&#039;&#039;independent&#039;&#039;&#039; if and only if &lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\Pr\left[\mathcal{E}_1 \wedge \mathcal{E}_2\right]&lt;br /&gt;
&amp;amp;=&lt;br /&gt;
\Pr[\mathcal{E}_1]\cdot\Pr[\mathcal{E}_2].&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
This definition can be generalized to any number of events:&lt;br /&gt;
{{Theorem&lt;br /&gt;
|Definition (Independent events)|&lt;br /&gt;
:Events &amp;lt;math&amp;gt;\mathcal{E}_1, \mathcal{E}_2, \ldots, \mathcal{E}_n&amp;lt;/math&amp;gt; are &#039;&#039;&#039;mutually independent&#039;&#039;&#039; if and only if, for any subset &amp;lt;math&amp;gt;I\subseteq\{1,2,\ldots,n\}&amp;lt;/math&amp;gt;,&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\Pr\left[\bigwedge_{i\in I}\mathcal{E}_i\right]&lt;br /&gt;
&amp;amp;=&lt;br /&gt;
\prod_{i\in I}\Pr[\mathcal{E}_i].&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Note that in probability theory, &amp;quot;mutual independence&amp;quot; is &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;not&amp;lt;/font&amp;gt; equivalent to &amp;quot;pairwise independence&amp;quot;, a distinction we will study later.&lt;/div&gt;</summary>
		<author><name>114.212.208.2</name></author>
	</entry>
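The closing remark of the entry above, that mutual independence is strictly stronger than pairwise independence, has a standard two-coin counterexample: flip two fair coins and take, as a third event, that their XOR is 1. A small Python check, not part of the original notes, verifying this over the four-point sample space with exact arithmetic:

```python
from fractions import Fraction
from itertools import product

# Sample space: two independent fair coin flips, uniform measure.
omega = list(product([0, 1], repeat=2))

def pr(event):
    """Probability of {w in omega : event(w)} under the uniform measure."""
    return Fraction(sum(1 for w in omega if event(w)), len(omega))

events = [
    lambda w: w[0] == 1,           # E1: first coin is heads
    lambda w: w[1] == 1,           # E2: second coin is heads
    lambda w: (w[0] ^ w[1]) == 1,  # E3: the two coins differ (XOR = 1)
]

# Every pair of events is independent:
for i in range(3):
    for j in range(i + 1, 3):
        both = pr(lambda w: events[i](w) and events[j](w))
        assert both == pr(events[i]) * pr(events[j]) == Fraction(1, 4)

# ... but the three are not mutually independent: all three together are
# impossible (E1 and E2 force XOR = 0), while the product of the three
# probabilities is 1/8.
all_three = pr(lambda w: all(e(w) for e in events))
assert all_three == 0
assert all_three != pr(events[0]) * pr(events[1]) * pr(events[2])
```

The script passes both assertions: the product condition holds for every pair (each side equals 1/4) yet fails for the full triple, exactly the gap the definition of mutual independence closes by quantifying over all subsets <math>I</math>.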
	<entry>
		<id>https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)/Probability_Space&amp;diff=4659</id>
		<title>随机算法 (Fall 2011)/Probability Space</title>
		<link rel="alternate" type="text/html" href="https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)/Probability_Space&amp;diff=4659"/>
		<updated>2011-07-22T15:23:46Z</updated>

		<summary type="html">&lt;p&gt;114.212.208.2: /* Axioms of Probability */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Axioms of Probability=&lt;br /&gt;
The axiomatic foundation of probability theory was laid by [http://en.wikipedia.org/wiki/Andrey_Kolmogorov Kolmogorov], one of the greatest mathematicians of the 20th century, who advanced many very different fields of mathematics.&lt;br /&gt;
&lt;br /&gt;
{{Theorem|Definition (Probability Space)|&lt;br /&gt;
A &#039;&#039;&#039;probability space&#039;&#039;&#039; is a triple &amp;lt;math&amp;gt;(\Omega,\Sigma,\Pr)&amp;lt;/math&amp;gt;. &lt;br /&gt;
*&amp;lt;math&amp;gt;\Omega&amp;lt;/math&amp;gt; is a set, called the &#039;&#039;&#039;sample space&#039;&#039;&#039;. &lt;br /&gt;
*&amp;lt;math&amp;gt;\Sigma\subseteq 2^{\Omega}&amp;lt;/math&amp;gt; is the set of all &#039;&#039;&#039;events&#039;&#039;&#039;, satisfying:&lt;br /&gt;
*:(A1). &amp;lt;math&amp;gt;\Omega\in\Sigma&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\empty\in\Sigma&amp;lt;/math&amp;gt;. (The &#039;&#039;certain&#039;&#039; event and the &#039;&#039;impossible&#039;&#039; event.)&lt;br /&gt;
*:(A2). If &amp;lt;math&amp;gt;A,B\in\Sigma&amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt;A\cap B, A\cup B, A-B\in\Sigma&amp;lt;/math&amp;gt;. (The intersection, union, and difference of two events are events.)&lt;br /&gt;
* A &#039;&#039;&#039;probability measure&#039;&#039;&#039; &amp;lt;math&amp;gt;\Pr:\Sigma\rightarrow\mathbb{R}&amp;lt;/math&amp;gt; is a function that maps each event to a nonnegative real number, satisfying&lt;br /&gt;
*:(A3). &amp;lt;math&amp;gt;\Pr(\Omega)=1&amp;lt;/math&amp;gt;.&lt;br /&gt;
*:(A4). If &amp;lt;math&amp;gt;A\cap B=\emptyset&amp;lt;/math&amp;gt; (such events are called &#039;&#039;disjoint&#039;&#039; events), then &amp;lt;math&amp;gt;\Pr(A\cup B)=\Pr(A)+\Pr(B)&amp;lt;/math&amp;gt;. &lt;br /&gt;
*:(A5*). For a decreasing sequence of events &amp;lt;math&amp;gt;A_1\supset A_2\supset \cdots\supset A_n\supset\cdots&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\bigcap_n A_n=\emptyset&amp;lt;/math&amp;gt;, it holds that &amp;lt;math&amp;gt;\lim_{n\rightarrow \infty}\Pr(A_n)=0&amp;lt;/math&amp;gt;.&lt;br /&gt;
}}&lt;br /&gt;
The sample space &amp;lt;math&amp;gt;\Omega&amp;lt;/math&amp;gt; is the set of all possible outcomes of the random process modeled by the probability space. An event is a subset of &amp;lt;math&amp;gt;\Omega&amp;lt;/math&amp;gt;. The statements (A1)--(A5) are axioms of probability. A probability space is well defined as long as these axioms are satisfied.&lt;br /&gt;
;Example&lt;br /&gt;
:Consider the probability space defined by rolling a die with six faces. The sample space is &amp;lt;math&amp;gt;\Omega=\{1,2,3,4,5,6\}&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\Sigma&amp;lt;/math&amp;gt; is the power set &amp;lt;math&amp;gt;2^{\Omega}&amp;lt;/math&amp;gt;. For any event &amp;lt;math&amp;gt;A\in\Sigma&amp;lt;/math&amp;gt;, its probability is given by &amp;lt;math&amp;gt;\Pr(A)=\frac{|A|}{6}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
;Remark&lt;br /&gt;
* In general, the set &amp;lt;math&amp;gt;\Omega&amp;lt;/math&amp;gt; may be continuous, but we only consider &#039;&#039;&#039;discrete&#039;&#039;&#039; probability in this lecture; thus we assume that &amp;lt;math&amp;gt;\Omega&amp;lt;/math&amp;gt; is either finite or countably infinite.&lt;br /&gt;
* In many cases (such as the above example), &amp;lt;math&amp;gt;\Sigma=2^{\Omega}&amp;lt;/math&amp;gt;, i.e. the events enumerate all subsets of &amp;lt;math&amp;gt;\Omega&amp;lt;/math&amp;gt;. But in general, a probability space is well-defined by any &amp;lt;math&amp;gt;\Sigma&amp;lt;/math&amp;gt; satisfying (A1) and (A2). Such a &amp;lt;math&amp;gt;\Sigma&amp;lt;/math&amp;gt; is called a &amp;lt;math&amp;gt;\sigma&amp;lt;/math&amp;gt;-algebra defined on &amp;lt;math&amp;gt;\Omega&amp;lt;/math&amp;gt;.&lt;br /&gt;
* The last axiom (A5*) is redundant if &amp;lt;math&amp;gt;\Sigma&amp;lt;/math&amp;gt; is finite, thus it is only essential when there are infinitely many events. The role of axiom (A5*) in probability theory is like [http://en.wikipedia.org/wiki/Zorn&#039;s_lemma Zorn&#039;s Lemma] (or equivalently the [http://en.wikipedia.org/wiki/Axiom_of_choice Axiom of Choice]) in axiomatic set theory.&lt;br /&gt;
&lt;br /&gt;
Laws for probability can be deduced from the above axiom system. Denote &amp;lt;math&amp;gt;\bar{A}=\Omega-A&amp;lt;/math&amp;gt;.&lt;br /&gt;
{{Theorem|Proposition|&lt;br /&gt;
:&amp;lt;math&amp;gt;\Pr(\bar{A})=1-\Pr(A)&amp;lt;/math&amp;gt;.&lt;br /&gt;
}}&lt;br /&gt;
{{Proof|&lt;br /&gt;
Due to Axiom (A4), &amp;lt;math&amp;gt;\Pr(\bar{A})+\Pr(A)=\Pr(\Omega)&amp;lt;/math&amp;gt;, which equals 1 according to Axiom (A3); thus &amp;lt;math&amp;gt;\Pr(\bar{A})+\Pr(A)=1&amp;lt;/math&amp;gt;. The proposition follows.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Exercise: Deduce other useful laws for probability from the axioms. For example, &amp;lt;math&amp;gt;A\subseteq B\Longrightarrow\Pr(A)\le\Pr(B)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Notation =&lt;br /&gt;
An event &amp;lt;math&amp;gt;A\subseteq\Omega&amp;lt;/math&amp;gt; can be represented as &amp;lt;math&amp;gt;A=\{a\in\Omega\mid \mathcal{E}(a)\}&amp;lt;/math&amp;gt; with a predicate &amp;lt;math&amp;gt;\mathcal{E}&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
The predicate notation of probability is &lt;br /&gt;
:&amp;lt;math&amp;gt;\Pr[\mathcal{E}]=\Pr(\{a\in\Omega\mid \mathcal{E}(a)\})&amp;lt;/math&amp;gt;.&lt;br /&gt;
;Example&lt;br /&gt;
: We still consider the probability space defined by rolling a six-sided die. The sample space is &amp;lt;math&amp;gt;\Omega=\{1,2,3,4,5,6\}&amp;lt;/math&amp;gt;. Consider the event that the outcome is odd.&lt;br /&gt;
:: &amp;lt;math&amp;gt;\Pr[\text{ the outcome is odd }]=\Pr(\{1,3,5\})&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
During the lecture, we mostly use the predicate notation instead of subset notation.&lt;br /&gt;
&lt;br /&gt;
= The Union Bound =&lt;br /&gt;
We are familiar with the [http://en.wikipedia.org/wiki/Inclusion–exclusion_principle principle of inclusion-exclusion] for finite sets.&lt;br /&gt;
{{Theorem&lt;br /&gt;
|Principle of Inclusion-Exclusion|&lt;br /&gt;
:Let &amp;lt;math&amp;gt;S_1, S_2, \ldots, S_n&amp;lt;/math&amp;gt; be &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; finite sets. Then&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\left|\bigcup_{1\le i\le n}S_i\right|&lt;br /&gt;
&amp;amp;=&lt;br /&gt;
\sum_{i=1}^n|S_i|&lt;br /&gt;
-\sum_{i&amp;lt;j}|S_i\cap S_j|&lt;br /&gt;
+\sum_{i&amp;lt;j&amp;lt;k}|S_i\cap S_j\cap S_k|\\&lt;br /&gt;
&amp;amp; \quad -\cdots&lt;br /&gt;
+(-1)^{\ell-1}\sum_{i_1&amp;lt;i_2&amp;lt;\cdots&amp;lt;i_\ell}\left|\bigcap_{r=1}^\ell S_{i_r}\right|&lt;br /&gt;
+\cdots &lt;br /&gt;
+(-1)^{n-1} \left|\bigcap_{i=1}^n S_i\right|.&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
The principle can be generalized to probability events.&lt;br /&gt;
{{Theorem&lt;br /&gt;
|Principle of Inclusion-Exclusion for Probability|&lt;br /&gt;
:Let &amp;lt;math&amp;gt;\mathcal{E}_1, \mathcal{E}_2, \ldots, \mathcal{E}_n&amp;lt;/math&amp;gt; be &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; events. Then&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\Pr\left[\bigvee_{1\le i\le n}\mathcal{E}_i\right]&lt;br /&gt;
&amp;amp;=&lt;br /&gt;
\sum_{i=1}^n\Pr[\mathcal{E}_i]&lt;br /&gt;
-\sum_{i&amp;lt;j}\Pr[\mathcal{E}_i\wedge \mathcal{E}_j]&lt;br /&gt;
+\sum_{i&amp;lt;j&amp;lt;k}\Pr[\mathcal{E}_i\wedge \mathcal{E}_j\wedge \mathcal{E}_k]\\&lt;br /&gt;
&amp;amp; \quad -\cdots&lt;br /&gt;
+(-1)^{\ell-1}\sum_{i_1&amp;lt;i_2&amp;lt;\cdots&amp;lt;i_\ell}\Pr\left[\bigwedge_{r=1}^\ell \mathcal{E}_{i_r}\right]&lt;br /&gt;
+\cdots &lt;br /&gt;
+(-1)^{n-1}\Pr\left[\bigwedge_{i=1}^n \mathcal{E}_{i}\right].&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
We only prove the basic case for two events.&lt;br /&gt;
{{Theorem|Lemma|&lt;br /&gt;
:For any two events &amp;lt;math&amp;gt;\mathcal{E}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathcal{E}_2&amp;lt;/math&amp;gt;,&lt;br /&gt;
::&amp;lt;math&amp;gt;\Pr[\mathcal{E}_1\vee\mathcal{E}_2]=\Pr[\mathcal{E}_1]+\Pr[\mathcal{E}_2]-\Pr[\mathcal{E}_1\wedge\mathcal{E}_2]&amp;lt;/math&amp;gt;.&lt;br /&gt;
}}&lt;br /&gt;
{{Proof| The following identities are due to Axiom (A4).&lt;br /&gt;
:&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\Pr[\mathcal{E}_1]&lt;br /&gt;
&amp;amp;=\Pr[\mathcal{E}_1\wedge\neg(\mathcal{E}_1\wedge\mathcal{E}_2)]+\Pr[\mathcal{E}_1\wedge\mathcal{E}_2];\\&lt;br /&gt;
\Pr[\mathcal{E}_2]&lt;br /&gt;
&amp;amp;=\Pr[\mathcal{E}_2\wedge\neg(\mathcal{E}_1\wedge\mathcal{E}_2)]+\Pr[\mathcal{E}_1\wedge\mathcal{E}_2];\\&lt;br /&gt;
\Pr[\mathcal{E}_1\vee\mathcal{E}_2]&lt;br /&gt;
&amp;amp;=\Pr[\mathcal{E}_1\wedge\neg(\mathcal{E}_1\wedge\mathcal{E}_2)]+\Pr[\mathcal{E}_2\wedge\neg(\mathcal{E}_1\wedge\mathcal{E}_2)]+\Pr[\mathcal{E}_1\wedge\mathcal{E}_2].&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;&lt;br /&gt;
The lemma follows directly.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
A direct consequence of the lemma is the following theorem, the &#039;&#039;&#039;union bound&#039;&#039;&#039;.&lt;br /&gt;
{{Theorem&lt;br /&gt;
|Theorem (Union Bound)|&lt;br /&gt;
:Let &amp;lt;math&amp;gt;\mathcal{E}_1, \mathcal{E}_2, \ldots, \mathcal{E}_n&amp;lt;/math&amp;gt; be &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; events. Then&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\Pr\left[\bigvee_{1\le i\le n}\mathcal{E}_i\right]&lt;br /&gt;
&amp;amp;\le&lt;br /&gt;
\sum_{i=1}^n\Pr[\mathcal{E}_i].&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
The name of this inequality is [http://en.wikipedia.org/wiki/Boole&#039;s_inequality Boole&#039;s inequality]. It is usually referred to by its nickname, the &amp;quot;union bound&amp;quot;. The bound holds for arbitrary events, even if they are dependent. Due to this generality, the union bound is extremely useful in probabilistic analysis.&lt;br /&gt;
&lt;br /&gt;
= Independence =&lt;br /&gt;
{{Theorem&lt;br /&gt;
|Definition (Independent events)|&lt;br /&gt;
:Two events &amp;lt;math&amp;gt;\mathcal{E}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathcal{E}_2&amp;lt;/math&amp;gt; are &#039;&#039;&#039;independent&#039;&#039;&#039; if and only if &lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\Pr\left[\mathcal{E}_1 \wedge \mathcal{E}_2\right]&lt;br /&gt;
&amp;amp;=&lt;br /&gt;
\Pr[\mathcal{E}_1]\cdot\Pr[\mathcal{E}_2].&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
This definition can be generalized to any number of events:&lt;br /&gt;
{{Theorem&lt;br /&gt;
|Definition (Independent events)|&lt;br /&gt;
:Events &amp;lt;math&amp;gt;\mathcal{E}_1, \mathcal{E}_2, \ldots, \mathcal{E}_n&amp;lt;/math&amp;gt; are &#039;&#039;&#039;mutually independent&#039;&#039;&#039; if and only if, for any subset &amp;lt;math&amp;gt;I\subseteq\{1,2,\ldots,n\}&amp;lt;/math&amp;gt;,&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\Pr\left[\bigwedge_{i\in I}\mathcal{E}_i\right]&lt;br /&gt;
&amp;amp;=&lt;br /&gt;
\prod_{i\in I}\Pr[\mathcal{E}_i].&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Note that in probability theory, &amp;quot;mutual independence&amp;quot; is &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;not&amp;lt;/font&amp;gt; equivalent to &amp;quot;pairwise independence&amp;quot;, a distinction we will study later.&lt;/div&gt;</summary>
		<author><name>114.212.208.2</name></author>
	</entry>
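The two-event inclusion-exclusion lemma and the union bound from the entries above can be checked exactly on the six-sided-die sample space. A small Python verification, not part of the original notes; the event names are illustrative:

```python
from fractions import Fraction

omega = {1, 2, 3, 4, 5, 6}  # rolling a fair six-sided die

def pr(A):
    """Pr(A) = |A| / 6 for an event A, as an exact fraction."""
    return Fraction(len(A & omega), len(omega))

odd = {1, 3, 5}  # the outcome is odd
big = {4, 5, 6}  # the outcome is at least 4

# Inclusion-exclusion for two events:
#   Pr(E1 v E2) = Pr(E1) + Pr(E2) - Pr(E1 ^ E2)
assert pr(odd | big) == pr(odd) + pr(big) - pr(odd & big)

# Union bound (Boole's inequality); the slack is exactly Pr(E1 ^ E2):
assert pr(odd | big) <= pr(odd) + pr(big)

print(pr(odd | big))  # 5/6
```

Here the union bound overestimates by <math>\Pr(\{5\})=1/6</math>, which is precisely the pairwise intersection term that inclusion-exclusion subtracts back.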
	<entry>
		<id>https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)/Probability_Space&amp;diff=4658</id>
		<title>随机算法 (Fall 2011)/Probability Space</title>
		<link rel="alternate" type="text/html" href="https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)/Probability_Space&amp;diff=4658"/>
		<updated>2011-07-22T15:18:08Z</updated>

		<summary type="html">&lt;p&gt;114.212.208.2: /* Axioms of Probability */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Axioms of Probability=&lt;br /&gt;
The axiomatic foundation of probability theory was laid by [http://en.wikipedia.org/wiki/Andrey_Kolmogorov Kolmogorov], one of the greatest mathematicians of the 20th century, who advanced many very different fields of mathematics.&lt;br /&gt;
&lt;br /&gt;
{{Theorem|Definition (Probability Space)|&lt;br /&gt;
A &#039;&#039;&#039;probability space&#039;&#039;&#039; is a triple &amp;lt;math&amp;gt;(\Omega,\Sigma,\Pr)&amp;lt;/math&amp;gt;. &lt;br /&gt;
*&amp;lt;math&amp;gt;\Omega&amp;lt;/math&amp;gt; is a set, called the &#039;&#039;&#039;sample space&#039;&#039;&#039;. &lt;br /&gt;
*&amp;lt;math&amp;gt;\Sigma\subseteq 2^{\Omega}&amp;lt;/math&amp;gt; is the set of all &#039;&#039;&#039;events&#039;&#039;&#039;, satisfying:&lt;br /&gt;
*:(A1). &amp;lt;math&amp;gt;\Omega\in\Sigma&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\empty\in\Sigma&amp;lt;/math&amp;gt;. (The &#039;&#039;certain&#039;&#039; event and the &#039;&#039;impossible&#039;&#039; event.)&lt;br /&gt;
*:(A2). If &amp;lt;math&amp;gt;A,B\in\Sigma&amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt;A\cap B, A\cup B, A-B\in\Sigma&amp;lt;/math&amp;gt;. (The intersection, union, and difference of two events are events.)&lt;br /&gt;
* A &#039;&#039;&#039;probability measure&#039;&#039;&#039; &amp;lt;math&amp;gt;\Pr:\Sigma\rightarrow\mathbb{R}&amp;lt;/math&amp;gt; is a function that maps each event to a nonnegative real number, satisfying&lt;br /&gt;
*:(A3). &amp;lt;math&amp;gt;\Pr(\Omega)=1&amp;lt;/math&amp;gt;.&lt;br /&gt;
*:(A4). If &amp;lt;math&amp;gt;A\cap B=\emptyset&amp;lt;/math&amp;gt; (such events are called &#039;&#039;disjoint&#039;&#039; events), then &amp;lt;math&amp;gt;\Pr(A\cup B)=\Pr(A)+\Pr(B)&amp;lt;/math&amp;gt;. &lt;br /&gt;
*:(A5*). For a decreasing sequence of events &amp;lt;math&amp;gt;A_1\supset A_2\supset \cdots\supset A_n\supset\cdots&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;\bigcap_n A_n=\emptyset&amp;lt;/math&amp;gt;, it holds that &amp;lt;math&amp;gt;\lim_{n\rightarrow \infty}\Pr(A_n)=0&amp;lt;/math&amp;gt;.&lt;br /&gt;
}}&lt;br /&gt;
The sample space &amp;lt;math&amp;gt;\Omega&amp;lt;/math&amp;gt; is the set of all possible outcomes of the random process modeled by the probability space. An event is a subset of &amp;lt;math&amp;gt;\Omega&amp;lt;/math&amp;gt;. The statements (A1)--(A5) are axioms of probability. A probability space is well defined as long as these axioms are satisfied.&lt;br /&gt;
;Example&lt;br /&gt;
:Consider the probability space defined by rolling a die with six faces. The sample space is &amp;lt;math&amp;gt;\Omega=\{1,2,3,4,5,6\}&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\Sigma=2^{\Omega}&amp;lt;/math&amp;gt;. For any event &amp;lt;math&amp;gt;A\in\Sigma&amp;lt;/math&amp;gt;, its probability is given by &amp;lt;math&amp;gt;\Pr(A)=\frac{|A|}{6}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
;Remark&lt;br /&gt;
* In general, the set &amp;lt;math&amp;gt;\Omega&amp;lt;/math&amp;gt; may be continuous, but we only consider &#039;&#039;&#039;discrete&#039;&#039;&#039; probability in this lecture; thus we assume that &amp;lt;math&amp;gt;\Omega&amp;lt;/math&amp;gt; is either finite or countably infinite.&lt;br /&gt;
* In many cases (such as the above example), &amp;lt;math&amp;gt;\Sigma=2^{\Omega}&amp;lt;/math&amp;gt;, i.e. the events enumerate all subsets of &amp;lt;math&amp;gt;\Omega&amp;lt;/math&amp;gt;. But in general, a probability space is well-defined by any &amp;lt;math&amp;gt;\Sigma&amp;lt;/math&amp;gt; satisfying (A1) and (A2). Such a &amp;lt;math&amp;gt;\Sigma&amp;lt;/math&amp;gt; is called a &amp;lt;math&amp;gt;\sigma&amp;lt;/math&amp;gt;-algebra defined on &amp;lt;math&amp;gt;\Omega&amp;lt;/math&amp;gt;.&lt;br /&gt;
* The last axiom (A5*) is redundant if &amp;lt;math&amp;gt;\Sigma&amp;lt;/math&amp;gt; is finite, thus it is only essential when there are infinitely many events. The role of axiom (A5*) in probability theory is like [http://en.wikipedia.org/wiki/Zorn&#039;s_lemma Zorn&#039;s Lemma] (or equivalently the [http://en.wikipedia.org/wiki/Axiom_of_choice Axiom of Choice]) in axiomatic set theory.&lt;br /&gt;
&lt;br /&gt;
Laws for probability can be deduced from the above axiom system. Denote &amp;lt;math&amp;gt;\bar{A}=\Omega-A&amp;lt;/math&amp;gt;.&lt;br /&gt;
{{Theorem|Proposition|&lt;br /&gt;
:&amp;lt;math&amp;gt;\Pr(\bar{A})=1-\Pr(A)&amp;lt;/math&amp;gt;.&lt;br /&gt;
}}&lt;br /&gt;
{{Proof|&lt;br /&gt;
Due to Axiom (A4), &amp;lt;math&amp;gt;\Pr(\bar{A})+\Pr(A)=\Pr(\Omega)&amp;lt;/math&amp;gt;, which equals 1 according to Axiom (A3); thus &amp;lt;math&amp;gt;\Pr(\bar{A})+\Pr(A)=1&amp;lt;/math&amp;gt;. The proposition follows.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Exercise: Deduce other useful laws for probability from the axioms. For example, &amp;lt;math&amp;gt;A\subseteq B\Longrightarrow\Pr(A)\le\Pr(B)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
= Notation =&lt;br /&gt;
An event &amp;lt;math&amp;gt;A\subseteq\Omega&amp;lt;/math&amp;gt; can be represented as &amp;lt;math&amp;gt;A=\{a\in\Omega\mid \mathcal{E}(a)\}&amp;lt;/math&amp;gt; with a predicate &amp;lt;math&amp;gt;\mathcal{E}&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
The predicate notation of probability is &lt;br /&gt;
:&amp;lt;math&amp;gt;\Pr[\mathcal{E}]=\Pr(\{a\in\Omega\mid \mathcal{E}(a)\})&amp;lt;/math&amp;gt;.&lt;br /&gt;
;Example&lt;br /&gt;
: We still consider the probability space defined by rolling a six-sided die. The sample space is &amp;lt;math&amp;gt;\Omega=\{1,2,3,4,5,6\}&amp;lt;/math&amp;gt;. Consider the event that the outcome is odd.&lt;br /&gt;
:: &amp;lt;math&amp;gt;\Pr[\text{ the outcome is odd }]=\Pr(\{1,3,5\})&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
During the lecture, we mostly use the predicate notation instead of subset notation.&lt;br /&gt;
&lt;br /&gt;
= The Union Bound =&lt;br /&gt;
We are familiar with the [http://en.wikipedia.org/wiki/Inclusion–exclusion_principle principle of inclusion-exclusion] for finite sets.&lt;br /&gt;
{{Theorem&lt;br /&gt;
|Principle of Inclusion-Exclusion|&lt;br /&gt;
:Let &amp;lt;math&amp;gt;S_1, S_2, \ldots, S_n&amp;lt;/math&amp;gt; be &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; finite sets. Then&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\left|\bigcup_{1\le i\le n}S_i\right|&lt;br /&gt;
&amp;amp;=&lt;br /&gt;
\sum_{i=1}^n|S_i|&lt;br /&gt;
-\sum_{i&amp;lt;j}|S_i\cap S_j|&lt;br /&gt;
+\sum_{i&amp;lt;j&amp;lt;k}|S_i\cap S_j\cap S_k|\\&lt;br /&gt;
&amp;amp; \quad -\cdots&lt;br /&gt;
+(-1)^{\ell-1}\sum_{i_1&amp;lt;i_2&amp;lt;\cdots&amp;lt;i_\ell}\left|\bigcap_{r=1}^\ell S_{i_r}\right|&lt;br /&gt;
+\cdots &lt;br /&gt;
+(-1)^{n-1} \left|\bigcap_{i=1}^n S_i\right|.&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
The principle can be generalized to probability events.&lt;br /&gt;
{{Theorem&lt;br /&gt;
|Principle of Inclusion-Exclusion for Probability|&lt;br /&gt;
:Let &amp;lt;math&amp;gt;\mathcal{E}_1, \mathcal{E}_2, \ldots, \mathcal{E}_n&amp;lt;/math&amp;gt; be &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; events. Then&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\Pr\left[\bigvee_{1\le i\le n}\mathcal{E}_i\right]&lt;br /&gt;
&amp;amp;=&lt;br /&gt;
\sum_{i=1}^n\Pr[\mathcal{E}_i]&lt;br /&gt;
-\sum_{i&amp;lt;j}\Pr[\mathcal{E}_i\wedge \mathcal{E}_j]&lt;br /&gt;
+\sum_{i&amp;lt;j&amp;lt;k}\Pr[\mathcal{E}_i\wedge \mathcal{E}_j\wedge \mathcal{E}_k]\\&lt;br /&gt;
&amp;amp; \quad -\cdots&lt;br /&gt;
+(-1)^{\ell-1}\sum_{i_1&amp;lt;i_2&amp;lt;\cdots&amp;lt;i_\ell}\Pr\left[\bigwedge_{r=1}^\ell \mathcal{E}_{i_r}\right]&lt;br /&gt;
+\cdots &lt;br /&gt;
+(-1)^{n-1}\Pr\left[\bigwedge_{i=1}^n \mathcal{E}_{i}\right].&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
We only prove the basic case for two events.&lt;br /&gt;
{{Theorem|Lemma|&lt;br /&gt;
:For any two events &amp;lt;math&amp;gt;\mathcal{E}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathcal{E}_2&amp;lt;/math&amp;gt;,&lt;br /&gt;
::&amp;lt;math&amp;gt;\Pr[\mathcal{E}_1\vee\mathcal{E}_2]=\Pr[\mathcal{E}_1]+\Pr[\mathcal{E}_2]-\Pr[\mathcal{E}_1\wedge\mathcal{E}_2]&amp;lt;/math&amp;gt;.&lt;br /&gt;
}}&lt;br /&gt;
{{Proof| The following identities are due to Axiom (A4).&lt;br /&gt;
:&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\Pr[\mathcal{E}_1]&lt;br /&gt;
&amp;amp;=\Pr[\mathcal{E}_1\wedge\neg(\mathcal{E}_1\wedge\mathcal{E}_2)]+\Pr[\mathcal{E}_1\wedge\mathcal{E}_2];\\&lt;br /&gt;
\Pr[\mathcal{E}_2]&lt;br /&gt;
&amp;amp;=\Pr[\mathcal{E}_2\wedge\neg(\mathcal{E}_1\wedge\mathcal{E}_2)]+\Pr[\mathcal{E}_1\wedge\mathcal{E}_2];\\&lt;br /&gt;
\Pr[\mathcal{E}_1\vee\mathcal{E}_2]&lt;br /&gt;
&amp;amp;=\Pr[\mathcal{E}_1\wedge\neg(\mathcal{E}_1\wedge\mathcal{E}_2)]+\Pr[\mathcal{E}_2\wedge\neg(\mathcal{E}_1\wedge\mathcal{E}_2)]+\Pr[\mathcal{E}_1\wedge\mathcal{E}_2].&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;&lt;br /&gt;
The lemma follows directly.&lt;br /&gt;
}}&lt;br /&gt;
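The general inclusion-exclusion identity can be verified by brute force on a small finite sample space. A minimal sketch (Python; the two-dice sample space and the three events are illustrative assumptions, not part of the lecture):

```python
from fractions import Fraction
from itertools import combinations, product

# Sample space: two fair dice, uniform probability measure.
omega = list(product(range(1, 7), repeat=2))

def pr(event):
    """Probability of an event (a predicate on outcomes) under the uniform measure."""
    return Fraction(sum(1 for w in omega if event(w)), len(omega))

# Three dependent events (hypothetical, for illustration only).
E = [
    lambda w: w[0] == 6,          # first die shows 6
    lambda w: w[1] == 6,          # second die shows 6
    lambda w: w[0] + w[1] >= 10,  # sum is at least 10
]

# Left-hand side: probability of the union.
lhs = pr(lambda w: any(e(w) for e in E))

# Right-hand side: the alternating inclusion-exclusion sum over all nonempty subsets.
rhs = Fraction(0)
for ell in range(1, len(E) + 1):
    for sub in combinations(E, ell):
        rhs += (-1) ** (ell - 1) * pr(lambda w, sub=sub: all(e(w) for e in sub))

assert lhs == rhs
print(lhs)  # 1/3 for these particular events
```

Exact rational arithmetic via `Fraction` avoids any floating-point tolerance in the comparison.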
&lt;br /&gt;
A direct consequence of the lemma is the following theorem, the &#039;&#039;&#039;union bound&#039;&#039;&#039;.&lt;br /&gt;
{{Theorem&lt;br /&gt;
|Theorem (Union Bound)|&lt;br /&gt;
:Let &amp;lt;math&amp;gt;\mathcal{E}_1, \mathcal{E}_2, \ldots, \mathcal{E}_n&amp;lt;/math&amp;gt; be &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; events. Then&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\Pr\left[\bigvee_{1\le i\le n}\mathcal{E}_i\right]&lt;br /&gt;
&amp;amp;\le&lt;br /&gt;
\sum_{i=1}^n\Pr[\mathcal{E}_i].&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
The name of this inequality is [http://en.wikipedia.org/wiki/Boole&#039;s_inequality Boole&#039;s inequality]. It is usually referred to by its nickname, the &amp;quot;union bound&amp;quot;. The bound holds for arbitrary events, even when they are dependent. Due to this generality, the union bound is extremely useful in probabilistic analysis.&lt;br /&gt;
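As a quick sanity check, the union bound may overshoot the true probability but always upper-bounds it, even for strongly dependent events. A small sketch (Python; the coin-flip sample space and events are illustrative assumptions):

```python
from fractions import Fraction
from itertools import product

# Sample space: three fair coin flips, uniform measure.
omega = list(product('HT', repeat=3))

def pr(event):
    """Probability of an event (a predicate on outcomes) under the uniform measure."""
    return Fraction(sum(1 for w in omega if event(w)), len(omega))

# Dependent events: "flip i is heads" (they share the same coin sequence).
events = [lambda w, i=i: w[i] == 'H' for i in range(3)]

union = pr(lambda w: any(e(w) for e in events))  # Pr[at least one head] = 7/8
bound = sum(pr(e) for e in events)               # 3 * (1/2) = 3/2, exceeds 1

assert union <= bound
```

Note the bound `3/2` is trivial here (probabilities never exceed 1), which is typical: the union bound trades tightness for complete generality.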
&lt;br /&gt;
= Independence =&lt;br /&gt;
{{Theorem&lt;br /&gt;
|Definition (Independent events)|&lt;br /&gt;
:Two events &amp;lt;math&amp;gt;\mathcal{E}_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\mathcal{E}_2&amp;lt;/math&amp;gt; are &#039;&#039;&#039;independent&#039;&#039;&#039; if and only if &lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\Pr\left[\mathcal{E}_1 \wedge \mathcal{E}_2\right]&lt;br /&gt;
&amp;amp;=&lt;br /&gt;
\Pr[\mathcal{E}_1]\cdot\Pr[\mathcal{E}_2].&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
This definition can be generalized to any number of events:&lt;br /&gt;
{{Theorem&lt;br /&gt;
|Definition (Independent events)|&lt;br /&gt;
:Events &amp;lt;math&amp;gt;\mathcal{E}_1, \mathcal{E}_2, \ldots, \mathcal{E}_n&amp;lt;/math&amp;gt; are &#039;&#039;&#039;mutually independent&#039;&#039;&#039; if and only if, for any subset &amp;lt;math&amp;gt;I\subseteq\{1,2,\ldots,n\}&amp;lt;/math&amp;gt;,&lt;br /&gt;
::&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\Pr\left[\bigwedge_{i\in I}\mathcal{E}_i\right]&lt;br /&gt;
&amp;amp;=&lt;br /&gt;
\prod_{i\in I}\Pr[\mathcal{E}_i].&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
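The definition distinguishes mutual independence from mere pairwise independence: events can satisfy the product rule for every pair yet violate it for the full collection. A brute-force sketch (Python; the two-coin XOR-style construction is a standard example, stated here as an assumption rather than taken from the notes):

```python
from fractions import Fraction
from itertools import combinations, product

# Sample space: two fair coin flips, encoded as bits, uniform measure.
omega = list(product([0, 1], repeat=2))

def pr(event):
    """Probability of an event (a predicate on outcomes) under the uniform measure."""
    return Fraction(sum(1 for w in omega if event(w)), len(omega))

A = lambda w: w[0] == 1     # first flip is heads
B = lambda w: w[1] == 1     # second flip is heads
C = lambda w: w[0] == w[1]  # the two flips agree
events = [A, B, C]

# Every pair satisfies Pr[X and Y] = Pr[X] * Pr[Y] ...
for X, Y in combinations(events, 2):
    assert pr(lambda w: X(w) and Y(w)) == pr(X) * pr(Y)

# ... but the triple does not: Pr[A and B and C] = 1/4, not (1/2)^3 = 1/8.
triple = pr(lambda w: A(w) and B(w) and C(w))
assert triple != pr(A) * pr(B) * pr(C)
```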
Note that in probability theory, &amp;quot;mutual independence&amp;quot; is &amp;lt;font color=&amp;quot;red&amp;quot;&amp;gt;not&amp;lt;/font&amp;gt; equivalent to &amp;quot;pair-wise independence&amp;quot;, which we will study later.&lt;/div&gt;</summary>
		<author><name>114.212.208.2</name></author>
	</entry>
	<entry>
		<id>https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)/Course_materials&amp;diff=4626</id>
		<title>随机算法 (Fall 2011)/Course materials</title>
		<link rel="alternate" type="text/html" href="https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)/Course_materials&amp;diff=4626"/>
		<updated>2011-07-21T03:42:44Z</updated>

		<summary type="html">&lt;p&gt;114.212.208.2: Created page with &amp;#039;= Course textbook = {|border=&amp;quot;2&amp;quot;  cellspacing=&amp;quot;4&amp;quot; cellpadding=&amp;quot;3&amp;quot; rules=&amp;quot;all&amp;quot; style=&amp;quot;margin:1em 1em 1em 0; border:solid 1px #AAAAAA; border-collapse:collapse;empty-cells:show;&amp;quot; |…&amp;#039;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Course textbook =&lt;br /&gt;
{|border=&amp;quot;2&amp;quot;  cellspacing=&amp;quot;4&amp;quot; cellpadding=&amp;quot;3&amp;quot; rules=&amp;quot;all&amp;quot; style=&amp;quot;margin:1em 1em 1em 0; border:solid 1px #AAAAAA; border-collapse:collapse;empty-cells:show;&amp;quot;&lt;br /&gt;
|[[File:MR-randomized-algorithms.png‎|border|100px]]||&lt;br /&gt;
Rajeev Motwani and Prabhakar Raghavan, &#039;&#039;&#039;&#039;&#039;Randomized Algorithms&#039;&#039;&#039;&#039;&#039;. Cambridge University Press, 1995.&lt;br /&gt;
|-&lt;br /&gt;
|[[File:MR-randomized-algorithms.png‎|border|100px]]||&lt;br /&gt;
Michael Mitzenmacher and Eli Upfal, &#039;&#039;&#039;&#039;&#039;Probability and Computing: Randomized Algorithms and Probabilistic Analysis&#039;&#039;&#039;&#039;&#039;. Cambridge University Press, 2005.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
= References and further readings =&lt;br /&gt;
* Thomas Cormen, Charles Leiserson, Ronald Rivest, and Clifford Stein. &#039;&#039;Introduction to Algorithms&#039;&#039;, 2nd edition. MIT Press, 2001.&lt;br /&gt;
&lt;br /&gt;
* William Feller, &#039;&#039;An Introduction to Probability Theory and Its Applications&#039;&#039;, volume 1, 3rd edition. Wiley, 1968.&lt;br /&gt;
&lt;br /&gt;
* Noga Alon and Joel Spencer. &#039;&#039;The Probabilistic Method&#039;&#039;, 3rd edition. Wiley, 2008.&lt;br /&gt;
&lt;br /&gt;
* Olle Häggström, &#039;&#039;Finite Markov Chains and Algorithmic Applications&#039;&#039;. Cambridge University Press, 2002.&lt;br /&gt;
&lt;br /&gt;
* Alistair Sinclair, &amp;quot;Markov Chain Monte Carlo: Foundations and Applications&amp;quot;. Lecture Notes: http://www.cs.berkeley.edu/~sinclair/cs294/f09.html&lt;br /&gt;
&lt;br /&gt;
* Shlomo Hoory, Nathan Linial, and Avi Wigderson. &#039;&#039;Expander Graphs and Their Applications&#039;&#039;. American Mathematical Society, 2006. &lt;br /&gt;
&lt;br /&gt;
* Salil Vadhan, &amp;quot;Pseudorandomness&amp;quot;. Draft. http://people.seas.harvard.edu/~salil/pseudorandomness/&lt;/div&gt;</summary>
		<author><name>114.212.208.2</name></author>
	</entry>
	<entry>
		<id>https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4465</id>
		<title>随机算法 (Fall 2011)</title>
		<link rel="alternate" type="text/html" href="https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4465"/>
		<updated>2011-07-19T16:19:28Z</updated>

		<summary type="html">&lt;p&gt;114.212.208.2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox&lt;br /&gt;
|name         = Infobox&lt;br /&gt;
|bodystyle    = &lt;br /&gt;
|title        = 随机算法 &lt;br /&gt;
Randomized Algorithms&lt;br /&gt;
|titlestyle   = &lt;br /&gt;
&lt;br /&gt;
|image        = [[File:MR-randomized-algorithms.png|100px]]&lt;br /&gt;
|imagestyle   = &lt;br /&gt;
|caption      = &#039;&#039;Randomized Algorithms&#039;&#039; by Motwani and Raghavan&lt;br /&gt;
|captionstyle = &lt;br /&gt;
|headerstyle  = background:#ccf;&lt;br /&gt;
|labelstyle   = background:#ddf;&lt;br /&gt;
|datastyle    = &lt;br /&gt;
&lt;br /&gt;
|header1 =Instructor&lt;br /&gt;
|label1  = &lt;br /&gt;
|data1   = &lt;br /&gt;
|header2 = &lt;br /&gt;
|label2  = &lt;br /&gt;
|data2   = 尹一通&lt;br /&gt;
|header3 = &lt;br /&gt;
|label3  = Email&lt;br /&gt;
|data3   = yitong.yin@gmail.com  yinyt@nju.edu.cn  yinyt@lamda.nju.edu.cn&lt;br /&gt;
|header4 =&lt;br /&gt;
|label4= Office&lt;br /&gt;
|data4= MMW 406 (蒙民伟楼)&lt;br /&gt;
|header5 = Class&lt;br /&gt;
|label5  = &lt;br /&gt;
|data5   = &lt;br /&gt;
|header6 =&lt;br /&gt;
|label6  = Class meetings&lt;br /&gt;
|data6   = TBA &amp;lt;br&amp;gt; TBA&lt;br /&gt;
|header7 =&lt;br /&gt;
|label7  = Place&lt;br /&gt;
|data7   = &lt;br /&gt;
|header8 =&lt;br /&gt;
|label8  = Office hours&lt;br /&gt;
|data8   = TBA&lt;br /&gt;
|header9 = Textbook&lt;br /&gt;
|label9  = &lt;br /&gt;
|data9   = &lt;br /&gt;
|header10 =&lt;br /&gt;
|label10  = &lt;br /&gt;
|data10   = Motwani and Raghavan, &#039;&#039;Randomized Algorithms&#039;&#039;. Cambridge Univ Press, 1995.&lt;br /&gt;
&lt;br /&gt;
|belowstyle = background:#ddf;&lt;br /&gt;
|below = &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
This is the page for the class &#039;&#039;Randomized Algorithms&#039;&#039; for the Fall 2011 semester. Students who take this class should check this page periodically for content updates and new announcements. &lt;br /&gt;
&lt;br /&gt;
= Announcement = &lt;br /&gt;
There is no announcement yet.&lt;br /&gt;
&lt;br /&gt;
= Course info =&lt;br /&gt;
* &#039;&#039;&#039;Instructor&#039;&#039;&#039;: 尹一通&lt;br /&gt;
:*email: yitong.yin@gmail.com, yinyt@nju.edu.cn, yinyt@lamda.nju.edu.cn &lt;br /&gt;
:*office: MMW 406.&lt;br /&gt;
* &#039;&#039;&#039;Class meeting&#039;&#039;&#039;: TBA.&lt;br /&gt;
* &#039;&#039;&#039;Office hour&#039;&#039;&#039;: TBA.&lt;br /&gt;
&lt;br /&gt;
= Syllabus =&lt;br /&gt;
Randomization is one of the most important methods in modern computer science, and over the past two decades it has been applied widely across every area of the field. Behind these applications lie a number of common principles of randomization. In this course we describe these principles in the language of mathematics, covering the following topics:&lt;br /&gt;
* the design ideas and theoretical analysis of several important randomized algorithms;&lt;br /&gt;
* probabilistic tools and their applications in algorithm analysis, including commonly used probability inequalities and the probabilistic method of mathematical proof;&lt;br /&gt;
* probabilistic models of randomized algorithms, including typical randomized-algorithm models and probabilistic complexity models.&lt;br /&gt;
As a theory course, this class emphasizes mathematical analysis and proof. This is not rigor for its own sake: solving problems in smarter ways often requires mathematical thinking and insight of real depth.&lt;br /&gt;
&lt;br /&gt;
=== Prerequisites ===&lt;br /&gt;
* Required: discrete mathematics, probability theory.&lt;br /&gt;
* Recommended: algorithm design and analysis.&lt;br /&gt;
&lt;br /&gt;
=== Course materials ===&lt;br /&gt;
* [[随机算法 (Fall 2011)/Course materials|Textbook and reference list]]&lt;br /&gt;
&lt;br /&gt;
=== Policies ===&lt;br /&gt;
* [[随机算法 (Fall 2011)/Policies|Policies]]&lt;br /&gt;
&lt;br /&gt;
= Assignments =&lt;br /&gt;
There are no assignments yet.&lt;br /&gt;
&lt;br /&gt;
= Lecture Notes =&lt;br /&gt;
# Introduction&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Algorithms: an Introduction|Randomized Algorithms: an Introduction]]&lt;br /&gt;
#*[[随机算法 (Fall 2011)/Complexity Classes|Complexity Classes]]&lt;br /&gt;
# Probability Basics&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Probability Space|Probability Space]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Verifying Matrix Multiplication|Verifying Matrix Multiplication]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Conditional Probability|Conditional Probability]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Min-Cut|Randomized Min-Cut]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Variables and Expectations|Random Variables and Expectations]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Quicksort|Randomized Quicksort]]&lt;br /&gt;
# Balls and Bins&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Distributions of Coin Flipping|Distributions of Coin Flipping]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Birthday Problem|Birthday Problem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Coupon Collector|Coupon Collector]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Balls-into-balls Occupancy Problem|Balls-into-balls Occupancy Problem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Bloom Filter|Bloom Filter]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Stable Marriage|Stable Marriage]]&lt;br /&gt;
# Moment and Deviation&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Markov&#039;s Inequality|Markov&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chebyshev&#039;s Inequality|Chebyshev&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Median Selection|Median Selection]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Graphs|Random Graphs]]&lt;br /&gt;
# Hashing and Fingerprinting&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Pair-wise Independence|Pair-wise Independence]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Derandomization: Two-Point Sampling|Derandomization: Two-Point Sampling]]&lt;br /&gt;
# Chernoff Bound&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chernoff Bound|Chernoff Bound]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Set Balancing|Set Balancing]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/DNF Counting|DNF Counting]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Routing in a Parallel Network|Routing in a Parallel Network]]&lt;br /&gt;
# Concentration of Measure&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Martingales|Martingales]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Azuma&#039;s Inequality|Azuma&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Method of Bounded Differences|The Method of Bounded Differences]]&lt;br /&gt;
# Dimension Reduction&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Johnson-Lindenstrauss Theorem|Johnson-Lindenstrauss Theorem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Locality Sensitive Hashing|Locality Sensitive Hashing]]&lt;br /&gt;
# The Probabilistic Method&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Probabilistic Method|The Probabilistic Method]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Lovász Local Lemma|Lovász Local Lemma]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Derandomization: Conditional Expectation|Derandomization: Conditional Expectation]] &lt;br /&gt;
#* [[随机算法 (Fall 2011)/Derandomization: Color-coding|Derandomization: Color-coding]] &lt;br /&gt;
# Approximation Algorithms, On-line Algorithms&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Max-SAT|Max-SAT]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Linear Programming|Linear Programming]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Rounding|Randomized Rounding]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/On-line Algorithms|On-line Algorithms]]&lt;br /&gt;
# Markov Chain and Random Walk&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Markov Chains|Markov Chains]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Walks on Undirected Graphs|Random Walks on Undirected Graphs]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Electrical Network|Electrical Network]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Cover Time|Cover Time]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Connectivity|Graph Connectivity]]&lt;br /&gt;
# Random Walk Algorithms&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized 2SAT|Randomized 2SAT]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized 3SAT|Randomized 3SAT]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Perfect Matching in Regular Bipartite Graph|Perfect Matching in Regular Bipartite Graph]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Metropolis Algorithm|The Metropolis Algorithm]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Dynamics on Spins|Dynamics on Spins]]&lt;br /&gt;
# Coupling and Mixing Time&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Mixing Time|Mixing Time]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Coupling|Coupling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Card Shuffling|Card Shuffling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Path Coupling|Path Coupling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Coloring|Graph Coloring]]&lt;br /&gt;
# Expander Graphs I&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Expander Graphs|Expander Graphs]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Spectrum|Graph Spectrum]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Spectral Gap|The Spectral Gap]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Walk on Expander Graph|Random Walk on Expander Graph]]&lt;br /&gt;
# Expander Graphs II&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Expander Mixing Lemma|Expander Mixing Lemma]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chernoff Bound for Expander Walks|Chernoff Bound for Expander Walks]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Zig-Zag Product|The Zig-Zag Product]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/USTCON in LOGSPACE|USTCON in LOGSPACE]]&lt;br /&gt;
# Sampling and Counting&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The #P Class and Approximation|The #P Class and Approximation]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/DNF Counting|DNF Counting]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Canonical Paths|Canonical Paths]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Count Matchings|Count Matchings]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Sampling and Counting|Sampling and Counting]]&lt;br /&gt;
# MCMC&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Spin Systems|Spin Systems]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Simulated Annealing|Simulated Annealing]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Volume Estimation|Volume Estimation]]&lt;br /&gt;
# Complexity&lt;br /&gt;
&lt;br /&gt;
= The Probability Theory Toolkit =&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Expected_value#Linearity Linearity of expectation]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Independence_(probability_theory)#Independent_events Independent events] and [http://en.wikipedia.org/wiki/Conditional_independence conditional independence] &lt;br /&gt;
* [http://en.wikipedia.org/wiki/Conditional_probability Conditional probability] and [http://en.wikipedia.org/wiki/Conditional_expectation conditional expectation] &lt;br /&gt;
* The [http://en.wikipedia.org/wiki/Law_of_total_probability law of total probability] and the [http://en.wikipedia.org/wiki/Law_of_total_expectation law of total expectation] &lt;br /&gt;
* The [http://en.wikipedia.org/wiki/Boole&#039;s_inequality union bound] &lt;br /&gt;
* [http://en.wikipedia.org/wiki/Bernoulli_trial Bernoulli trials] &lt;br /&gt;
* [http://en.wikipedia.org/wiki/Geometric_distribution Geometric distribution]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Binomial_distribution Binomial distribution]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Markov&#039;s_inequality Markov&#039;s inequality]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Chebyshev&#039;s_inequality Chebyshev&#039;s inequality]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Chernoff_bound Chernoff bound]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Pairwise_independence k-wise independence]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Martingale_(probability_theory) Martingale]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Azuma&#039;s_inequality Azuma&#039;s inequality] and [http://en.wikipedia.org/wiki/Hoeffding&#039;s_inequality Hoeffding&#039;s inequality]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Doob_martingale Doob martingale] &lt;br /&gt;
* The [http://en.wikipedia.org/wiki/Probabilistic_method  probabilistic method] &lt;br /&gt;
* The [http://en.wikipedia.org/wiki/Lov%C3%A1sz_local_lemma  Lovász local lemma]  and the [http://en.wikipedia.org/wiki/Algorithmic_Lov%C3%A1sz_local_lemma algorithmic Lovász local lemma] &lt;br /&gt;
* [http://en.wikipedia.org/wiki/Markov_chain Markov chain]: &lt;br /&gt;
::[http://en.wikipedia.org/wiki/Markov_chain#Reducibility reducibility], [http://en.wikipedia.org/wiki/Markov_chain#Periodicity Periodicity], [http://en.wikipedia.org/wiki/Markov_chain#Steady-state_analysis_and_limiting_distributions stationary distribution], [http://en.wikipedia.org/wiki/Hitting_time hitting time], cover time; &lt;br /&gt;
::[http://en.wikipedia.org/wiki/Markov_chain_mixing_time mixing time], [http://en.wikipedia.org/wiki/Conductance_(probability) conductance]&lt;/div&gt;</summary>
		<author><name>114.212.208.2</name></author>
	</entry>
	<entry>
		<id>https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4464</id>
		<title>随机算法 (Fall 2011)</title>
		<link rel="alternate" type="text/html" href="https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4464"/>
		<updated>2011-07-19T16:19:08Z</updated>

		<summary type="html">&lt;p&gt;114.212.208.2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox&lt;br /&gt;
|name         = Infobox&lt;br /&gt;
|bodystyle    = &lt;br /&gt;
|title        = 随机算法 &lt;br /&gt;
Randomized Algorithms&lt;br /&gt;
|titlestyle   = &lt;br /&gt;
&lt;br /&gt;
|image        = [[File:MR-randomized-algorithms.png|100px]]&lt;br /&gt;
|imagestyle   = &lt;br /&gt;
|caption      = &#039;&#039;Randomized Algorithms&#039;&#039; by Motwani and Raghavan&lt;br /&gt;
|captionstyle = &lt;br /&gt;
|headerstyle  = background:#ccf;&lt;br /&gt;
|labelstyle   = background:#ddf;&lt;br /&gt;
|datastyle    = &lt;br /&gt;
&lt;br /&gt;
|header1 =Instructor&lt;br /&gt;
|label1  = &lt;br /&gt;
|data1   = &lt;br /&gt;
|header2 = &lt;br /&gt;
|label2  = &lt;br /&gt;
|data2   = 尹一通&lt;br /&gt;
|header3 = &lt;br /&gt;
|label3  = Email&lt;br /&gt;
|data3   = yitong.yin@gmail.com  yinyt@nju.edu.cn  yinyt@lamda.nju.edu.cn&lt;br /&gt;
|header4 =&lt;br /&gt;
|label4= Office&lt;br /&gt;
|data4= MMW 406 (蒙民伟楼)&lt;br /&gt;
|header5 = Class&lt;br /&gt;
|label5  = &lt;br /&gt;
|data5   = &lt;br /&gt;
|header6 =&lt;br /&gt;
|label6  = Class meetings&lt;br /&gt;
|data6   = TBA &amp;lt;br&amp;gt; TBA&lt;br /&gt;
|header7 =&lt;br /&gt;
|label7  = Place&lt;br /&gt;
|data7   = &lt;br /&gt;
|header8 =&lt;br /&gt;
|label8  = Office hours&lt;br /&gt;
|data8   = TBA&lt;br /&gt;
|header9 = Textbook&lt;br /&gt;
|label9  = &lt;br /&gt;
|data9   = &lt;br /&gt;
|header10 =&lt;br /&gt;
|label10  = &lt;br /&gt;
|data10   = Motwani and Raghavan, &#039;&#039;Randomized Algorithms&#039;&#039;. Cambridge Univ Press, 1995.&lt;br /&gt;
&lt;br /&gt;
|belowstyle = background:#ddf;&lt;br /&gt;
|below = &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
This is the page for the class &#039;&#039;Randomized Algorithms&#039;&#039; for the Fall 2011 semester. Students who take this class should check this page periodically for content updates and new announcements. &lt;br /&gt;
&lt;br /&gt;
There is also a backup page for off-campus users. The URL is http://lamda.nju.edu.cn/yinyt/random2011wiki/&lt;br /&gt;
&lt;br /&gt;
= Announcement = &lt;br /&gt;
There is no announcement yet.&lt;br /&gt;
&lt;br /&gt;
= Course info =&lt;br /&gt;
* &#039;&#039;&#039;Instructor&#039;&#039;&#039;: 尹一通&lt;br /&gt;
:*email: yitong.yin@gmail.com, yinyt@nju.edu.cn, yinyt@lamda.nju.edu.cn &lt;br /&gt;
:*office: MMW 406.&lt;br /&gt;
* &#039;&#039;&#039;Class meeting&#039;&#039;&#039;: TBA.&lt;br /&gt;
* &#039;&#039;&#039;Office hour&#039;&#039;&#039;: TBA.&lt;br /&gt;
&lt;br /&gt;
= Syllabus =&lt;br /&gt;
Randomization is one of the most important methods in modern computer science, and over the past two decades it has been applied widely across every area of the field. Behind these applications lie a number of common principles of randomization. In this course we describe these principles in the language of mathematics, covering the following topics:&lt;br /&gt;
* the design ideas and theoretical analysis of several important randomized algorithms;&lt;br /&gt;
* probabilistic tools and their applications in algorithm analysis, including commonly used probability inequalities and the probabilistic method of mathematical proof;&lt;br /&gt;
* probabilistic models of randomized algorithms, including typical randomized-algorithm models and probabilistic complexity models.&lt;br /&gt;
As a theory course, this class emphasizes mathematical analysis and proof. This is not rigor for its own sake: solving problems in smarter ways often requires mathematical thinking and insight of real depth.&lt;br /&gt;
&lt;br /&gt;
=== Prerequisites ===&lt;br /&gt;
* Required: discrete mathematics, probability theory.&lt;br /&gt;
* Recommended: algorithm design and analysis.&lt;br /&gt;
&lt;br /&gt;
=== Course materials ===&lt;br /&gt;
* [[随机算法 (Fall 2011)/Course materials|Textbook and reference list]]&lt;br /&gt;
&lt;br /&gt;
=== Policies ===&lt;br /&gt;
* [[随机算法 (Fall 2011)/Policies|Policies]]&lt;br /&gt;
&lt;br /&gt;
= Assignments =&lt;br /&gt;
There are no assignments yet.&lt;br /&gt;
&lt;br /&gt;
= Lecture Notes =&lt;br /&gt;
# Introduction&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Algorithms: an Introduction|Randomized Algorithms: an Introduction]]&lt;br /&gt;
#*[[随机算法 (Fall 2011)/Complexity Classes|Complexity Classes]]&lt;br /&gt;
# Probability Basics&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Probability Space|Probability Space]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Verifying Matrix Multiplication|Verifying Matrix Multiplication]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Conditional Probability|Conditional Probability]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Min-Cut|Randomized Min-Cut]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Variables and Expectations|Random Variables and Expectations]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Quicksort|Randomized Quicksort]]&lt;br /&gt;
# Balls and Bins&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Distributions of Coin Flipping|Distributions of Coin Flipping]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Birthday Problem|Birthday Problem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Coupon Collector|Coupon Collector]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Balls-into-balls Occupancy Problem|Balls-into-balls Occupancy Problem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Bloom Filter|Bloom Filter]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Stable Marriage|Stable Marriage]]&lt;br /&gt;
# Moment and Deviation&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Markov&#039;s Inequality|Markov&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chebyshev&#039;s Inequality|Chebyshev&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Median Selection|Median Selection]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Graphs|Random Graphs]]&lt;br /&gt;
# Hashing and Fingerprinting&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Pair-wise Independence|Pair-wise Independence]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Derandomization: Two-Point Sampling|Derandomization: Two-Point Sampling]]&lt;br /&gt;
# Chernoff Bound&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chernoff Bound|Chernoff Bound]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Set Balancing|Set Balancing]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/DNF Counting|DNF Counting]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Routing in a Parallel Network|Routing in a Parallel Network]]&lt;br /&gt;
# Concentration of Measure&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Martingales|Martingales]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Azuma&#039;s Inequality|Azuma&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Method of Bounded Differences|The Method of Bounded Differences]]&lt;br /&gt;
# Dimension Reduction&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Johnson-Lindenstrauss Theorem|Johnson-Lindenstrauss Theorem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Locality Sensitive Hashing|Locality Sensitive Hashing]]&lt;br /&gt;
# The Probabilistic Method&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Probabilistic Method|The Probabilistic Method]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Lovász Local Lemma|Lovász Local Lemma]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Derandomization: Conditional Expectation|Derandomization: Conditional Expectation]] &lt;br /&gt;
#* [[随机算法 (Fall 2011)/Derandomization: Color-coding|Derandomization: Color-coding]] &lt;br /&gt;
# Approximation Algorithms, On-line Algorithms&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Max-SAT|Max-SAT]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Linear Programming|Linear Programming]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Rounding|Randomized Rounding]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/On-line Algorithms|On-line Algorithms]]&lt;br /&gt;
# Markov Chain and Random Walk&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Markov Chains|Markov Chains]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Walks on Undirected Graphs|Random Walks on Undirected Graphs]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Electrical Network|Electrical Network]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Cover Time|Cover Time]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Connectivity|Graph Connectivity]]&lt;br /&gt;
# Random Walk Algorithms&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized 2SAT|Randomized 2SAT]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized 3SAT|Randomized 3SAT]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Perfect Matching in Regular Bipartite Graph|Perfect Matching in Regular Bipartite Graph]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Metropolis Algorithm|The Metropolis Algorithm]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Dynamics on Spins|Dynamics on Spins]]&lt;br /&gt;
# Coupling and Mixing Time&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Mixing Time|Mixing Time]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Coupling|Coupling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Card Shuffling|Card Shuffling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Path Coupling|Path Coupling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Coloring|Graph Coloring]]&lt;br /&gt;
# Expander Graphs I&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Expander Graphs|Expander Graphs]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Spectrum|Graph Spectrum]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Spectral Gap|The Spectral Gap]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Walk on Expander Graph|Random Walk on Expander Graph]]&lt;br /&gt;
# Expander Graphs II&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Expander Mixing Lemma|Expander Mixing Lemma]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chernoff Bound for Expander Walks|Chernoff Bound for Expander Walks]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Zig-Zag Product|The Zig-Zag Product]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/USTCON in LOGSPACE|USTCON in LOGSPACE]]&lt;br /&gt;
# Sampling and Counting&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The #P Class and Approximation|The #P Class and Approximation]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/DNF Counting|DNF Counting]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Canonical Paths|Canonical Paths]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Count Matchings|Count Matchings]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Sampling and Counting|Sampling and Counting]]&lt;br /&gt;
# MCMC&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Spin Systems|Spin Systems]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Simulated Annealing|Simulated Annealing]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Volume Estimation|Volume Estimation]]&lt;br /&gt;
# Complexity&lt;br /&gt;
&lt;br /&gt;
= The Probability Theory Toolkit =&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Expected_value#Linearity Linearity of expectation]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Independence_(probability_theory)#Independent_events Independent events] and [http://en.wikipedia.org/wiki/Conditional_independence conditional independence] &lt;br /&gt;
* [http://en.wikipedia.org/wiki/Conditional_probability Conditional probability] and [http://en.wikipedia.org/wiki/Conditional_expectation conditional expectation] &lt;br /&gt;
* The [http://en.wikipedia.org/wiki/Law_of_total_probability law of total probability] and the [http://en.wikipedia.org/wiki/Law_of_total_expectation law of total expectation] &lt;br /&gt;
* The [http://en.wikipedia.org/wiki/Boole&#039;s_inequality union bound] &lt;br /&gt;
* [http://en.wikipedia.org/wiki/Bernoulli_trial Bernoulli trials] &lt;br /&gt;
* [http://en.wikipedia.org/wiki/Geometric_distribution Geometric distribution]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Binomial_distribution Binomial distribution]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Markov&#039;s_inequality Markov&#039;s inequality]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Chebyshev&#039;s_inequality Chebyshev&#039;s inequality]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Chernoff_bound Chernoff bound]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Pairwise_independence k-wise independence]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Martingale_(probability_theory) Martingale]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Azuma&#039;s_inequality Azuma&#039;s inequality] and [http://en.wikipedia.org/wiki/Hoeffding&#039;s_inequality Hoeffding&#039;s inequality]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Doob_martingale Doob martingale] &lt;br /&gt;
* The [http://en.wikipedia.org/wiki/Probabilistic_method  probabilistic method] &lt;br /&gt;
* The [http://en.wikipedia.org/wiki/Lov%C3%A1sz_local_lemma  Lovász local lemma]  and the [http://en.wikipedia.org/wiki/Algorithmic_Lov%C3%A1sz_local_lemma algorithmic Lovász local lemma] &lt;br /&gt;
* [http://en.wikipedia.org/wiki/Markov_chain Markov chain]: &lt;br /&gt;
::[http://en.wikipedia.org/wiki/Markov_chain#Reducibility reducibility], [http://en.wikipedia.org/wiki/Markov_chain#Periodicity Periodicity], [http://en.wikipedia.org/wiki/Markov_chain#Steady-state_analysis_and_limiting_distributions stationary distribution], [http://en.wikipedia.org/wiki/Hitting_time hitting time], cover time; &lt;br /&gt;
::[http://en.wikipedia.org/wiki/Markov_chain_mixing_time mixing time], [http://en.wikipedia.org/wiki/Conductance_(probability) conductance]&lt;/div&gt;</summary>
		<author><name>114.212.208.2</name></author>
	</entry>
	<entry>
		<id>https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4463</id>
		<title>随机算法 (Fall 2011)</title>
		<link rel="alternate" type="text/html" href="https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4463"/>
		<updated>2011-07-19T16:18:10Z</updated>

		<summary type="html">&lt;p&gt;114.212.208.2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox&lt;br /&gt;
|name         = Infobox&lt;br /&gt;
|bodystyle    = &lt;br /&gt;
|title        = 随机算法 &lt;br /&gt;
Randomized Algorithms&lt;br /&gt;
|titlestyle   = &lt;br /&gt;
&lt;br /&gt;
|image        = [[File:MR-randomized-algorithms.png|100px]]&lt;br /&gt;
|imagestyle   = &lt;br /&gt;
|caption      = &#039;&#039;Randomized Algorithms&#039;&#039; by Motwani and Raghavan&lt;br /&gt;
|captionstyle = &lt;br /&gt;
|headerstyle  = background:#ccf;&lt;br /&gt;
|labelstyle   = background:#ddf;&lt;br /&gt;
|datastyle    = &lt;br /&gt;
&lt;br /&gt;
|header1 =Instructor&lt;br /&gt;
|label1  = &lt;br /&gt;
|data1   = &lt;br /&gt;
|header2 = &lt;br /&gt;
|label2  = &lt;br /&gt;
|data2   = 尹一通&lt;br /&gt;
|header3 = &lt;br /&gt;
|label3  = Email&lt;br /&gt;
|data3   = yitong.yin@gmail.com  yinyt@nju.edu.cn  yinyt@lamda.nju.edu.cn&lt;br /&gt;
|header4 =&lt;br /&gt;
|label4= Office&lt;br /&gt;
|data4= MMW 406 (蒙民伟楼)&lt;br /&gt;
|header5 = Class&lt;br /&gt;
|label5  = &lt;br /&gt;
|data5   = &lt;br /&gt;
|header6 =&lt;br /&gt;
|label6  = Class meetings&lt;br /&gt;
|data6   = TBA &amp;lt;br&amp;gt; TBA&lt;br /&gt;
|header7 =&lt;br /&gt;
|label7  = Place&lt;br /&gt;
|data7   = &lt;br /&gt;
|header8 =&lt;br /&gt;
|label8  = Office hours&lt;br /&gt;
|data8   = TBA&lt;br /&gt;
|header9 = Textbook&lt;br /&gt;
|label9  = &lt;br /&gt;
|data9   = &lt;br /&gt;
|header10 =&lt;br /&gt;
|label10  = &lt;br /&gt;
|data10   = Motwani and Raghavan, &#039;&#039;Randomized Algorithms&#039;&#039;. Cambridge Univ Press, 1995.&lt;br /&gt;
&lt;br /&gt;
|belowstyle = background:#ddf;&lt;br /&gt;
|below = &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
This is the page for the class &#039;&#039;Randomized Algorithms&#039;&#039; for the Fall 2011 semester. Students taking this class should check this page periodically for content updates and new announcements. &lt;br /&gt;
&lt;br /&gt;
There is also a backup page for off-campus users. The URL is http://lamda.nju.edu.cn/yinyt/random2010wiki/&lt;br /&gt;
&lt;br /&gt;
= Announcement = &lt;br /&gt;
There is no announcement yet.&lt;br /&gt;
&lt;br /&gt;
= Course info =&lt;br /&gt;
* &#039;&#039;&#039;Instructor&#039;&#039;&#039;: 尹一通&lt;br /&gt;
:*email: yitong.yin@gmail.com, yinyt@nju.edu.cn, yinyt@lamda.nju.edu.cn &lt;br /&gt;
:*office: MMW 406.&lt;br /&gt;
* &#039;&#039;&#039;Class meeting&#039;&#039;&#039;: TBA.&lt;br /&gt;
* &#039;&#039;&#039;Office hour&#039;&#039;&#039;: TBA.&lt;br /&gt;
&lt;br /&gt;
= Syllabus =&lt;br /&gt;
Randomization is one of the most important methods in modern computer science, and over the past two decades it has been applied widely across the field. Behind these applications lie a number of common randomization principles. In this course we describe these principles in the language of mathematics, covering the following topics:&lt;br /&gt;
* The design ideas and theoretical analysis of several important randomized algorithms;&lt;br /&gt;
* Probabilistic tools and their applications in algorithm analysis, including commonly used probability inequalities and the probabilistic method of mathematical proof;&lt;br /&gt;
* Probabilistic models of randomized algorithms, including typical models of randomized algorithms and models of probabilistic complexity.&lt;br /&gt;
As a theory course, this class emphasizes mathematical analysis and proofs. The aim is not rigor for its own sake: solving problems in smarter ways often requires mathematical thinking and insight of real depth.&lt;br /&gt;
&lt;br /&gt;
=== Prerequisites ===&lt;br /&gt;
* Required: discrete mathematics, probability theory.&lt;br /&gt;
* Recommended: algorithm design and analysis.&lt;br /&gt;
&lt;br /&gt;
=== Course materials ===&lt;br /&gt;
* [[随机算法 (Fall 2011)/Course materials|Textbook and reference list]]&lt;br /&gt;
&lt;br /&gt;
=== Policies ===&lt;br /&gt;
* [[随机算法 (Fall 2011)/Policies|Policies]]&lt;br /&gt;
&lt;br /&gt;
= Assignments =&lt;br /&gt;
There are no assignments yet.&lt;br /&gt;
&lt;br /&gt;
= Lecture Notes =&lt;br /&gt;
# Introduction&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Algorithms: an Introduction|Randomized Algorithms: an Introduction]]&lt;br /&gt;
#*[[随机算法 (Fall 2011)/Complexity Classes|Complexity Classes]]&lt;br /&gt;
# Probability Basics&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Probability Space|Probability Space]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Verifying Matrix Multiplication|Verifying Matrix Multiplication]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Conditional Probability|Conditional Probability]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Min-Cut|Randomized Min-Cut]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Variables and Expectations|Random Variables and Expectations]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Quicksort|Randomized Quicksort]]&lt;br /&gt;
# Balls and Bins&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Distributions of Coin Flipping|Distributions of Coin Flipping]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Birthday Problem|Birthday Problem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Coupon Collector|Coupon Collector]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Balls-into-balls Occupancy Problem|Balls-into-bins Occupancy Problem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Bloom Filter|Bloom Filter]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Stable Marriage|Stable Marriage]]&lt;br /&gt;
# Moment and Deviation&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Markov&#039;s Inequality|Markov&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chebyshev&#039;s Inequality|Chebyshev&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Median Selection|Median Selection]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Graphs|Random Graphs]]&lt;br /&gt;
# Hashing and Fingerprinting&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Pair-wise Independence|Pair-wise Independence]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Derandomization: Two-Point Sampling|Derandomization: Two-Point Sampling]]&lt;br /&gt;
# Chernoff Bound&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chernoff Bound|Chernoff Bound]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Set Balancing|Set Balancing]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/DNF Counting|DNF Counting]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Routing in a Parallel Network|Routing in a Parallel Network]]&lt;br /&gt;
# Concentration of Measure&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Martingales|Martingales]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Azuma&#039;s Inequality|Azuma&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Method of Bounded Differences|The Method of Bounded Differences]]&lt;br /&gt;
# Dimension Reduction&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Johnson-Lindenstrauss Theorem|Johnson-Lindenstrauss Theorem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Locality Sensitive Hashing|Locality Sensitive Hashing]]&lt;br /&gt;
# The Probabilistic Method&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Probabilistic Method|The Probabilistic Method]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Lovász Local Lemma|Lovász Local Lemma]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Derandomization: Conditional Expectation|Derandomization: Conditional Expectation]] &lt;br /&gt;
#* [[随机算法 (Fall 2011)/Derandomization: Color-coding|Derandomization: Color-coding]] &lt;br /&gt;
# Approximation Algorithms, On-line Algorithms&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Max-SAT|Max-SAT]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Linear Programming|Linear Programming]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Rounding|Randomized Rounding]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/On-line Algorithms|On-line Algorithms]]&lt;br /&gt;
# Markov Chain and Random Walk&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Markov Chains|Markov Chains]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Walks on Undirected Graphs|Random Walks on Undirected Graphs]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Electrical Network|Electrical Network]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Cover Time|Cover Time]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Connectivity|Graph Connectivity]]&lt;br /&gt;
# Random Walk Algorithms&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized 2SAT|Randomized 2SAT]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized 3SAT|Randomized 3SAT]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Perfect Matching in Regular Bipartite Graph|Perfect Matching in Regular Bipartite Graph]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Metropolis Algorithm|The Metropolis Algorithm]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Dynamics on Spins|Dynamics on Spins]]&lt;br /&gt;
# Coupling and Mixing Time&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Mixing Time|Mixing Time]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Coupling|Coupling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Card Shuffling|Card Shuffling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Path Coupling|Path Coupling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Coloring|Graph Coloring]]&lt;br /&gt;
# Expander Graphs I&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Expander Graphs|Expander Graphs]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Spectrum|Graph Spectrum]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Spectral Gap|The Spectral Gap]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Walk on Expander Graph|Random Walk on Expander Graph]]&lt;br /&gt;
# Expander Graphs II&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Expander Mixing Lemma|Expander Mixing Lemma]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chernoff Bound for Expander Walks|Chernoff Bound for Expander Walks]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Zig-Zag Product|The Zig-Zag Product]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/USTCON in LOGSPACE|USTCON in LOGSPACE]]&lt;br /&gt;
# Sampling and Counting&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The #P Class and Approximation|The #P Class and Approximation]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/DNF Counting|DNF Counting]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Canonical Paths|Canonical Paths]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Count Matchings|Count Matchings]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Sampling and Counting|Sampling and Counting]]&lt;br /&gt;
# MCMC&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Spin Systems|Spin Systems]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Simulated Annealing|Simulated Annealing]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Volume Estimation|Volume Estimation]]&lt;br /&gt;
# Complexity&lt;br /&gt;
&lt;br /&gt;
= The Probability Theory Toolkit =&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Expected_value#Linearity Linearity of expectation]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Independence_(probability_theory)#Independent_events Independent events] and [http://en.wikipedia.org/wiki/Conditional_independence conditional independence] &lt;br /&gt;
* [http://en.wikipedia.org/wiki/Conditional_probability Conditional probability] and [http://en.wikipedia.org/wiki/Conditional_expectation conditional expectation] &lt;br /&gt;
* The [http://en.wikipedia.org/wiki/Law_of_total_probability law of total probability] and the [http://en.wikipedia.org/wiki/Law_of_total_expectation law of total expectation] &lt;br /&gt;
* The [http://en.wikipedia.org/wiki/Boole&#039;s_inequality union bound] &lt;br /&gt;
* [http://en.wikipedia.org/wiki/Bernoulli_trial Bernoulli trials] &lt;br /&gt;
* [http://en.wikipedia.org/wiki/Geometric_distribution Geometric distribution]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Binomial_distribution Binomial distribution]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Markov&#039;s_inequality Markov&#039;s inequality]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Chebyshev&#039;s_inequality Chebyshev&#039;s inequality]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Chernoff_bound Chernoff bound]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Pairwise_independence k-wise independence]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Martingale_(probability_theory) Martingale]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Azuma&#039;s_inequality Azuma&#039;s inequality] and [http://en.wikipedia.org/wiki/Hoeffding&#039;s_inequality Hoeffding&#039;s inequality]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Doob_martingale Doob martingale] &lt;br /&gt;
* The [http://en.wikipedia.org/wiki/Probabilistic_method  probabilistic method] &lt;br /&gt;
* The [http://en.wikipedia.org/wiki/Lov%C3%A1sz_local_lemma  Lovász local lemma]  and the [http://en.wikipedia.org/wiki/Algorithmic_Lov%C3%A1sz_local_lemma algorithmic Lovász local lemma] &lt;br /&gt;
* [http://en.wikipedia.org/wiki/Markov_chain Markov chain]: &lt;br /&gt;
::[http://en.wikipedia.org/wiki/Markov_chain#Reducibility reducibility], [http://en.wikipedia.org/wiki/Markov_chain#Periodicity periodicity], [http://en.wikipedia.org/wiki/Markov_chain#Steady-state_analysis_and_limiting_distributions stationary distribution], [http://en.wikipedia.org/wiki/Hitting_time hitting time], cover time; &lt;br /&gt;
::[http://en.wikipedia.org/wiki/Markov_chain_mixing_time mixing time], [http://en.wikipedia.org/wiki/Conductance_(probability) conductance]&lt;/div&gt;</summary>
		<author><name>114.212.208.2</name></author>
	</entry>
	<entry>
		<id>https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4462</id>
		<title>随机算法 (Fall 2011)</title>
		<link rel="alternate" type="text/html" href="https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4462"/>
		<updated>2011-07-19T16:15:56Z</updated>

		<summary type="html">&lt;p&gt;114.212.208.2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox&lt;br /&gt;
|name         = Infobox&lt;br /&gt;
|bodystyle    = &lt;br /&gt;
|title        = 随机算法 &lt;br /&gt;
Randomized Algorithms&lt;br /&gt;
|titlestyle   = &lt;br /&gt;
&lt;br /&gt;
|image        = [[File:MR-randomized-algorithms.png|100px]]&lt;br /&gt;
|imagestyle   = &lt;br /&gt;
|caption      = &#039;&#039;Randomized Algorithms&#039;&#039; by Motwani and Raghavan&lt;br /&gt;
|captionstyle = &lt;br /&gt;
|headerstyle  = background:#ccf;&lt;br /&gt;
|labelstyle   = background:#ddf;&lt;br /&gt;
|datastyle    = &lt;br /&gt;
&lt;br /&gt;
|header1 =Instructor&lt;br /&gt;
|label1  = &lt;br /&gt;
|data1   = &lt;br /&gt;
|header2 = &lt;br /&gt;
|label2  = &lt;br /&gt;
|data2   = 尹一通&lt;br /&gt;
|header3 = &lt;br /&gt;
|label3  = Email&lt;br /&gt;
|data3   = yitong.yin@gmail.com, yinyt@nju.edu.cn, yinyt@lamda.nju.edu.cn&lt;br /&gt;
|header4 =&lt;br /&gt;
|label4= Office&lt;br /&gt;
|data4= MMW 406&lt;br /&gt;
|header5 = Class&lt;br /&gt;
|label5  = &lt;br /&gt;
|data5   = &lt;br /&gt;
|header6 =&lt;br /&gt;
|label6  = Class meetings&lt;br /&gt;
|data6   = 10am-12pm, Tuesday, &amp;lt;br&amp;gt;馆III-101&lt;br /&gt;
|header7 =&lt;br /&gt;
|label7  = Place&lt;br /&gt;
|data7   = &lt;br /&gt;
|header8 =&lt;br /&gt;
|label8  = Office hours&lt;br /&gt;
|data8   = 2pm-5pm, Saturday, MMW 406 &lt;br /&gt;
|header9 = Textbook&lt;br /&gt;
|label9  = &lt;br /&gt;
|data9   = &lt;br /&gt;
|header10 =&lt;br /&gt;
|label10  = &lt;br /&gt;
|data10   = Motwani and Raghavan, &#039;&#039;Randomized Algorithms&#039;&#039;. Cambridge Univ Press, 1995.&lt;br /&gt;
&lt;br /&gt;
|belowstyle = background:#ddf;&lt;br /&gt;
|below = &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
This is the page for the class &#039;&#039;Randomized Algorithms&#039;&#039; for the Spring 2010 semester. Students who take this class should check this page periodically for content updates and new announcements. &lt;br /&gt;
&lt;br /&gt;
There is also a backup page for off-campus users. The URL is http://lamda.nju.edu.cn/yinyt/random2010wiki/&lt;br /&gt;
&lt;br /&gt;
= Announcement = &lt;br /&gt;
* (07/01/2010) Final exam: July 6, 9am-11am, in room 教202. You may bring one A4 sheet of printed notes (double-sided allowed).&lt;br /&gt;
* (06/26/2010) To get a headcount for the final exam, students who plan to take it should email me their name and student ID. Please pass this notice along to your classmates.&lt;br /&gt;
* (06/24/2010) Solutions to the fourth assignment are posted, and solutions to the third assignment are now complete. Sorry for the delay!&lt;br /&gt;
* (06/11/2010) Because of the Dragon Boat Festival holiday adjustment, the June 15 class is moved to Sunday, June 13.&lt;br /&gt;
* (06/01/2010) The fifth homework is out. Due on Tuesday June 15, in class.&lt;br /&gt;
* (05/26/2010) There was a typo in the Estimator Theorem of Lecture 13: &amp;lt;math&amp;gt;\epsilon&amp;lt;/math&amp;gt; should be &amp;lt;math&amp;gt;\epsilon^2&amp;lt;/math&amp;gt;. Thanks to 钱超 for spotting it.&lt;br /&gt;
::[[Randomized_Algorithms_(Spring_2010)/class announcements|(&#039;&#039;older announcements...&#039;&#039;)]]&lt;br /&gt;
&lt;br /&gt;
= Course info =&lt;br /&gt;
* &#039;&#039;&#039;Instructor&#039;&#039;&#039;: 尹一通&lt;br /&gt;
:*email: yitong.yin@gmail.com, yinyt@nju.edu.cn, yinyt@lamda.nju.edu.cn &lt;br /&gt;
:*office: MMW 406.&lt;br /&gt;
* &#039;&#039;&#039;Class meeting&#039;&#039;&#039;: 10am-12pm, Tue; 馆III-101.&lt;br /&gt;
* &#039;&#039;&#039;Office hour&#039;&#039;&#039;: 2-5pm, Sat; MMW 406.&lt;br /&gt;
&lt;br /&gt;
= Syllabus =&lt;br /&gt;
Randomization is one of the most important methods in modern computer science, and over the past two decades it has been applied widely across the field. Behind these applications lie a number of common randomization principles. In this course we describe these principles in the language of mathematics, covering the following topics:&lt;br /&gt;
* The design ideas and theoretical analysis of several important randomized algorithms;&lt;br /&gt;
* Probabilistic tools and their applications in algorithm analysis, including commonly used probability inequalities and the probabilistic method of mathematical proof;&lt;br /&gt;
* Probabilistic models of randomized algorithms, including typical models of randomized algorithms and models of probabilistic complexity.&lt;br /&gt;
As a theory course, this class emphasizes mathematical analysis and proofs. The aim is not rigor for its own sake: solving problems in smarter ways often requires mathematical thinking and insight of real depth.&lt;br /&gt;
&lt;br /&gt;
=== Prerequisites ===&lt;br /&gt;
* Required: discrete mathematics, probability theory.&lt;br /&gt;
* Recommended: algorithm design and analysis.&lt;br /&gt;
&lt;br /&gt;
=== Course materials ===&lt;br /&gt;
* [[Randomized Algorithms (Spring 2010)/Course materials|Textbook and reference list]]&lt;br /&gt;
&lt;br /&gt;
=== Policies ===&lt;br /&gt;
* [[Randomized Algorithms (Spring 2010)/Policies|Policies]]&lt;br /&gt;
&lt;br /&gt;
= Assignments =&lt;br /&gt;
* (06/29/2010) [[Randomized Algorithms (Spring 2010)/Problem Set 6 | Problem Set 6]] makeup assignment, hand in before the final exam.&lt;br /&gt;
* (06/01/2010) [[Randomized Algorithms (Spring 2010)/Problem Set 5 | Problem Set 5]] due on June 15, Tuesday, in class. Solutions may be written in Chinese or English.&lt;br /&gt;
* (05/18/2010) [[Randomized Algorithms (Spring 2010)/Problem Set 4 | Problem Set 4]] due on June 1, Tuesday, in class. Solutions may be written in Chinese or English.&lt;br /&gt;
* (04/20/2010) [[Randomized Algorithms (Spring 2010)/Problem Set 3 | Problem Set 3]] due on May 4, Tuesday, in class. Solutions may be written in Chinese or English.&lt;br /&gt;
* (03/30/2010) [[Randomized Algorithms (Spring 2010)/Problem Set 2 | Problem Set 2]] due on April 13, Tuesday, in class. Solutions may be written in Chinese or English.&lt;br /&gt;
* (03/16/2010) [[Randomized Algorithms (Spring 2010)/Problem Set 1 | Problem Set 1]] due on March 30, Tuesday, in class. Solutions may be written in Chinese or English.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Lecture Notes =&lt;br /&gt;
# Introduction&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Algorithms: an Introduction|Randomized Algorithms: an Introduction]]&lt;br /&gt;
#*[[随机算法 (Fall 2011)/Complexity Classes|Complexity Classes]]&lt;br /&gt;
# Probability Basics&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Probability Space|Probability Space]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Verifying Matrix Multiplication|Verifying Matrix Multiplication]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Conditional Probability|Conditional Probability]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Min-Cut|Randomized Min-Cut]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Variables and Expectations|Random Variables and Expectations]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Quicksort|Randomized Quicksort]]&lt;br /&gt;
# Balls and Bins&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Distributions of Coin Flipping|Distributions of Coin Flipping]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Birthday Problem|Birthday Problem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Coupon Collector|Coupon Collector]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Balls-into-balls Occupancy Problem|Balls-into-bins Occupancy Problem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Bloom Filter|Bloom Filter]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Stable Marriage|Stable Marriage]]&lt;br /&gt;
# Moment and Deviation&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Markov&#039;s Inequality|Markov&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chebyshev&#039;s Inequality|Chebyshev&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Median Selection|Median Selection]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Graphs|Random Graphs]]&lt;br /&gt;
# Hashing and Fingerprinting&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Pair-wise Independence|Pair-wise Independence]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Derandomization: Two-Point Sampling|Derandomization: Two-Point Sampling]]&lt;br /&gt;
# Chernoff Bound&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chernoff Bound|Chernoff Bound]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Set Balancing|Set Balancing]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/DNF Counting|DNF Counting]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Routing in a Parallel Network|Routing in a Parallel Network]]&lt;br /&gt;
# Concentration of Measure&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Martingales|Martingales]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Azuma&#039;s Inequality|Azuma&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Method of Bounded Differences|The Method of Bounded Differences]]&lt;br /&gt;
# Dimension Reduction&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Johnson-Lindenstrauss Theorem|Johnson-Lindenstrauss Theorem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Locality Sensitive Hashing|Locality Sensitive Hashing]]&lt;br /&gt;
# The Probabilistic Method&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Probabilistic Method|The Probabilistic Method]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Lovász Local Lemma|Lovász Local Lemma]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Derandomization: Conditional Expectation|Derandomization: Conditional Expectation]] &lt;br /&gt;
#* [[随机算法 (Fall 2011)/Derandomization: Color-coding|Derandomization: Color-coding]] &lt;br /&gt;
# Approximation Algorithms, On-line Algorithms&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Max-SAT|Max-SAT]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Linear Programming|Linear Programming]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Rounding|Randomized Rounding]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/On-line Algorithms|On-line Algorithms]]&lt;br /&gt;
# Markov Chain and Random Walk&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Markov Chains|Markov Chains]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Walks on Undirected Graphs|Random Walks on Undirected Graphs]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Electrical Network|Electrical Network]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Cover Time|Cover Time]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Connectivity|Graph Connectivity]]&lt;br /&gt;
# Random Walk Algorithms&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized 2SAT|Randomized 2SAT]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized 3SAT|Randomized 3SAT]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Perfect Matching in Regular Bipartite Graph|Perfect Matching in Regular Bipartite Graph]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Metropolis Algorithm|The Metropolis Algorithm]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Dynamics on Spins|Dynamics on Spins]]&lt;br /&gt;
# Coupling and Mixing Time&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Mixing Time|Mixing Time]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Coupling|Coupling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Card Shuffling|Card Shuffling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Path Coupling|Path Coupling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Coloring|Graph Coloring]]&lt;br /&gt;
# Expander Graphs I&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Expander Graphs|Expander Graphs]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Spectrum|Graph Spectrum]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Spectral Gap|The Spectral Gap]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Walk on Expander Graph|Random Walk on Expander Graph]]&lt;br /&gt;
# Expander Graphs II&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Expander Mixing Lemma|Expander Mixing Lemma]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chernoff Bound for Expander Walks|Chernoff Bound for Expander Walks]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Zig-Zag Product|The Zig-Zag Product]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/USTCON in LOGSPACE|USTCON in LOGSPACE]]&lt;br /&gt;
# Sampling and Counting&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The #P Class and Approximation|The #P Class and Approximation]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/DNF Counting|DNF Counting]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Canonical Paths|Canonical Paths]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Count Matchings|Count Matchings]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Sampling and Counting|Sampling and Counting]]&lt;br /&gt;
# MCMC&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Spin Systems|Spin Systems]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Simulated Annealing|Simulated Annealing]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Volume Estimation|Volume Estimation]]&lt;br /&gt;
# Complexity&lt;br /&gt;
&lt;br /&gt;
= The Probability Theory Toolkit =&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Expected_value#Linearity Linearity of expectation]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Independence_(probability_theory)#Independent_events Independent events] and [http://en.wikipedia.org/wiki/Conditional_independence conditional independence] &lt;br /&gt;
* [http://en.wikipedia.org/wiki/Conditional_probability Conditional probability] and [http://en.wikipedia.org/wiki/Conditional_expectation conditional expectation] &lt;br /&gt;
* The [http://en.wikipedia.org/wiki/Law_of_total_probability law of total probability] and the [http://en.wikipedia.org/wiki/Law_of_total_expectation law of total expectation] &lt;br /&gt;
* The [http://en.wikipedia.org/wiki/Boole&#039;s_inequality union bound] &lt;br /&gt;
* [http://en.wikipedia.org/wiki/Bernoulli_trial Bernoulli trials] &lt;br /&gt;
* [http://en.wikipedia.org/wiki/Geometric_distribution Geometric distribution]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Binomial_distribution Binomial distribution]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Markov&#039;s_inequality Markov&#039;s inequality]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Chebyshev&#039;s_inequality Chebyshev&#039;s inequality]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Chernoff_bound Chernoff bound]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Pairwise_independence k-wise independence]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Martingale_(probability_theory) Martingale]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Azuma&#039;s_inequality Azuma&#039;s inequality] and [http://en.wikipedia.org/wiki/Hoeffding&#039;s_inequality Hoeffding&#039;s inequality]&lt;br /&gt;
* [http://en.wikipedia.org/wiki/Doob_martingale Doob martingale] &lt;br /&gt;
* The [http://en.wikipedia.org/wiki/Probabilistic_method  probabilistic method] &lt;br /&gt;
* The [http://en.wikipedia.org/wiki/Lov%C3%A1sz_local_lemma  Lovász local lemma]  and the [http://en.wikipedia.org/wiki/Algorithmic_Lov%C3%A1sz_local_lemma algorithmic Lovász local lemma] &lt;br /&gt;
* [http://en.wikipedia.org/wiki/Markov_chain Markov chain]: &lt;br /&gt;
::[http://en.wikipedia.org/wiki/Markov_chain#Reducibility reducibility], [http://en.wikipedia.org/wiki/Markov_chain#Periodicity periodicity], [http://en.wikipedia.org/wiki/Markov_chain#Steady-state_analysis_and_limiting_distributions stationary distribution], [http://en.wikipedia.org/wiki/Hitting_time hitting time], cover time; &lt;br /&gt;
::[http://en.wikipedia.org/wiki/Markov_chain_mixing_time mixing time], [http://en.wikipedia.org/wiki/Conductance_(probability) conductance]&lt;/div&gt;</summary>
		<author><name>114.212.208.2</name></author>
	</entry>
	<entry>
		<id>https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4461</id>
		<title>随机算法 (Fall 2011)</title>
		<link rel="alternate" type="text/html" href="https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4461"/>
		<updated>2011-07-19T16:08:51Z</updated>

		<summary type="html">&lt;p&gt;114.212.208.2: /* Lecture Notes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Lecture Notes =&lt;br /&gt;
# Introduction&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Algorithms: an Introduction|Randomized Algorithms: an Introduction]]&lt;br /&gt;
#*[[随机算法 (Fall 2011)/Complexity Classes|Complexity Classes]]&lt;br /&gt;
# Probability Basics&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Probability Space|Probability Space]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Verifying Matrix Multiplication|Verifying Matrix Multiplication]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Conditional Probability|Conditional Probability]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Min-Cut|Randomized Min-Cut]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Variables and Expectations|Random Variables and Expectations]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Quicksort|Randomized Quicksort]]&lt;br /&gt;
# Balls and Bins&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Distributions of Coin Flipping|Distributions of Coin Flipping]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Birthday Problem|Birthday Problem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Coupon Collector|Coupon Collector]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Balls-into-balls Occupancy Problem|Balls-into-bins Occupancy Problem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Bloom Filter|Bloom Filter]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Stable Marriage|Stable Marriage]]&lt;br /&gt;
# Moment and Deviation&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Markov&#039;s Inequality|Markov&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chebyshev&#039;s Inequality|Chebyshev&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Median Selection|Median Selection]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Graphs|Random Graphs]]&lt;br /&gt;
# Hashing and Fingerprinting&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Pair-wise Independence|Pair-wise Independence]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Derandomization: Two-Point Sampling|Derandomization: Two-Point Sampling]]&lt;br /&gt;
# Chernoff Bound&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chernoff Bound|Chernoff Bound]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Set Balancing|Set Balancing]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/DNF Counting|DNF Counting]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Routing in a Parallel Network|Routing in a Parallel Network]]&lt;br /&gt;
# Concentration of Measure&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Martingales|Martingales]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Azuma&#039;s Inequality|Azuma&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Method of Bounded Differences|The Method of Bounded Differences]]&lt;br /&gt;
# Dimension Reduction&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Johnson-Lindenstrauss Theorem|Johnson-Lindenstrauss Theorem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Locality Sensitive Hashing|Locality Sensitive Hashing]]&lt;br /&gt;
# The Probabilistic Method&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Probabilistic Method|The Probabilistic Method]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Lovász Local Lemma|Lovász Local Lemma]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Derandomization: Conditional Expectation|Derandomization: Conditional Expectation]] &lt;br /&gt;
#* [[随机算法 (Fall 2011)/Derandomization: Color-coding|Derandomization: Color-coding]] &lt;br /&gt;
# Approximation Algorithms, On-line Algorithms&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Max-SAT|Max-SAT]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Linear Programming|Linear Programming]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Rounding|Randomized Rounding]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/On-line Algorithms|On-line Algorithms]]&lt;br /&gt;
# Markov Chain and Random Walk&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Markov Chains|Markov Chains]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Walks on Undirected Graphs|Random Walks on Undirected Graphs]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Electrical Network|Electrical Network]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Cover Time|Cover Time]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Connectivity|Graph Connectivity]]&lt;br /&gt;
# Random Walk Algorithms&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized 2SAT|Randomized 2SAT]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized 3SAT|Randomized 3SAT]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Perfect Matching in Regular Bipartite Graph|Perfect Matching in Regular Bipartite Graph]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Metropolis Algorithm|The Metropolis Algorithm]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Dynamics on Spins|Dynamics on Spins]]&lt;br /&gt;
# Coupling and Mixing Time&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Mixing Time|Mixing Time]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Coupling|Coupling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Card Shuffling|Card Shuffling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Path Coupling|Path Coupling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Coloring|Graph Coloring]]&lt;br /&gt;
# Expander Graphs I&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Expander Graphs|Expander Graphs]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Spectrum|Graph Spectrum]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Spectral Gap|The Spectral Gap]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Walk on Expander Graph|Random Walk on Expander Graph]]&lt;br /&gt;
# Expander Graphs II&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Expander Mixing Lemma|Expander Mixing Lemma]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chernoff Bound for Expander Walks|Chernoff Bound for Expander Walks]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Zig-Zag Product|The Zig-Zag Product]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/USTCON in LOGSPACE|USTCON in LOGSPACE]]&lt;br /&gt;
# Sampling and Counting&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The #P Class and Approximation|The #P Class and Approximation]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/DNF Counting|DNF Counting]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Canonical Paths|Canonical Paths]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Count Matchings|Count Matchings]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Sampling and Counting|Sampling and Counting]]&lt;br /&gt;
# MCMC&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Spin Systems|Spin Systems]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Simulated Annealing|Simulated Annealing]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Volume Estimation|Volume Estimation]]&lt;br /&gt;
# Complexity&lt;/div&gt;</summary>
		<author><name>114.212.208.2</name></author>
	</entry>
	<entry>
		<id>https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4460</id>
		<title>随机算法 (Fall 2011)</title>
		<link rel="alternate" type="text/html" href="https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4460"/>
		<updated>2011-07-19T16:04:59Z</updated>

		<summary type="html">&lt;p&gt;114.212.208.2: /* Lecture Notes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Lecture Notes =&lt;br /&gt;
# Introduction&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Algorithms: an Introduction|Randomized Algorithms: an Introduction]]&lt;br /&gt;
#*[[随机算法 (Fall 2011)/Complexity Classes|Complexity Classes]]&lt;br /&gt;
# Probability Basics&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Probability Space|Probability Space]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Verifying Matrix Multiplication|Verifying Matrix Multiplication]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Conditional Probability|Conditional Probability]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Min-Cut|Randomized Min-Cut]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Variables and Expectations|Random Variables and Expectations]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Quicksort|Randomized Quicksort]]&lt;br /&gt;
# Balls and Bins&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Distributions of Coin Flipping|Distributions of Coin Flipping]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Birthday Problem|Birthday Problem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Coupon Collector|Coupon Collector]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Balls-into-balls Occupancy Problem|Balls-into-balls Occupancy Problem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Bloom Filter|Bloom Filter]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Stable Marriage|Stable Marriage]]&lt;br /&gt;
# Moment and Deviation&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Markov&#039;s Inequality|Markov&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chebyshev&#039;s Inequality|Chebyshev&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Median Selection|Median Selection]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Graphs|Random Graphs]]&lt;br /&gt;
# Hashing and Fingerprinting&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Pair-wise Independence|Pair-wise Independence]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Derandomization: Two-Point Sampling|Derandomization: Two-Point Sampling]]&lt;br /&gt;
# Chernoff Bound&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chernoff Bound|Chernoff Bound]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Set Balancing|Set Balancing]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/DNF Counting|DNF Counting]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Routing in a Parallel Network|Routing in a Parallel Network]]&lt;br /&gt;
# Concentration of Measure&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Martingales|Martingales]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Azuma&#039;s Inequality|Azuma&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Method of Bounded Differences|The Method of Bounded Differences]]&lt;br /&gt;
# Dimension Reduction&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Johnson-Lindenstrauss Theorem|Johnson-Lindenstrauss Theorem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Locality Sensitive Hashing|Locality Sensitive Hashing]]&lt;br /&gt;
# The Probabilistic Method&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Probabilistic Method|The Probabilistic Method]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Lovász Local Lemma|Lovász Local Lemma]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Derandomization: Conditional Expectation|Derandomization: Conditional Expectation]] &lt;br /&gt;
#* [[随机算法 (Fall 2011)/Derandomization: Color Coding|Derandomization: Color Coding]] &lt;br /&gt;
# Approximation Algorithms, On-line Algorithms&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Max-SAT|Max-SAT]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Linear Programming|Linear Programming]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Rounding|Randomized Rounding]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/On-line Algorithms|On-line Algorithms]]&lt;br /&gt;
# Markov Chain and Random Walk&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Markov Chains|Markov Chains]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Walks on Undirected Graphs|Random Walks on Undirected Graphs]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Electrical Network|Electrical Network]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Cover Time|Cover Time]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Connectivity|Graph Connectivity]]&lt;br /&gt;
# Random Walk Algorithms&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized 2SAT|Randomized 2SAT]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized 3SAT|Randomized 3SAT]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Perfect Matching in Regular Bipartite Graph|Perfect Matching in Regular Bipartite Graph]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Metropolis Algorithm|The Metropolis Algorithm]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Dynamics on Spins|Dynamics on Spins]]&lt;br /&gt;
# Coupling and Mixing Time&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Mixing Time|Mixing Time]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Coupling|Coupling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Card Shuffling|Card Shuffling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Path Coupling|Path Coupling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Coloring|Graph Coloring]]&lt;br /&gt;
# Expander Graphs I&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Expander Graphs|Expander Graphs]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Spectrum|Graph Spectrum]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Spectral Gap|The Spectral Gap]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Walk on Expander Graph|Random Walk on Expander Graph]]&lt;br /&gt;
# Expander Graphs II&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Expander Mixing Lemma|Expander Mixing Lemma]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chernoff Bound for Expander Walks|Chernoff Bound for Expander Walks]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Zig-Zag Product|The Zig-Zag Product]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/USTCON in LOGSPACE|USTCON in LOGSPACE]]&lt;br /&gt;
# Sampling and Counting&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The #P Class and Approximation|The #P Class and Approximation]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/DNF Counting|DNF Counting]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Canonical Paths|Canonical Paths]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Count Matchings|Count Matchings]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Sampling and Counting|Sampling and Counting]]&lt;br /&gt;
# MCMC&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Spin Systems|Spin Systems]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Simulated Annealing|Simulated Annealing]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Volume Estimation|Volume Estimation]]&lt;br /&gt;
# Complexity&lt;/div&gt;</summary>
		<author><name>114.212.208.2</name></author>
	</entry>
	<entry>
		<id>https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4459</id>
		<title>随机算法 (Fall 2011)</title>
		<link rel="alternate" type="text/html" href="https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4459"/>
		<updated>2011-07-19T16:03:41Z</updated>

		<summary type="html">&lt;p&gt;114.212.208.2: /* Lecture Notes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Lecture Notes =&lt;br /&gt;
# [[随机算法 (Fall 2011)/Introduction|Introduction]]&lt;br /&gt;
#*[[随机算法 (Fall 2011)/Complexity Classes|Complexity Classes]]&lt;br /&gt;
# Probability Basics&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Probability Space|Probability Space]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Verifying Matrix Multiplication|Verifying Matrix Multiplication]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Conditional Probability|Conditional Probability]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Min-Cut|Randomized Min-Cut]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Variables and Expectations|Random Variables and Expectations]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Quicksort|Randomized Quicksort]]&lt;br /&gt;
# Balls and Bins&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Distributions of Coin Flipping|Distributions of Coin Flipping]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Birthday Problem|Birthday Problem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Coupon Collector|Coupon Collector]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Balls-into-balls Occupancy Problem|Balls-into-balls Occupancy Problem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Bloom Filter|Bloom Filter]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Stable Marriage|Stable Marriage]]&lt;br /&gt;
# Moment and Deviation&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Markov&#039;s Inequality|Markov&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chebyshev&#039;s Inequality|Chebyshev&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Median Selection|Median Selection]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Graphs|Random Graphs]]&lt;br /&gt;
# Hashing and Fingerprinting&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Pair-wise Independence|Pair-wise Independence]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Derandomization: Two-Point Sampling|Derandomization: Two-Point Sampling]]&lt;br /&gt;
# Chernoff Bound&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chernoff Bound|Chernoff Bound]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Set Balancing|Set Balancing]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/DNF Counting|DNF Counting]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Routing in a Parallel Network|Routing in a Parallel Network]]&lt;br /&gt;
# Concentration of Measure&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Martingales|Martingales]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Azuma&#039;s Inequality|Azuma&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Method of Bounded Differences|The Method of Bounded Differences]]&lt;br /&gt;
# Dimension Reduction&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Johnson-Lindenstrauss Theorem|Johnson-Lindenstrauss Theorem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Locality Sensitive Hashing|Locality Sensitive Hashing]]&lt;br /&gt;
# The Probabilistic Method&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Probabilistic Method|The Probabilistic Method]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Lovász Local Lemma|Lovász Local Lemma]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Derandomization: Conditional Expectation|Derandomization: Conditional Expectation]] &lt;br /&gt;
#* [[随机算法 (Fall 2011)/Derandomization: Color Coding|Derandomization: Color Coding]] &lt;br /&gt;
# Approximation Algorithms, On-line Algorithms&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Max-SAT|Max-SAT]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Linear Programming|Linear Programming]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Rounding|Randomized Rounding]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/On-line Algorithms|On-line Algorithms]]&lt;br /&gt;
# Markov Chain and Random Walk&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Markov Chains|Markov Chains]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Walks on Undirected Graphs|Random Walks on Undirected Graphs]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Electrical Network|Electrical Network]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Cover Time|Cover Time]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Connectivity|Graph Connectivity]]&lt;br /&gt;
# Random Walk Algorithms&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized 2SAT|Randomized 2SAT]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized 3SAT|Randomized 3SAT]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Perfect Matching in Regular Bipartite Graph|Perfect Matching in Regular Bipartite Graph]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Metropolis Algorithm|The Metropolis Algorithm]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Dynamics on Spins|Dynamics on Spins]]&lt;br /&gt;
# Coupling and Mixing Time&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Mixing Time|Mixing Time]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Coupling|Coupling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Card Shuffling|Card Shuffling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Path Coupling|Path Coupling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Coloring|Graph Coloring]]&lt;br /&gt;
# Expander Graphs I&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Expander Graphs|Expander Graphs]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Spectrum|Graph Spectrum]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Spectral Gap|The Spectral Gap]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Walk on Expander Graph|Random Walk on Expander Graph]]&lt;br /&gt;
# Expander Graphs II&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Expander Mixing Lemma|Expander Mixing Lemma]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chernoff Bound for Expander Walks|Chernoff Bound for Expander Walks]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Zig-Zag Product|The Zig-Zag Product]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/USTCON in LOGSPACE|USTCON in LOGSPACE]]&lt;br /&gt;
# Sampling and Counting&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The #P Class and Approximation|The #P Class and Approximation]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/DNF Counting|DNF Counting]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Canonical Paths|Canonical Paths]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Count Matchings|Count Matchings]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Sampling and Counting|Sampling and Counting]]&lt;br /&gt;
# MCMC&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Spin Systems|Spin Systems]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Simulated Annealing|Simulated Annealing]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Volume Estimation|Volume Estimation]]&lt;br /&gt;
# Complexity&lt;/div&gt;</summary>
		<author><name>114.212.208.2</name></author>
	</entry>
	<entry>
		<id>https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)/The_Spectral_Gap&amp;diff=4622</id>
		<title>随机算法 (Fall 2011)/The Spectral Gap</title>
		<link rel="alternate" type="text/html" href="https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)/The_Spectral_Gap&amp;diff=4622"/>
		<updated>2011-07-19T15:55:34Z</updated>

		<summary type="html">&lt;p&gt;114.212.208.2: Created page with &amp;#039;It turns out that the second largest eigenvalue of a graph contains important information about the graph&amp;#039;s expansion parameter. The following theorem is the so-called Cheeger&amp;#039;s …&amp;#039;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;It turns out that the second largest eigenvalue of a graph contains important information about the graph&#039;s expansion parameter. The following theorem is the so-called Cheeger&#039;s inequality.&lt;br /&gt;
{{Theorem&lt;br /&gt;
|Theorem (Cheeger&#039;s inequality)|&lt;br /&gt;
:Let &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; be a &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-regular graph with spectrum &amp;lt;math&amp;gt;\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_n&amp;lt;/math&amp;gt;. Then&lt;br /&gt;
::&amp;lt;math&amp;gt;&lt;br /&gt;
\frac{d-\lambda_2}{2}\le \phi(G) \le \sqrt{2d(d-\lambda_2)}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
The theorem was first stated for Riemannian manifolds, where the two directions of the inequality were proved by Cheeger and by Buser respectively. The discrete case was proved independently by Dodziuk and by Alon-Milman.&lt;br /&gt;
&lt;br /&gt;
For a &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-regular graph, the value &amp;lt;math&amp;gt;(d-\lambda_2)&amp;lt;/math&amp;gt; is known as the &#039;&#039;&#039;spectral gap&#039;&#039;&#039;. The name comes from the fact that it is the gap between the first and the second eigenvalue in the spectrum of the graph. The spectral gap provides an estimate of the expansion ratio of a graph: more precisely, a &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-regular graph has a large expansion ratio (and is thus an expander) if its spectral gap is large.&lt;br /&gt;
&lt;br /&gt;
If we write &amp;lt;math&amp;gt;\alpha=1-\frac{\lambda_2}{d}&amp;lt;/math&amp;gt; (sometimes called the normalized spectral gap), Cheeger&#039;s inequality takes a nicer form:&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
\frac{\alpha}{2}\le \frac{\phi}{d}\le\sqrt{2\alpha} &amp;lt;/math&amp;gt; or equivalently &amp;lt;math&amp;gt;&lt;br /&gt;
\frac{1}{2}\left(\frac{\phi}{d}\right)^2\le \alpha\le 2\left(\frac{\phi}{d}\right)&lt;br /&gt;
&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
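These bounds can be sanity-checked numerically. The following sketch (not part of the notes; it assumes numpy is available) computes &amp;lt;math&amp;gt;\lambda_2&amp;lt;/math&amp;gt; of the 6-cycle, a 2-regular graph, brute-forces the expansion ratio &amp;lt;math&amp;gt;\phi(G)&amp;lt;/math&amp;gt;, and checks both directions of Cheeger's inequality.&lt;br /&gt;

```python
import numpy as np
from itertools import combinations

# The 6-cycle C6, a 2-regular graph.
n, d = 6, 2
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

lam = np.sort(np.linalg.eigvalsh(A))[::-1]  # spectrum, largest first
lam2 = lam[1]

# Brute-force the expansion ratio phi(G): minimize |boundary(S)|/|S|
# over all nonempty S with |S| at most n/2.
def edge_boundary(S):
    return sum(A[i, j] for i in S for j in range(n) if j not in S)

phi = min(edge_boundary(S) / len(S)
          for k in range(1, n // 2 + 1)
          for S in combinations(range(n), k))

# Cheeger's inequality: (d - lam2)/2 is a lower bound on phi(G)
# and sqrt(2 d (d - lam2)) an upper bound.
assert phi + 1e-9 >= (d - lam2) / 2
assert np.sqrt(2 * d * (d - lam2)) + 1e-9 >= phi
```

For &amp;lt;math&amp;gt;C_6&amp;lt;/math&amp;gt; the check gives &amp;lt;math&amp;gt;\lambda_2=1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\phi=2/3&amp;lt;/math&amp;gt; (attained by three consecutive vertices), which indeed lies between the two bounds.&lt;br /&gt;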
----&lt;br /&gt;
We will not prove the theorem, but we will explain briefly why it works.&lt;br /&gt;
&lt;br /&gt;
For the spectra of graphs, Cheeger&#039;s inequality is proved via the [http://en.wikipedia.org/wiki/Min-max_theorem Courant-Fischer theorem], a fundamental theorem in linear algebra which characterizes the eigenvalues by a series of optimizations:&lt;br /&gt;
{{Theorem&lt;br /&gt;
|Theorem (Courant-Fischer theorem)|&lt;br /&gt;
:Let &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; be a symmetric matrix with eigenvalues &amp;lt;math&amp;gt;\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_n&amp;lt;/math&amp;gt;. Then&lt;br /&gt;
::&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{align}&lt;br /&gt;
\lambda_k&lt;br /&gt;
&amp;amp;=\max_{v_1,v_2,\ldots,v_{n-k}\in \mathbb{R}^n}\min_{\overset{x\in\mathbb{R}^n, x\neq \mathbf{0}}{x\bot v_1,v_2,\ldots,v_{n-k}}}\frac{x^TAx}{x^Tx}\\&lt;br /&gt;
&amp;amp;=&lt;br /&gt;
\min_{v_1,v_2,\ldots,v_{k-1}\in \mathbb{R}^n}\max_{\overset{x\in\mathbb{R}^n, x\neq \mathbf{0}}{x\bot v_1,v_2,\ldots,v_{k-1}}}\frac{x^TAx}{x^Tx}.&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
For a &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-regular graph with adjacency matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and spectrum &amp;lt;math&amp;gt;\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_n&amp;lt;/math&amp;gt;, the largest eigenvalue is &amp;lt;math&amp;gt;\lambda_1=d&amp;lt;/math&amp;gt;, with the all-ones eigenvector &amp;lt;math&amp;gt;\mathbf{1}&amp;lt;/math&amp;gt; satisfying &amp;lt;math&amp;gt;A\cdot\mathbf{1}=d\mathbf{1}&amp;lt;/math&amp;gt;. According to the Courant-Fischer theorem, the second largest eigenvalue can be computed as&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
\lambda_2=\max_{x\bot \mathbf{1}}\frac{x^TAx}{x^Tx},&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
and&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
d-\lambda_2=\min_{x\bot \mathbf{1}}\frac{x^T(dI-A)x}{x^Tx}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
The latter is an optimization which bears some resemblance to the expansion ratio &amp;lt;math&amp;gt;\phi(G)=\min_{\overset{S\subset V}{|S|\le\frac{n}{2}}}\frac{|\partial S|}{|S|}=\min_{\chi_S}\frac{\chi_S^T(dI-A)\chi_S}{\chi_S^T\chi_S}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;\chi_S&amp;lt;/math&amp;gt; is the &#039;&#039;&#039;characteristic vector&#039;&#039;&#039; of the set &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, defined as &amp;lt;math&amp;gt;\chi_S(i)=1&amp;lt;/math&amp;gt; if &amp;lt;math&amp;gt;i\in S&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\chi_S(i)=0&amp;lt;/math&amp;gt; if &amp;lt;math&amp;gt;i\not\in S&amp;lt;/math&amp;gt;. It is not hard to verify that &amp;lt;math&amp;gt;\chi_S^T\chi_S=\sum_{i}\chi_S(i)=|S|&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\chi_S^T(dI-A)\chi_S=\sum_{i\sim j}(\chi_S(i)-\chi_S(j))^2=|\partial S|&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
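The two identities above can be verified numerically. The following sketch (not part of the notes; it assumes numpy is available) checks them on the complete graph &amp;lt;math&amp;gt;K_4&amp;lt;/math&amp;gt;, a 3-regular graph, for every nonempty proper subset &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;

```python
import numpy as np
from itertools import combinations

# K4: the complete graph on 4 vertices, which is 3-regular.
n, d = 4, 3
A = np.ones((n, n)) - np.eye(n)
L = d * np.eye(n) - A  # the matrix dI - A

for k in range(1, n):
    for S in combinations(range(n), k):
        chi = np.zeros(n)      # characteristic vector of S
        chi[list(S)] = 1.0
        # chi_S^T chi_S = |S|
        assert chi @ chi == len(S)
        # chi_S^T (dI - A) chi_S = |boundary(S)|; in K4 every
        # (inside, outside) pair is an edge, so |boundary(S)| = k*(n-k)
        assert chi @ L @ chi == k * (n - k)
```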
Therefore, the spectral gap &amp;lt;math&amp;gt;d-\lambda_2&amp;lt;/math&amp;gt; and the expansion ratio &amp;lt;math&amp;gt;\phi(G)&amp;lt;/math&amp;gt; are both given by optimizations of similar forms, which explains why each can be used to approximate the other.&lt;/div&gt;</summary>
		<author><name>114.212.208.2</name></author>
	</entry>
	<entry>
		<id>https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)/Graph_Spectrum&amp;diff=4613</id>
		<title>随机算法 (Fall 2011)/Graph Spectrum</title>
		<link rel="alternate" type="text/html" href="https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)/Graph_Spectrum&amp;diff=4613"/>
		<updated>2011-07-19T15:54:56Z</updated>

		<summary type="html">&lt;p&gt;114.212.208.2: /* The spectral gap */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The &#039;&#039;&#039;adjacency matrix&#039;&#039;&#039; of an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-vertex graph &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt;, denoted &amp;lt;math&amp;gt;A = A(G)&amp;lt;/math&amp;gt;, is an &amp;lt;math&amp;gt;n\times n&amp;lt;/math&amp;gt; matrix where &amp;lt;math&amp;gt;A(u,v)&amp;lt;/math&amp;gt; is the number of edges in &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; between vertex &amp;lt;math&amp;gt;u&amp;lt;/math&amp;gt; and vertex &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;. Because &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is a symmetric matrix with real entries, by the [http://en.wikipedia.org/wiki/Spectral_theorem spectral theorem] it has real eigenvalues &amp;lt;math&amp;gt;\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_n&amp;lt;/math&amp;gt;, which are associated with an orthonormal system of eigenvectors &amp;lt;math&amp;gt;v_1,v_2,\ldots, v_n\,&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;Av_i=\lambda_i v_i\,&amp;lt;/math&amp;gt;. We call the eigenvalues of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; the &#039;&#039;&#039;spectrum&#039;&#039;&#039; of the graph &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The spectrum of a graph contains a lot of information about the graph. For example, suppose that &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-regular; then the following lemma holds.&lt;br /&gt;
{{Theorem&lt;br /&gt;
|Lemma|&lt;br /&gt;
# &amp;lt;math&amp;gt;|\lambda_i|\le d&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;1\le i\le n&amp;lt;/math&amp;gt;.&lt;br /&gt;
# &amp;lt;math&amp;gt;\lambda_1=d&amp;lt;/math&amp;gt; and the corresponding eigenvector is &amp;lt;math&amp;gt;(\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}},\ldots,\frac{1}{\sqrt{n}})&amp;lt;/math&amp;gt;.&lt;br /&gt;
# &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; is connected if and only if &amp;lt;math&amp;gt;\lambda_1&amp;gt;\lambda_2&amp;lt;/math&amp;gt;.&lt;br /&gt;
# If &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; is bipartite then &amp;lt;math&amp;gt;\lambda_1=-\lambda_n&amp;lt;/math&amp;gt;.&lt;br /&gt;
}}&lt;br /&gt;
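Before reading the proof, the four claims of the lemma can be sanity-checked numerically. This sketch (not part of the notes; it assumes numpy is available) uses the complete bipartite graph &amp;lt;math&amp;gt;K_{3,3}&amp;lt;/math&amp;gt;, which is 3-regular, connected, and bipartite.&lt;br /&gt;

```python
import numpy as np

# K_{3,3}: a connected, bipartite, 3-regular graph on 6 vertices.
n, d = 6, 3
A = np.zeros((n, n))
A[:3, 3:] = 1
A[3:, :3] = 1

lam = np.sort(np.linalg.eigvalsh(A))[::-1]  # spectrum, largest first

assert d + 1e-9 >= np.abs(lam).max()    # (1) every |lambda_i| is at most d
assert np.isclose(lam[0], d)            # (2) lambda_1 = d ...
ones = np.ones(n) / np.sqrt(n)
assert np.allclose(A @ ones, d * ones)  #     ... with the uniform eigenvector
assert lam[0] - lam[1] > 1e-9           # (3) connected, so lambda_1 exceeds lambda_2
assert np.isclose(lam[0], -lam[-1])     # (4) bipartite, so lambda_1 = -lambda_n
```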
{{Proof| Let &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; be the adjacency matrix of &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt;, with entries &amp;lt;math&amp;gt;a_{ij}&amp;lt;/math&amp;gt;. It is obvious that &amp;lt;math&amp;gt;\sum_{j}a_{ij}=d\,&amp;lt;/math&amp;gt; for any &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;.&lt;br /&gt;
*(1) Suppose that &amp;lt;math&amp;gt;Ax=\lambda x, x\neq \mathbf{0}&amp;lt;/math&amp;gt;, and let &amp;lt;math&amp;gt;x_i&amp;lt;/math&amp;gt; be an entry of &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; with the largest absolute value. Since &amp;lt;math&amp;gt;(Ax)_i=\lambda x_i&amp;lt;/math&amp;gt;, we have &lt;br /&gt;
::&amp;lt;math&amp;gt;&lt;br /&gt;
\sum_{j}a_{ij}x_j=\lambda x_i,\,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
:and so&lt;br /&gt;
::&amp;lt;math&amp;gt;&lt;br /&gt;
|\lambda||x_i|=\left|\sum_{j}a_{ij}x_j\right|\le \sum_{j}a_{ij}|x_j|\le \sum_{j}a_{ij}|x_i| \le d|x_i|.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
:Thus &amp;lt;math&amp;gt;|\lambda|\le d&amp;lt;/math&amp;gt;.&lt;br /&gt;
*(2) is easy to check.&lt;br /&gt;
*(3) Let &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; be a nonzero vector for which &amp;lt;math&amp;gt;Ax=dx&amp;lt;/math&amp;gt;, and let &amp;lt;math&amp;gt;x_i&amp;lt;/math&amp;gt; be an entry of &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; with the largest absolute value. Since &amp;lt;math&amp;gt;(Ax)_i=d x_i&amp;lt;/math&amp;gt;, we have &lt;br /&gt;
::&amp;lt;math&amp;gt;&lt;br /&gt;
\sum_{j}a_{ij}x_j=d x_i.\,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
:Since &amp;lt;math&amp;gt;\sum_{j}a_{ij}=d\,&amp;lt;/math&amp;gt; and by the maximality of &amp;lt;math&amp;gt;x_i&amp;lt;/math&amp;gt;, it follows that &amp;lt;math&amp;gt;x_j=x_i&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;a_{ij}&amp;gt;0&amp;lt;/math&amp;gt;. Thus, &amp;lt;math&amp;gt;x_i=x_j&amp;lt;/math&amp;gt; if &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt; are adjacent, and hence &amp;lt;math&amp;gt;x_i=x_j&amp;lt;/math&amp;gt; whenever &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt; are connected by a path. If &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; is connected, all vertices are pairwise connected, so all the &amp;lt;math&amp;gt;x_i&amp;lt;/math&amp;gt; are equal. This shows that if &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; is connected, the eigenvalue &amp;lt;math&amp;gt;d=\lambda_1&amp;lt;/math&amp;gt; has multiplicity 1, thus &amp;lt;math&amp;gt;\lambda_1&amp;gt;\lambda_2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
:Otherwise, if &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; is disconnected, then for two different components we have &amp;lt;math&amp;gt;Ax=dx&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Ay=dy&amp;lt;/math&amp;gt;, where the entries of &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y&amp;lt;/math&amp;gt; are nonzero only for the vertices in their respective components. Then &amp;lt;math&amp;gt;A(\alpha x+\beta y)=d(\alpha x+\beta y)&amp;lt;/math&amp;gt;. Thus, the multiplicity of &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt; is greater than 1, so &amp;lt;math&amp;gt;\lambda_1=\lambda_2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
*(4) If &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; is bipartite, then the vertex set can be partitioned into two disjoint nonempty sets &amp;lt;math&amp;gt;V_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;V_2&amp;lt;/math&amp;gt; such that all edges have one endpoint in each of &amp;lt;math&amp;gt;V_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;V_2&amp;lt;/math&amp;gt;. Algebraically, this means that the adjacency matrix can be organized into the form&lt;br /&gt;
::&amp;lt;math&amp;gt;&lt;br /&gt;
P^TAP=\begin{bmatrix}&lt;br /&gt;
0 &amp;amp; B\\&lt;br /&gt;
B^T &amp;amp; 0&lt;br /&gt;
\end{bmatrix}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
:where &amp;lt;math&amp;gt;P&amp;lt;/math&amp;gt; is a permutation matrix; conjugating by a permutation matrix does not change the eigenvalues. &lt;br /&gt;
:If &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; is an eigenvector corresponding to the eigenvalue &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt;x&#039;&amp;lt;/math&amp;gt;, which is obtained from &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; by changing the sign of the entries corresponding to vertices in &amp;lt;math&amp;gt;V_2&amp;lt;/math&amp;gt;, is an eigenvector corresponding to the eigenvalue &amp;lt;math&amp;gt;-\lambda&amp;lt;/math&amp;gt;. It follows that the spectrum of a bipartite graph is symmetric with respect to 0.&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>114.212.208.2</name></author>
	</entry>
	<entry>
		<id>https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4458</id>
		<title>随机算法 (Fall 2011)</title>
		<link rel="alternate" type="text/html" href="https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4458"/>
		<updated>2011-07-19T15:54:27Z</updated>

		<summary type="html">&lt;p&gt;114.212.208.2: /* Lecture Notes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Lecture Notes =&lt;br /&gt;
# [[随机算法 (Fall 2011)/Introduction|Introduction]]&lt;br /&gt;
#*[[随机算法 (Fall 2011)/Complexity Classes|Complexity Classes]]&lt;br /&gt;
# Probability Basics&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Probability Space|Probability Space]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Verifying Matrix Multiplication|Verifying Matrix Multiplication]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Conditional Probability|Conditional Probability]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Min-Cut|Randomized Min-Cut]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Variables and Expectations|Random Variables and Expectations]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Quicksort|Randomized Quicksort]]&lt;br /&gt;
# Balls and Bins&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Distributions of Coin Flipping|Distributions of Coin Flipping]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Birthday Problem|Birthday Problem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Coupon Collector|Coupon Collector]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Balls-into-balls Occupancy Problem|Balls-into-balls Occupancy Problem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Bloom Filter|Bloom Filter]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Stable Marriage|Stable Marriage]]&lt;br /&gt;
# Moment and Deviation&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Markov&#039;s Inequality|Markov&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chebyshev&#039;s Inequality|Chebyshev&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Median Selection|Median Selection]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Graphs|Random Graphs]]&lt;br /&gt;
# Hashing and Fingerprinting&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Pair-wise Independence|Pair-wise Independence]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Derandomization: Two-Point Sampling|Derandomization: Two-Point Sampling]]&lt;br /&gt;
# Chernoff Bound&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chernoff Bound|Chernoff Bound]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Set Balancing|Set Balancing]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/DNF Counting|DNF Counting]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Routing in a Parallel Network|Routing in a Parallel Network]]&lt;br /&gt;
# Concentration of Measure&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Martingales|Martingales]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Azuma&#039;s Inequality|Azuma&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Method of Bounded Differences|The Method of Bounded Differences]]&lt;br /&gt;
# Dimension Reduction&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Johnson-Lindenstrauss Theorem|Johnson-Lindenstrauss Theorem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Locality Sensitive Hashing|Locality Sensitive Hashing]]&lt;br /&gt;
# The Probabilistic Method&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Probabilistic Method|The Probabilistic Method]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Lovász Local Lemma|Lovász Local Lemma]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Derandomization: Conditional Expectation|Derandomization: Conditional Expectation]] &lt;br /&gt;
# Approximation Algorithms, On-line Algorithms&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Max-SAT|Max-SAT]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Linear Programming|Linear Programming]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Rounding|Randomized Rounding]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/On-line Algorithms|On-line Algorithms]]&lt;br /&gt;
# Markov Chain and Random Walk&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Markov Chains|Markov Chains]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Walks on Undirected Graphs|Random Walks on Undirected Graphs]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Electrical Network|Electrical Network]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Cover Time|Cover Time]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Connectivity|Graph Connectivity]]&lt;br /&gt;
# Random Walk Algorithms&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized 2SAT|Randomized 2SAT]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized 3SAT|Randomized 3SAT]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Perfect Matching in Regular Bipartite Graph|Perfect Matching in Regular Bipartite Graph]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Metropolis Algorithm|The Metropolis Algorithm]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Dynamics on Spins|Dynamics on Spins]]&lt;br /&gt;
# Coupling and Mixing Time&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Mixing Time|Mixing Time]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Coupling|Coupling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Card Shuffling|Card Shuffling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Path Coupling|Path Coupling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Coloring|Graph Coloring]]&lt;br /&gt;
# Expander Graphs I&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Expander Graphs|Expander Graphs]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Spectrum|Graph Spectrum]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Spectral Gap|The Spectral Gap]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Walk on Expander Graph|Random Walk on Expander Graph]]&lt;br /&gt;
# Expander Graphs II&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Expander Mixing Lemma|Expander Mixing Lemma]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chernoff Bound for Expander Walks|Chernoff Bound for Expander Walks]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Zig-Zag Product|The Zig-Zag Product]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/USTCON in LOGSPACE|USTCON in LOGSPACE]]&lt;br /&gt;
# Sampling and Counting&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The #P Class and Approximation|The #P Class and Approximation]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/DNF Counting|DNF Counting]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Canonical Paths|Canonical Paths]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Count Matchings|Count Matchings]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Sampling and Counting|Sampling and Counting]]&lt;br /&gt;
# MCMC&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Spin Systems|Spin Systems]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Simulated Annealing|Simulated Annealing]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Volume Estimation|Volume Estimation]]&lt;br /&gt;
# Complexity&lt;/div&gt;</summary>
		<author><name>114.212.208.2</name></author>
	</entry>
	<entry>
		<id>https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)/Expander_Mixing_Lemma&amp;diff=4620</id>
		<title>随机算法 (Fall 2011)/Expander Mixing Lemma</title>
		<link rel="alternate" type="text/html" href="https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)/Expander_Mixing_Lemma&amp;diff=4620"/>
		<updated>2011-07-19T15:53:03Z</updated>

		<summary type="html">&lt;p&gt;114.212.208.2: Created page with &amp;#039;Given a &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-regular graph &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; on &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; vertices with the spectrum &amp;lt;math&amp;gt;d=\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_n&amp;lt;/math&amp;gt;, we denote &amp;lt;math&amp;gt;\lambd…&amp;#039;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Given a &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-regular graph &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; on &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; vertices with the spectrum &amp;lt;math&amp;gt;d=\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_n&amp;lt;/math&amp;gt;, we denote &amp;lt;math&amp;gt;\lambda_\max  = \max(|\lambda_2|,|\lambda_n|)\,&amp;lt;/math&amp;gt;, which is the largest absolute value of an eigenvalue other than &amp;lt;math&amp;gt;\lambda_1=d&amp;lt;/math&amp;gt;. Sometimes the value of &amp;lt;math&amp;gt;(d-\lambda_\max)&amp;lt;/math&amp;gt; is also referred to as the spectral gap, because it is the gap between the largest eigenvalue and the largest absolute value of the remaining eigenvalues.&lt;br /&gt;
&lt;br /&gt;
The next lemma is the so-called expander mixing lemma, which states a fundamental fact about expander graphs. &lt;br /&gt;
{{Theorem&lt;br /&gt;
|Lemma (expander mixing lemma)|&lt;br /&gt;
:Let &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; be a &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-regular graph with &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; vertices. Then for all &amp;lt;math&amp;gt;S, T \subseteq V&amp;lt;/math&amp;gt;,&lt;br /&gt;
::&amp;lt;math&amp;gt;\left||E(S,T)|-\frac{d|S||T|}{n}\right|\le\lambda_\max\sqrt{|S||T|}&amp;lt;/math&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
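As a sanity check, the bound can be verified numerically on a small example. The following sketch (assuming numpy is available; not part of the original notes) exhaustively checks the lemma on the 3-regular complete graph K_4, counting E(S,T) as ordered pairs:&lt;br /&gt;

```python
# Exhaustive numeric check of the expander mixing lemma on K_4 (3-regular).
# A sketch only; edges in E(S,T) are counted as ordered pairs, so edges with
# both endpoints in the intersection of S and T are counted twice.
import itertools
import numpy as np

n, d = 4, 3
A = np.ones((n, n)) - np.eye(n)            # adjacency matrix of K_4

eigs = np.sort(np.linalg.eigvalsh(A))[::-1]
lam_max = max(abs(eigs[1]), abs(eigs[-1])) # largest nontrivial eigenvalue magnitude

for r in range(1, n + 1):
    for s in range(1, n + 1):
        for S in itertools.combinations(range(n), r):
            for T in itertools.combinations(range(n), s):
                e_st = sum(A[i, j] for i in S for j in T)
                deviation = abs(e_st - d * len(S) * len(T) / n)
                bound = lam_max * np.sqrt(len(S) * len(T))
                assert bound - deviation >= -1e-9   # the lemma holds
print("expander mixing lemma verified on K_4")
```

For K_4 the spectrum is 3, -1, -1, -1, so lam_max equals 1 and the deviation on every pair of sets is at most sqrt(|S||T|).&lt;br /&gt;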
&lt;br /&gt;
The left-hand side measures the deviation between two quantities: one is &amp;lt;math&amp;gt;|E(S,T)|&amp;lt;/math&amp;gt;, the number of edges between the two sets &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt;; the other is the expected number of edges between &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt; in a random graph of edge density &amp;lt;math&amp;gt;d/n&amp;lt;/math&amp;gt;, namely &amp;lt;math&amp;gt;d|S||T|/n&amp;lt;/math&amp;gt;. A small &amp;lt;math&amp;gt;\lambda_\max&amp;lt;/math&amp;gt; (or large spectral gap) implies that this deviation (or [http://en.wikipedia.org/wiki/Discrepancy_theory &#039;&#039;&#039;discrepancy&#039;&#039;&#039;] as it is sometimes called) is small, so the graph looks random everywhere although it is deterministic.&lt;/div&gt;</summary>
		<author><name>114.212.208.2</name></author>
	</entry>
	<entry>
		<id>https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)/Graph_Spectrum&amp;diff=4612</id>
		<title>随机算法 (Fall 2011)/Graph Spectrum</title>
		<link rel="alternate" type="text/html" href="https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)/Graph_Spectrum&amp;diff=4612"/>
		<updated>2011-07-19T15:52:12Z</updated>

		<summary type="html">&lt;p&gt;114.212.208.2: /* The expander mixing lemma */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The &#039;&#039;&#039;adjacency matrix&#039;&#039;&#039; of an &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-vertex graph &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt;, denoted &amp;lt;math&amp;gt;A = A(G)&amp;lt;/math&amp;gt;, is an &amp;lt;math&amp;gt;n\times n&amp;lt;/math&amp;gt; matrix where &amp;lt;math&amp;gt;A(u,v)&amp;lt;/math&amp;gt; is the number of edges in &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; between vertex &amp;lt;math&amp;gt;u&amp;lt;/math&amp;gt; and vertex &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;. Because &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; is a symmetric matrix with real entries, by the [http://en.wikipedia.org/wiki/Spectral_theorem spectral theorem] it has real eigenvalues &amp;lt;math&amp;gt;\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_n&amp;lt;/math&amp;gt;, which are associated with an orthonormal system of eigenvectors &amp;lt;math&amp;gt;v_1,v_2,\ldots, v_n\,&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;Av_i=\lambda_i v_i\,&amp;lt;/math&amp;gt;. We call the eigenvalues of &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; the &#039;&#039;&#039;spectrum&#039;&#039;&#039; of the graph &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The spectrum of a graph contains a lot of information about the graph. For example, if &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-regular, the following lemma holds.&lt;br /&gt;
{{Theorem&lt;br /&gt;
|Lemma|&lt;br /&gt;
# &amp;lt;math&amp;gt;|\lambda_i|\le d&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;1\le i\le n&amp;lt;/math&amp;gt;.&lt;br /&gt;
# &amp;lt;math&amp;gt;\lambda_1=d&amp;lt;/math&amp;gt; and the corresponding eigenvector is &amp;lt;math&amp;gt;(\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}},\ldots,\frac{1}{\sqrt{n}})&amp;lt;/math&amp;gt;.&lt;br /&gt;
# &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; is connected if and only if &amp;lt;math&amp;gt;\lambda_1&amp;gt;\lambda_2&amp;lt;/math&amp;gt;.&lt;br /&gt;
# If &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; is bipartite then &amp;lt;math&amp;gt;\lambda_1=-\lambda_n&amp;lt;/math&amp;gt;.&lt;br /&gt;
}}&lt;br /&gt;
{{Proof| Let &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; be the adjacency matrix of &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt;, with entries &amp;lt;math&amp;gt;a_{ij}&amp;lt;/math&amp;gt;. Since &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-regular, &amp;lt;math&amp;gt;\sum_{j}a_{ij}=d\,&amp;lt;/math&amp;gt; for any &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;.&lt;br /&gt;
*(1) Suppose that &amp;lt;math&amp;gt;Ax=\lambda x, x\neq \mathbf{0}&amp;lt;/math&amp;gt;, and let &amp;lt;math&amp;gt;x_i&amp;lt;/math&amp;gt; be an entry of &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; with the largest absolute value. Since &amp;lt;math&amp;gt;(Ax)_i=\lambda x_i&amp;lt;/math&amp;gt;, we have &lt;br /&gt;
::&amp;lt;math&amp;gt;&lt;br /&gt;
\sum_{j}a_{ij}x_j=\lambda x_i,\,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
:and so&lt;br /&gt;
::&amp;lt;math&amp;gt;&lt;br /&gt;
|\lambda||x_i|=\left|\sum_{j}a_{ij}x_j\right|\le \sum_{j}a_{ij}|x_j|\le \sum_{j}a_{ij}|x_i| \le d|x_i|.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
:Thus &amp;lt;math&amp;gt;|\lambda|\le d&amp;lt;/math&amp;gt;.&lt;br /&gt;
*(2) is easy to check.&lt;br /&gt;
*(3) Let &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; be the nonzero vector for which &amp;lt;math&amp;gt;Ax=dx&amp;lt;/math&amp;gt;, and let &amp;lt;math&amp;gt;x_i&amp;lt;/math&amp;gt; be an entry of &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; with the largest absolute value. Since &amp;lt;math&amp;gt;(Ax)_i=d x_i&amp;lt;/math&amp;gt;, we have &lt;br /&gt;
::&amp;lt;math&amp;gt;&lt;br /&gt;
\sum_{j}a_{ij}x_j=d x_i.\,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
:Since &amp;lt;math&amp;gt;\sum_{j}a_{ij}=d\,&amp;lt;/math&amp;gt; and by the maximality of &amp;lt;math&amp;gt;x_i&amp;lt;/math&amp;gt;, it follows that &amp;lt;math&amp;gt;x_j=x_i&amp;lt;/math&amp;gt; for all &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;a_{ij}&amp;gt;0&amp;lt;/math&amp;gt;. Thus, &amp;lt;math&amp;gt;x_i=x_j&amp;lt;/math&amp;gt; if &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt; are adjacent, which implies that &amp;lt;math&amp;gt;x_i=x_j&amp;lt;/math&amp;gt; whenever &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt; are connected by a path. If &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; is connected, every pair of vertices is connected by a path, thus all &amp;lt;math&amp;gt;x_i&amp;lt;/math&amp;gt; are equal. This shows that if &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; is connected, the eigenvalue &amp;lt;math&amp;gt;d=\lambda_1&amp;lt;/math&amp;gt; has multiplicity 1, thus &amp;lt;math&amp;gt;\lambda_1&amp;gt;\lambda_2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
:Otherwise, if &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; is disconnected, then for two different components, we have &amp;lt;math&amp;gt;Ax=dx&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;Ay=dy&amp;lt;/math&amp;gt;, where the entries of &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y&amp;lt;/math&amp;gt; are nonzero only for the vertices in their respective components. Then &amp;lt;math&amp;gt;A(\alpha x+\beta y)=d(\alpha x+\beta y)&amp;lt;/math&amp;gt;. Thus, the multiplicity of &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt; is greater than 1, so &amp;lt;math&amp;gt;\lambda_1=\lambda_2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
*(4) If &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; is bipartite, then the vertex set can be partitioned into two disjoint nonempty sets &amp;lt;math&amp;gt;V_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;V_2&amp;lt;/math&amp;gt; such that all edges have one endpoint in each of &amp;lt;math&amp;gt;V_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;V_2&amp;lt;/math&amp;gt;. Algebraically, this means that the adjacency matrix can be organized into the form&lt;br /&gt;
::&amp;lt;math&amp;gt;&lt;br /&gt;
P^TAP=\begin{bmatrix}&lt;br /&gt;
0 &amp;amp; B\\&lt;br /&gt;
B^T &amp;amp; 0&lt;br /&gt;
\end{bmatrix}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
:where &amp;lt;math&amp;gt;P&amp;lt;/math&amp;gt; is a permutation matrix; the permutation does not change the eigenvalues. &lt;br /&gt;
:If &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; is an eigenvector corresponding to the eigenvalue &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt;x&#039;&amp;lt;/math&amp;gt; which is obtained from &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; by changing the sign of the entries corresponding to vertices in &amp;lt;math&amp;gt;V_2&amp;lt;/math&amp;gt;, is an eigenvector corresponding to the eigenvalue &amp;lt;math&amp;gt;-\lambda&amp;lt;/math&amp;gt;. It follows that the spectrum of a bipartite graph is symmetric with respect to 0.&lt;br /&gt;
}}&lt;br /&gt;
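The four claims of the lemma can be illustrated numerically. A minimal sketch (assuming numpy; not part of the original notes) using the 4-cycle C_4, which is 2-regular, connected, and bipartite:&lt;br /&gt;

```python
# Numeric illustration of the lemma for a d-regular graph, on the 4-cycle C_4
# (2-regular, connected, bipartite). A sketch only.
import numpy as np

n, d = 4, 2
A = np.zeros((n, n))
for i in range(n):                          # ring adjacency of the cycle
    A[i, (i + 1) % n] = 1
    A[i, (i - 1) % n] = 1

eigs = np.sort(np.linalg.eigvalsh(A))[::-1]

assert d - max(abs(eigs)) >= -1e-9          # (1) every eigenvalue magnitude is at most d
assert np.isclose(eigs[0], d)               # (2) the top eigenvalue equals d
assert eigs[0] - eigs[1] > 1e-6             # (3) C_4 is connected, so a strict gap
assert np.isclose(eigs[0], -eigs[-1])       # (4) C_4 is bipartite, symmetric spectrum

u = np.ones(n) / np.sqrt(n)                 # the uniform unit vector
assert np.allclose(A @ u, d * u)            # it is an eigenvector for eigenvalue d
print("spectrum of C_4:", eigs)
```

The spectrum of C_4 is 2, 0, 0, -2, so all four properties are visible at once.&lt;br /&gt;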
&lt;br /&gt;
= The spectral gap =&lt;br /&gt;
It turns out that the second largest eigenvalue of a graph contains important information about the graph&#039;s expansion parameter. The following theorem is known as Cheeger&#039;s inequality.&lt;br /&gt;
{{Theorem&lt;br /&gt;
|Theorem (Cheeger&#039;s inequality)|&lt;br /&gt;
:Let &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; be a &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-regular graph with spectrum &amp;lt;math&amp;gt;\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_n&amp;lt;/math&amp;gt;. Then&lt;br /&gt;
::&amp;lt;math&amp;gt;&lt;br /&gt;
\frac{d-\lambda_2}{2}\le \phi(G) \le \sqrt{2d(d-\lambda_2)}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
The theorem was first stated for Riemannian manifolds, and was proved by Cheeger and by Buser (for the two directions of the inequality, respectively). The discrete case was proved independently by Dodziuk and by Alon-Milman.&lt;br /&gt;
&lt;br /&gt;
For a &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-regular graph, the value &amp;lt;math&amp;gt;(d-\lambda_2)&amp;lt;/math&amp;gt; is known as the &#039;&#039;&#039;spectral gap&#039;&#039;&#039;. The name is due to the fact that it is the gap between the first and the second eigenvalue in the spectrum of a graph. The spectral gap provides an estimate on the expansion ratio of a graph. More precisely, a &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-regular graph has large expansion ratio (thus being an expander) if the spectral gap is large.&lt;br /&gt;
&lt;br /&gt;
If we write &amp;lt;math&amp;gt;\alpha=1-\frac{\lambda_2}{d}&amp;lt;/math&amp;gt; (sometimes called the normalized spectral gap), Cheeger&#039;s inequality takes a nicer form:&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
\frac{\alpha}{2}\le \frac{\phi}{d}\le\sqrt{2\alpha} &amp;lt;/math&amp;gt; or equivalently &amp;lt;math&amp;gt;&lt;br /&gt;
\frac{1}{2}\left(\frac{\phi}{d}\right)^2\le \alpha\le 2\left(\frac{\phi}{d}\right)&lt;br /&gt;
&amp;lt;/math&amp;gt;.&lt;br /&gt;
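Both directions of Cheeger&#039;s inequality can be checked by brute force on a small graph. A sketch (assuming numpy; not part of the original notes) on the 3-dimensional hypercube Q_3, computing the expansion ratio by enumerating all vertex sets of size at most n/2:&lt;br /&gt;

```python
# Brute-force check of Cheeger's inequality on the hypercube Q_3
# (3-regular, 8 vertices, identified with 3-bit strings). A sketch only.
import itertools
import numpy as np

n, d = 8, 3
A = np.zeros((n, n))
for u in range(n):
    for b in range(3):
        A[u, u ^ (2 ** b)] = 1          # neighbors differ in exactly one bit

eigs = np.sort(np.linalg.eigvalsh(A))[::-1]
gap = d - eigs[1]                       # the spectral gap

# expansion ratio phi(G): minimize the edge-boundary ratio over all sets
# of size at most n/2
phi = min(
    sum(A[i, j] for i in S for j in range(n) if j not in S) / len(S)
    for r in range(1, n // 2 + 1)
    for S in itertools.combinations(range(n), r)
)

assert phi - gap / 2 >= -1e-9                  # lower bound of Cheeger
assert np.sqrt(2 * d * gap) - phi >= -1e-9     # upper bound of Cheeger
print("gap =", gap, " phi =", phi)
```

For Q_3 the spectral gap is 2 and the expansion ratio is 1 (attained by a subcube of 4 vertices), so the lower bound is tight here.&lt;br /&gt;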
&lt;br /&gt;
----&lt;br /&gt;
We will not prove the theorem, but we will explain briefly why it works.&lt;br /&gt;
&lt;br /&gt;
For the spectra of graphs, Cheeger&#039;s inequality can be proved via the [http://en.wikipedia.org/wiki/Min-max_theorem Courant-Fischer theorem], a fundamental theorem of linear algebra which characterizes the eigenvalues by a series of optimizations:&lt;br /&gt;
{{Theorem&lt;br /&gt;
|Theorem (Courant-Fischer theorem)|&lt;br /&gt;
:Let &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; be a symmetric matrix with eigenvalues &amp;lt;math&amp;gt;\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_n&amp;lt;/math&amp;gt;. Then&lt;br /&gt;
::&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{align}&lt;br /&gt;
\lambda_k&lt;br /&gt;
&amp;amp;=\max_{v_1,v_2,\ldots,v_{n-k}\in \mathbb{R}^n}\min_{\overset{x\in\mathbb{R}^n, x\neq \mathbf{0}}{x\bot v_1,v_2,\ldots,v_{n-k}}}\frac{x^TAx}{x^Tx}\\&lt;br /&gt;
&amp;amp;=&lt;br /&gt;
\min_{v_1,v_2,\ldots,v_{k-1}\in \mathbb{R}^n}\max_{\overset{x\in\mathbb{R}^n, x\neq \mathbf{0}}{x\bot v_1,v_2,\ldots,v_{k-1}}}\frac{x^TAx}{x^Tx}.&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
For a &amp;lt;math&amp;gt;d&amp;lt;/math&amp;gt;-regular graph with adjacency matrix &amp;lt;math&amp;gt;A&amp;lt;/math&amp;gt; and spectrum &amp;lt;math&amp;gt;\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_n&amp;lt;/math&amp;gt;, the largest eigenvalue is &amp;lt;math&amp;gt;\lambda_1=d&amp;lt;/math&amp;gt;, with the all-ones eigenvector &amp;lt;math&amp;gt;\mathbf{1}&amp;lt;/math&amp;gt;, since &amp;lt;math&amp;gt;A\cdot\mathbf{1}=d\mathbf{1}&amp;lt;/math&amp;gt;. According to the Courant-Fischer theorem, the second largest eigenvalue can be computed as&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
\lambda_2=\max_{x\bot \mathbf{1}}\frac{x^TAx}{x^Tx},&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
and&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
d-\lambda_2=\min_{x\bot \mathbf{1}}\frac{x^T(dI-A)x}{x^Tx}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
The latter is an optimization which bears some resemblance to the expansion ratio &amp;lt;math&amp;gt;\phi(G)=\min_{\overset{S\subset V}{|S|\le\frac{n}{2}}}\frac{|\partial S|}{|S|}=\min_{\chi_S}\frac{\chi_S^T(dI-A)\chi_S}{\chi_S^T\chi_S}&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;\chi_S&amp;lt;/math&amp;gt; is the &#039;&#039;&#039;characteristic vector&#039;&#039;&#039; of the set &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;, defined as &amp;lt;math&amp;gt;\chi_S(i)=1&amp;lt;/math&amp;gt; if &amp;lt;math&amp;gt;i\in S&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\chi_S(i)=0&amp;lt;/math&amp;gt; if &amp;lt;math&amp;gt;i\not\in S&amp;lt;/math&amp;gt;. It is not hard to verify that &amp;lt;math&amp;gt;\chi_S^T\chi_S=\sum_{i}\chi_S(i)=|S|&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\chi_S^T(dI-A)\chi_S=\sum_{i\sim j}(\chi_S(i)-\chi_S(j))^2=|\partial S|&amp;lt;/math&amp;gt;.&lt;br /&gt;
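The two identities above can also be confirmed numerically. A short sketch (assuming numpy; not part of the original notes) on the 4-cycle C_4 with S = {0, 1}:&lt;br /&gt;

```python
# Numeric check of the two characteristic-vector identities on C_4 (2-regular).
# A sketch only.
import numpy as np

n, d = 4, 2
A = np.zeros((n, n))
for i in range(n):                      # ring adjacency of the cycle
    A[i, (i + 1) % n] = 1
    A[i, (i - 1) % n] = 1

S = [0, 1]                              # the set S = {0, 1}
chi = np.zeros(n)
chi[S] = 1                              # characteristic vector of S

assert chi @ chi == len(S)              # chi_S^T chi_S equals the size of S
boundary = sum(A[i, j] for i in S for j in range(n) if j not in S)
assert np.isclose(chi @ ((d * np.eye(n) - A) @ chi), boundary)
print("both identities hold on C_4")
```

Here the boundary of S = {0, 1} in C_4 consists of 2 edges, matching the quadratic form.&lt;br /&gt;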
&lt;br /&gt;
Therefore, the spectral gap &amp;lt;math&amp;gt;d-\lambda_2&amp;lt;/math&amp;gt; and the expansion ratio &amp;lt;math&amp;gt;\phi(G)&amp;lt;/math&amp;gt; both arise from optimizations of similar forms, which explains why each can be used to approximate the other.&lt;/div&gt;</summary>
		<author><name>114.212.208.2</name></author>
	</entry>
	<entry>
		<id>https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4457</id>
		<title>随机算法 (Fall 2011)</title>
		<link rel="alternate" type="text/html" href="https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4457"/>
		<updated>2011-07-19T15:49:44Z</updated>

		<summary type="html">&lt;p&gt;114.212.208.2: /* Lecture Notes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Lecture Notes =&lt;br /&gt;
# [[随机算法 (Fall 2011)/Introduction|Introduction]]&lt;br /&gt;
#*[[随机算法 (Fall 2011)/Complexity Classes|Complexity Classes]]&lt;br /&gt;
# Probability Basics&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Probability Space|Probability Space]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Verifying Matrix Multiplication|Verifying Matrix Multiplication]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Conditional Probability|Conditional Probability]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Min-Cut|Randomized Min-Cut]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Variables and Expectations|Random Variables and Expectations]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Quicksort|Randomized Quicksort]]&lt;br /&gt;
# Balls and Bins&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Distributions of Coin Flipping|Distributions of Coin Flipping]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Birthday Problem|Birthday Problem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Coupon Collector|Coupon Collector]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Balls-into-balls Occupancy Problem|Balls-into-balls Occupancy Problem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Bloom Filter|Bloom Filter]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Stable Marriage|Stable Marriage]]&lt;br /&gt;
# Moment and Deviation&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Markov&#039;s Inequality|Markov&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chebyshev&#039;s Inequality|Chebyshev&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Median Selection|Median Selection]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Graphs|Random Graphs]]&lt;br /&gt;
# Hashing and Fingerprinting&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Pair-wise Independence|Pair-wise Independence]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Derandomization: Two-Point Sampling|Derandomization: Two-Point Sampling]]&lt;br /&gt;
# Chernoff Bound&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chernoff Bound|Chernoff Bound]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Set Balancing|Set Balancing]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/DNF Counting|DNF Counting]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Routing in a Parallel Network|Routing in a Parallel Network]]&lt;br /&gt;
# Concentration of Measure&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Martingales|Martingales]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Azuma&#039;s Inequality|Azuma&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Method of Bounded Differences|The Method of Bounded Differences]]&lt;br /&gt;
# Dimension Reduction&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Johnson-Lindenstrauss Theorem|Johnson-Lindenstrauss Theorem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Locality Sensitive Hashing|Locality Sensitive Hashing]]&lt;br /&gt;
# The Probabilistic Method&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Probabilistic Method|The Probabilistic Method]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Lovász Local Lemma|Lovász Local Lemma]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Derandomization: Conditional Expectation|Derandomization: Conditional Expectation]] &lt;br /&gt;
# Approximation Algorithms, On-line Algorithms&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Max-SAT|Max-SAT]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Linear Programming|Linear Programming]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Rounding|Randomized Rounding]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/On-line Algorithms|On-line Algorithms]]&lt;br /&gt;
# Markov Chain and Random Walk&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Markov Chains|Markov Chains]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Walks on Undirected Graphs|Random Walks on Undirected Graphs]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Electrical Network|Electrical Network]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Cover Time|Cover Time]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Connectivity|Graph Connectivity]]&lt;br /&gt;
# Random Walk Algorithms&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized 2SAT|Randomized 2SAT]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized 3SAT|Randomized 3SAT]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Perfect Matching in Regular Bipartite Graph|Perfect Matching in Regular Bipartite Graph]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Metropolis Algorithm|The Metropolis Algorithm]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Dynamics on Spins|Dynamics on Spins]]&lt;br /&gt;
# Coupling and Mixing Time&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Mixing Time|Mixing Time]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Coupling|Coupling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Card Shuffling|Card Shuffling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Path Coupling|Path Coupling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Coloring|Graph Coloring]]&lt;br /&gt;
# Expander Graphs I&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Expander Graphs|Expander Graphs]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Spectrum|Graph Spectrum]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Walk on Expander Graph|Random Walk on Expander Graph]]&lt;br /&gt;
# Expander Graphs II&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Expander Mixing Lemma|Expander Mixing Lemma]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chernoff Bound for Expander Walks|Chernoff Bound for Expander Walks]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Zig-Zag Product|The Zig-Zag Product]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/USTCON in LOGSPACE|USTCON in LOGSPACE]]&lt;br /&gt;
# Sampling and Counting&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The #P Class and Approximation|The #P Class and Approximation]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/DNF Counting|DNF Counting]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Canonical Paths|Canonical Paths]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Count Matchings|Count Matchings]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Sampling and Counting|Sampling and Counting]]&lt;br /&gt;
# MCMC&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Spin Systems|Spin Systems]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Simulated Annealing|Simulated Annealing]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Volume Estimation|Volume Estimation]]&lt;br /&gt;
# Complexity&lt;/div&gt;</summary>
		<author><name>114.212.208.2</name></author>
	</entry>
	<entry>
		<id>https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4456</id>
		<title>随机算法 (Fall 2011)</title>
		<link rel="alternate" type="text/html" href="https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4456"/>
		<updated>2011-07-19T15:47:57Z</updated>

		<summary type="html">&lt;p&gt;114.212.208.2: /* Lecture Notes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Lecture Notes =&lt;br /&gt;
# [[随机算法 (Fall 2011)/Introduction|Introduction]]&lt;br /&gt;
#*[[随机算法 (Fall 2011)/Complexity Classes|Complexity Classes]]&lt;br /&gt;
# Probability Basics&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Probability Space|Probability Space]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Verifying Matrix Multiplication|Verifying Matrix Multiplication]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Conditional Probability|Conditional Probability]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Min-Cut|Randomized Min-Cut]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Variables and Expectations|Random Variables and Expectations]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Quicksort|Randomized Quicksort]]&lt;br /&gt;
# Balls and Bins&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Distributions of Coin Flipping|Distributions of Coin Flipping]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Birthday Problem|Birthday Problem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Coupon Collector|Coupon Collector]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Balls-into-balls Occupancy Problem|Balls-into-balls Occupancy Problem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Bloom Filter|Bloom Filter]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Stable Marriage|Stable Marriage]]&lt;br /&gt;
# Moment and Deviation&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Markov&#039;s Inequality|Markov&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chebyshev&#039;s Inequality|Chebyshev&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Median Selection|Median Selection]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Graphs|Random Graphs]]&lt;br /&gt;
# Hashing and Fingerprinting&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Pair-wise Independence|Pair-wise Independence]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Derandomization: Two-Point Sampling|Derandomization: Two-Point Sampling]]&lt;br /&gt;
# Chernoff Bound&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chernoff Bound|Chernoff Bound]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Set Balancing|Set Balancing]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/DNF Counting|DNF Counting]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Routing in a Parallel Network|Routing in a Parallel Network]]&lt;br /&gt;
# Concentration of Measure&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Martingales|Martingales]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Azuma&#039;s Inequality|Azuma&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Method of Bounded Differences|The Method of Bounded Differences]]&lt;br /&gt;
# Dimension Reduction&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Johnson-Lindenstrauss Theorem|Johnson-Lindenstrauss Theorem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Locality Sensitive Hashing|Locality Sensitive Hashing]]&lt;br /&gt;
# The Probabilistic Method&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Probabilistic Method|The Probabilistic Method]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Lovász Local Lemma|Lovász Local Lemma]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Derandomization: Conditional Expectation|Derandomization: Conditional Expectation]] &lt;br /&gt;
# Approximation Algorithms&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Max-SAT|Max-SAT]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Linear Programming|Linear Programming]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Rounding|Randomized Rounding]]&lt;br /&gt;
# Markov Chain and Random Walk&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Markov Chains|Markov Chains]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Walks on Undirected Graphs|Random Walks on Undirected Graphs]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Electrical Network|Electrical Network]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Cover Time|Cover Time]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Connectivity|Graph Connectivity]]&lt;br /&gt;
# Random Walk Algorithms&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized 2SAT|Randomized 2SAT]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized 3SAT|Randomized 3SAT]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Perfect Matching in Regular Bipartite Graph|Perfect Matching in Regular Bipartite Graph]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Metropolis Algorithm|The Metropolis Algorithm]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Dynamics on Spins|Dynamics on Spins]]&lt;br /&gt;
# Coupling and Mixing Time&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Mixing Time|Mixing Time]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Coupling|Coupling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Card Shuffling|Card Shuffling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Path Coupling|Path Coupling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Coloring|Graph Coloring]]&lt;br /&gt;
# Expander Graphs I&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Expander Graphs|Expander Graphs]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Spectrum|Graph Spectrum]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Walk on Expander Graph|Random Walk on Expander Graph]]&lt;br /&gt;
# Expander Graphs II&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Expander Mixing Lemma|Expander Mixing Lemma]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chernoff Bound for Expander Walks|Chernoff Bound for Expander Walks]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Zig-Zag Product|The Zig-Zag Product]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/USTCON in LOGSPACE|USTCON in LOGSPACE]]&lt;br /&gt;
# Sampling and Counting&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The #P Class and Approximation|The #P Class and Approximation]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/DNF Counting|DNF Counting]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Canonical Paths|Canonical Paths]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Count Matchings|Count Matchings]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Sampling and Counting|Sampling and Counting]]&lt;br /&gt;
# MCMC&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Spin Systems|Spin Systems]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Simulated Annealing|Simulated Annealing]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Volume Estimation|Volume Estimation]]&lt;br /&gt;
# Complexity&lt;/div&gt;</summary>
		<author><name>114.212.208.2</name></author>
	</entry>
	<entry>
		<id>https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4455</id>
		<title>随机算法 (Fall 2011)</title>
		<link rel="alternate" type="text/html" href="https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4455"/>
		<updated>2011-07-19T15:45:36Z</updated>

		<summary type="html">&lt;p&gt;114.212.208.2: /* Lecture Notes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Lecture Notes =&lt;br /&gt;
# [[随机算法 (Fall 2011)/Introduction|Introduction]]&lt;br /&gt;
#*[[随机算法 (Fall 2011)/Complexity Classes|Complexity Classes]]&lt;br /&gt;
# Probability Basics&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Probability Space|Probability Space]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Verifying Matrix Multiplication|Verifying Matrix Multiplication]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Conditional Probability|Conditional Probability]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Min-Cut|Randomized Min-Cut]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Variables and Expectations|Random Variables and Expectations]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Quicksort|Randomized Quicksort]]&lt;br /&gt;
# Balls and Bins&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Distributions of Coin Flipping|Distributions of Coin Flipping]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Birthday Problem|Birthday Problem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Coupon Collector|Coupon Collector]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Balls-into-balls Occupancy Problem|Balls-into-balls Occupancy Problem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Bloom Filter|Bloom Filter]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Stable Marriage|Stable Marriage]]&lt;br /&gt;
# Moment and Deviation&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Markov&#039;s Inequality|Markov&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chebyshev&#039;s Inequality|Chebyshev&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Median Selection|Median Selection]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Graphs|Random Graphs]]&lt;br /&gt;
# Hashing and Fingerprinting&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Pair-wise Independence|Pair-wise Independence]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Derandomization: Two-Point Sampling|Derandomization: Two-Point Sampling]]&lt;br /&gt;
# Chernoff Bound&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chernoff Bound|Chernoff Bound]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Set Balancing|Set Balancing]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/DNF Counting|DNF Counting]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Routing in a Parallel Network|Routing in a Parallel Network]]&lt;br /&gt;
# Concentration of Measure&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Martingales|Martingales]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Azuma&#039;s Inequality|Azuma&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Method of Bounded Differences|The Method of Bounded Differences]]&lt;br /&gt;
# Dimension Reduction&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Johnson-Lindenstrauss Theorem|Johnson-Lindenstrauss Theorem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Locality Sensitive Hashing|Locality Sensitive Hashing]]&lt;br /&gt;
# The Probabilistic Method&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Probabilistic Method|The Probabilistic Method]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Lovász Local Lemma|Lovász Local Lemma]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Derandomization: Conditional Expectation|Derandomization: Conditional Expectation]] &lt;br /&gt;
# Approximation Algorithms&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Max-SAT|Max-SAT]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Linear Programming|Linear Programming]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Rounding|Randomized Rounding]]&lt;br /&gt;
# Markov Chain and Random Walk&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Markov Chains|Markov Chains]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Walks on Undirected Graphs|Random Walks on Undirected Graphs]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Electrical Network|Electrical Network]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Cover Time|Cover Time]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Connectivity|Graph Connectivity]]&lt;br /&gt;
# Random Walk Algorithms&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized 2SAT|Randomized 2SAT]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized 3SAT|Randomized 3SAT]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Perfect Matching in Regular Bipartite Graph|Perfect Matching in Regular Bipartite Graph]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Metropolis Algorithm|The Metropolis Algorithm]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Dynamics on Spins|Dynamics on Spins]]&lt;br /&gt;
# Coupling and Mixing Time&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Mixing Time|Mixing Time]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Coupling|Coupling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Card Shuffling|Card Shuffling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Path Coupling|Path Coupling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Coloring|Graph Coloring]]&lt;br /&gt;
# Expander Graphs I&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Expander Graphs|Expander Graphs]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Spectrum|Graph Spectrum]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Walk on Expander Graph|Random Walk on Expander Graph]]&lt;br /&gt;
# Expander Graphs II&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Expander Mixing Lemma|Expander Mixing Lemma]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chernoff Bound for Expander Walks|Chernoff Bound for Expander Walks]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Zig-Zag Product|The Zig-Zag Product]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/USTCON in LOGSPACE|USTCON in LOGSPACE]]&lt;br /&gt;
# Sampling and Counting&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The #P Class and Approximation|The #P Class and Approximation]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/DNF Counting|DNF Counting]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Sampling and Counting|Sampling and Counting]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Canonical Paths|Canonical Paths]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Count Matchings|Count Matchings]]&lt;br /&gt;
# MCMC&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Spin Systems|Spin Systems]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Simulated Annealing|Simulated Annealing]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Volume Estimation|Volume Estimation]]&lt;br /&gt;
# Complexity&lt;/div&gt;</summary>
		<author><name>114.212.208.2</name></author>
	</entry>
	<entry>
		<id>https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4454</id>
		<title>随机算法 (Fall 2011)</title>
		<link rel="alternate" type="text/html" href="https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4454"/>
		<updated>2011-07-19T15:44:05Z</updated>

		<summary type="html">&lt;p&gt;114.212.208.2: /* Lecture Notes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Lecture Notes =&lt;br /&gt;
# [[随机算法 (Fall 2011)/Introduction|Introduction]]&lt;br /&gt;
#*[[随机算法 (Fall 2011)/Complexity Classes|Complexity Classes]]&lt;br /&gt;
# Probability Basics&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Probability Space|Probability Space]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Verifying Matrix Multiplication|Verifying Matrix Multiplication]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Conditional Probability|Conditional Probability]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Min-Cut|Randomized Min-Cut]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Variables and Expectations|Random Variables and Expectations]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Quicksort|Randomized Quicksort]]&lt;br /&gt;
# Balls and Bins&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Distributions of Coin Flipping|Distributions of Coin Flipping]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Birthday Problem|Birthday Problem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Coupon Collector|Coupon Collector]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Balls-into-balls Occupancy Problem|Balls-into-balls Occupancy Problem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Bloom Filter|Bloom Filter]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Stable Marriage|Stable Marriage]]&lt;br /&gt;
# Moment and Deviation&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Markov&#039;s Inequality|Markov&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chebyshev&#039;s Inequality|Chebyshev&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Median Selection|Median Selection]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Graphs|Random Graphs]]&lt;br /&gt;
# Hashing and Fingerprinting&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Pair-wise Independence|Pair-wise Independence]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Derandomization: Two-Point Sampling|Derandomization: Two-Point Sampling]]&lt;br /&gt;
# Chernoff Bound&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chernoff Bound|Chernoff Bound]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Set Balancing|Set Balancing]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/DNF Counting|DNF Counting]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Routing in a Parallel Network|Routing in a Parallel Network]]&lt;br /&gt;
# Concentration of Measure&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Martingales|Martingales]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Azuma&#039;s Inequality|Azuma&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Method of Bounded Differences|The Method of Bounded Differences]]&lt;br /&gt;
# Dimension Reduction&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Johnson-Lindenstrauss Theorem|Johnson-Lindenstrauss Theorem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Locality Sensitive Hashing|Locality Sensitive Hashing]]&lt;br /&gt;
# The Probabilistic Method&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Probabilistic Method|The Probabilistic Method]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Lovász Local Lemma|Lovász Local Lemma]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Derandomization: Conditional Expectation|Derandomization: Conditional Expectation]] &lt;br /&gt;
# Approximation Algorithms&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Max-SAT|Max-SAT]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Linear Programming|Linear Programming]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Rounding|Randomized Rounding]]&lt;br /&gt;
# Markov Chain and Random Walk&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Markov Chains|Markov Chains]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Walks on Undirected Graphs|Random Walks on Undirected Graphs]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Electrical Network|Electrical Network]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Cover Time|Cover Time]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Connectivity|Graph Connectivity]]&lt;br /&gt;
# Random Walk Algorithms&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized 2SAT|Randomized 2SAT]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized 3SAT|Randomized 3SAT]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Perfect Matching in Regular Bipartite Graph|Perfect Matching in Regular Bipartite Graph]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Metropolis Algorithm|The Metropolis Algorithm]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Dynamics on Spins|Dynamics on Spins]]&lt;br /&gt;
# Coupling and Mixing Time&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Mixing Time|Mixing Time]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Coupling|Coupling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Card Shuffling|Card Shuffling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Path Coupling|Path Coupling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Coloring|Graph Coloring]]&lt;br /&gt;
# Expander Graphs I&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Expander Graphs|Expander Graphs]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Spectrum|Graph Spectrum]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Walk on Expander Graph|Random Walk on Expander Graph]]&lt;br /&gt;
# Expander Graphs II&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Expander Mixing Lemma|Expander Mixing Lemma]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chernoff Bound for Expander Walks|Chernoff Bound for Expander Walks]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Zig-Zag Product|The Zig-Zag Product]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/USTCON in LOGSPACE|USTCON in LOGSPACE]]&lt;br /&gt;
# Sampling and Counting&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The #P Class and Approximation|The #P Class and Approximation]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/DNF Counting|DNF Counting]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Sampling and Counting|Sampling and Counting]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Canonical Paths|Canonical Paths]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Count Matchings|Count Matchings]]&lt;br /&gt;
# MCMC&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Spin Systems|Spin Systems]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Simulated Annealing|Simulated Annealing]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Volume Estimation|Volume Estimation]]&lt;br /&gt;
# Complexity&lt;br /&gt;
&lt;br /&gt;
# [[随机算法 (Fall 2011)/Rapid mixing random walks | Rapid mixing random walks]]&lt;/div&gt;</summary>
		<author><name>114.212.208.2</name></author>
	</entry>
	<entry>
		<id>https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4453</id>
		<title>随机算法 (Fall 2011)</title>
		<link rel="alternate" type="text/html" href="https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4453"/>
		<updated>2011-07-19T15:20:22Z</updated>

		<summary type="html">&lt;p&gt;114.212.208.2: /* Lecture Notes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Lecture Notes =&lt;br /&gt;
# [[随机算法 (Fall 2011)/Introduction|Introduction]]&lt;br /&gt;
#*[[随机算法 (Fall 2011)/Complexity Classes|Complexity Classes]]&lt;br /&gt;
# Probability Basics&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Probability Space|Probability Space]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Verifying Matrix Multiplication|Verifying Matrix Multiplication]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Conditional Probability|Conditional Probability]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Min-Cut|Randomized Min-Cut]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Variables and Expectations|Random Variables and Expectations]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Quicksort|Randomized Quicksort]]&lt;br /&gt;
# Balls and Bins&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Distributions of Coin Flipping|Distributions of Coin Flipping]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Birthday Problem|Birthday Problem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Coupon Collector|Coupon Collector]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Balls-into-balls Occupancy Problem|Balls-into-balls Occupancy Problem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Bloom Filter|Bloom Filter]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Stable Marriage|Stable Marriage]]&lt;br /&gt;
# Moment and Deviation&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Markov&#039;s Inequality|Markov&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chebyshev&#039;s Inequality|Chebyshev&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Median Selection|Median Selection]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Graphs|Random Graphs]]&lt;br /&gt;
# Hashing and Fingerprinting&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Pair-wise Independence|Pair-wise Independence]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Derandomization: Two-Point Sampling|Derandomization: Two-Point Sampling]]&lt;br /&gt;
# Chernoff Bound&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chernoff Bound|Chernoff Bound]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Set Balancing|Set Balancing]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/DNF Counting|DNF Counting]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Routing in a Parallel Network|Routing in a Parallel Network]]&lt;br /&gt;
# Concentration of Measure&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Martingales|Martingales]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Azuma&#039;s Inequality|Azuma&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Method of Bounded Differences|The Method of Bounded Differences]]&lt;br /&gt;
# Dimension Reduction&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Johnson-Lindenstrauss Theorem|Johnson-Lindenstrauss Theorem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Locality Sensitive Hashing|Locality Sensitive Hashing]]&lt;br /&gt;
# The Probabilistic Method&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Probabilistic Method|The Probabilistic Method]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Lovász Local Lemma|Lovász Local Lemma]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Derandomization: Conditional Expectation|Derandomization: Conditional Expectation]] &lt;br /&gt;
# Approximation Algorithms&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Max-SAT|Max-SAT]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Linear Programming|Linear Programming]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Rounding|Randomized Rounding]]&lt;br /&gt;
# Markov Chain and Random Walk&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Markov Chains|Markov Chains]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Walks on Undirected Graphs|Random Walks on Undirected Graphs]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Electrical Network|Electrical Network]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Cover Time|Cover Time]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Connectivity|Graph Connectivity]]&lt;br /&gt;
# Random Walk Algorithms&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized 2SAT|Randomized 2SAT]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized 3SAT|Randomized 3SAT]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Perfect Matching in Regular Bipartite Graph|Perfect Matching in Regular Bipartite Graph]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Metropolis Algorithm|The Metropolis Algorithm]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Spin Systems|Spin Systems]]&lt;br /&gt;
# Coupling and Mixing Time&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Mixing Time|Mixing Time]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Coupling|Coupling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Card Shuffling|Card Shuffling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Path Coupling|Path Coupling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Coloring|Graph Coloring]]&lt;br /&gt;
# Expander Graphs I&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Expander Graphs|Expander Graphs]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Spectrum|Graph Spectrum]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Walk on Expander Graph|Random Walk on Expander Graph]]&lt;br /&gt;
# Expander Graphs II&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Expander Mixing Lemma|Expander Mixing Lemma]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chernoff Bound for Expander Walks|Chernoff Bound for Expander Walks]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Zig-Zag Product|The Zig-Zag Product]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/USTCON in LOGSPACE|USTCON in LOGSPACE]]&lt;br /&gt;
# Sampling and Counting&lt;br /&gt;
# MCMC&lt;br /&gt;
# On-line Algorithms&lt;br /&gt;
# Complexity&lt;br /&gt;
&lt;br /&gt;
# [[随机算法 (Fall 2011)/Rapid mixing random walks | Rapid mixing random walks]]&lt;/div&gt;</summary>
		<author><name>114.212.208.2</name></author>
	</entry>
	<entry>
		<id>https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)/Graph_Connectivity&amp;diff=4618</id>
		<title>随机算法 (Fall 2011)/Graph Connectivity</title>
		<link rel="alternate" type="text/html" href="https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)/Graph_Connectivity&amp;diff=4618"/>
		<updated>2011-07-19T15:06:40Z</updated>

		<summary type="html">&lt;p&gt;114.212.208.2: Created page with &amp;#039;USTCON stands for &amp;#039;&amp;#039;&amp;#039;undirected &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;-&amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; connectivity&amp;#039;&amp;#039;&amp;#039;. It is the problem which asks whether there is a path from vertex &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; to vertex &amp;lt;math&amp;gt;t&amp;lt;/…&amp;#039;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;USTCON stands for &#039;&#039;&#039;undirected &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;-&amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; connectivity&#039;&#039;&#039;. It is the problem which asks whether there is a path from vertex &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; to vertex &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; in a given undirected graph &amp;lt;math&amp;gt;G(V,E)&amp;lt;/math&amp;gt;. This problem is an abstraction of various search problems in graphs, and has theoretical significance in complexity theory.&lt;br /&gt;
&lt;br /&gt;
The problem can be solved deterministically by traversing the graph &amp;lt;math&amp;gt;G(V,E)&amp;lt;/math&amp;gt;, which takes &amp;lt;math&amp;gt;\Omega(n)&amp;lt;/math&amp;gt; extra space to keep track of which vertices have been visited, where &amp;lt;math&amp;gt;n=|V|&amp;lt;/math&amp;gt;. The following theorem is implied by the upper bound on the cover time.&lt;br /&gt;
&lt;br /&gt;
{{Theorem&lt;br /&gt;
|Theorem (Aleliunas-Karp-Lipton-Lovász-Rackoff 1979)|&lt;br /&gt;
: USTCON can be solved by a polynomial time Monte Carlo randomized algorithm with bounded one-sided error, which uses &amp;lt;math&amp;gt;O(\log n)&amp;lt;/math&amp;gt; extra space.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
The algorithm is a random walk starting at &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;. If the walk reaches &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; within &amp;lt;math&amp;gt;4n^3&amp;lt;/math&amp;gt; steps, return &amp;quot;yes&amp;quot;; otherwise return &amp;quot;no&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
It is obvious that if &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; are disconnected, the random walk from &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; can never reach &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, thus the algorithm always returns &amp;quot;no&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
We know that for an undirected &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt;, the cover time is &amp;lt;math&amp;gt;&amp;lt;4nm&amp;lt;2n^3&amp;lt;/math&amp;gt; (since &amp;lt;math&amp;gt;m&amp;lt;n^2/2&amp;lt;/math&amp;gt;). So if &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; are connected, the expected time to reach &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;&amp;lt;2n^3&amp;lt;/math&amp;gt;. By Markov&#039;s inequality, the probability that it takes longer than &amp;lt;math&amp;gt;4n^3&amp;lt;/math&amp;gt; steps to reach &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;&amp;lt;1/2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The random walk uses &amp;lt;math&amp;gt;O(\log n)&amp;lt;/math&amp;gt; bits to store the current position, and another &amp;lt;math&amp;gt;O(\log n)&amp;lt;/math&amp;gt; bits to count the number of steps. So the total space used by the algorithm in addition to the input is &amp;lt;math&amp;gt;O(\log n)&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
This shows that USTCON is in the complexity class [http://qwiki.stanford.edu/wiki/Complexity_Zoo:R#rl RL] (randomized log-space).&lt;br /&gt;
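&lt;br /&gt;
The algorithm above can be sketched in a few lines of Python (a minimal sketch assuming an adjacency-list dictionary interface; the function and argument names are illustrative, not part of the notes):&lt;br /&gt;

```python
import random

def ustcon(adj, s, t):
    # Monte Carlo random walk for undirected s-t connectivity.
    # adj: dict mapping each vertex to the list of its neighbors.
    # One-sided error: "yes" (True) is always correct; a connected pair
    # is declared "no" (False) with probability at most 1/2.
    n = len(adj)
    v = s
    for _ in range(4 * n ** 3):
        if v == t:
            return True
        v = random.choice(adj[v])
    return v == t
```

Repeating the walk &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt; times and answering &amp;quot;yes&amp;quot; if any repetition succeeds drives the one-sided error probability down to &amp;lt;math&amp;gt;2^{-k}&amp;lt;/math&amp;gt;.&lt;br /&gt;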
&lt;br /&gt;
;Story in complexity theory&lt;br /&gt;
If randomness is forbidden, it is known that USTCON can be solved nondeterministically in logarithmic space, thus USTCON is in [http://qwiki.stanford.edu/wiki/Complexity_Zoo:N#nl NL]. In fact USTCON is complete for the symmetric version of nondeterministic log-space. That is, every problem in the class [http://qwiki.stanford.edu/wiki/Complexity_Zoo:S#sl SL] can be reduced to USTCON via log-space reductions. Therefore, USTCON&amp;lt;math&amp;gt;\in&amp;lt;/math&amp;gt;RL implies that SL&amp;lt;math&amp;gt;\subseteq&amp;lt;/math&amp;gt;RL.&lt;br /&gt;
&lt;br /&gt;
In 2004, Reingold showed that USTCON can be solved deterministically in log-space, which proves SL=L. The deterministic algorithm for USTCON is obtained by derandomizing the random walk.&lt;br /&gt;
&lt;br /&gt;
It is conjectured that RL=L, but this is still open.&lt;/div&gt;</summary>
		<author><name>114.212.208.2</name></author>
	</entry>
	<entry>
		<id>https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)/Random_Walks_on_Undirected_Graphs&amp;diff=4606</id>
		<title>随机算法 (Fall 2011)/Random Walks on Undirected Graphs</title>
		<link rel="alternate" type="text/html" href="https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)/Random_Walks_on_Undirected_Graphs&amp;diff=4606"/>
		<updated>2011-07-19T15:06:15Z</updated>

		<summary type="html">&lt;p&gt;114.212.208.2: /* USTCON */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A &#039;&#039;&#039;walk&#039;&#039;&#039; on a graph &amp;lt;math&amp;gt;G = (V,E)&amp;lt;/math&amp;gt; is a sequence of vertices &amp;lt;math&amp;gt;v_1, v_2, \ldots \in V&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;v_{i+1}&amp;lt;/math&amp;gt; is a neighbor of &amp;lt;math&amp;gt;v_i&amp;lt;/math&amp;gt; for every index &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;. When &amp;lt;math&amp;gt;v_{i+1}&amp;lt;/math&amp;gt; is selected uniformly at random from among &amp;lt;math&amp;gt;v_i&amp;lt;/math&amp;gt;’s neighbors, independently for every &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;, this is called a &#039;&#039;&#039;random walk&#039;&#039;&#039; on &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
We consider the special case that &amp;lt;math&amp;gt;G(V,E)&amp;lt;/math&amp;gt; is an undirected graph, and denote that &amp;lt;math&amp;gt;n=|V|&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;m=|E|&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
A Markov chain is defined by this random walk, with the vertex set &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; as the state space, and &lt;br /&gt;
the transition matrix &amp;lt;math&amp;gt;P&amp;lt;/math&amp;gt;, which is defined as follows:&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
P(u,v)=\begin{cases}&lt;br /&gt;
\frac{1}{d(u)}&amp;amp;\mbox{if }(u,v)\in E,\\&lt;br /&gt;
0 &amp;amp; \mbox{otherwise },&lt;br /&gt;
\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;d(u)&amp;lt;/math&amp;gt; denotes the degree of vertex &amp;lt;math&amp;gt;u&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Note that unlike the PageRank example, now the probability depends on &amp;lt;math&amp;gt;d(u)&amp;lt;/math&amp;gt; instead of &amp;lt;math&amp;gt;d_+(u)&amp;lt;/math&amp;gt;. This is because the graph is undirected.&lt;br /&gt;
&lt;br /&gt;
{{Theorem&lt;br /&gt;
|Proposition|&lt;br /&gt;
:Let &amp;lt;math&amp;gt;M_G&amp;lt;/math&amp;gt; be the Markov chain defined as above.&lt;br /&gt;
:*&amp;lt;math&amp;gt;M_G&amp;lt;/math&amp;gt; is irreducible if and only if &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; is connected. &lt;br /&gt;
:*&amp;lt;math&amp;gt;M_G&amp;lt;/math&amp;gt; is aperiodic if and only if &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; is non-bipartite.&lt;br /&gt;
}}&lt;br /&gt;
We leave the proof as an exercise.&lt;br /&gt;
&lt;br /&gt;
We may simply assume that &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; is connected, so we do not need to worry about reducibility.&lt;br /&gt;
&lt;br /&gt;
The periodicity of a random walk on an undirected bipartite graph is usually dealt with by the following &amp;quot;lazy&amp;quot; random walk trick.&lt;br /&gt;
;Lazy random walk&lt;br /&gt;
:Given an undirected graph &amp;lt;math&amp;gt;G(V,E)&amp;lt;/math&amp;gt;, a random walk is defined by the transition matrix&lt;br /&gt;
::&amp;lt;math&amp;gt;&lt;br /&gt;
P&#039;(u,v)=\begin{cases}&lt;br /&gt;
\frac{1}{2} &amp;amp; \mbox{if }u=v,\\&lt;br /&gt;
\frac{1}{2d(u)}&amp;amp;\mbox{if }(u,v)\in E,\\&lt;br /&gt;
0 &amp;amp; \mbox{otherwise }.&lt;br /&gt;
\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
:For this random walk, at each step we first flip a fair coin to decide whether to move or to stay; if the coin tells us to move, we pick an incident edge uniformly at random and move to its other endpoint. It is easy to see that the resulting Markov chain is aperiodic for any &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
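In matrix form the lazy walk is &amp;lt;math&amp;gt;P&#039;=\frac{1}{2}(I+P)&amp;lt;/math&amp;gt;. A quick numerical check of the aperiodicity claim (a small illustrative example, not from the notes) on the 4-cycle, which is bipartite and hence periodic:&lt;br /&gt;

```python
import numpy as np

# Transition matrix of the random walk on the 4-cycle (bipartite).
P = np.array([[0.0, 0.5, 0.0, 0.5],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.5, 0.0, 0.5, 0.0]])

P_lazy = 0.5 * (np.eye(4) + P)   # stay put with probability 1/2

start = np.array([1.0, 0.0, 0.0, 0.0])
after_odd = start @ np.linalg.matrix_power(P, 101)
after_lazy = start @ np.linalg.matrix_power(P_lazy, 101)
print(after_odd)   # all mass on the odd vertices: the periodic walk oscillates
print(after_lazy)  # approximately uniform, i.e. 0.25 on every vertex
```

The lazy chain converges because adding the self-loops removes the eigenvalue &amp;lt;math&amp;gt;-1&amp;lt;/math&amp;gt; responsible for the period-2 oscillation.&lt;br /&gt;
&lt;br /&gt;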
We now consider the non-lazy version of the random walk, and observe that it has the following stationary distribution.&lt;br /&gt;
{{Theorem&lt;br /&gt;
|Theorem|&lt;br /&gt;
:The random walk on &amp;lt;math&amp;gt;G(V,E)&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;|E|=m&amp;lt;/math&amp;gt; has a stationary distribution &amp;lt;math&amp;gt;\pi&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;\forall v\in V&amp;lt;/math&amp;gt;,&lt;br /&gt;
::&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{align}&lt;br /&gt;
\pi_v=\frac{d(v)}{2m}\end{align}&amp;lt;/math&amp;gt; for &amp;lt;math&amp;gt;v\in V&amp;lt;/math&amp;gt;.&lt;br /&gt;
}}&lt;br /&gt;
{{Proof| First, since &amp;lt;math&amp;gt;\sum_{v\in V}d(v)=2m&amp;lt;/math&amp;gt;, it follows that&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
\sum_{v\in V}\pi_v=\sum_{v\in V}\frac{d(v)}{2m}=1,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
thus &amp;lt;math&amp;gt;\pi&amp;lt;/math&amp;gt; is a well-defined distribution.&lt;br /&gt;
And let &amp;lt;math&amp;gt;N(v)&amp;lt;/math&amp;gt; denote the set of neighbors of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;. Then for any &amp;lt;math&amp;gt;v\in V&amp;lt;/math&amp;gt;,&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
(\pi P)_v=\sum_{u\in V}\pi_uP(u,v)=\sum_{u\in N(v)}\frac{d(u)}{2m}\frac{1}{d(u)}=\frac{1}{2m}\sum_{u\in N(v)}1=\frac{d(v)}{2m}=\pi_v.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
Thus &amp;lt;math&amp;gt;\pi P=\pi&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\pi&amp;lt;/math&amp;gt; is stationary.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
For connected and non-bipartite &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt;, the random walk converges to this stationary distribution. Note that the stationary distribution is proportional to the degrees of the vertices, therefore, if &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; is a regular graph, that is, &amp;lt;math&amp;gt;d(v)&amp;lt;/math&amp;gt; is the same for all &amp;lt;math&amp;gt;v\in V&amp;lt;/math&amp;gt;, the stationary distribution is the uniform distribution.&lt;br /&gt;
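&lt;br /&gt;
The stationary distribution can be verified numerically on a small example (an illustrative graph chosen here, not one from the notes):&lt;br /&gt;

```python
import numpy as np

# A triangle {0,1,2} with a pendant vertex 3 attached to 2; m = 4 edges.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
n, m = 4, len(edges)
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0
deg = A.sum(axis=1)
P = A / deg[:, None]      # P(u,v) = 1/d(u) for each neighbor v of u

pi = deg / (2 * m)        # claimed stationary distribution pi_v = d(v)/2m
print(np.allclose(pi @ P, pi))   # True: pi P = pi, so pi is stationary
```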
&lt;br /&gt;
The following parameters of random walks are closely related to the performances of randomized algorithms based on random walks:&lt;br /&gt;
* &#039;&#039;&#039;Hitting time&#039;&#039;&#039;: how long it takes for a random walk to visit a specific vertex.&lt;br /&gt;
* &#039;&#039;&#039;Cover time&#039;&#039;&#039;: how long it takes for a random walk to visit every vertex at least once.&lt;br /&gt;
* &#039;&#039;&#039;Mixing time&#039;&#039;&#039;: how long it takes for a random walk to get close enough to the stationary distribution.&lt;/div&gt;</summary>
		<author><name>114.212.208.2</name></author>
	</entry>
	<entry>
		<id>https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)/Cover_Time&amp;diff=4617</id>
		<title>随机算法 (Fall 2011)/Cover Time</title>
		<link rel="alternate" type="text/html" href="https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)/Cover_Time&amp;diff=4617"/>
		<updated>2011-07-19T15:06:01Z</updated>

		<summary type="html">&lt;p&gt;114.212.208.2: Created page with &amp;#039;For any &amp;lt;math&amp;gt;u,v\in V&amp;lt;/math&amp;gt;, the &amp;#039;&amp;#039;&amp;#039;hitting time&amp;#039;&amp;#039;&amp;#039; &amp;lt;math&amp;gt;\tau_{u,v}&amp;lt;/math&amp;gt; is the expected number of steps before vertex &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; is visited, starting from vertex &amp;lt;math&amp;gt;…&amp;#039;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;For any &amp;lt;math&amp;gt;u,v\in V&amp;lt;/math&amp;gt;, the &#039;&#039;&#039;hitting time&#039;&#039;&#039; &amp;lt;math&amp;gt;\tau_{u,v}&amp;lt;/math&amp;gt; is the expected number of steps before vertex &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; is visited, starting from vertex &amp;lt;math&amp;gt;u&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Recall that any irreducible aperiodic Markov chain with finite state space converges to the unique stationary distribution &lt;br /&gt;
&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{align}&lt;br /&gt;
\pi_v=\frac{1}{\tau_{v,v}}.&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
Combining with what we know about the stationary distribution &amp;lt;math&amp;gt;\pi&amp;lt;/math&amp;gt; of a random walk on an undirected graph &amp;lt;math&amp;gt;G(V,E)&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;|E|=m&amp;lt;/math&amp;gt;, we have that for any vertex &amp;lt;math&amp;gt;v\in V&amp;lt;/math&amp;gt;,&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
\tau_{v,v}=\frac{1}{\pi_v}=\frac{2m}{d(v)}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
This fact can be used to estimate the hitting time between two adjacent vertices.&lt;br /&gt;
{{Theorem&lt;br /&gt;
|Lemma|&lt;br /&gt;
:For a random walk on an undirected graph &amp;lt;math&amp;gt;G(V,E)&amp;lt;/math&amp;gt; where &amp;lt;math&amp;gt;|E|=m&amp;lt;/math&amp;gt;, for any &amp;lt;math&amp;gt;u,v\in V&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;(u,v)\in E&amp;lt;/math&amp;gt;, the mean hitting time &amp;lt;math&amp;gt;\begin{align}\tau_{u,v}&amp;lt;2m\end{align}&amp;lt;/math&amp;gt;.&lt;br /&gt;
}}&lt;br /&gt;
{{Proof| The proof is by double counting.  We know that &lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
\tau_{v,v}=\frac{1}{\pi_v}=\frac{2m}{d(v)}. &lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
Let &amp;lt;math&amp;gt;N(v)&amp;lt;/math&amp;gt; be the set of neighbors of vertex &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;.  We run the random walk from &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; for one step, and by the law of total expectation,&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
\tau_{v,v}=\sum_{w\in N(v)}\frac{1}{d(v)}(1+\tau_{w,v}). &lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
Combining the above two equations, we have &lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
2m=\sum_{w\in N(v)}(1+\tau_{w,v}),&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
which implies that &amp;lt;math&amp;gt;\tau_{u,v}&amp;lt;2m&amp;lt;/math&amp;gt;, since every term in the sum is positive and the term for &amp;lt;math&amp;gt;w=u&amp;lt;/math&amp;gt; contributes &amp;lt;math&amp;gt;1+\tau_{u,v}&amp;lt;/math&amp;gt;.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Note that the lemma holds only for adjacent &amp;lt;math&amp;gt;u&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;. With this lemma, we can prove an upper bound on the cover time.&lt;br /&gt;
*Let &amp;lt;math&amp;gt;C_u&amp;lt;/math&amp;gt; be the expected number of steps taken by a random walk which starts at &amp;lt;math&amp;gt;u&amp;lt;/math&amp;gt; to visit every vertex in &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; at least once. The &#039;&#039;&#039;cover time&#039;&#039;&#039; of &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt;, denoted &amp;lt;math&amp;gt;C(G)&amp;lt;/math&amp;gt;, is defined as &amp;lt;math&amp;gt;C(G)=\max_{u\in V}C_u&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
{{Theorem&lt;br /&gt;
|Theorem (cover time)|&lt;br /&gt;
:For any connected undirected graph &amp;lt;math&amp;gt;G(V,E)&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;|V|=n&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;|E|=m&amp;lt;/math&amp;gt;, the cover time &amp;lt;math&amp;gt;C(G)\le 4nm&amp;lt;/math&amp;gt;.&lt;br /&gt;
}}&lt;br /&gt;
{{Proof| Let &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt; be an arbitrary spanning tree of &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt;. There exists an Eulerian tour of &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt; in which each edge is traversed once in each direction. Let &amp;lt;math&amp;gt;v_1\rightarrow v_2\rightarrow \cdots \rightarrow v_{2(n-1)}\rightarrow v_{2n-1}=v_1&amp;lt;/math&amp;gt; be such a tour. Clearly the expected time to go through the vertices in the tour is an upper bound on the cover time. Hence,&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
C(G)\le\sum_{i=1}^{2(n-1)}\tau_{v_{i},v_{i+1}}&amp;lt;2(n-1)\cdot 2m&amp;lt;4nm.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
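The bound can be compared against simulation (a small illustrative experiment; the graph and trial count are arbitrary choices, not from the notes):&lt;br /&gt;

```python
import random

def cover_time_trial(adj, start, rng):
    # Steps taken by one random walk until every vertex has been visited.
    unvisited = set(adj) - {start}
    v, steps = start, 0
    while unvisited:
        v = rng.choice(adj[v])
        steps += 1
        unvisited.discard(v)
    return steps

rng = random.Random(0)
# 6-cycle: n = m = 6, so the theorem bounds the cover time by 4nm = 144.
cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
avg = sum(cover_time_trial(cycle, 0, rng) for _ in range(2000)) / 2000
print(avg)   # around n(n-1)/2 = 15, far below the 4nm = 144 bound
```

The exact cover time of the &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-cycle is &amp;lt;math&amp;gt;n(n-1)/2&amp;lt;/math&amp;gt;, so the &amp;lt;math&amp;gt;4nm&amp;lt;/math&amp;gt; bound is loose here by roughly a factor of ten.&lt;br /&gt;
&lt;br /&gt;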
A tighter bound (with a smaller constant factor) can be proved with a more careful analysis. Please read the textbook [MR].&lt;/div&gt;</summary>
		<author><name>114.212.208.2</name></author>
	</entry>
	<entry>
		<id>https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)/Random_Walks_on_Undirected_Graphs&amp;diff=4605</id>
		<title>随机算法 (Fall 2011)/Random Walks on Undirected Graphs</title>
		<link rel="alternate" type="text/html" href="https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)/Random_Walks_on_Undirected_Graphs&amp;diff=4605"/>
		<updated>2011-07-19T15:05:32Z</updated>

		<summary type="html">&lt;p&gt;114.212.208.2: /* Hitting and covering */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A &#039;&#039;&#039;walk&#039;&#039;&#039; on a graph &amp;lt;math&amp;gt;G = (V,E)&amp;lt;/math&amp;gt; is a sequence of vertices &amp;lt;math&amp;gt;v_1, v_2, \ldots \in V&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;v_{i+1}&amp;lt;/math&amp;gt; is a neighbor of &amp;lt;math&amp;gt;v_i&amp;lt;/math&amp;gt; for every index &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;. When &amp;lt;math&amp;gt;v_{i+1}&amp;lt;/math&amp;gt; is selected uniformly at random from among &amp;lt;math&amp;gt;v_i&amp;lt;/math&amp;gt;’s neighbors, independently for every &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;, this is called a &#039;&#039;&#039;random walk&#039;&#039;&#039; on &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
We consider the special case that &amp;lt;math&amp;gt;G(V,E)&amp;lt;/math&amp;gt; is an undirected graph, and write &amp;lt;math&amp;gt;n=|V|&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;m=|E|&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
A Markov chain is defined by this random walk, with the vertex set &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; as the state space, and &lt;br /&gt;
the transition matrix &amp;lt;math&amp;gt;P&amp;lt;/math&amp;gt;, which is defined as follows:&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
P(u,v)=\begin{cases}&lt;br /&gt;
\frac{1}{d(u)}&amp;amp;\mbox{if }(u,v)\in E,\\&lt;br /&gt;
0 &amp;amp; \mbox{otherwise },&lt;br /&gt;
\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;d(u)&amp;lt;/math&amp;gt; denotes the degree of vertex &amp;lt;math&amp;gt;u&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Note that, unlike in the PageRank example, the transition probability now depends on &amp;lt;math&amp;gt;d(u)&amp;lt;/math&amp;gt; instead of the out-degree &amp;lt;math&amp;gt;d_+(u)&amp;lt;/math&amp;gt;. This is because the graph is undirected.&lt;br /&gt;
&lt;br /&gt;
{{Theorem&lt;br /&gt;
|Proposition|&lt;br /&gt;
:Let &amp;lt;math&amp;gt;M_G&amp;lt;/math&amp;gt; be the Markov chain defined as above.&lt;br /&gt;
:*&amp;lt;math&amp;gt;M_G&amp;lt;/math&amp;gt; is irreducible if and only if &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; is connected. &lt;br /&gt;
:*&amp;lt;math&amp;gt;M_G&amp;lt;/math&amp;gt; is aperiodic if and only if &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; is non-bipartite.&lt;br /&gt;
}}&lt;br /&gt;
We leave the proof as an exercise.&lt;br /&gt;
&lt;br /&gt;
We simply assume that &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; is connected, so we no longer need to worry about irreducibility.&lt;br /&gt;
&lt;br /&gt;
The periodicity of a random walk on an undirected bipartite graph is usually dealt with by the following &amp;quot;lazy&amp;quot; random walk trick.&lt;br /&gt;
;Lazy random walk&lt;br /&gt;
:Given an undirected graph &amp;lt;math&amp;gt;G(V,E)&amp;lt;/math&amp;gt;, a random walk is defined by the transition matrix&lt;br /&gt;
::&amp;lt;math&amp;gt;&lt;br /&gt;
P&#039;(u,v)=\begin{cases}&lt;br /&gt;
\frac{1}{2} &amp;amp; \mbox{if }u=v,\\&lt;br /&gt;
\frac{1}{2d(u)}&amp;amp;\mbox{if }(u,v)\in E,\\&lt;br /&gt;
0 &amp;amp; \mbox{otherwise }.&lt;br /&gt;
\end{cases}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
:For this random walk, at each step we first flip a fair coin to decide whether to move or to stay; if the coin says to move, we pick a neighbor of the current vertex uniformly at random and move there. It is easy to see that the resulting Markov chain is aperiodic for any &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt;.&lt;br /&gt;
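One step of the lazy walk can be implemented directly from its definition. The following is a sketch, not from the notes; the 4-cycle used below is a hypothetical bipartite example.

```python
import random

# Sketch of one lazy-walk step: flip a fair coin to stay put or move;
# on "move", go to a uniformly random neighbor. Since the chain stays
# in place with probability 1/2 at every vertex, it is aperiodic even
# on bipartite graphs.
def lazy_step(adj, v, rng):
    if rng.random() > 0.5:
        return v                   # stay with probability 1/2
    return rng.choice(adj[v])      # otherwise move to a uniform neighbor

# Hypothetical example: the 4-cycle, which is bipartite. The non-lazy
# walk alternates between the two sides with period 2; the lazy walk
# does not.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
rng = random.Random(2)
v, visits = 0, {0}
for _ in range(200):
    v = lazy_step(adj, v, rng)
    visits.add(v)
```

After a couple hundred steps the lazy walk has visited every vertex of the cycle.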
&lt;br /&gt;
We now return to the non-lazy random walk, which has the following stationary distribution.&lt;br /&gt;
{{Theorem&lt;br /&gt;
|Theorem|&lt;br /&gt;
:The random walk on &amp;lt;math&amp;gt;G(V,E)&amp;lt;/math&amp;gt; with &amp;lt;math&amp;gt;|E|=m&amp;lt;/math&amp;gt; has a stationary distribution &amp;lt;math&amp;gt;\pi&amp;lt;/math&amp;gt;, where &amp;lt;math&amp;gt;\forall v\in V&amp;lt;/math&amp;gt;,&lt;br /&gt;
::&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{align}&lt;br /&gt;
\pi_v=\frac{d(v)}{2m}\end{align}&amp;lt;/math&amp;gt;&lt;br /&gt;
}}&lt;br /&gt;
{{Proof| First, since &amp;lt;math&amp;gt;\sum_{v\in V}d(v)=2m&amp;lt;/math&amp;gt;, it follows that&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
\sum_{v\in V}\pi_v=\sum_{v\in V}\frac{d(v)}{2m}=1,&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
thus &amp;lt;math&amp;gt;\pi&amp;lt;/math&amp;gt; is a well-defined distribution.&lt;br /&gt;
Next, let &amp;lt;math&amp;gt;N(v)&amp;lt;/math&amp;gt; denote the set of neighbors of &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;. Then for any &amp;lt;math&amp;gt;v\in V&amp;lt;/math&amp;gt;,&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
(\pi P)_v=\sum_{u\in V}\pi_uP(u,v)=\sum_{u\in N(v)}\frac{d(u)}{2m}\frac{1}{d(u)}=\frac{1}{2m}\sum_{u\in N(v)}1=\frac{d(v)}{2m}=\pi_v.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
Thus &amp;lt;math&amp;gt;\pi P=\pi&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;\pi&amp;lt;/math&amp;gt; is stationary.&lt;br /&gt;
}}&lt;br /&gt;
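The computation in the proof can be checked numerically. The following sketch is not from the notes, and the small graph in it is a hypothetical example.

```python
import math

# Sketch: on a small hypothetical graph, verify that pi_v = d(v)/(2m)
# sums to 1 and is fixed by the walk's transition matrix, i.e. the
# value of (pi P)_v equals pi_v for every vertex v.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
m = sum(len(nbrs) for nbrs in adj.values()) // 2   # each edge counted twice

pi = {v: len(adj[v]) / (2 * m) for v in adj}       # pi_v = d(v)/(2m)

# (pi P)_v = sum over neighbors u of v of pi_u * 1/d(u)
piP = {v: sum(pi[u] / len(adj[u]) for u in adj[v]) for v in adj}

assert math.isclose(sum(pi.values()), 1.0)
assert all(math.isclose(piP[v], pi[v]) for v in adj)
```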
&lt;br /&gt;
For connected and non-bipartite &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt;, the random walk converges to this stationary distribution. Note that the stationary distribution is proportional to the degrees of the vertices; therefore, if &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; is a regular graph, that is, &amp;lt;math&amp;gt;d(v)&amp;lt;/math&amp;gt; is the same for all &amp;lt;math&amp;gt;v\in V&amp;lt;/math&amp;gt;, the stationary distribution is the uniform distribution.&lt;br /&gt;
&lt;br /&gt;
The following parameters of random walks are closely related to the performance of randomized algorithms based on random walks:&lt;br /&gt;
* &#039;&#039;&#039;Hitting time&#039;&#039;&#039;: how long it takes for a random walk to visit a specific vertex.&lt;br /&gt;
* &#039;&#039;&#039;Cover time&#039;&#039;&#039;: how long it takes for a random walk to visit all vertices.&lt;br /&gt;
* &#039;&#039;&#039;Mixing time&#039;&#039;&#039;: how long it takes for a random walk to get close enough to the stationary distribution.&lt;br /&gt;
&lt;br /&gt;
= USTCON =&lt;br /&gt;
USTCON stands for &#039;&#039;&#039;undirected &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;-&amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; connectivity&#039;&#039;&#039;. It asks whether there is a path from vertex &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; to vertex &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; in a given undirected graph &amp;lt;math&amp;gt;G(V,E)&amp;lt;/math&amp;gt;. This problem is an abstraction of various graph search problems, and is of theoretical significance in complexity theory.&lt;br /&gt;
&lt;br /&gt;
The problem can be solved deterministically by traversing the graph &amp;lt;math&amp;gt;G(V,E)&amp;lt;/math&amp;gt;, which takes &amp;lt;math&amp;gt;\Omega(n)&amp;lt;/math&amp;gt; extra space to keep track of which vertices have been visited, where &amp;lt;math&amp;gt;n=|V|&amp;lt;/math&amp;gt;. The following theorem is implied by the upper bound on the cover time.&lt;br /&gt;
&lt;br /&gt;
{{Theorem&lt;br /&gt;
|Theorem (Aleliunas-Karp-Lipton-Lovász-Rackoff 1979)|&lt;br /&gt;
: USTCON can be solved by a polynomial time Monte Carlo randomized algorithm with bounded one-sided error, which uses &amp;lt;math&amp;gt;O(\log n)&amp;lt;/math&amp;gt; extra space.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
The algorithm is a random walk starting at &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;. If the walk reaches &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; within &amp;lt;math&amp;gt;4n^3&amp;lt;/math&amp;gt; steps, then return &amp;quot;yes&amp;quot;; otherwise return &amp;quot;no&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
It is obvious that if &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; are disconnected, the random walk from &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; can never reach &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt;, thus the algorithm always returns &amp;quot;no&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
We know that for an undirected &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt;, the cover time is &amp;lt;math&amp;gt;&amp;lt;4nm&amp;lt;2n^3&amp;lt;/math&amp;gt;, since &amp;lt;math&amp;gt;m&amp;lt;n^2/2&amp;lt;/math&amp;gt;. So if &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; are connected, the expected time to reach &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;&amp;lt;2n^3&amp;lt;/math&amp;gt;. By Markov&#039;s inequality, the probability that it takes more than &amp;lt;math&amp;gt;4n^3&amp;lt;/math&amp;gt; steps to reach &amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; from &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;&amp;lt;1/2&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The random walk uses &amp;lt;math&amp;gt;O(\log n)&amp;lt;/math&amp;gt; bits to store the current position, and another &amp;lt;math&amp;gt;O(\log n)&amp;lt;/math&amp;gt; bits to count the number of steps. So the total space used by the algorithm, in addition to the input, is &amp;lt;math&amp;gt;O(\log n)&amp;lt;/math&amp;gt;.&lt;br /&gt;
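The whole algorithm fits in a few lines. Below is a sketch, not from the notes; the two 4-vertex instances are hypothetical examples.

```python
import random

# Sketch of the Monte Carlo algorithm described above: walk from s
# for 4 n^3 steps and answer "yes" iff t is reached. The error is
# one-sided: on disconnected instances the walk can never reach t,
# and on connected instances it misses with probability below 1/2.
def ustcon(adj, s, t, rng):
    n = len(adj)
    v = s
    for _ in range(4 * n ** 3):
        if v == t:
            return True
        v = rng.choice(adj[v])
    return v == t

rng = random.Random(1)
# Hypothetical instances: a path 0-1-2-3, and two disjoint edges.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
split = {0: [1], 1: [0], 2: [3], 3: [2]}
connected_answer = ustcon(path, 0, 3, rng)       # almost surely True
disconnected_answer = ustcon(split, 0, 2, rng)   # always False
```

A real log-space implementation would of course store only the current vertex and a step counter, exactly as the space analysis above describes.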
&lt;br /&gt;
This shows that USTCON is in the complexity class [http://qwiki.stanford.edu/wiki/Complexity_Zoo:R#rl RL] (randomized log-space).&lt;br /&gt;
&lt;br /&gt;
;Story in complexity theory&lt;br /&gt;
If randomness is forbidden, it is known that USTCON can be solved nondeterministically in logarithmic space, so USTCON is in [http://qwiki.stanford.edu/wiki/Complexity_Zoo:N#nl NL]. In fact, USTCON is complete for the symmetric version of nondeterministic log-space; that is, every problem in the class [http://qwiki.stanford.edu/wiki/Complexity_Zoo:S#sl SL] can be reduced to USTCON via log-space reductions. Therefore, USTCON&amp;lt;math&amp;gt;\in&amp;lt;/math&amp;gt;RL implies that SL&amp;lt;math&amp;gt;\subseteq&amp;lt;/math&amp;gt;RL.&lt;br /&gt;
&lt;br /&gt;
In 2004, Reingold showed that USTCON can be solved deterministically in log-space, which proves SL=L. His deterministic algorithm is obtained by derandomizing the random walk.&lt;br /&gt;
&lt;br /&gt;
It is conjectured that RL=L, but this is still open.&lt;/div&gt;</summary>
		<author><name>114.212.208.2</name></author>
	</entry>
	<entry>
		<id>https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4452</id>
		<title>随机算法 (Fall 2011)</title>
		<link rel="alternate" type="text/html" href="https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4452"/>
		<updated>2011-07-19T15:02:56Z</updated>

		<summary type="html">&lt;p&gt;114.212.208.2: /* Lecture Notes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Lecture Notes =&lt;br /&gt;
# [[随机算法 (Fall 2011)/Introduction|Introduction]]&lt;br /&gt;
#*[[随机算法 (Fall 2011)/Complexity Classes|Complexity Classes]]&lt;br /&gt;
# Probability Basics&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Probability Space|Probability Space]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Verifying Matrix Multiplication|Verifying Matrix Multiplication]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Conditional Probability|Conditional Probability]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Min-Cut|Randomized Min-Cut]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Variables and Expectations|Random Variables and Expectations]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Quicksort|Randomized Quicksort]]&lt;br /&gt;
# Balls and Bins&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Distributions of Coin Flipping|Distributions of Coin Flipping]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Birthday Problem|Birthday Problem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Coupon Collector|Coupon Collector]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Balls-into-balls Occupancy Problem|Balls-into-balls Occupancy Problem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Bloom Filter|Bloom Filter]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Stable Marriage|Stable Marriage]]&lt;br /&gt;
# Moment and Deviation&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Markov&#039;s Inequality|Markov&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chebyshev&#039;s Inequality|Chebyshev&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Median Selection|Median Selection]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Graphs|Random Graphs]]&lt;br /&gt;
# Hashing and Fingerprinting&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Pair-wise Independence|Pair-wise Independence]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Derandomization: Two-Point Sampling|Derandomization: Two-Point Sampling]]&lt;br /&gt;
# Chernoff Bound&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chernoff Bound|Chernoff Bound]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Set Balancing|Set Balancing]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/DNF Counting|DNF Counting]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Routing in a Parallel Network|Routing in a Parallel Network]]&lt;br /&gt;
# Concentration of Measure&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Martingales|Martingales]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Azuma&#039;s Inequality|Azuma&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Method of Bounded Differences|The Method of Bounded Differences]]&lt;br /&gt;
# Dimension Reduction&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Johnson-Lindenstrauss Theorem|Johnson-Lindenstrauss Theorem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Locality Sensitive Hashing|Locality Sensitive Hashing]]&lt;br /&gt;
# The Probabilistic Method&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Probabilistic Method|The Probabilistic Method]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Lovász Local Lemma|Lovász Local Lemma]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Derandomization: Conditional Expectation|Derandomization: Conditional Expectation]] &lt;br /&gt;
# Approximation Algorithms&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Max-SAT|Max-SAT]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Linear Programming|Linear Programming]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Rounding|Randomized Rounding]]&lt;br /&gt;
# Markov Chain and Random Walk&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Markov Chains|Markov Chains]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Walks on Undirected Graphs|Random Walks on Undirected Graphs]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Electrical Network|Electrical Network]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Cover Time|Cover Time]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Connectivity|Graph Connectivity]]&lt;br /&gt;
# Random Walk Algorithms&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized 2SAT|Randomized 2SAT]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized 3SAT|Randomized 3SAT]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Perfect Matching in Regular Bipartite Graph|Perfect Matching in Regular Bipartite Graph]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Metropolis Algorithm|The Metropolis Algorithm]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Spin Systems|Spin Systems]]&lt;br /&gt;
# Coupling and Mixing Time&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Mixing Time|Mixing Time]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Coupling|Coupling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Card Shuffling|Card Shuffling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Path Coupling|Path Coupling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Coloring|Graph Coloring]]&lt;br /&gt;
# Expander Graphs&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Expander Graphs|Expander Graphs]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Spectrum|Graph Spectrum]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Expander Mixing Lemma|Expander Mixing Lemma]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Walk on Expander Graph|Random Walk on Expander Graph]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chernoff Bound for Expander Walks|Chernoff Bound for Expander Walks]]&lt;br /&gt;
# Sampling and Counting&lt;br /&gt;
# MCMC&lt;br /&gt;
# On-line Algorithms&lt;br /&gt;
# Complexity&lt;br /&gt;
&lt;br /&gt;
# [[随机算法 (Fall 2011)/Rapid mixing random walks | Rapid mixing random walks]]&lt;/div&gt;</summary>
		<author><name>114.212.208.2</name></author>
	</entry>
	<entry>
		<id>https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4451</id>
		<title>随机算法 (Fall 2011)</title>
		<link rel="alternate" type="text/html" href="https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4451"/>
		<updated>2011-07-19T14:33:53Z</updated>

		<summary type="html">&lt;p&gt;114.212.208.2: /* Lecture Notes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Lecture Notes =&lt;br /&gt;
# [[随机算法 (Fall 2011)/Introduction|Introduction]]&lt;br /&gt;
#*[[随机算法 (Fall 2011)/Complexity Classes|Complexity Classes]]&lt;br /&gt;
# Probability Basics&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Probability Space|Probability Space]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Verifying Matrix Multiplication|Verifying Matrix Multiplication]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Conditional Probability|Conditional Probability]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Min-Cut|Randomized Min-Cut]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Variables and Expectations|Random Variables and Expectations]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Quicksort|Randomized Quicksort]]&lt;br /&gt;
# Balls and Bins&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Distributions of Coin Flipping|Distributions of Coin Flipping]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Birthday Problem|Birthday Problem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Coupon Collector|Coupon Collector]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Balls-into-balls Occupancy Problem|Balls-into-balls Occupancy Problem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Bloom Filter|Bloom Filter]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Stable Marriage|Stable Marriage]]&lt;br /&gt;
# Moment and Deviation&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Markov&#039;s Inequality|Markov&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chebyshev&#039;s Inequality|Chebyshev&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Median Selection|Median Selection]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Graphs|Random Graphs]]&lt;br /&gt;
# Hashing and Fingerprinting&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Pair-wise Independence|Pair-wise Independence]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Derandomization: Two-Point Sampling|Derandomization: Two-Point Sampling]]&lt;br /&gt;
# Chernoff Bound&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chernoff Bound|Chernoff Bound]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Set Balancing|Set Balancing]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/DNF Counting|DNF Counting]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Routing in a Parallel Network|Routing in a Parallel Network]]&lt;br /&gt;
# Concentration of Measure&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Martingales|Martingales]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Azuma&#039;s Inequality|Azuma&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Method of Bounded Differences|The Method of Bounded Differences]]&lt;br /&gt;
# Dimension Reduction&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Johnson-Lindenstrauss Theorem|Johnson-Lindenstrauss Theorem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Locality Sensitive Hashing|Locality Sensitive Hashing]]&lt;br /&gt;
# The Probabilistic Method&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Probabilistic Method|The Probabilistic Method]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Lovász Local Lemma|Lovász Local Lemma]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Derandomization: Conditional Expectation|Derandomization: Conditional Expectation]] &lt;br /&gt;
# Approximation Algorithms&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Max-SAT|Max-SAT]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Linear Programming|Linear Programming]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Rounding|Randomized Rounding]]&lt;br /&gt;
# Markov Chain and Random Walk&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Markov Chains|Markov Chains]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Walks on Undirected Graphs|Random Walks on Undirected Graphs]]&lt;br /&gt;
# Random Walk Algorithms&lt;br /&gt;
# Coupling and Mixing Time&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Mixing Time|Mixing Time]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Coupling|Coupling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Card Shuffling|Card Shuffling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Path Coupling|Path Coupling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Coloring|Graph Coloring]]&lt;br /&gt;
# Expander Graphs&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Expander Graphs|Expander Graphs]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Spectrum|Graph Spectrum]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Expander Mixing Lemma|Expander Mixing Lemma]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Walk on Expander Graph|Random Walk on Expander Graph]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chernoff Bound for Expander Walks|Chernoff Bound for Expander Walks]]&lt;br /&gt;
# Sampling and Counting&lt;br /&gt;
# MCMC&lt;br /&gt;
# On-line Algorithms&lt;br /&gt;
# Complexity&lt;br /&gt;
&lt;br /&gt;
# [[随机算法 (Fall 2011)/Rapid mixing random walks | Rapid mixing random walks]]&lt;/div&gt;</summary>
		<author><name>114.212.208.2</name></author>
	</entry>
	<entry>
		<id>https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4450</id>
		<title>随机算法 (Fall 2011)</title>
		<link rel="alternate" type="text/html" href="https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4450"/>
		<updated>2011-07-19T08:06:31Z</updated>

		<summary type="html">&lt;p&gt;114.212.208.2: /* Lecture Notes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Lecture Notes =&lt;br /&gt;
# [[随机算法 (Fall 2011)/Introduction|Introduction]]&lt;br /&gt;
#*[[随机算法 (Fall 2011)/Complexity Classes|Complexity Classes]]&lt;br /&gt;
# Probability Basics&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Probability Space|Probability Space]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Verifying Matrix Multiplication|Verifying Matrix Multiplication]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Conditional Probability|Conditional Probability]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Min-Cut|Randomized Min-Cut]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Variables and Expectations|Random Variables and Expectations]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Quicksort|Randomized Quicksort]]&lt;br /&gt;
# Balls and Bins&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Distributions of Coin Flipping|Distributions of Coin Flipping]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Birthday Problem|Birthday Problem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Coupon Collector|Coupon Collector]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Balls-into-balls Occupancy Problem|Balls-into-balls Occupancy Problem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Bloom Filter|Bloom Filter]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Stable Marriage|Stable Marriage]]&lt;br /&gt;
# Moment and Deviation&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Markov&#039;s Inequality|Markov&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chebyshev&#039;s Inequality|Chebyshev&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Median Selection|Median Selection]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Graphs|Random Graphs]]&lt;br /&gt;
# Hashing and Fingerprinting&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Pair-wise Independence|Pair-wise Independence]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Derandomization: Two-Point Sampling|Derandomization: Two-Point Sampling]]&lt;br /&gt;
# Chernoff Bound&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chernoff Bound|Chernoff Bound]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Set Balancing|Set Balancing]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/DNF Counting|DNF Counting]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Routing in a Parallel Network|Routing in a Parallel Network]]&lt;br /&gt;
# Concentration of Measure&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Martingales|Martingales]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Azuma&#039;s Inequality|Azuma&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Method of Bounded Differences|The Method of Bounded Differences]]&lt;br /&gt;
# Dimension Reduction&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Johnson-Lindenstrauss Theorem|Johnson-Lindenstrauss Theorem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Locality Sensitive Hashing|Locality Sensitive Hashing]]&lt;br /&gt;
# The Probabilistic Method&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Probabilistic Method|The Probabilistic Method]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Lovász Local Lemma|Lovász Local Lemma]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Derandomization: Conditional Expectation|Derandomization: Conditional Expectation]] &lt;br /&gt;
# Approximation Algorithms&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Rounding|Randomized Rounding]]&lt;br /&gt;
# Markov Chain and Random Walk&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Markov Chains|Markov Chains]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Walks on Undirected Graphs|Random Walks on Undirected Graphs]]&lt;br /&gt;
# Random Walk Algorithms&lt;br /&gt;
# Coupling and Mixing Time&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Mixing Time|Mixing Time]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Coupling|Coupling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Card Shuffling|Card Shuffling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Path Coupling|Path Coupling]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Coloring|Graph Coloring]]&lt;br /&gt;
# Expander Graphs&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Expander Graphs|Expander Graphs]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Graph Spectrum|Graph Spectrum]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Expander Mixing Lemma|Expander Mixing Lemma]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Walk on Expander Graph|Random Walk on Expander Graph]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chernoff Bound for Expander Walks|Chernoff Bound for Expander Walks]]&lt;br /&gt;
# Sampling and Counting&lt;br /&gt;
# MCMC&lt;br /&gt;
# On-line Algorithms&lt;br /&gt;
# Complexity&lt;br /&gt;
&lt;br /&gt;
# [[随机算法 (Fall 2011)/Rapid mixing random walks | Rapid mixing random walks]]&lt;/div&gt;</summary>
		<author><name>114.212.208.2</name></author>
	</entry>
	<entry>
		<id>https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4444</id>
		<title>随机算法 (Fall 2011)</title>
		<link rel="alternate" type="text/html" href="https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4444"/>
		<updated>2011-07-19T05:39:57Z</updated>

		<summary type="html">&lt;p&gt;114.212.208.2: /* Lecture Notes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Lecture Notes =&lt;br /&gt;
# [[随机算法 (Fall 2011)/Introduction|Introduction]]&lt;br /&gt;
#*[[随机算法 (Fall 2011)/Complexity Classes|Complexity Classes]]&lt;br /&gt;
# Probability Basics&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Probability Space|Probability Space]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Verifying Matrix Multiplication|Verifying Matrix Multiplication]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Conditional Probability|Conditional Probability]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Min-Cut|Randomized Min-Cut]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Variables and Expectations|Random Variables and Expectations]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Quicksort|Randomized Quicksort]]&lt;br /&gt;
# Balls and Bins&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Distributions of Coin Flipping|Distributions of Coin Flipping]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Birthday Problem|Birthday Problem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Coupon Collector|Coupon Collector]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Balls-into-balls Occupancy Problem|Balls-into-balls Occupancy Problem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Bloom Filter|Bloom Filter]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Stable Marriage|Stable Marriage]]&lt;br /&gt;
# Moment and Deviation&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Markov&#039;s Inequality|Markov&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chebyshev&#039;s Inequality|Chebyshev&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Median Selection|Median Selection]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Graphs|Random Graphs]]&lt;br /&gt;
# Hashing and Fingerprinting&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Pair-wise Independence|Pair-wise Independence]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Derandomization: Two-Point Sampling|Derandomization: Two-Point Sampling]]&lt;br /&gt;
# Chernoff Bound&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chernoff Bound|Chernoff Bound]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Parameter Estimation|Parameter Estimation]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Set Balancing|Set Balancing]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Routing in a Parallel Network|Routing in a Parallel Network]]&lt;br /&gt;
# Concentration of Measure&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Martingales|Martingales]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Azuma&#039;s Inequality|Azuma&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Method of Bounded Differences|The Method of Bounded Differences]]&lt;br /&gt;
# Dimension Reduction&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Johnson-Lindenstrauss Theorem|Johnson-Lindenstrauss Theorem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Locality Sensitive Hashing|Locality Sensitive Hashing]]&lt;br /&gt;
# The Probabilistic Method&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Probabilistic Method|The Probabilistic Method]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Lovász Local Lemma|Lovász Local Lemma]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Derandomization: Conditional Expectation|Derandomization: Conditional Expectation]] &lt;br /&gt;
# Approximation Algorithms&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Rounding|Randomized Rounding]]&lt;br /&gt;
# Markov Chain and Random Walk&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Markov Chains|Markov Chains]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Walks on Undirected Graphs|Random Walks on Undirected Graphs]]&lt;br /&gt;
# Random Walk Algorithms&lt;br /&gt;
# Coupling and Mixing Time&lt;br /&gt;
# Expander Graphs&lt;br /&gt;
# Sampling and Counting&lt;br /&gt;
# MCMC&lt;br /&gt;
# On-line Algorithms&lt;br /&gt;
# Complexity&lt;br /&gt;
&lt;br /&gt;
# [[随机算法 (Fall 2011)/Rapid mixing random walks | Rapid mixing random walks]]&lt;/div&gt;</summary>
		<author><name>114.212.208.2</name></author>
	</entry>
	<entry>
		<id>https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)/Randomized_Min-Cut&amp;diff=4565</id>
		<title>随机算法 (Fall 2011)/Randomized Min-Cut</title>
		<link rel="alternate" type="text/html" href="https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)/Randomized_Min-Cut&amp;diff=4565"/>
		<updated>2011-07-19T02:31:02Z</updated>

		<summary type="html">&lt;p&gt;114.212.208.2: Created page with &amp;#039;Let &amp;lt;math&amp;gt;G(V, E)&amp;lt;/math&amp;gt; be a graph. Suppose that we want to partition the vertex set &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; into two parts &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt; such that the number of &amp;#039;&amp;#039;cr…&amp;#039;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Let &amp;lt;math&amp;gt;G(V, E)&amp;lt;/math&amp;gt; be a graph. Suppose that we want to partition the vertex set &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; into two parts &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;T&amp;lt;/math&amp;gt; such that the number of &#039;&#039;crossing edges&#039;&#039;, edges with one endpoint in each part, is as small as possible. This is known as the min-cut problem.&lt;br /&gt;
&lt;br /&gt;
For a connected graph &amp;lt;math&amp;gt;G(V, E)&amp;lt;/math&amp;gt;, a &#039;&#039;&#039;cut&#039;&#039;&#039; is a set &amp;lt;math&amp;gt;C\subseteq E&amp;lt;/math&amp;gt; of edges whose removal disconnects &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt;. The min-cut problem is to find a cut of minimum cardinality. A canonical deterministic algorithm for this problem is through the [http://en.wikipedia.org/wiki/Max-flow_min-cut_theorem max-flow min-cut theorem]. A global minimum cut is the minimum, over all pairs &amp;lt;math&amp;gt;s,t&amp;lt;/math&amp;gt;, of the &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;-&amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; min-cut, which by the theorem equals the &amp;lt;math&amp;gt;s&amp;lt;/math&amp;gt;-&amp;lt;math&amp;gt;t&amp;lt;/math&amp;gt; max-flow.&lt;br /&gt;
&lt;br /&gt;
Do we have to rely on &amp;quot;advanced&amp;quot; tools like flows? The answer is &amp;quot;no&amp;quot;, with a little help from randomness.&lt;br /&gt;
&lt;br /&gt;
= Karger&#039;s Min-Cut Algorithm =&lt;br /&gt;
We will introduce an extremely simple algorithm discovered by [http://people.csail.mit.edu/karger/ David Karger]. The algorithm works on multigraphs, graphs allowing multiple edges between vertices. &lt;br /&gt;
&lt;br /&gt;
We define an operation on multigraphs called &#039;&#039;contraction&#039;&#039;:&lt;br /&gt;
For a multigraph &amp;lt;math&amp;gt;G(V, E)&amp;lt;/math&amp;gt;, for any edge &amp;lt;math&amp;gt;uv\in E&amp;lt;/math&amp;gt;, let &amp;lt;math&amp;gt;contract(G,uv)&amp;lt;/math&amp;gt; be a new multigraph constructed as follows: &amp;lt;math&amp;gt;u&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt; are replaced by a single new vertex whose neighbors are all the old neighbors of &amp;lt;math&amp;gt;u&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt;. In other words, &amp;lt;math&amp;gt;u&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; are merged into one vertex. The old edges between &amp;lt;math&amp;gt;u&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; are deleted.&lt;br /&gt;
&lt;br /&gt;
Karger&#039;s min-cut algorithm is described as follows:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;MinCut(multigraph &amp;lt;math&amp;gt;G(V, E)&amp;lt;/math&amp;gt;)&#039;&#039;&#039;&lt;br /&gt;
* while &amp;lt;math&amp;gt;|V|&amp;gt;2&amp;lt;/math&amp;gt; do&lt;br /&gt;
** choose an edge &amp;lt;math&amp;gt;uv\in E&amp;lt;/math&amp;gt; uniformly at random;&lt;br /&gt;
** &amp;lt;math&amp;gt;G=contract(G,uv)&amp;lt;/math&amp;gt;; &lt;br /&gt;
*return the edges between the only two vertices in &amp;lt;math&amp;gt;V&amp;lt;/math&amp;gt;;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
A better way to understand Karger&#039;s min-cut algorithm is to describe it as randomly merging sets of vertices. Initially, each vertex &amp;lt;math&amp;gt;v\in V&amp;lt;/math&amp;gt; corresponds to a singleton set &amp;lt;math&amp;gt;\{v\}&amp;lt;/math&amp;gt;.  At each step, (1) a crossing edge (an edge whose endpoints are in different sets) is chosen uniformly at random from all crossing edges; and (2) the two sets connected by the chosen crossing edge are merged into one set. Repeat this process until there are only two sets. The crossing edges between the two sets are returned.&lt;br /&gt;
----&lt;br /&gt;
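As an illustration (our own sketch, not part of the original notes), the set-merging description above can be written in Python; the function name &#039;&#039;karger_min_cut&#039;&#039; and the edge-list representation are our own choices:

```python
import random

def karger_min_cut(edges, n, rng=None):
    """One run of Karger's contraction algorithm on a multigraph.

    edges: list of (u, v) pairs over vertices 0..n-1; repeated
    pairs represent parallel edges.  Returns the list of crossing
    edges between the final two merged vertex sets."""
    if rng is None:
        rng = random.Random(0)
    label = list(range(n))   # label[v]: the merged set containing v
    groups = n
    while groups > 2:
        # (1) choose a crossing edge uniformly at random
        crossing = [(u, v) for (u, v) in edges if label[u] != label[v]]
        u, v = rng.choice(crossing)
        # (2) merge the two sets joined by the chosen edge
        old, new = label[v], label[u]
        for w in range(n):
            if label[w] == old:
                label[w] = new
        groups -= 1
    # the crossing edges between the two remaining sets form the cut
    return [(u, v) for (u, v) in edges if label[u] != label[v]]
```

On a triangle, for instance, any single contraction leaves the other two edges as a parallel pair between the two remaining super-vertices, so every run returns a cut of size 2, the true minimum.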
&lt;br /&gt;
= Analysis =&lt;br /&gt;
For a multigraph &amp;lt;math&amp;gt;G(V, E)&amp;lt;/math&amp;gt;, fix a minimum cut &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; (there might be more than one minimum cut); we analyze the probability that &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; is returned by the MinCut algorithm. &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; is returned by MinCut if and only if no edge in &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; is contracted during the execution of MinCut. We will bound the probability&lt;br /&gt;
&amp;lt;math&amp;gt;\Pr[\mbox{no edge in }C\mbox{ is contracted}]&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
{{Theorem&lt;br /&gt;
|Lemma 1|&lt;br /&gt;
:Let &amp;lt;math&amp;gt;G(V, E)&amp;lt;/math&amp;gt; be a multigraph with &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; vertices, if the size of the minimum cut of &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt;|E|\ge nk/2&amp;lt;/math&amp;gt;.&lt;br /&gt;
}}&lt;br /&gt;
{{Proof| &lt;br /&gt;
:Every vertex has degree at least &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt;: if some vertex &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; were incident to fewer than &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt; edges, then these edges would disconnect &amp;lt;math&amp;gt;v&amp;lt;/math&amp;gt; from the rest of &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt;, forming a cut of size smaller than &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt;. Summing the degrees over all &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; vertices counts every edge twice, so &amp;lt;math&amp;gt;|E|\ge kn/2&amp;lt;/math&amp;gt;.  &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Theorem&lt;br /&gt;
|Lemma 2|&lt;br /&gt;
:Let &amp;lt;math&amp;gt;G(V, E)&amp;lt;/math&amp;gt; be a multigraph with &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; vertices, and &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; a minimum cut of &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt;.  If &amp;lt;math&amp;gt;e\not\in C&amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; is still a minimum cut of &amp;lt;math&amp;gt;contract(G, e)&amp;lt;/math&amp;gt;.&lt;br /&gt;
}}&lt;br /&gt;
{{Proof| &lt;br /&gt;
:We first show that no edge in &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; is lost during the contraction. By the definition of contraction, the only edges removed from &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt; in a contraction &amp;lt;math&amp;gt;contract(G, e)&amp;lt;/math&amp;gt; are the parallel edges sharing both endpoints with &amp;lt;math&amp;gt;e&amp;lt;/math&amp;gt;. Since &amp;lt;math&amp;gt;e\not\in C&amp;lt;/math&amp;gt;, none of these edges can be in &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt;: if a parallel edge &amp;lt;math&amp;gt;e&#039;&amp;lt;/math&amp;gt; of &amp;lt;math&amp;gt;e&amp;lt;/math&amp;gt; were in &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt;, then &amp;lt;math&amp;gt;C\setminus\{e&#039;\}&amp;lt;/math&amp;gt; would still be a cut (the endpoints of &amp;lt;math&amp;gt;e&#039;&amp;lt;/math&amp;gt; remain connected through &amp;lt;math&amp;gt;e&amp;lt;/math&amp;gt;), so &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; could not be a minimum cut of &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt;. Thus every edge in &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; remains in the contracted graph. &lt;br /&gt;
&lt;br /&gt;
:It is then easy to see that &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; is a cut of &amp;lt;math&amp;gt;contract(G, e)&amp;lt;/math&amp;gt;. Every path in the contracted graph can be lifted to a path in the original multigraph by re-inserting the contracted edge, so a connected &amp;lt;math&amp;gt;contract(G, e)-C&amp;lt;/math&amp;gt; would imply a connected &amp;lt;math&amp;gt;G-C&amp;lt;/math&amp;gt;, contradicting that &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; is a cut of &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
:Notice that a cut in a contracted graph must be a cut in the original graph. This can be easily verified by seeing contraction as taking the union of two sets of vertices. Therefore a contraction can never reduce the size of minimum cuts of a multigraph. A minimum cut &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; must still be a minimum cut in the contracted graph as long as it is still a cut.&lt;br /&gt;
&lt;br /&gt;
:Concluding the above arguments, we have that &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; is a minimum cut of &amp;lt;math&amp;gt;contract(G, e)&amp;lt;/math&amp;gt; for any &amp;lt;math&amp;gt;e\not\in C&amp;lt;/math&amp;gt;.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;G(V, E)&amp;lt;/math&amp;gt; be a multigraph, and &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; a minimum cut of &amp;lt;math&amp;gt;G&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Initially &amp;lt;math&amp;gt;|V|=n&amp;lt;/math&amp;gt;.&lt;br /&gt;
After &amp;lt;math&amp;gt;(i-1)&amp;lt;/math&amp;gt; contractions, denote the current multigraph by &amp;lt;math&amp;gt;G_i(V_i, E_i)&amp;lt;/math&amp;gt;. Suppose that no edge in &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; has been chosen for contraction yet. By Lemma 2, &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; must be a minimum cut of &amp;lt;math&amp;gt;G_i&amp;lt;/math&amp;gt;. Then by Lemma 1, the current number of edges satisfies &amp;lt;math&amp;gt;|E_i|\ge |V_i||C|/2&amp;lt;/math&amp;gt;. Choosing an edge &amp;lt;math&amp;gt;e\in E_i&amp;lt;/math&amp;gt; uniformly at random, the probability that the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th contraction contracts an edge in &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; is given by:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\begin{align}\Pr_{e\in E_i}[e\in C] &amp;amp;= \frac{|C|}{|E_i|} &lt;br /&gt;
&amp;amp;\le |C|\cdot\frac{2}{|V_i||C|}&lt;br /&gt;
&amp;amp;= \frac{2}{|V_i|}.\end{align}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, assuming that &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; is intact after &amp;lt;math&amp;gt;(i-1)&amp;lt;/math&amp;gt; contractions, the probability that &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; survives the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th contraction is at least &amp;lt;math&amp;gt;1-2/|V_i|&amp;lt;/math&amp;gt;. Note that &amp;lt;math&amp;gt;|V_i|=n-i+1&amp;lt;/math&amp;gt;, because each contraction decreases the number of vertices by one.&lt;br /&gt;
&lt;br /&gt;
In each iteration, the contracted edge is &#039;&#039;&#039;independently&#039;&#039;&#039; chosen from the current graph. The probability that the minimum cut &amp;lt;math&amp;gt;C&amp;lt;/math&amp;gt; survives all &amp;lt;math&amp;gt;(n-2)&amp;lt;/math&amp;gt; contractions is at least&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\prod_{i=1}^{n-2}\left(1-\frac{2}{|V_i|}\right) &lt;br /&gt;
&amp;amp;=&lt;br /&gt;
\prod_{i=1}^{n-2}\left(1-\frac{2}{n-i+1}\right)\\&lt;br /&gt;
&amp;amp;=&lt;br /&gt;
\prod_{k=3}^{n}\frac{k-2}{k}\\&lt;br /&gt;
&amp;amp;= \frac{2}{n(n-1)}.&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Therefore, we have proved the following theorem:&lt;br /&gt;
&lt;br /&gt;
{{Theorem&lt;br /&gt;
|Theorem|&lt;br /&gt;
: For any multigraph with &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; vertices, the MinCut algorithm returns a minimum cut with probability at least &amp;lt;math&amp;gt;\frac{2}{n(n-1)}&amp;lt;/math&amp;gt;.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Run MinCut independently &amp;lt;math&amp;gt;n(n-1)/2&amp;lt;/math&amp;gt; times and return the smallest cut found. The probability that the minimum cut is found is:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
1-\Pr[\mbox{failed every time}] &amp;amp;= 1-\Pr[\mbox{MinCut fails}]^{n(n-1)/2} \\&lt;br /&gt;
&amp;amp;\ge 1- \left(1-\frac{2}{n(n-1)}\right)^{n(n-1)/2} \\&lt;br /&gt;
&amp;amp;\ge 1-\frac{1}{e}.&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;&lt;br /&gt;
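As a quick numeric sanity check (our own illustration, not part of the original notes), this lower bound can be evaluated directly; the function name is hypothetical:

```python
import math

def repeat_success_bound(n):
    """Lower bound on the probability that at least one of
    n(n-1)/2 independent MinCut runs returns the minimum cut,
    using the single-run success bound 2/(n(n-1))."""
    t = n * (n - 1) // 2
    p = 2.0 / (n * (n - 1))       # single-run success bound
    return 1.0 - (1.0 - p) ** t   # = 1 - (1 - 1/t)^t
```

Since &amp;lt;math&amp;gt;(1-1/t)^t&amp;lt;/math&amp;gt; increases toward &amp;lt;math&amp;gt;1/e&amp;lt;/math&amp;gt;, the bound always exceeds &amp;lt;math&amp;gt;1-1/e\approx 0.632&amp;lt;/math&amp;gt;.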
&lt;br /&gt;
A constant probability!&lt;/div&gt;</summary>
		<author><name>114.212.208.2</name></author>
	</entry>
	<entry>
		<id>https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)/Randomized_Quicksort&amp;diff=4563</id>
		<title>随机算法 (Fall 2011)/Randomized Quicksort</title>
		<link rel="alternate" type="text/html" href="https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)/Randomized_Quicksort&amp;diff=4563"/>
		<updated>2011-07-19T02:29:53Z</updated>

		<summary type="html">&lt;p&gt;114.212.208.2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The following is the pseudocode of the famous [http://en.wikipedia.org/wiki/Quicksort Quicksort] algorithm, whose input is a set &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; of numbers.&lt;br /&gt;
* if &amp;lt;math&amp;gt;|S|&amp;gt;1&amp;lt;/math&amp;gt; do:&lt;br /&gt;
** pick an element &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; from  &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; as the &#039;&#039;pivot&#039;&#039;;&lt;br /&gt;
** partition &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; into &amp;lt;math&amp;gt;S_1&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\{x\}&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;S_2&amp;lt;/math&amp;gt;, where all elements in &amp;lt;math&amp;gt;S_1&amp;lt;/math&amp;gt; are smaller than &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; and all elements in &amp;lt;math&amp;gt;S_2&amp;lt;/math&amp;gt; are  larger than &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt;;&lt;br /&gt;
** recursively sort &amp;lt;math&amp;gt;S_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S_2&amp;lt;/math&amp;gt;;&lt;br /&gt;
&lt;br /&gt;
The time complexity of this sorting algorithm is measured by the &#039;&#039;&#039;number of comparisons&#039;&#039;&#039;.  &lt;br /&gt;
&lt;br /&gt;
For the &#039;&#039;&#039;deterministic&#039;&#039;&#039; quicksort algorithm, the pivot element is usually the element in a fixed position (e.g. the first one) of &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;. This makes the worst-case time complexity &amp;lt;math&amp;gt;\Omega(n^2)&amp;lt;/math&amp;gt;, which means there exists a bad input &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; whose sorting costs us &amp;lt;math&amp;gt;\Omega(n^2)&amp;lt;/math&amp;gt; comparisons, &#039;&#039;every time&#039;&#039;!&lt;br /&gt;
&lt;br /&gt;
It is just so unfair to have an unbeatable input for this brilliant algorithm. So we tweak the algorithm a little bit:&lt;br /&gt;
= Algorithm: RandQSort =&lt;br /&gt;
* if &amp;lt;math&amp;gt;|S|&amp;gt;1&amp;lt;/math&amp;gt; do:&lt;br /&gt;
** &#039;&#039;uniformly&#039;&#039; pick a &#039;&#039;random&#039;&#039; element &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; from  &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; as the pivot;&lt;br /&gt;
** partition &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; into &amp;lt;math&amp;gt;S_1&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\{x\}&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;S_2&amp;lt;/math&amp;gt;, where all elements in &amp;lt;math&amp;gt;S_1&amp;lt;/math&amp;gt; are smaller than &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; and all elements in &amp;lt;math&amp;gt;S_2&amp;lt;/math&amp;gt; are  larger than &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt;;&lt;br /&gt;
** recursively sort &amp;lt;math&amp;gt;S_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S_2&amp;lt;/math&amp;gt;;&lt;br /&gt;
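A minimal Python sketch of RandQSort (our own illustration, not from the original notes; the set &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; of distinct numbers is represented as a list):

```python
import random

def rand_qsort(s, rng=None):
    """Randomized quicksort on a list of distinct numbers,
    following the RandQSort pseudocode above."""
    if rng is None:
        rng = random.Random(0)
    if len(s) <= 1:
        return list(s)
    x = rng.choice(s)                 # uniformly random pivot
    s1 = [y for y in s if y < x]      # elements smaller than the pivot
    s2 = [y for y in s if y > x]      # elements larger than the pivot
    return rand_qsort(s1, rng) + [x] + rand_qsort(s2, rng)
```

Whatever pivots the random choices produce, the output is the sorted sequence; only the number of comparisons varies between runs.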
&lt;br /&gt;
= Analysis =&lt;br /&gt;
Our goal is to analyze the expected number of comparisons during an execution of RandQSort with an arbitrary input &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;. We achieve this by measuring the chance that each pair of elements are compared, and summing all of them up due to [http://en.wikipedia.org/wiki/Expected_value#Linearity Linearity of Expectation].&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;a_i&amp;lt;/math&amp;gt; denote the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th smallest element in &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
Let &amp;lt;math&amp;gt;X_{ij}\in\{0,1\}&amp;lt;/math&amp;gt; be the random variable which indicates whether &amp;lt;math&amp;gt;a_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_j&amp;lt;/math&amp;gt; are compared during the execution of RandQSort. That is:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{align}&lt;br /&gt;
X_{ij} &amp;amp;=&lt;br /&gt;
\begin{cases}&lt;br /&gt;
1 &amp;amp; a_i\mbox{ and }a_j\mbox{ are compared}\\&lt;br /&gt;
0 &amp;amp; \mbox{otherwise}&lt;br /&gt;
\end{cases}.&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Elements &amp;lt;math&amp;gt;a_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_j&amp;lt;/math&amp;gt; are compared only if one of them is chosen as pivot. After comparison they are separated (thus are never compared again). So we have the following observation:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Claim 1:  Every pair &amp;lt;math&amp;gt;a_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_j&amp;lt;/math&amp;gt; is compared at most once.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Therefore the sum of &amp;lt;math&amp;gt;X_{ij}&amp;lt;/math&amp;gt; for all pairs &amp;lt;math&amp;gt;\{i, j\}&amp;lt;/math&amp;gt; gives the total number of comparisons. The expected number of comparisons is &amp;lt;math&amp;gt;\mathbf{E}\left[\sum_{i=1}^n\sum_{j&amp;gt;i}X_{ij}\right]&amp;lt;/math&amp;gt;. Due to [http://en.wikipedia.org/wiki/Expected_value#Linearity Linearity of Expectation], &amp;lt;math&amp;gt;\mathbf{E}\left[\sum_{i=1}^n\sum_{j&amp;gt;i}X_{ij}\right] = \sum_{i=1}^n\sum_{j&amp;gt;i}\mathbf{E}\left[X_{ij}\right]&amp;lt;/math&amp;gt;. &lt;br /&gt;
Our next step is to analyze &amp;lt;math&amp;gt;\mathbf{E}\left[X_{ij}\right]&amp;lt;/math&amp;gt; for each &amp;lt;math&amp;gt;\{i, j\}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
By the definition of expectation and &amp;lt;math&amp;gt;X_{ij}&amp;lt;/math&amp;gt;, &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mathbf{E}\left[X_{ij}\right] &lt;br /&gt;
&amp;amp;= 1\cdot \Pr[a_i\mbox{ and }a_j\mbox{ are compared}] + 0\cdot \Pr[a_i\mbox{ and }a_j\mbox{ are not compared}]\\&lt;br /&gt;
&amp;amp;= \Pr[a_i\mbox{ and }a_j\mbox{ are compared}].&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We are going to bound this probability.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Claim 2: &amp;lt;math&amp;gt;a_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_j&amp;lt;/math&amp;gt; are compared if and only if one of them is chosen as pivot when they are still in the same subset.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This is easy to verify: just check the algorithm. The next one is a bit more involved.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Claim 3: If &amp;lt;math&amp;gt;a_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_j&amp;lt;/math&amp;gt; are still in the same subset then all &amp;lt;math&amp;gt;\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}&amp;lt;/math&amp;gt; are in the same subset.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
We can verify this by induction. Initially, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; itself has the property described above; and partitioning any &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; with the property into &amp;lt;math&amp;gt;S_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S_2&amp;lt;/math&amp;gt; will preserve the property for both &amp;lt;math&amp;gt;S_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S_2&amp;lt;/math&amp;gt;. Therefore Claim 3 holds.&lt;br /&gt;
&lt;br /&gt;
Combining Claim 2 and 3, we have:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Claim 4: &amp;lt;math&amp;gt;a_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_j&amp;lt;/math&amp;gt; are compared only if one of &amp;lt;math&amp;gt;\{a_i, a_j\}&amp;lt;/math&amp;gt; is chosen from &amp;lt;math&amp;gt;\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}&amp;lt;/math&amp;gt;.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
And apparently,&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Claim 5: Every one of &amp;lt;math&amp;gt;\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}&amp;lt;/math&amp;gt; is chosen with equal probability.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This is because our RandQSort chooses the pivot &#039;&#039;uniformly at random&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Claim 4 and Claim 5 together imply:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\Pr[a_i\mbox{ and }a_j\mbox{ are compared}]&lt;br /&gt;
&amp;amp;\le \frac{2}{j-i+1}.&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot;&lt;br /&gt;
|&#039;&#039;&#039;Remark:&#039;&#039;&#039; Perhaps you feel confused about the above argument. You may ask: &amp;quot;&#039;&#039;The algorithm chooses pivots many times during the execution. Why does the above argument look as if the pivot is chosen only once?&#039;&#039;&amp;quot; Good question! Let&#039;s see what really happens by looking closely.&lt;br /&gt;
&lt;br /&gt;
For any pair &amp;lt;math&amp;gt;a_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_j&amp;lt;/math&amp;gt;, initially &amp;lt;math&amp;gt;\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}&amp;lt;/math&amp;gt; are all in the same set &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; (obviously!). During the execution of the algorithm, the set containing &amp;lt;math&amp;gt;\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}&amp;lt;/math&amp;gt; shrinks (due to the pivoting), until one of &amp;lt;math&amp;gt;\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}&amp;lt;/math&amp;gt; is chosen and the set is partitioned into different subsets. We ask for the probability that the chosen one is among &amp;lt;math&amp;gt;\{a_i, a_j\}&amp;lt;/math&amp;gt;. So we really care about &amp;quot;the last&amp;quot; pivoting before &amp;lt;math&amp;gt;\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}&amp;lt;/math&amp;gt; is split.&lt;br /&gt;
&lt;br /&gt;
Formally, let &amp;lt;math&amp;gt;Y&amp;lt;/math&amp;gt; be the random variable denoting the pivot element. We know that for each &amp;lt;math&amp;gt;a_k\in\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;Y=a_k&amp;lt;/math&amp;gt; with the same probability, and &amp;lt;math&amp;gt;Y\not\in\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}&amp;lt;/math&amp;gt; with an unknown probability (remember that there might be other elements in the same subset with &amp;lt;math&amp;gt;\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}&amp;lt;/math&amp;gt;). The probability we are looking for is actually &lt;br /&gt;
&amp;lt;math&amp;gt;\Pr[Y\in \{a_i, a_j\}\mid Y\in\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}]&amp;lt;/math&amp;gt;, which is always &amp;lt;math&amp;gt;\frac{2}{j-i+1}&amp;lt;/math&amp;gt;, provided that &amp;lt;math&amp;gt;Y&amp;lt;/math&amp;gt; is uniform over &amp;lt;math&amp;gt;\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;conditional probability&#039;&#039;&#039; rules out the &#039;&#039;irrelevant&#039;&#039; events in a probabilistic argument.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Summing all up:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mathbf{E}\left[\sum_{i=1}^n\sum_{j&amp;gt;i}X_{ij}\right] &lt;br /&gt;
&amp;amp;= &lt;br /&gt;
\sum_{i=1}^n\sum_{j&amp;gt;i}\mathbf{E}\left[X_{ij}\right]\\&lt;br /&gt;
&amp;amp;\le \sum_{i=1}^n\sum_{j&amp;gt;i}\frac{2}{j-i+1}\\&lt;br /&gt;
&amp;amp;= \sum_{i=1}^n\sum_{k=2}^{n-i+1}\frac{2}{k} &amp;amp; &amp;amp; (\mbox{Let }k=j-i+1)\\&lt;br /&gt;
&amp;amp;\le \sum_{i=1}^n\sum_{k=1}^{n}\frac{2}{k}\\&lt;br /&gt;
&amp;amp;= 2n\sum_{k=1}^{n}\frac{1}{k}\\&lt;br /&gt;
&amp;amp;= 2n H(n).&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;H(n)&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;th [http://en.wikipedia.org/wiki/Harmonic_number Harmonic number]. It holds that&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\begin{align}H(n) = \ln n+O(1)\end{align}&amp;lt;/math&amp;gt;.&lt;br /&gt;
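To illustrate numerically (our own check, not from the original notes): the difference &amp;lt;math&amp;gt;H(n)-\ln n&amp;lt;/math&amp;gt; approaches the Euler-Mascheroni constant, about 0.5772, which is one way to see that the &amp;lt;math&amp;gt;O(1)&amp;lt;/math&amp;gt; term is bounded:

```python
import math

def harmonic(n):
    """The nth harmonic number H(n) = sum_{k=1}^n 1/k."""
    return sum(1.0 / k for k in range(1, n + 1))

# H(n) - ln(n) converges to the Euler-Mascheroni constant ~0.5772
```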
&lt;br /&gt;
Therefore, for an arbitrary input &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; of &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; numbers, the expected number of comparisons taken by RandQSort to sort &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;\mathrm{O}(n\log n)&amp;lt;/math&amp;gt;.&lt;/div&gt;</summary>
		<author><name>114.212.208.2</name></author>
	</entry>
	<entry>
		<id>https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)/Randomized_Quicksort&amp;diff=4562</id>
		<title>随机算法 (Fall 2011)/Randomized Quicksort</title>
		<link rel="alternate" type="text/html" href="https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)/Randomized_Quicksort&amp;diff=4562"/>
		<updated>2011-07-19T02:29:28Z</updated>

		<summary type="html">&lt;p&gt;114.212.208.2: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The following is the pseudocode of the famous [http://en.wikipedia.org/wiki/Quicksort Quicksort] algorithm, whose input is a set &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; of numbers.&lt;br /&gt;
* if &amp;lt;math&amp;gt;|S|&amp;gt;1&amp;lt;/math&amp;gt; do:&lt;br /&gt;
** pick an element &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; from  &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; as the &#039;&#039;pivot&#039;&#039;;&lt;br /&gt;
** partition &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; into &amp;lt;math&amp;gt;S_1&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\{x\}&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;S_2&amp;lt;/math&amp;gt;, where all elements in &amp;lt;math&amp;gt;S_1&amp;lt;/math&amp;gt; are smaller than &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; and all elements in &amp;lt;math&amp;gt;S_2&amp;lt;/math&amp;gt; are  larger than &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt;;&lt;br /&gt;
** recursively sort &amp;lt;math&amp;gt;S_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S_2&amp;lt;/math&amp;gt;;&lt;br /&gt;
&lt;br /&gt;
The time complexity of this sorting algorithm is measured by the &#039;&#039;&#039;number of comparisons&#039;&#039;&#039;.  &lt;br /&gt;
&lt;br /&gt;
For the &#039;&#039;&#039;deterministic&#039;&#039;&#039; quicksort algorithm, the pivot element is usually the element in a fixed position (e.g. the first one) of &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;. This makes the worst-case time complexity &amp;lt;math&amp;gt;\Omega(n^2)&amp;lt;/math&amp;gt;, which means there exists a bad input &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; whose sorting costs us &amp;lt;math&amp;gt;\Omega(n^2)&amp;lt;/math&amp;gt; comparisons, &#039;&#039;every time&#039;&#039;!&lt;br /&gt;
&lt;br /&gt;
It is just so unfair to have an unbeatable input for this brilliant algorithm. So we tweak the algorithm a little bit:&lt;br /&gt;
== Algorithm: RandQSort ==&lt;br /&gt;
* if &amp;lt;math&amp;gt;|S|&amp;gt;1&amp;lt;/math&amp;gt; do:&lt;br /&gt;
** &#039;&#039;uniformly&#039;&#039; pick a &#039;&#039;random&#039;&#039; element &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; from  &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; as the pivot;&lt;br /&gt;
** partition &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; into &amp;lt;math&amp;gt;S_1&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\{x\}&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;S_2&amp;lt;/math&amp;gt;, where all elements in &amp;lt;math&amp;gt;S_1&amp;lt;/math&amp;gt; are smaller than &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; and all elements in &amp;lt;math&amp;gt;S_2&amp;lt;/math&amp;gt; are  larger than &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt;;&lt;br /&gt;
** recursively sort &amp;lt;math&amp;gt;S_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S_2&amp;lt;/math&amp;gt;;&lt;br /&gt;
&lt;br /&gt;
== Analysis ==&lt;br /&gt;
Our goal is to analyze the expected number of comparisons during an execution of RandQSort with an arbitrary input &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;. We achieve this by measuring the chance that each pair of elements are compared, and summing all of them up due to [http://en.wikipedia.org/wiki/Expected_value#Linearity Linearity of Expectation].&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;a_i&amp;lt;/math&amp;gt; denote the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th smallest element in &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
Let &amp;lt;math&amp;gt;X_{ij}\in\{0,1\}&amp;lt;/math&amp;gt; be the random variable which indicates whether &amp;lt;math&amp;gt;a_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_j&amp;lt;/math&amp;gt; are compared during the execution of RandQSort. That is:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{align}&lt;br /&gt;
X_{ij} &amp;amp;=&lt;br /&gt;
\begin{cases}&lt;br /&gt;
1 &amp;amp; a_i\mbox{ and }a_j\mbox{ are compared}\\&lt;br /&gt;
0 &amp;amp; \mbox{otherwise}&lt;br /&gt;
\end{cases}.&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Elements &amp;lt;math&amp;gt;a_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_j&amp;lt;/math&amp;gt; are compared only if one of them is chosen as pivot. After comparison they are separated (thus are never compared again). So we have the following observation:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Claim 1:  Every pair &amp;lt;math&amp;gt;a_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_j&amp;lt;/math&amp;gt; is compared at most once.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Therefore the sum of &amp;lt;math&amp;gt;X_{ij}&amp;lt;/math&amp;gt; for all pairs &amp;lt;math&amp;gt;\{i, j\}&amp;lt;/math&amp;gt; gives the total number of comparisons. The expected number of comparisons is &amp;lt;math&amp;gt;\mathbf{E}\left[\sum_{i=1}^n\sum_{j&amp;gt;i}X_{ij}\right]&amp;lt;/math&amp;gt;. Due to [http://en.wikipedia.org/wiki/Expected_value#Linearity Linearity of Expectation], &amp;lt;math&amp;gt;\mathbf{E}\left[\sum_{i=1}^n\sum_{j&amp;gt;i}X_{ij}\right] = \sum_{i=1}^n\sum_{j&amp;gt;i}\mathbf{E}\left[X_{ij}\right]&amp;lt;/math&amp;gt;. &lt;br /&gt;
Our next step is to analyze &amp;lt;math&amp;gt;\mathbf{E}\left[X_{ij}\right]&amp;lt;/math&amp;gt; for each &amp;lt;math&amp;gt;\{i, j\}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
By the definition of expectation and &amp;lt;math&amp;gt;X_{ij}&amp;lt;/math&amp;gt;, &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mathbf{E}\left[X_{ij}\right] &lt;br /&gt;
&amp;amp;= 1\cdot \Pr[a_i\mbox{ and }a_j\mbox{ are compared}] + 0\cdot \Pr[a_i\mbox{ and }a_j\mbox{ are not compared}]\\&lt;br /&gt;
&amp;amp;= \Pr[a_i\mbox{ and }a_j\mbox{ are compared}].&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We are going to bound this probability.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Claim 2: &amp;lt;math&amp;gt;a_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_j&amp;lt;/math&amp;gt; are compared if and only if one of them is chosen as pivot when they are still in the same subset.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This is easy to verify: just check the algorithm. The next one is a bit more involved.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Claim 3: If &amp;lt;math&amp;gt;a_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_j&amp;lt;/math&amp;gt; are still in the same subset then all &amp;lt;math&amp;gt;\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}&amp;lt;/math&amp;gt; are in the same subset.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
We can verify this by induction. Initially, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; itself has the property described above; and partitioning any &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; with the property into &amp;lt;math&amp;gt;S_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S_2&amp;lt;/math&amp;gt; will preserve the property for both &amp;lt;math&amp;gt;S_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S_2&amp;lt;/math&amp;gt;. Therefore Claim 3 holds.&lt;br /&gt;
&lt;br /&gt;
Combining Claim 2 and 3, we have:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Claim 4: &amp;lt;math&amp;gt;a_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_j&amp;lt;/math&amp;gt; are compared only if one of &amp;lt;math&amp;gt;\{a_i, a_j\}&amp;lt;/math&amp;gt; is chosen from &amp;lt;math&amp;gt;\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}&amp;lt;/math&amp;gt;.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
And apparently,&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Claim 5: Every one of &amp;lt;math&amp;gt;\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}&amp;lt;/math&amp;gt; is chosen with equal probability.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This is because our RandQSort chooses the pivot &#039;&#039;uniformly at random&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Claim 4 and Claim 5 together imply:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\Pr[a_i\mbox{ and }a_j\mbox{ are compared}]&lt;br /&gt;
&amp;amp;\le \frac{2}{j-i+1}.&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot;&lt;br /&gt;
|&#039;&#039;&#039;Remark:&#039;&#039;&#039; Perhaps you feel confused about the above argument. You may ask: &amp;quot;&#039;&#039;The algorithm chooses pivots many times during the execution. Why does the above argument look as if the pivot is chosen only once?&#039;&#039;&amp;quot; Good question! Let&#039;s see what really happens by looking closely.&lt;br /&gt;
&lt;br /&gt;
For any pair &amp;lt;math&amp;gt;a_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_j&amp;lt;/math&amp;gt;, initially &amp;lt;math&amp;gt;\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}&amp;lt;/math&amp;gt; are all in the same set &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; (obviously!). During the execution of the algorithm, the set containing &amp;lt;math&amp;gt;\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}&amp;lt;/math&amp;gt; shrinks (due to the pivoting), until one of &amp;lt;math&amp;gt;\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}&amp;lt;/math&amp;gt; is chosen and the set is partitioned into different subsets. We ask for the probability that the chosen one is among &amp;lt;math&amp;gt;\{a_i, a_j\}&amp;lt;/math&amp;gt;. So we really care about &amp;quot;the last&amp;quot; pivoting before &amp;lt;math&amp;gt;\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}&amp;lt;/math&amp;gt; is split.&lt;br /&gt;
&lt;br /&gt;
Formally, let &amp;lt;math&amp;gt;Y&amp;lt;/math&amp;gt; be the random variable denoting the pivot element. We know that for each &amp;lt;math&amp;gt;a_k\in\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;Y=a_k&amp;lt;/math&amp;gt; with the same probability, and &amp;lt;math&amp;gt;Y\not\in\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}&amp;lt;/math&amp;gt; with some unknown probability (remember that there might be other elements in the same subset as &amp;lt;math&amp;gt;\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}&amp;lt;/math&amp;gt;). The probability we are looking for is actually &lt;br /&gt;
&amp;lt;math&amp;gt;\Pr[Y\in \{a_i, a_j\}\mid Y\in\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}]&amp;lt;/math&amp;gt;, which is always &amp;lt;math&amp;gt;\frac{2}{j-i+1}&amp;lt;/math&amp;gt;, provided that &amp;lt;math&amp;gt;Y&amp;lt;/math&amp;gt; is uniform over &amp;lt;math&amp;gt;\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;conditional probability&#039;&#039;&#039; rules out the &#039;&#039;irrelevant&#039;&#039; events in a probabilistic argument.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Summing over all pairs:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mathbf{E}\left[\sum_{i=1}^n\sum_{j&amp;gt;i}X_{ij}\right] &lt;br /&gt;
&amp;amp;= &lt;br /&gt;
\sum_{i=1}^n\sum_{j&amp;gt;i}\mathbf{E}\left[X_{ij}\right]\\&lt;br /&gt;
&amp;amp;\le \sum_{i=1}^n\sum_{j&amp;gt;i}\frac{2}{j-i+1}\\&lt;br /&gt;
&amp;amp;= \sum_{i=1}^n\sum_{k=2}^{n-i+1}\frac{2}{k} &amp;amp; &amp;amp; (\mbox{Let }k=j-i+1)\\&lt;br /&gt;
&amp;amp;\le \sum_{i=1}^n\sum_{k=1}^{n}\frac{2}{k}\\&lt;br /&gt;
&amp;amp;= 2n\sum_{k=1}^{n}\frac{1}{k}\\&lt;br /&gt;
&amp;amp;= 2n H(n).&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;H(n)&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;th [http://en.wikipedia.org/wiki/Harmonic_number Harmonic number]. It holds that&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\begin{align}H(n) = \ln n+O(1)\end{align}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Therefore, for an arbitrary input &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; of &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; numbers, the expected number of comparisons taken by RandQSort to sort &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;\mathrm{O}(n\log n)&amp;lt;/math&amp;gt;.&lt;/div&gt;</summary>
		<author><name>114.212.208.2</name></author>
	</entry>
	<entry>
		<id>https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)/Randomized_Quicksort&amp;diff=4561</id>
		<title>随机算法 (Fall 2011)/Randomized Quicksort</title>
		<link rel="alternate" type="text/html" href="https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)/Randomized_Quicksort&amp;diff=4561"/>
		<updated>2011-07-19T02:28:50Z</updated>

		<summary type="html">&lt;p&gt;114.212.208.2: Created page with &amp;#039;The following is the pseudocode of the famous [http://en.wikipedia.org/wiki/Quicksort Quicksort] algorithm, whose input is a set &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; of numbers. * if &amp;lt;math&amp;gt;|S|&amp;gt;1&amp;lt;/math…&amp;#039;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The following is the pseudocode of the famous [http://en.wikipedia.org/wiki/Quicksort Quicksort] algorithm, whose input is a set &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; of numbers.&lt;br /&gt;
* if &amp;lt;math&amp;gt;|S|&amp;gt;1&amp;lt;/math&amp;gt; do:&lt;br /&gt;
** pick an element &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; from  &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; as the &#039;&#039;pivot&#039;&#039;;&lt;br /&gt;
** partition &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; into &amp;lt;math&amp;gt;S_1&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\{x\}&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;S_2&amp;lt;/math&amp;gt;, where all elements in &amp;lt;math&amp;gt;S_1&amp;lt;/math&amp;gt; are smaller than &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; and all elements in &amp;lt;math&amp;gt;S_2&amp;lt;/math&amp;gt; are  larger than &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt;;&lt;br /&gt;
** recursively sort &amp;lt;math&amp;gt;S_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S_2&amp;lt;/math&amp;gt;;&lt;br /&gt;
&lt;br /&gt;
The time complexity of this sorting algorithm is measured by the &#039;&#039;&#039;number of comparisons&#039;&#039;&#039;.  &lt;br /&gt;
&lt;br /&gt;
For the &#039;&#039;&#039;deterministic&#039;&#039;&#039; quicksort algorithm, the pivot is usually the element at a fixed position (e.g. the first one) of &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;. This makes the worst-case time complexity &amp;lt;math&amp;gt;\Omega(n^2)&amp;lt;/math&amp;gt;: there exists a bad input &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; whose sorting costs us &amp;lt;math&amp;gt;\Omega(n^2)&amp;lt;/math&amp;gt; comparisons, &#039;&#039;every time&#039;&#039;!&lt;br /&gt;
&lt;br /&gt;
It is just so unfair that this brilliant algorithm has an unbeatable input. So we tweak the algorithm a little bit:&lt;br /&gt;
=== Algorithm: RandQSort ===&lt;br /&gt;
* if &amp;lt;math&amp;gt;|S|&amp;gt;1&amp;lt;/math&amp;gt; do:&lt;br /&gt;
** &#039;&#039;uniformly&#039;&#039; pick a &#039;&#039;random&#039;&#039; element &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; from  &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; as the pivot;&lt;br /&gt;
** partition &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; into &amp;lt;math&amp;gt;S_1&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;\{x\}&amp;lt;/math&amp;gt;, and &amp;lt;math&amp;gt;S_2&amp;lt;/math&amp;gt;, where all elements in &amp;lt;math&amp;gt;S_1&amp;lt;/math&amp;gt; are smaller than &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt; and all elements in &amp;lt;math&amp;gt;S_2&amp;lt;/math&amp;gt; are  larger than &amp;lt;math&amp;gt;x&amp;lt;/math&amp;gt;;&lt;br /&gt;
** recursively sort &amp;lt;math&amp;gt;S_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S_2&amp;lt;/math&amp;gt;;&lt;br /&gt;
&lt;br /&gt;
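The RandQSort pseudocode above translates directly into a short program. The following Python sketch (our own illustration, not part of the course notes; names are ours) assumes distinct input numbers and also counts the comparisons made during a run:

```python
import random

def rand_qsort(S, counter):
    """Sort a list of distinct numbers using a uniformly random pivot.

    counter is a one-element list accumulating the number of comparisons:
    partitioning a set of size m around its pivot costs m - 1 comparisons.
    """
    if len(S) <= 1:
        return list(S)
    x = random.choice(S)            # uniformly pick a random pivot from S
    counter[0] += len(S) - 1        # every other element is compared with x
    S1 = [a for a in S if a < x]    # elements smaller than the pivot
    S2 = [a for a in S if a > x]    # elements larger than the pivot
    return rand_qsort(S1, counter) + [x] + rand_qsort(S2, counter)
```

On a shuffled list of distinct numbers the returned list is sorted, and the comparison count falls between the best case <math>n-1</math> and the worst case <math>\binom{n}{2}</math>, in line with the analysis that follows.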
=== Analysis ===&lt;br /&gt;
Our goal is to analyze the expected number of comparisons during an execution of RandQSort on an arbitrary input &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;. We achieve this by measuring the chance that each pair of elements is compared, and summing these up via [http://en.wikipedia.org/wiki/Expected_value#Linearity Linearity of Expectation].&lt;br /&gt;
&lt;br /&gt;
Let &amp;lt;math&amp;gt;a_i&amp;lt;/math&amp;gt; denote the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;th smallest element in &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt;.&lt;br /&gt;
Let &amp;lt;math&amp;gt;X_{ij}\in\{0,1\}&amp;lt;/math&amp;gt; be the random variable which indicates whether &amp;lt;math&amp;gt;a_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_j&amp;lt;/math&amp;gt; are compared during the execution of RandQSort. That is:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;&lt;br /&gt;
\begin{align}&lt;br /&gt;
X_{ij} &amp;amp;=&lt;br /&gt;
\begin{cases}&lt;br /&gt;
1 &amp;amp; a_i\mbox{ and }a_j\mbox{ are compared}\\&lt;br /&gt;
0 &amp;amp; \mbox{otherwise}&lt;br /&gt;
\end{cases}.&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Elements &amp;lt;math&amp;gt;a_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_j&amp;lt;/math&amp;gt; are compared only if one of them is chosen as the pivot. After the comparison they are separated (and thus never compared again). So we have the following observation:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Claim 1: Every pair &amp;lt;math&amp;gt;a_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_j&amp;lt;/math&amp;gt; is compared at most once.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Therefore the sum of &amp;lt;math&amp;gt;X_{ij}&amp;lt;/math&amp;gt; over all pairs &amp;lt;math&amp;gt;\{i, j\}&amp;lt;/math&amp;gt; gives the total number of comparisons. The expected number of comparisons is &amp;lt;math&amp;gt;\mathbf{E}\left[\sum_{i=1}^n\sum_{j&amp;gt;i}X_{ij}\right]&amp;lt;/math&amp;gt;. Due to [http://en.wikipedia.org/wiki/Expected_value#Linearity Linearity of Expectation], &amp;lt;math&amp;gt;\mathbf{E}\left[\sum_{i=1}^n\sum_{j&amp;gt;i}X_{ij}\right] = \sum_{i=1}^n\sum_{j&amp;gt;i}\mathbf{E}\left[X_{ij}\right]&amp;lt;/math&amp;gt;. &lt;br /&gt;
Our next step is to analyze &amp;lt;math&amp;gt;\mathbf{E}\left[X_{ij}\right]&amp;lt;/math&amp;gt; for each pair &amp;lt;math&amp;gt;\{i, j\}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
By the definition of expectation and &amp;lt;math&amp;gt;X_{ij}&amp;lt;/math&amp;gt;, &lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mathbf{E}\left[X_{ij}\right] &lt;br /&gt;
&amp;amp;= 1\cdot \Pr[a_i\mbox{ and }a_j\mbox{ are compared}] + 0\cdot \Pr[a_i\mbox{ and }a_j\mbox{ are not compared}]\\&lt;br /&gt;
&amp;amp;= \Pr[a_i\mbox{ and }a_j\mbox{ are compared}].&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We are going to bound this probability.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Claim 2: &amp;lt;math&amp;gt;a_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_j&amp;lt;/math&amp;gt; are compared if and only if one of them is chosen as pivot when they are still in the same subset.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This is easy to verify: just check the algorithm. The next one is a bit more complicated.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Claim 3: If &amp;lt;math&amp;gt;a_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_j&amp;lt;/math&amp;gt; are still in the same subset then all &amp;lt;math&amp;gt;\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}&amp;lt;/math&amp;gt; are in the same subset.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
We can verify this by induction. Initially, &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; itself trivially has the property described above; and partitioning any &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; with the property into &amp;lt;math&amp;gt;S_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S_2&amp;lt;/math&amp;gt; preserves the property for both &amp;lt;math&amp;gt;S_1&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;S_2&amp;lt;/math&amp;gt;. Therefore Claim 3 holds.&lt;br /&gt;
&lt;br /&gt;
Combining Claim 2 and 3, we have:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Claim 4: &amp;lt;math&amp;gt;a_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_j&amp;lt;/math&amp;gt; are compared only if the first pivot chosen from &amp;lt;math&amp;gt;\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}&amp;lt;/math&amp;gt; is one of &amp;lt;math&amp;gt;\{a_i, a_j\}&amp;lt;/math&amp;gt;.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
And clearly,&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Claim 5: Each element of &amp;lt;math&amp;gt;\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}&amp;lt;/math&amp;gt; is chosen with equal probability.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This is because our RandQSort chooses the pivot &#039;&#039;uniformly at random&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
Claim 4 and Claim 5 together imply:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\Pr[a_i\mbox{ and }a_j\mbox{ are compared}]&lt;br /&gt;
&amp;amp;\le \frac{2}{j-i+1}.&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{|border=&amp;quot;1&amp;quot;&lt;br /&gt;
|&#039;&#039;&#039;Remark:&#039;&#039;&#039; Perhaps you feel confused about the above argument. You may ask: &amp;quot;&#039;&#039;The algorithm chooses pivots many times during its execution. Why does the above argument look as if the pivot is chosen only once?&#039;&#039;&amp;quot; Good question! Let&#039;s look closely at what really happens.&lt;br /&gt;
&lt;br /&gt;
For any pair &amp;lt;math&amp;gt;a_i&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;a_j&amp;lt;/math&amp;gt;, initially &amp;lt;math&amp;gt;\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}&amp;lt;/math&amp;gt; are all in the same set &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; (obviously!). During the execution of the algorithm, the set containing &amp;lt;math&amp;gt;\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}&amp;lt;/math&amp;gt; keeps shrinking (due to the pivoting), until one of &amp;lt;math&amp;gt;\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}&amp;lt;/math&amp;gt; is chosen as the pivot and the set is partitioned into different subsets. We ask for the probability that the chosen one is among &amp;lt;math&amp;gt;\{a_i, a_j\}&amp;lt;/math&amp;gt;. So what really matters is &amp;quot;the last&amp;quot; pivoting before &amp;lt;math&amp;gt;\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}&amp;lt;/math&amp;gt; is split.&lt;br /&gt;
&lt;br /&gt;
Formally, let &amp;lt;math&amp;gt;Y&amp;lt;/math&amp;gt; be the random variable denoting the pivot element. We know that for each &amp;lt;math&amp;gt;a_k\in\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}&amp;lt;/math&amp;gt;, &amp;lt;math&amp;gt;Y=a_k&amp;lt;/math&amp;gt; with the same probability, and &amp;lt;math&amp;gt;Y\not\in\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}&amp;lt;/math&amp;gt; with some unknown probability (remember that there might be other elements in the same subset as &amp;lt;math&amp;gt;\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}&amp;lt;/math&amp;gt;). The probability we are looking for is actually &lt;br /&gt;
&amp;lt;math&amp;gt;\Pr[Y\in \{a_i, a_j\}\mid Y\in\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}]&amp;lt;/math&amp;gt;, which is always &amp;lt;math&amp;gt;\frac{2}{j-i+1}&amp;lt;/math&amp;gt;, provided that &amp;lt;math&amp;gt;Y&amp;lt;/math&amp;gt; is uniform over &amp;lt;math&amp;gt;\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;conditional probability&#039;&#039;&#039; rules out the &#039;&#039;irrelevant&#039;&#039; events in a probabilistic argument.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
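The probability <math>\frac{2}{j-i+1}</math> can also be checked empirically. The following sketch (our own illustration, assuming the uniform-pivot quicksort described above) traces one run and reports whether two given values are compared with each other:

```python
import random

def compared(S, u, v):
    """Simulate one run of uniform-pivot quicksort on list S and
    return True iff the values u and v are compared with each other."""
    if len(S) <= 1:
        return False
    x = random.choice(S)            # uniformly random pivot
    if x == u or x == v:
        # the pivot is compared with everything still in its subset
        return u in S and v in S
    smaller = [a for a in S if a < x]
    larger = [a for a in S if a > x]
    # u and v can only be compared later, inside the subset(s) holding them
    return compared(smaller, u, v) or compared(larger, u, v)

def estimate(n, i, j, trials=5000):
    """Empirical Pr[a_i and a_j are compared] for S = {0, ..., n-1}."""
    hits = sum(compared(list(range(n)), i, j) for _ in range(trials))
    return hits / trials
```

For instance, with <math>n=10</math>, <math>i=3</math>, <math>j=6</math> the analysis predicts <math>\frac{2}{6-3+1}=0.5</math>, and the estimate lands close to that value.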
Summing over all pairs:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\begin{align}&lt;br /&gt;
\mathbf{E}\left[\sum_{i=1}^n\sum_{j&amp;gt;i}X_{ij}\right] &lt;br /&gt;
&amp;amp;= &lt;br /&gt;
\sum_{i=1}^n\sum_{j&amp;gt;i}\mathbf{E}\left[X_{ij}\right]\\&lt;br /&gt;
&amp;amp;\le \sum_{i=1}^n\sum_{j&amp;gt;i}\frac{2}{j-i+1}\\&lt;br /&gt;
&amp;amp;= \sum_{i=1}^n\sum_{k=2}^{n-i+1}\frac{2}{k} &amp;amp; &amp;amp; (\mbox{Let }k=j-i+1)\\&lt;br /&gt;
&amp;amp;\le \sum_{i=1}^n\sum_{k=1}^{n}\frac{2}{k}\\&lt;br /&gt;
&amp;amp;= 2n\sum_{k=1}^{n}\frac{1}{k}\\&lt;br /&gt;
&amp;amp;= 2n H(n).&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math&amp;gt;H(n)&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;th [http://en.wikipedia.org/wiki/Harmonic_number Harmonic number]. It holds that&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;\begin{align}H(n) = \ln n+O(1)\end{align}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
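As a quick numerical sanity check (our own addition), the harmonic number indeed stays within a constant of <math>\ln n</math>:

```python
import math

def harmonic(n):
    """H(n) = 1 + 1/2 + ... + 1/n, the n-th harmonic number."""
    return sum(1.0 / k for k in range(1, n + 1))
```

The difference <math>H(n)-\ln n</math> converges to the Euler-Mascheroni constant (about 0.577), so <math>H(n)=\ln n+O(1)</math> as stated, and hence <math>2nH(n)=O(n\log n)</math>.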
Therefore, for an arbitrary input &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; of &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt; numbers, the expected number of comparisons taken by RandQSort to sort &amp;lt;math&amp;gt;S&amp;lt;/math&amp;gt; is &amp;lt;math&amp;gt;\mathrm{O}(n\log n)&amp;lt;/math&amp;gt;.&lt;/div&gt;</summary>
		<author><name>114.212.208.2</name></author>
	</entry>
	<entry>
		<id>https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4438</id>
		<title>随机算法 (Fall 2011)</title>
		<link rel="alternate" type="text/html" href="https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4438"/>
		<updated>2011-07-18T23:44:13Z</updated>

		<summary type="html">&lt;p&gt;114.212.208.2: /* Lecture Notes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Lecture Notes =&lt;br /&gt;
# [[随机算法 (Fall 2011)/Introduction|Introduction]]&lt;br /&gt;
#*[[随机算法 (Fall 2011)/Complexity Classes|Complexity Classes]]&lt;br /&gt;
# Probability basics&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Probability Space|Probability Space]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Verifying Matrix Multiplication|Verifying Matrix Multiplication]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Conditional Probability|Conditional Probability]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Min-Cut|Randomized Min-Cut]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Variables and Expectations|Random Variables and Expectations]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Quicksort|Randomized Quicksort]]&lt;br /&gt;
# Balls and bins&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Distributions of Coin Flipping|Distributions of Coin Flipping]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Birthday Problem|Birthday Problem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Coupon Collector|Coupon Collector]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Balls-into-balls Occupancy Problem|Balls-into-balls Occupancy Problem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Bloom Filter|Bloom Filter]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Stable Marriage|Stable Marriage]]&lt;br /&gt;
# Hashing and Fingerprinting&lt;br /&gt;
# Moment and Deviation&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chebyshev&#039;s Inequality|Chebyshev&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Median Selection|Median Selection]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Graphs|Random Graphs]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chernoff Bound|Chernoff Bound]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Set Balancing|Set Balancing]]&lt;br /&gt;
# Concentration of Measure&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Routing in a Parallel Network|Routing in a Parallel Network]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Martingales|Martingales]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Method of Bounded Differences|The Method of Bounded Differences]]&lt;br /&gt;
# Dimension Reduction&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Johnson-Lindenstrauss Theorem|Johnson-Lindenstrauss Theorem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Locality Sensitive Hashing|Locality Sensitive Hashing]]&lt;br /&gt;
# The Probabilistic Method&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Probabilistic Method|The Probabilistic Method]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Lovász Local Lemma|Lovász Local Lemma]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Derandomization: Conditional Expectation|Derandomization: Conditional Expectation]] &lt;br /&gt;
# Approximation Algorithms&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Rounding|Randomized Rounding]]&lt;br /&gt;
# Markov Chain and Random Walk&lt;br /&gt;
# Random Walk Algorithms&lt;br /&gt;
# Coupling and Mixing Time&lt;br /&gt;
# Expander Graphs&lt;br /&gt;
# Sampling and Counting&lt;br /&gt;
# MCMC&lt;br /&gt;
# On-line Algorithms&lt;br /&gt;
# Complexity&lt;br /&gt;
# Horizon&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# [[随机算法 (Fall 2011)/Rapid mixing random walks | Rapid mixing random walks]]&lt;/div&gt;</summary>
		<author><name>114.212.208.2</name></author>
	</entry>
	<entry>
		<id>https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4437</id>
		<title>随机算法 (Fall 2011)</title>
		<link rel="alternate" type="text/html" href="https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4437"/>
		<updated>2011-07-18T16:07:33Z</updated>

		<summary type="html">&lt;p&gt;114.212.208.2: /* Lecture Notes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Lecture Notes =&lt;br /&gt;
# [[随机算法 (Fall 2011)/Introduction|Introduction]]&lt;br /&gt;
#*[[随机算法 (Fall 2011)/Complexity Classes|Complexity Classes]]&lt;br /&gt;
# Probability basics&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Probability Space|Probability Space]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Verifying Matrix Multiplication|Verifying Matrix Multiplication]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Conditional Probability|Conditional Probability]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Min-Cut|Randomized Min-Cut]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Variables and Expectations|Random Variables and Expectations]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Quicksort|Randomized Quicksort]]&lt;br /&gt;
# Balls and bins&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Distributions of Coin Flipping|Distributions of Coin Flipping]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Birthday Problem|Birthday Problem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Coupon Collector|Coupon Collector]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Balls-into-balls Occupancy Problem|Balls-into-balls Occupancy Problem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Bloom Filter|Bloom Filter]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Stable Marriage|Stable Marriage]]&lt;br /&gt;
# Hashing and Fingerprinting&lt;br /&gt;
# Moment and Deviation&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chebyshev&#039;s Inequality|Chebyshev&#039;s Inequality]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Median Selection|Median Selection]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Graphs|Random Graphs]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Chernoff Bound|Chernoff Bound]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Set Balancing|Set Balancing]]&lt;br /&gt;
# Concentration of Measure&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Routing in a Parallel Network|Routing in a Parallel Network]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Martingales|Martingales]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Method of Bounded Differences|The Method of Bounded Differences]]&lt;br /&gt;
# Dimension Reduction&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Johnson-Lindenstrauss Lemma|Johnson-Lindenstrauss Lemma]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Locality Sensitive Hashing|Locality Sensitive Hashing]]&lt;br /&gt;
# The Probabilistic Method&lt;br /&gt;
#* [[随机算法 (Fall 2011)/The Probabilistic Method|The Probabilistic Method]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Lovász Local Lemma|Lovász Local Lemma]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Derandomization: Conditional Expectation|Derandomization: Conditional Expectation]] &lt;br /&gt;
# Approximation Algorithms&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Rounding|Randomized Rounding]]&lt;br /&gt;
# Markov Chain and Random Walk&lt;br /&gt;
# Random Walk Algorithms&lt;br /&gt;
# Coupling and Mixing Time&lt;br /&gt;
# Expander Graphs&lt;br /&gt;
# Sampling and Counting&lt;br /&gt;
# MCMC&lt;br /&gt;
# On-line Algorithms&lt;br /&gt;
# Complexity&lt;br /&gt;
# Horizon&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
# [[随机算法 (Fall 2011)/Rapid mixing random walks | Rapid mixing random walks]]&lt;/div&gt;</summary>
		<author><name>114.212.208.2</name></author>
	</entry>
	<entry>
		<id>https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4436</id>
		<title>随机算法 (Fall 2011)</title>
		<link rel="alternate" type="text/html" href="https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4436"/>
		<updated>2011-07-18T14:57:19Z</updated>

		<summary type="html">&lt;p&gt;114.212.208.2: /* Lecture Notes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Lecture Notes =&lt;br /&gt;
# [[随机算法 (Fall 2011)/Introduction|Introduction]]&lt;br /&gt;
#*[[随机算法 (Fall 2011)/Complexity Classes|Complexity Classes]]&lt;br /&gt;
# Probability basics&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Probability Space|Probability Space]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Verifying Matrix Multiplication|Verifying Matrix Multiplication]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Conditional Probability|Conditional Probability]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Min-Cut|Randomized Min-Cut]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Variables and Expectations|Random Variables and Expectations]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Quicksort|Randomized Quicksort]]&lt;br /&gt;
# Balls and bins&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Distributions of Coin Flipping|Distributions of Coin Flipping]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Birthday Problem|Birthday Problem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Coupon Collector|Coupon Collector]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Balls-into-balls Occupancy Problem|Balls-into-balls Occupancy Problem]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Bloom Filter|Bloom Filter]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Stable Marriage|Stable Marriage]]&lt;br /&gt;
# Hashing and Fingerprinting&lt;br /&gt;
&lt;br /&gt;
# [[随机算法 (Fall 2011)/Tail inequalities|Tail inequalities]]&lt;br /&gt;
# [[随机算法 (Fall 2011)/Chernoff bounds| Chernoff bounds]]&lt;br /&gt;
# [[随机算法 (Fall 2011)/Martingales | Martingales]]&lt;br /&gt;
# [[随机算法 (Fall 2011)/Hashing, limited independence | Hashing, limited independence]]&lt;br /&gt;
# [[随机算法 (Fall 2011)/Fingerprinting|Fingerprinting]]&lt;br /&gt;
# [[随机算法 (Fall 2011)/The probabilistic method | The probabilistic method]]&lt;br /&gt;
# [[随机算法 (Fall 2011)/Markov chains and random walks | Markov chains and random walks]]&lt;br /&gt;
# [[随机算法 (Fall 2011)/Expander graphs | Expander graphs]]&lt;br /&gt;
# [[随机算法 (Fall 2011)/Rapid mixing random walks | Rapid mixing random walks]]&lt;br /&gt;
# [[随机算法 (Fall 2011)/Random sampling | Random sampling, MCMC]]&lt;br /&gt;
# [[随机算法 (Fall 2011)/Approximate counting|Approximate counting]]&lt;br /&gt;
# [[随机算法 (Fall 2011)/Randomized approximation algorithms|Randomized approximation algorithms]]&lt;br /&gt;
# [[随机算法 (Fall 2011)/Distributed algorithms, data streams|Distributed algorithms, data streams]]&lt;/div&gt;</summary>
		<author><name>114.212.208.2</name></author>
	</entry>
	<entry>
		<id>https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4435</id>
		<title>随机算法 (Fall 2011)</title>
		<link rel="alternate" type="text/html" href="https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4435"/>
		<updated>2011-07-18T13:32:17Z</updated>

		<summary type="html">&lt;p&gt;114.212.208.2: /* Lecture Notes */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Lecture Notes =&lt;br /&gt;
# [[随机算法 (Fall 2011)/Introduction|Introduction]]&lt;br /&gt;
#*[[随机算法 (Fall 2011)/Complexity Classes|Complexity Classes]]&lt;br /&gt;
# Probability basics&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Probability Space|Probability Space]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Verifying Matrix Multiplication|Verifying Matrix Multiplication]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Conditional Probability|Conditional Probability]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Min-Cut|Randomized Min-Cut]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Random Variables and Expectations|Random Variables and Expectations]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Randomized Quicksort|Randomized Quicksort]]&lt;br /&gt;
# [[随机算法 (Fall 2011)/Balls and bins|Balls and bins]]&lt;br /&gt;
#* [[随机算法 (Fall 2011)/Distribution |Balls and bins]]&lt;br /&gt;
&lt;br /&gt;
# [[随机算法 (Fall 2011)/Tail inequalities|Tail inequalities]]&lt;br /&gt;
# [[随机算法 (Fall 2011)/Chernoff bounds| Chernoff bounds]]&lt;br /&gt;
# [[随机算法 (Fall 2011)/Martingales | Martingales]]&lt;br /&gt;
# [[随机算法 (Fall 2011)/Hashing, limited independence | Hashing, limited independence]]&lt;br /&gt;
# [[随机算法 (Fall 2011)/Fingerprinting|Fingerprinting]]&lt;br /&gt;
# [[随机算法 (Fall 2011)/The probabilistic method | The probabilistic method]]&lt;br /&gt;
# [[随机算法 (Fall 2011)/Markov chains and random walks | Markov chains and random walks]]&lt;br /&gt;
# [[随机算法 (Fall 2011)/Expander graphs | Expander graphs]]&lt;br /&gt;
# [[随机算法 (Fall 2011)/Rapid mixing random walks | Rapid mixing random walks]]&lt;br /&gt;
# [[随机算法 (Fall 2011)/Random sampling | Random sampling, MCMC]]&lt;br /&gt;
# [[随机算法 (Fall 2011)/Approximate counting|Approximate counting]]&lt;br /&gt;
# [[随机算法 (Fall 2011)/Randomized approximation algorithms|Randomized approximation algorithms]]&lt;br /&gt;
# [[随机算法 (Fall 2011)/Distributed algorithms, data streams|Distributed algorithms, data streams]]&lt;/div&gt;</summary>
		<author><name>114.212.208.2</name></author>
	</entry>
	<entry>
		<id>https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4434</id>
		<title>随机算法 (Fall 2011)</title>
		<link rel="alternate" type="text/html" href="https://tcs.nju.edu.cn/wiki/index.php?title=%E9%9A%8F%E6%9C%BA%E7%AE%97%E6%B3%95_(Fall_2011)&amp;diff=4434"/>
		<updated>2011-07-18T13:02:28Z</updated>

		<summary type="html">&lt;p&gt;114.212.208.2: /* Future plan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Lecture Notes =&lt;br /&gt;
# [[随机算法 (Fall 2011)/Introduction|Introduction]]&lt;br /&gt;
# [[随机算法 (Fall 2011)/Complexity classes and lower bounds|Complexity classes, lower bounds]]&lt;br /&gt;
# [[随机算法 (Fall 2011)/Balls and bins|Balls and bins]]&lt;br /&gt;
# [[随机算法 (Fall 2011)/Tail inequalities|Tail inequalities]]&lt;br /&gt;
# [[随机算法 (Fall 2011)/Chernoff bounds| Chernoff bounds]]&lt;br /&gt;
# [[随机算法 (Fall 2011)/Martingales | Martingales]]&lt;br /&gt;
# [[随机算法 (Fall 2011)/Hashing, limited independence | Hashing, limited independence]]&lt;br /&gt;
# [[随机算法 (Fall 2011)/Fingerprinting|Fingerprinting]]&lt;br /&gt;
# [[随机算法 (Fall 2011)/The probabilistic method | The probabilistic method]]&lt;br /&gt;
# [[随机算法 (Fall 2011)/Markov chains and random walks | Markov chains and random walks]]&lt;br /&gt;
# [[随机算法 (Fall 2011)/Expander graphs | Expander graphs]]&lt;br /&gt;
# [[随机算法 (Fall 2011)/Rapid mixing random walks | Rapid mixing random walks]]&lt;br /&gt;
# [[随机算法 (Fall 2011)/Random sampling | Random sampling, MCMC]]&lt;br /&gt;
# [[随机算法 (Fall 2011)/Approximate counting|Approximate counting]]&lt;br /&gt;
# [[随机算法 (Fall 2011)/Randomized approximation algorithms|Randomized approximation algorithms]]&lt;br /&gt;
# [[随机算法 (Fall 2011)/Distributed algorithms, data streams|Distributed algorithms, data streams]]&lt;/div&gt;</summary>
		<author><name>114.212.208.2</name></author>
	</entry>
</feed>