Randomized Algorithms (Spring 2014): Random Variables and Concentration of Measure

=Random Variable=
{{Theorem|Definition (random variable)|
:A random variable <math>X</math> on a sample space <math>\Omega</math> is a real-valued function <math>X:\Omega\rightarrow\mathbb{R}</math>. A random variable <math>X</math> is called a '''discrete''' random variable if its range is finite or countably infinite.
}}
 
For a random variable <math>X</math> and a real value <math>x\in\mathbb{R}</math>, we write "<math>X=x</math>" for the event <math>\{a\in\Omega\mid X(a)=x\}</math>, and denote the probability of the event by
:<math>\Pr[X=x]=\Pr(\{a\in\Omega\mid X(a)=x\})</math>.
 
The independence can also be defined for variables:
{{Theorem
|Definition (Independent variables)|
:Two random variables <math>X</math> and <math>Y</math> are '''independent''' if and only if
::<math>
\Pr[(X=x)\wedge(Y=y)]=\Pr[X=x]\cdot\Pr[Y=y]
</math>
:for all values <math>x</math> and <math>y</math>. Random variables <math>X_1, X_2, \ldots, X_n</math> are '''mutually independent''' if and only if, for any subset <math>I\subseteq\{1,2,\ldots,n\}</math> and any values <math>x_i</math>, where <math>i\in I</math>,
::<math>\begin{align}
\Pr\left[\bigwedge_{i\in I}(X_i=x_i)\right]
&=
\prod_{i\in I}\Pr[X_i=x_i].
\end{align}</math>
}}
 
Note that in probability theory, "mutual independence" is <font color="red">not</font> equivalent to "pair-wise independence", which we will learn about in the future.
 
== Expectation ==
Let <math>X</math> be a discrete '''random variable'''.  The expectation of <math>X</math> is defined as follows.
{{Theorem
|Definition (Expectation)|
:The '''expectation''' of a discrete random variable <math>X</math>, denoted by <math>\mathbf{E}[X]</math>, is given by
::<math>\begin{align}
\mathbf{E}[X] &= \sum_{x}x\Pr[X=x],
\end{align}</math>
:where the summation is over all values <math>x</math> in the range of <math>X</math>.
}}
 
== Linearity of Expectation ==
Perhaps the most useful property of expectation is its '''linearity'''.
 
{{Theorem
|Theorem (Linearity of Expectations)|
:For any discrete random variables <math>X_1, X_2, \ldots, X_n</math>, and any real constants <math>a_1, a_2, \ldots, a_n</math>,
::<math>\begin{align}
\mathbf{E}\left[\sum_{i=1}^n a_iX_i\right] &= \sum_{i=1}^n a_i\cdot\mathbf{E}[X_i].
\end{align}</math>
}}
{{Proof| By the definition of expectation, it is easy to verify (try to prove it yourself) that:
for any discrete random variables <math>X</math> and <math>Y</math>, and any real constant <math>c</math>,
* <math>\mathbf{E}[X+Y]=\mathbf{E}[X]+\mathbf{E}[Y]</math>;
* <math>\mathbf{E}[cX]=c\mathbf{E}[X]</math>.
The theorem follows by induction.
}}
The linearity of expectation gives an easy way to compute the expectation of a random variable if the variable can be written as a sum.
 
;Example
: Suppose that we have a biased coin for which the probability of HEADs is <math>p</math>. Flipping the coin <math>n</math> times, what is the expected number of HEADs?
: It looks straightforward that it must be <math>np</math>, but how can we prove it? Surely we can apply the definition of expectation to compute the expectation by brute force. A more convenient way is by the linearity of expectations: Let <math>X_i</math> indicate whether the <math>i</math>-th flip is HEADs. Then <math>\mathbf{E}[X_i]=1\cdot p+0\cdot(1-p)=p</math>, and the total number of HEADs after <math>n</math> flips is <math>X=\sum_{i=1}^{n}X_i</math>. Applying the linearity of expectation, the expected number of HEADs is:
::<math>\mathbf{E}[X]=\mathbf{E}\left[\sum_{i=1}^{n}X_i\right]=\sum_{i=1}^{n}\mathbf{E}[X_i]=np</math>.
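To see the linearity-of-expectation calculation in action, here is a small Python sketch (the values <math>n=100</math>, <math>p=0.3</math> and the number of trials are arbitrary illustrative choices) that estimates the expected number of HEADs by simulation and compares it with <math>np</math>:
<pre>
import random

def expected_heads_mc(n=100, p=0.3, trials=10000):
    """Monte Carlo estimate of the expected number of HEADs in n biased coin flips."""
    total = 0
    for _ in range(trials):
        total += sum(1 for _ in range(n) if random.random() < p)
    return total / trials

print(expected_heads_mc(), 100 * 0.3)   # the estimate should be close to np = 30
</pre>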
 
The real power of the linearity of expectations is that it does not require the random variables to be independent, thus can be applied to any set of random variables. For example:
:<math>\mathbf{E}\left[\alpha X+\beta X^2+\gamma X^3\right] = \alpha\cdot\mathbf{E}[X]+\beta\cdot\mathbf{E}\left[X^2\right]+\gamma\cdot\mathbf{E}\left[X^3\right].</math>


However, do not exaggerate this power!
* For an arbitrary function <math>f</math> (not necessarily linear), the equation <math>\mathbf{E}[f(X)]=f(\mathbf{E}[X])</math> does <font color="red">not</font> hold generally.
* For variances, the equation <math>var(X+Y)=var(X)+var(Y)</math> does <font color="red">not</font> hold without further assumption of the independence of <math>X</math> and <math>Y</math>.
==Conditional Expectation ==
Conditional expectation can be accordingly defined:
{{Theorem
|Definition (conditional expectation)|
:For random variables <math>X</math> and <math>Y</math>,
::<math>
\mathbf{E}[X\mid Y=y]=\sum_{x}x\Pr[X=x\mid Y=y],
</math>
:where the summation is taken over the range of <math>X</math>.
}}

There is also a '''law of total expectation'''.
{{Theorem
|Theorem (law of total expectation)|
:Let <math>X</math> and <math>Y</math> be two random variables. Then
::<math>
\mathbf{E}[X]=\sum_{y}\mathbf{E}[X\mid Y=y]\cdot\Pr[Y=y].
</math>
}}


= Distributions of Coin Flips =
We introduce several important probability distributions induced by independent coin flips (independent trials), including: Bernoulli trial, geometric distribution, binomial distribution.
 
==Bernoulli trial (Bernoulli distribution)==
Bernoulli trial describes the probability distribution of a single (biased) coin flip. Suppose that we flip a (biased) coin where the probability of HEADS is <math>p</math>. Let <math>X</math> be the 0-1 random variable which indicates whether the result is HEADS. We say that <math>X</math> follows the Bernoulli distribution with parameter <math>p</math>. Formally,
:<math>\begin{align}
X
&=
\begin{cases}
1 & \text{with probability }p\\
0 & \text{with probability }1-p
\end{cases}
\end{align}</math>.
 
==Geometric distribution==
Suppose we flip the same coin repeatedly until HEADS appears, where each coin flip is independent and follows the Bernoulli distribution with parameter <math>p</math>. Let <math>X</math> be the random variable denoting the total number of coin flips. Then <math>X</math> has the geometric distribution with parameter <math>p</math>. Formally, <math>\Pr[X=k]=(1-p)^{k-1}p</math>.
 
For geometric <math>X</math>, <math>\mathbf{E}[X]=\frac{1}{p}</math>. This can be verified by directly computing <math>\mathbf{E}[X]</math> by the definition of expectations. There is also a smarter way of computing <math>\mathbf{E}[X]</math>, by using indicators and the linearity of expectations. For <math>k=0, 1, 2, \ldots</math>, let <math>Y_k</math> be the 0-1 random variable such that <math>Y_k=1</math> if and only if none of the first <math>k</math> coin flips are HEADS, thus <math>\mathbf{E}[Y_k]=\Pr[Y_k=1]=(1-p)^{k}</math>. A key observation is that <math>X=\sum_{k=0}^\infty Y_k</math>. Thus, due to the linearity of expectations,
:<math>
\begin{align}
\mathbf{E}[X]
=
\mathbf{E}\left[\sum_{k=0}^\infty Y_k\right]
=
\sum_{k=0}^\infty \mathbf{E}[Y_k]
=
\sum_{k=0}^\infty (1-p)^k
=
\frac{1}{1-(1-p)}
=\frac{1}{p}.
\end{align}
</math>
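The identity <math>\mathbf{E}[X]=\frac{1}{p}</math> is also easy to check empirically. A minimal Python sketch (the parameter <math>p=0.2</math> and the number of trials are arbitrary choices):
<pre>
import random

def geometric_mean_mc(p=0.2, trials=100000):
    """Average number of flips until the first HEADs; each flip is HEADs with probability p."""
    total = 0
    for _ in range(trials):
        flips = 1
        while random.random() >= p:   # TAILs: keep flipping
            flips += 1
        total += flips
    return total / trials

print(geometric_mean_mc(), 1 / 0.2)   # the empirical mean should be close to 1/p = 5
</pre>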


==Binomial distribution==
Suppose we flip the same (biased) coin <math>n</math> times, where each coin flip is independent and follows the Bernoulli distribution with parameter <math>p</math>. Let <math>X</math> be the number of HEADS. Then <math>X</math> has the binomial distribution with parameters <math>n</math> and <math>p</math>. Formally, <math>\Pr[X=k]={n\choose k}p^k(1-p)^{n-k}</math>.

A binomial random variable <math>X</math> with parameters <math>n</math> and <math>p</math> is usually denoted by <math>B(n,p)</math>.

As we saw above, by applying the linearity of expectations, it is easy to show that <math>\mathbf{E}[X]=np</math> for <math>X=B(n,p)</math>.

=Balls into Bins=
Consider throwing <math>m</math> balls into <math>n</math> bins uniformly and independently at random. This is equivalent to a random mapping <math>f:[m]\to[n]</math>. Needless to say, the random mapping is an important random model and has many applications in Computer Science, e.g. hashing.

We are concerned with the following three questions regarding the balls into bins model:
* birthday problem: the probability that every bin contains at most one ball (the mapping is 1-1);
* coupon collector problem: the probability that every bin contains at least one ball (the mapping is onto);
* occupancy problem: the maximum load of bins.
 
== Birthday Problem==
There are <math>m</math> students in the class. Assume that for each student, his/her birthday is uniformly and independently distributed over the 365 days in a year. We wonder what is the probability that no two students share a birthday.
 
Due to the [http://en.wikipedia.org/wiki/Pigeonhole_principle pigeonhole principle], it is obvious that for <math>m>365</math>, there must be two students with the same birthday. Surprisingly, for any <math>m>57</math> this event occurs with more than 99% probability. This is called the [http://en.wikipedia.org/wiki/Birthday_problem '''birthday paradox''']. Despite the name, the birthday paradox is not a real paradox.
 
We can model this problem as a balls-into-bins problem. <math>m</math> different balls (students) are uniformly and independently thrown into 365 bins (days). More generally, let <math>n</math> be the number of bins. We ask for the probability of the following event <math>\mathcal{E}</math>
 
* <math>\mathcal{E}</math>: there is no bin with more than one ball (i.e. no two students share a birthday).
 
We first analyze this by counting. There are totally <math>n^m</math> ways of assigning <math>m</math> balls to <math>n</math> bins. The number of assignments that no two balls share a bin is <math>{n\choose m}m!</math>.
 
Thus the probability is given by:
:<math>\begin{align}
\Pr[\mathcal{E}]
=
\frac{{n\choose m}m!}{n^m}.
\end{align}
</math>

Recall that <math>{n\choose m}=\frac{n!}{(n-m)!m!}</math>. Then
:<math>\begin{align}
\Pr[\mathcal{E}]
=
\frac{{n\choose m}m!}{n^m}
=
\frac{n!}{n^m(n-m)!}
=
\frac{n}{n}\cdot\frac{n-1}{n}\cdot\frac{n-2}{n}\cdots\frac{n-(m-1)}{n}
=
\prod_{k=1}^{m-1}\left(1-\frac{k}{n}\right).
\end{align}
</math>


There is also a more "probabilistic" argument for the above equation. To be rigorous, we need the following theorem, which holds generally and is very useful for computing the AND of many events.
:::{|border="1"
|By the definition of conditional probability, <math>\Pr[A\mid B]=\frac{\Pr[A\wedge B]}{\Pr[B]}</math>. Thus, <math>\Pr[A\wedge B] =\Pr[B]\cdot\Pr[A\mid B]</math>. This hints us that we can compute the probability of the AND of events by conditional probabilities. Formally, we have the following theorem:
'''Theorem:'''
:Let <math>\mathcal{E}_1, \mathcal{E}_2, \ldots, \mathcal{E}_n</math> be any <math>n</math> events. Then
::<math>\begin{align}
\Pr\left[\bigwedge_{i=1}^n\mathcal{E}_i\right]
&=
\prod_{k=1}^n\Pr\left[\mathcal{E}_k \mid \bigwedge_{i<k}\mathcal{E}_i\right].
\end{align}</math>
'''Proof:'''  It holds that <math>\Pr[A\wedge B] =\Pr[B]\cdot\Pr[A\mid B]</math>. Thus, let <math>A=\mathcal{E}_n</math> and <math>B=\mathcal{E}_1\wedge\mathcal{E}_2\wedge\cdots\wedge\mathcal{E}_{n-1}</math>, then
:<math>\begin{align}
\Pr[\mathcal{E}_1\wedge\mathcal{E}_2\wedge\cdots\wedge\mathcal{E}_n]
&=
\Pr[\mathcal{E}_1\wedge\mathcal{E}_2\wedge\cdots\wedge\mathcal{E}_{n-1}]\cdot\Pr\left[\mathcal{E}_n\mid \bigwedge_{i<n}\mathcal{E}_i\right].
\end{align}
</math>
Recursively applying this equation to <math>\Pr[\mathcal{E}_1\wedge\mathcal{E}_2\wedge\cdots\wedge\mathcal{E}_{n-1}]</math> until there is only <math>\mathcal{E}_1</math> left, the theorem is proved. <math>\square</math>
|}

Now we are back to the probabilistic analysis of the birthday problem, with a general setting of <math>m</math> students and <math>n</math> possible birthdays (imagine that we live on a planet where a year has <math>n</math> days).

The first student has a birthday (of course!). The probability that the second student has a different birthday is <math>\left(1-\frac{1}{n}\right)</math>. Given that the first two students have different birthdays, the probability that the third student has a different birthday from the first two is <math>\left(1-\frac{2}{n}\right)</math>. Continuing in this way, assuming that the first <math>k-1</math> students all have different birthdays, the probability that the <math>k</math>th student has a different birthday than the first <math>k-1</math> is given by <math>\left(1-\frac{k-1}{n}\right)</math>. So the probability that all <math>m</math> students have different birthdays is the product of all these conditional probabilities:
:<math>\begin{align}
\Pr[\mathcal{E}]=\left(1-\frac{1}{n}\right)\cdot \left(1-\frac{2}{n}\right)\cdots \left(1-\frac{m-1}{n}\right)
&=
\prod_{k=1}^{m-1}\left(1-\frac{k}{n}\right),
\end{align}
</math>
which is the same as what we got by the counting argument.

[[File:Birthday.png|border|450px|right]]
 
There are several ways of analyzing this formula. Here is a convenient one: Due to [http://en.wikipedia.org/wiki/Taylor_series Taylor's expansion], <math>e^{-k/n}\approx 1-k/n</math>. Then
:<math>\begin{align}
\prod_{k=1}^{m-1}\left(1-\frac{k}{n}\right)
&\approx
\prod_{k=1}^{m-1}e^{-\frac{k}{n}}\\
&=
\exp\left(-\sum_{k=1}^{m-1}\frac{k}{n}\right)\\
&=
e^{-m(m-1)/2n}\\
&\approx
e^{-m^2/2n}.
\end{align}</math>
The quality of this approximation is shown in the Figure.
 
Therefore, for <math>m=\sqrt{2n\ln \frac{1}{\epsilon}}</math>, we have <math>\Pr[\mathcal{E}]\approx\epsilon</math>.
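The exact product and the <math>e^{-m^2/2n}</math> approximation can also be compared numerically. A small Python sketch (the chosen values of <math>m</math> are arbitrary):
<pre>
import math

def birthday_exact(m, n=365):
    """Pr[no two students share a birthday] = prod_{k=1}^{m-1} (1 - k/n)."""
    prob = 1.0
    for k in range(1, m):
        prob *= 1 - k / n
    return prob

def birthday_approx(m, n=365):
    """The e^{-m^2/(2n)} approximation derived above."""
    return math.exp(-m * m / (2 * n))

for m in (23, 57):
    print(m, birthday_exact(m), birthday_approx(m))
# m = 23 already gives collision probability above 1/2; m = 57 gives more than 99%.
</pre>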
 
==Coupon Collector ==
Suppose that a chocolate company releases <math>n</math> different types of coupons. Each box of chocolates contains one coupon with a uniformly random type. Once you have collected all <math>n</math> types of coupons, you will get a prize. So how many boxes of chocolates do you expect to buy in order to win the prize?


The coupon collector problem can be described in the balls-into-bins model as follows. We keep throwing balls one-by-one into <math>n</math> bins (coupons), such that each ball is thrown into a bin uniformly and independently at random. Each ball corresponds to a box of chocolate, and each bin corresponds to a type of coupon. Thus, the number of boxes bought to collect <math>n</math> coupons is just the number of balls thrown until none of the <math>n</math> bins is empty.


{{Theorem
|Theorem|
:Let <math>X</math> be the number of balls thrown uniformly and independently to <math>n</math> bins until no bin is empty. Then <math>\mathbf{E}[X]=nH(n)</math>, where <math>H(n)</math> is the <math>n</math>th harmonic number.
}}
{{Proof| Let <math>X_i</math> be the number of balls thrown while there are ''exactly'' <math>i-1</math> nonempty bins, then clearly <math>X=\sum_{i=1}^n X_i</math>.


When there are exactly <math>i-1</math> nonempty bins, if we throw one more ball, the probability that the number of nonempty bins increases (i.e. the ball is thrown to an empty bin) is
:<math>p_i=1-\frac{i-1}{n}.
</math>
<math>X_i</math> is the number of balls thrown to make the number of nonempty bins increase from <math>i-1</math> to <math>i</math>, i.e. the number of balls thrown until a ball is thrown to a currently empty bin. Thus, <math>X_i</math> follows the [http://en.wikipedia.org/wiki/Geometric_distribution geometric distribution], such that
:<math>\Pr[X_i=k]=(1-p_i)^{k-1}p_i</math>
For a geometric random variable, <math>\mathbf{E}[X_i]=\frac{1}{p_i}=\frac{n}{n-i+1}</math>.
Applying the linearity of expectations,
:<math>
\begin{align}
\mathbf{E}[X]
&=
\mathbf{E}\left[\sum_{i=1}^nX_i\right]\\
&=
\sum_{i=1}^n\mathbf{E}\left[X_i\right]\\
&=
\sum_{i=1}^n\frac{n}{n-i+1}\\
&=
n\sum_{i=1}^n\frac{1}{i}\\
&=
nH(n),
\end{align}
</math>
where <math>H(n)</math> is the <math>n</math>th Harmonic number, and <math>H(n)=\ln n+O(1)</math>. Thus, for the coupon collector problem, the expected number of coupons required to obtain all <math>n</math> types of coupons is <math>n\ln n+O(n)</math>.
}}
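A quick simulation confirms that the average number of balls needed is close to <math>nH(n)</math>. A minimal Python sketch (the value of <math>n</math> and the number of trials are arbitrary choices):
<pre>
import random

def coupon_collector_mc(n=50, trials=2000):
    """Average number of balls thrown until every one of the n bins is nonempty."""
    total = 0
    for _ in range(trials):
        seen, throws = set(), 0
        while len(seen) < n:
            seen.add(random.randrange(n))
            throws += 1
        total += throws
    return total / trials

n = 50
harmonic = sum(1 / i for i in range(1, n + 1))
print(coupon_collector_mc(n), n * harmonic)   # empirical mean vs nH(n)
</pre>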


----


Only knowing the expectation is not good enough. We would like to know how fast the probability decreases as a random variable deviates from its mean value.


{{Theorem
|Theorem|
:Let <math>X</math> be the number of balls thrown uniformly and independently to <math>n</math> bins until no bin is empty. Then <math>\Pr[X\ge n\ln n+cn]<e^{-c}</math> for any <math>c>0</math>.
}}
{{Proof| For any particular bin <math>i</math>, the probability that bin <math>i</math> is empty after throwing <math>n\ln n+cn</math> balls is
:<math>\left(1-\frac{1}{n}\right)^{n\ln n+cn}
< e^{-(\ln n+c)}
=\frac{1}{ne^c}.
</math>

By the union bound, the probability that there exists an empty bin after throwing <math>n\ln n+cn</math> balls is
:<math>
\Pr[X\ge n\ln n+cn]
< n\cdot \frac{1}{ne^c}
=e^{-c}.
</math>
}}
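A rough Monte Carlo check of this tail bound (a sketch only; <math>n=50</math>, <math>c=1</math> and the number of trials are arbitrary choices):
<pre>
import math, random

def tail_estimate(n=50, c=1.0, trials=5000):
    """Empirical estimate of Pr[X >= n ln n + cn] for the coupon collector time X."""
    threshold = n * math.log(n) + c * n
    hits = 0
    for _ in range(trials):
        seen, throws = set(), 0
        while len(seen) < n:
            seen.add(random.randrange(n))
            throws += 1
        if throws >= threshold:
            hits += 1
    return hits / trials

print(tail_estimate(), math.exp(-1.0))   # the empirical tail should be below e^{-c}
</pre>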
 
=== Stable Marriage ===
We now consider the famous [http://en.wikipedia.org/wiki/Stable_marriage_problem '''stable marriage problem'''] or '''stable matching problem''' (SMP). This problem captures two aspects: allocations (matchings) and stability, two central topics in economics.
 
An instance of stable marriage consists of:
* <math>n</math> men and <math>n</math> women;
* each person associated with a strictly ordered ''preference list'' containing all the members of the opposite sex.
Formally, let <math>M</math> be the set of <math>n</math> men and <math>W</math> be the set of <math>n</math> women. Each man <math>m\in M</math> is associated with a permutation <math>p_m</math> of elements in <math>W</math> and each woman <math>w\in W</math> is associated with a permutation <math>p_w</math> of elements in <math>M</math>.
 
A ''matching'' is a one-one correspondence <math>\phi:M\rightarrow W</math>. We say that a man <math>m</math> and a woman <math>w</math> are ''partners'' in <math>\phi</math> if <math>w=\phi(m)</math>.
{{Theorem|Definition (stable matching)|
:A pair <math>(m,w)</math> of a man and woman is a '''blocking pair''' in a matching <math>\phi</math> if <math>m</math> and <math>w</math> are not partners in <math>\phi</math> but
:* <math>m</math> prefers <math>w</math> to <math>\phi(m)</math>, and
:* <math>w</math> prefers <math>m</math> to <math>\phi(w)</math>.
:A matching <math>\phi</math> is '''stable''' if there is no blocking pair in it.
}}
 
It is unclear from the definition itself whether stable matchings always exist, and how to efficiently find a stable matching. Both questions are answered by the following proposal algorithm due to Gale and Shapley.
{{Theorem|The proposal algorithm (Gale-Shapley 1962)|
: Initially, no one is married;
: in each step (called a '''proposal'''):
:* an arbitrary unmarried man <math>m</math> proposes to the woman <math>w</math> who is ranked highest in his preference list <math>p_m</math> among all the women who have not yet rejected <math>m</math>;
:* if <math>w</math> is still single then <math>w</math> accepts the proposal and is married to <math>m</math>;
:* if <math>w</math> is married to another man <math>m'</math> who is ranked lower than <math>m</math> in her preference list <math>p_w</math> then <math>w</math> divorces <math>m'</math> (thus <math>m'</math> becomes single again and considers himself as rejected by <math>w</math>) and is married to <math>m</math>;
:* otherwise, <math>w</math> rejects <math>m</math>;
}}


The algorithm terminates when the last single woman receives a proposal. Since for every pair <math>(m,w)\in M\times W</math> of man and woman, <math>m</math> proposes to <math>w</math> at most once,
the algorithm terminates in at most <math>n^2</math> proposals in the worst case.

It is obvious to see that the algorithm returns a matching, and this matching must be stable. To see this, suppose by contradiction that the algorithm returns a matching <math>\phi</math>, such that two men <math>A, B</math> are matched to two women <math>a,b</math> in <math>\phi</math> respectively, but <math>A</math> and <math>b</math> prefer each other to their respective partners <math>a</math> and <math>B</math>. By the definition of the algorithm, <math>A</math> would have proposed to <math>b</math> before proposing to <math>a</math>, by which time <math>b</math> must either be single or be matched to a man ranked lower than <math>A</math> in her list (because her final partner <math>B</math> is ranked lower than <math>A</math>), which means <math>b</math> must have accepted <math>A</math>'s proposal, a contradiction.
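The proposal algorithm is easy to implement. Below is a minimal Python sketch, assuming each preference list is given as a permutation of <math>\{0,1,\ldots,n-1\}</math> (most preferred first); the function and variable names are only illustrative:
<pre>
from collections import deque

def gale_shapley(men_pref, women_pref):
    """Proposal algorithm: returns (matching as a dict woman -> man, number of proposals)."""
    n = len(men_pref)
    next_choice = [0] * n                  # next position in each man's list to propose to
    rank = [[0] * n for _ in range(n)]     # rank[w][m] = position of man m in woman w's list
    for w in range(n):
        for pos, m in enumerate(women_pref[w]):
            rank[w][m] = pos
    husband = {}                           # woman -> her current partner
    single = deque(range(n))               # currently unmarried men
    proposals = 0
    while single:
        m = single[0]
        w = men_pref[m][next_choice[m]]    # highest-ranked woman who has not rejected m yet
        next_choice[m] += 1
        proposals += 1
        if w not in husband:               # w is single: she accepts
            husband[w] = m
            single.popleft()
        elif rank[w][m] < rank[w][husband[w]]:   # w prefers m: she divorces her partner
            single.popleft()
            single.append(husband[w])
            husband[w] = m
        # otherwise w rejects m, and m will propose to his next choice later
    return husband, proposals
</pre>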
We are interested in the average-case performance of this algorithm, that is, the expected number of proposals if everyone's preference list is a uniformly and independently random permutation.

The following '''principle of deferred decisions''' is quite useful in analysing the performance of algorithms with random inputs.
{{Theorem|Principle of deferred decisions|
:The decision of random choice in the random input can be deferred to the running time of the algorithm.
}}
Applying the principle of deferred decisions, the deterministic proposal algorithm with random permutations as input is equivalent to the following random process:
* At each step, a man <math>m</math> chooses a woman <math>w</math> uniformly and independently at random to propose to, among all the women who have not rejected him yet. ('''sample without replacement''')

We then compare the above process with the following modified process:
* The man <math>m</math> repeatedly samples a uniformly and independently random woman to propose to among all women, until he successfully samples a woman who has not rejected him and proposes to her. ('''sample with replacement''')

It is easy to see that the modified process (sample with replacement) is no more efficient than the original process (sample without replacement) because it simulates the original process if at each step we only count the last proposal to the woman who has not rejected the man. Such comparison of two random processes by forcing them to be related in some way is called [http://en.wikipedia.org/wiki/Coupling_(probability) coupling].
 
Note that in the modified process (sample with replacement), each proposal, no matter from which man, goes to a uniformly and independently random woman. And we know that the algorithm terminates once the last single woman receives a proposal, i.e. once all <math>n</math> women have received at least one proposal. This is the coupon collector problem with proposals as balls (boxes of chocolates) and women as bins (coupons).
Due to our analysis of the coupon collector problem, the expected number of proposals is bounded by <math>O(n\ln n)</math>.
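Reusing the <code>gale_shapley</code> sketch given above, one can check this bound empirically on uniformly random preference lists (the value <math>n=30</math> and the number of trials are arbitrary choices):
<pre>
import math, random

def average_proposals(n=30, trials=200):
    """Average number of proposals on uniformly random preference lists.
    Assumes the gale_shapley sketch defined earlier in these notes is available."""
    total = 0
    for _ in range(trials):
        men = [random.sample(range(n), n) for _ in range(n)]
        women = [random.sample(range(n), n) for _ in range(n)]
        _, proposals = gale_shapley(men, women)
        total += proposals
    return total / trials

print(average_proposals(), 30 * math.log(30))   # the average number of proposals is O(n ln n)
</pre>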
 
== Occupancy Problem ==
Now we ask about the loads of bins. Assuming that <math>m</math> balls are uniformly and independently assigned to <math>n</math> bins, for <math>1\le i\le n</math>, let <math>X_i</math> be the '''load''' of the <math>i</math>th bin, i.e. the number of balls in the <math>i</math>th bin.
 
An easy analysis shows that for every bin <math>i</math>, the expected load <math>\mathbf{E}[X_i]</math> is equal to the average load <math>m/n</math>.
 
Because there are totally <math>m</math> balls, it is always true that <math>\sum_{i=1}^n X_i=m</math>.
 
Therefore, due to the linearity of expectations,
:<math>\begin{align}
\sum_{i=1}^n\mathbf{E}[X_i]
&=
\mathbf{E}\left[\sum_{i=1}^n X_i\right]
=
\mathbf{E}\left[m\right]
=m.
\end{align}</math>


Because for each ball, the bin to which the ball is assigned is uniformly and independently chosen, the distributions of the loads of bins are identical. Thus <math>\mathbf{E}[X_i]</math> is the same for each <math>i</math>. Combining with the above equation, it holds that for every <math>1\le i\le n</math>, <math>\mathbf{E}[X_i]=\frac{m}{n}</math>. So the average is indeed the average!
 
----
 
Next we analyze the distribution of the maximum load. We show that when <math>m=n</math>, i.e. <math>n</math> balls are uniformly and independently thrown into <math>n</math> bins, the maximum load is <math>O\left(\frac{\log n}{\log\log n}\right)</math> with high probability.
 
{{Theorem
|Theorem|
:Suppose that <math>n</math> balls are thrown independently and uniformly at random into <math>n</math> bins. For <math>1\le i\le n</math>, let <math>X_i</math> be the random variable denoting the number of balls in the <math>i</math>th bin. Then
::<math>\Pr\left[\max_{1\le i\le n}X_i \ge\frac{3\ln n}{\ln\ln n}\right] <\frac{1}{n}.</math>
}}

{{Proof| Let <math>M</math> be an integer. Take bin 1. For any particular <math>M</math> balls, these <math>M</math> balls are all thrown to bin 1 with probability <math>(1/n)^M</math>, and there are totally <math>{n\choose M}</math> distinct sets of <math>M</math> balls. Therefore, applying the union bound,
:<math>\begin{align}\Pr\left[X_1\ge M\right]
&\le
{n\choose M}\left(\frac{1}{n}\right)^M\\
&=
\frac{n!}{M!(n-M)!n^M}\\
&=
\frac{1}{M!}\cdot\frac{n(n-1)(n-2)\cdots(n-M+1)}{n^M}\\
&=
\frac{1}{M!}\cdot \prod_{i=0}^{M-1}\left(1-\frac{i}{n}\right)\\
&\le \frac{1}{M!}.
\end{align}</math>


According to [http://en.wikipedia.org/wiki/Stirling's_approximation Stirling's approximation], <math>M!\approx \sqrt{2\pi M}\left(\frac{M}{e}\right)^M</math>, thus
:<math>\frac{1}{M!}\le\left(\frac{e}{M}\right)^M.</math>

[[file:Balls2bins.png|frame|Figure 1]]

Due to the symmetry, all <math>X_i</math> have the same distribution.
Applying the union bound again,
:<math>\begin{align}
\Pr\left[\max_{1\le i\le n}X_i\ge M\right]
&=
\Pr\left[(X_1\ge M) \vee (X_2\ge M) \vee\cdots\vee (X_n\ge M)\right]\\
&\le
n\Pr[X_1\ge M]\\
&\le n\left(\frac{e}{M}\right)^M.
\end{align}
</math>


When <math>M=3\ln n/\ln\ln n</math>,
:<math>\begin{align}
\left(\frac{e}{M}\right)^M
&=
\left(\frac{e\ln\ln n}{3\ln n}\right)^{3\ln n/\ln\ln n}\\
&<
\left(\frac{\ln\ln n}{\ln n}\right)^{3\ln n/\ln\ln n}\\
&=
e^{3(\ln\ln\ln n-\ln\ln n)\ln n/\ln\ln n}\\
&=
e^{-3\ln n+3\ln\ln\ln n\ln n/\ln\ln n}\\
&\le
e^{-2\ln n}\\
&=
\frac{1}{n^2}.
\end{align}
</math>

Therefore,
:<math>\begin{align}
\Pr\left[\max_{1\le i\le n}X_i\ge \frac{3\ln n}{\ln\ln n}\right]
&\le n\left(\frac{e}{M}\right)^M\\
&< \frac{1}{n}.
\end{align}</math>
}}


When <math>m>n</math>, Figure 1 illustrates the results of several random experiments, which show that the distribution of the loads of bins becomes more even as the number of balls grows larger than the number of bins.

Formally, it can be proved that for <math>m=\Omega(n\log n)</math>, with high probability, the maximum load is within <math>O\left(\frac{m}{n}\right)</math>, which is asymptotically equal to the average load.
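The behavior of the maximum load is easy to observe by simulation. A minimal Python sketch (the value <math>n=1000</math> is an arbitrary choice):
<pre>
import math, random
from collections import Counter

def max_load(m, n):
    """Throw m balls into n bins uniformly and independently at random; return the maximum load."""
    loads = Counter(random.randrange(n) for _ in range(m))
    return max(loads.values())

n = 1000
print(max_load(n, n), 3 * math.log(n) / math.log(math.log(n)))   # m = n: compare with 3 ln n / ln ln n
m = n * int(math.log(n))                                         # m = Theta(n log n)
print(max_load(m, n), m / n)   # the maximum load is within a constant factor of the average load m/n
</pre>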


=Random Quicksort=
Given as input a set <math>S</math> of <math>n</math> numbers, we want to sort the numbers in <math>S</math> in increasing order. One of the most famous algorithms for this problem is the [http://en.wikipedia.org/wiki/Quicksort Quicksort] algorithm.
* if <math>|S|>1</math> do:
** pick an <math>x\in S</math> as the ''pivot'';
** partition <math>S</math> into <math>S_1</math>, <math>\{x\}</math>, and <math>S_2</math>, where all numbers in <math>S_1</math> are smaller than <math>x</math> and all numbers in <math>S_2</math> are larger than <math>x</math>;
** recursively sort <math>S_1</math> and <math>S_2</math>;
 
The time complexity of this sorting algorithm is measured by the '''number of comparisons'''. 
 
For the '''deterministic''' quicksort algorithm, the pivot is picked from a fixed position (e.g. the first number in the array). The worst-case time complexity in terms of number of comparisons is <math>\Theta(n^2)</math>.
 
We consider the following randomized version of the quicksort.
* if <math>|S|>1</math> do:
** ''uniformly'' pick a ''random'' <math>x\in S</math> as the pivot;
** partition <math>S</math> into <math>S_1</math>, <math>\{x\}</math>, and <math>S_2</math>, where all numbers in <math>S_1</math> are smaller than <math>x</math> and all numbers in <math>S_2</math> are  larger than <math>x</math>;
** recursively sort <math>S_1</math> and <math>S_2</math>;
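A direct Python sketch of this randomized Quicksort, assuming the input numbers are distinct (as in the description above):
<pre>
import random

def rand_qsort(S):
    """Randomized Quicksort: returns a sorted copy of the list S of distinct numbers."""
    if len(S) <= 1:
        return list(S)
    x = random.choice(S)                      # uniformly random pivot
    S1 = [y for y in S if y < x]              # numbers smaller than the pivot
    S2 = [y for y in S if y > x]              # numbers larger than the pivot
    return rand_qsort(S1) + [x] + rand_qsort(S2)

print(rand_qsort([3, 1, 4, 9, 2, 6, 5]))
</pre>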
 
== Analysis of Random Quicksort==
Our goal is to analyze the expected number of comparisons during an execution of RandQSort with an arbitrary input <math>S</math>. We achieve this by measuring the chance that each pair of elements are compared, and summing all of them up due to [http://en.wikipedia.org/wiki/Expected_value#Linearity Linearity of Expectation].
 
Let <math>a_i</math> denote the <math>i</math>th smallest element in <math>S</math>.
Let <math>X_{ij}\in\{0,1\}</math> be the random variable which indicates whether <math>a_i</math> and <math>a_j</math> are compared during the execution of RandQSort. That is:


:<math>
\begin{align}
X_{ij} &=
\begin{cases}
1 & a_i\mbox{ and }a_j\mbox{ are compared}\\
0 & \mbox{otherwise}
\end{cases}.
\end{align}
</math>


Elements <math>a_i</math> and <math>a_j</math> are compared only if one of them is chosen as pivot. After comparison they are separated (thus are never compared again). So we have the following observations:


'''Observation 1:  Every pair of <math>a_i</math> and <math>a_j</math> are compared at most once.'''


Therefore the sum of <math>X_{ij}</math> for all pairs <math>\{i, j\}</math> gives the total number of comparisons. The expected number of comparisons is <math>\mathbf{E}\left[\sum_{i=1}^n\sum_{j>i}X_{ij}\right]</math>. Due to [http://en.wikipedia.org/wiki/Expected_value#Linearity Linearity of Expectation], <math>\mathbf{E}\left[\sum_{i=1}^n\sum_{j>i}X_{ij}\right] = \sum_{i=1}^n\sum_{j>i}\mathbf{E}\left[X_{ij}\right]</math>.
Our next step is to analyze <math>\mathbf{E}\left[X_{ij}\right]</math> for each <math>\{i, j\}</math>.

By the definition of expectation and <math>X_{ij}</math>,


:<math>\begin{align}
\mathbf{E}\left[X_{ij}\right]
&= 1\cdot \Pr[a_i\mbox{ and }a_j\mbox{ are compared}] + 0\cdot \Pr[a_i\mbox{ and }a_j\mbox{ are not compared}]\\
&= \Pr[a_i\mbox{ and }a_j\mbox{ are compared}].
\end{align}</math>


We are going to bound this probability.


'''Observation 2: <math>a_i</math> and <math>a_j</math> are compared if and only if one of them is chosen as pivot when they are still in the same subset.'''

This is easy to verify: just check the algorithm. The next one is a bit complicated.


'''Observation 3: If <math>a_i</math> and <math>a_j</math> are still in the same subset then all <math>\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}</math> are in the same subset.'''


We can verify this by induction. Initially, <math>S</math> itself has the property described above; and partitioning any <math>S</math> with the property into <math>S_1</math> and <math>S_2</math> will preserve the property for both <math>S_1</math> and <math>S_2</math>. Therefore Observation 3 holds.

Combining Observations 2 and 3, we have:

'''Observation 4: <math>a_i</math> and <math>a_j</math> are compared only if one of <math>\{a_i, a_j\}</math> is chosen from <math>\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}</math>.'''


And,

'''Observation 5: Every one of <math>\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}</math> is chosen with equal probability.'''

This is because the Random Quicksort chooses the pivot ''uniformly at random''.

Observation 4 and 5 together imply:

:<math>\begin{align}
\Pr[a_i\mbox{ and }a_j\mbox{ are compared}]
&\le \frac{2}{j-i+1}.
\end{align}</math>
 
{|border="1"
|'''Remark:''' Perhaps you feel confused about the above argument. You may ask: "''The algorithm chooses pivots for many times during the execution. Why in the above argument, it looks like the pivot is chosen only once?''" Good question! Let's see what really happens by looking closely.
 
For any pair <math>a_i</math> and <math>a_j</math>, initially <math>\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}</math> are all in the same set <math>S</math> (obviously!). During the execution of the algorithm, the set containing <math>\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}</math> keeps shrinking (due to the pivoting), until one of <math>\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}</math> is chosen, and the set is partitioned into different subsets. We ask for the probability that the chosen one is among <math>\{a_i, a_j\}</math>. So we really care about "the last" pivoting before <math>\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}</math> is split.
 
Formally, let <math>Y</math> be the random variable denoting the pivot element. We know that for each <math>a_k\in\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}</math>, <math>Y=a_k</math> with the same probability, and <math>Y\not\in\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}</math> with an unknown probability (remember that there might be other elements in the same subset with <math>\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}</math>). The probability we are looking for is actually
<math>\Pr[Y\in \{a_i, a_j\}\mid Y\in\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}]</math>, which is always <math>\frac{2}{j-i+1}</math>, provided that <math>Y</math> is uniform over <math>\{a_i, a_{i+1}, \ldots, a_{j-1}, a_{j}\}</math>.
 
The '''conditional probability''' rules out the ''irrelevant'' events in a probabilistic argument.
|}
 
Summing all up:
 
:<math>\begin{align}
\mathbf{E}\left[\sum_{i=1}^n\sum_{j>i}X_{ij}\right]
&=
\sum_{i=1}^n\sum_{j>i}\mathbf{E}\left[X_{ij}\right]\\
&\le \sum_{i=1}^n\sum_{j>i}\frac{2}{j-i+1}\\
&= \sum_{i=1}^n\sum_{k=2}^{n-i+1}\frac{2}{k} & & (\mbox{Let }k=j-i+1)\\
&\le \sum_{i=1}^n\sum_{k=1}^{n}\frac{2}{k}\\
&= 2n\sum_{k=1}^{n}\frac{1}{k}\\
&= 2n H(n).
\end{align}</math>
 
<math>H(n)</math> is the <math>n</math>th [http://en.wikipedia.org/wiki/Harmonic_number Harmonic number]. It holds that
 
:<math>\begin{align}H(n) = \ln n+O(1)\end{align}</math>.
 
Therefore, for an arbitrary input <math>S</math> of <math>n</math> numbers, the expected number of comparisons taken by RandQSort to sort <math>S</math> is <math>\mathrm{O}(n\log n)</math>.
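This <math>2nH(n)</math> bound on the expected number of comparisons can be checked empirically. A small Python sketch, assuming distinct input numbers (the instance size and the number of runs are arbitrary choices):
<pre>
import random

def count_comparisons(S):
    """Run randomized Quicksort on distinct numbers and count the comparisons made."""
    if len(S) <= 1:
        return 0
    x = random.choice(S)
    S1 = [y for y in S if y < x]
    S2 = [y for y in S if y > x]
    comparisons = len(S) - 1          # the pivot is compared with every other element
    return comparisons + count_comparisons(S1) + count_comparisons(S2)

n = 1000
data = list(range(n))
avg = sum(count_comparisons(data) for _ in range(50)) / 50
print(avg, 2 * n * sum(1 / k for k in range(1, n + 1)))   # empirical average vs the 2nH(n) bound
</pre>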

Revision as of 08:26, 31 March 2014

The Doob martingales

The following definition describes a very general approach for constructing an important type of martingales.

Definition (The Doob sequence)
The Doob sequence of a function [math]\displaystyle{ f }[/math] with respect to a sequence of random variables [math]\displaystyle{ X_1,\ldots,X_n }[/math] is defined by
[math]\displaystyle{ Y_i=\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_{i}], \quad 0\le i\le n. }[/math]
In particular, [math]\displaystyle{ Y_0=\mathbf{E}[f(X_1,\ldots,X_n)] }[/math] and [math]\displaystyle{ Y_n=f(X_1,\ldots,X_n) }[/math].

The Doob sequence of a function defines a martingale. That is

[math]\displaystyle{ \mathbf{E}[Y_i\mid X_1,\ldots,X_{i-1}]=Y_{i-1}, }[/math]

for any [math]\displaystyle{ 0\le i\le n }[/math].

To prove this claim, we recall the definition that [math]\displaystyle{ Y_i=\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_{i}] }[/math], thus,

[math]\displaystyle{ \begin{align} \mathbf{E}[Y_i\mid X_1,\ldots,X_{i-1}] &=\mathbf{E}[\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_{i}]\mid X_1,\ldots,X_{i-1}]\\ &=\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_{i-1}]\\ &=Y_{i-1}, \end{align} }[/math]

where the second equation is due to the fundamental fact about conditional expectation introduced in the first section.

The Doob martingale describes a very natural procedure to determine a function value of a sequence of random variables. Suppose that we want to predict the value of a function [math]\displaystyle{ f(X_1,\ldots,X_n) }[/math] of random variables [math]\displaystyle{ X_1,\ldots,X_n }[/math]. The Doob sequence [math]\displaystyle{ Y_0,Y_1,\ldots,Y_n }[/math] represents a sequence of refined estimates of the value of [math]\displaystyle{ f(X_1,\ldots,X_n) }[/math], gradually using more information on the values of the random variables [math]\displaystyle{ X_1,\ldots,X_n }[/math]. The first element [math]\displaystyle{ Y_0 }[/math] is just the expectation of [math]\displaystyle{ f(X_1,\ldots,X_n) }[/math]. Element [math]\displaystyle{ Y_i }[/math] is the expected value of [math]\displaystyle{ f(X_1,\ldots,X_n) }[/math] when the values of [math]\displaystyle{ X_1,\ldots,X_{i} }[/math] are known, and [math]\displaystyle{ Y_n=f(X_1,\ldots,X_n) }[/math] when [math]\displaystyle{ f(X_1,\ldots,X_n) }[/math] is fully determined by [math]\displaystyle{ X_1,\ldots,X_n }[/math].

The following two Doob martingales arise in evaluating the parameters of random graphs.

edge exposure martingale
Let [math]\displaystyle{ G }[/math] be a random graph on [math]\displaystyle{ n }[/math] vertices. Let [math]\displaystyle{ f }[/math] be a real-valued function of graphs, such as, chromatic number, number of triangles, the size of the largest clique or independent set, etc. Denote that [math]\displaystyle{ m={n\choose 2} }[/math]. Fix an arbitrary numbering of potential edges between the [math]\displaystyle{ n }[/math] vertices, and denote the edges as [math]\displaystyle{ e_1,\ldots,e_m }[/math]. Let
[math]\displaystyle{ X_i=\begin{cases} 1& \mbox{if }e_i\in G,\\ 0& \mbox{otherwise}. \end{cases} }[/math]
Let [math]\displaystyle{ Y_0=\mathbf{E}[f(G)] }[/math] and for [math]\displaystyle{ i=1,\ldots,m }[/math], let [math]\displaystyle{ Y_i=\mathbf{E}[f(G)\mid X_1,\ldots,X_i] }[/math].
The sequence [math]\displaystyle{ Y_0,Y_1,\ldots,Y_n }[/math] gives a Doob martingale that is commonly called the edge exposure martingale.
vertex exposure martingale
Instead of revealing edges one at a time, we could reveal the set of edges connected to a given vertex, one vertex at a time. Suppose that the vertex set is [math]\displaystyle{ [n] }[/math]. Let [math]\displaystyle{ X_i }[/math] be the subgraph of [math]\displaystyle{ G }[/math] induced by the vertex set [math]\displaystyle{ [i] }[/math], i.e. the first [math]\displaystyle{ i }[/math] vertices.
Let [math]\displaystyle{ Y_0=\mathbf{E}[f(G)] }[/math] and for [math]\displaystyle{ i=1,\ldots,n }[/math], let [math]\displaystyle{ Y_i=\mathbf{E}[f(G)\mid X_1,\ldots,X_i] }[/math].
The sequence [math]\displaystyle{ Y_0,Y_1,\ldots,Y_n }[/math] gives a Doob martingale that is commonly called the vertex exposure martingale.

Chromatic number

The random graph [math]\displaystyle{ G(n,p) }[/math] is the graph on [math]\displaystyle{ n }[/math] vertices [math]\displaystyle{ [n] }[/math], obtained by selecting each pair of vertices to be an edge, randomly and independently, with probability [math]\displaystyle{ p }[/math]. We denote [math]\displaystyle{ G\sim G(n,p) }[/math] if [math]\displaystyle{ G }[/math] is generated in this way.

Theorem [Shamir and Spencer (1987)]
Let [math]\displaystyle{ G\sim G(n,p) }[/math]. Let [math]\displaystyle{ \chi(G) }[/math] be the chromatic number of [math]\displaystyle{ G }[/math]. Then
[math]\displaystyle{ \begin{align} \Pr\left[|\chi(G)-\mathbf{E}[\chi(G)]|\ge t\sqrt{n}\right]\le 2e^{-t^2/2}. \end{align} }[/math]
Proof.
Consider the vertex exposure martingale
[math]\displaystyle{ Y_i=\mathbf{E}[\chi(G)\mid X_1,\ldots,X_i] }[/math]

where each [math]\displaystyle{ X_k }[/math] exposes the induced subgraph of [math]\displaystyle{ G }[/math] on the vertex set [math]\displaystyle{ [k] }[/math]. Exposing one more vertex changes the chromatic number by at most 1, since the new vertex can always be given a fresh color so that the graph remains properly colored; thus the bounded difference condition

[math]\displaystyle{ |Y_i-Y_{i-1}|\le 1 }[/math]

is satisfied. Now apply Azuma's inequality to the martingale [math]\displaystyle{ Y_0,Y_1,\ldots,Y_n }[/math] with respect to [math]\displaystyle{ X_1,\ldots,X_n }[/math].

[math]\displaystyle{ \square }[/math]

For [math]\displaystyle{ t=\omega(1) }[/math], the theorem states that the chromatic number of a random graph is tightly concentrated around its mean. The proof gives no clue as to where the mean is. This actually shows how powerful the martingale inequalities are: we can prove that a distribution is concentrated around its expectation without knowing the expectation itself.
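As a quick numeric illustration (the value of [math]\displaystyle{ t }[/math] is chosen arbitrarily here), taking [math]\displaystyle{ t=3 }[/math] in the theorem gives [math]\displaystyle{ \Pr\left[|\chi(G)-\mathbf{E}[\chi(G)]|\ge 3\sqrt{n}\right]\le 2e^{-9/2}\approx 0.022 }[/math], a bound that holds regardless of the value of [math]\displaystyle{ \mathbf{E}[\chi(G)] }[/math].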

Hoeffding's Inequality

The following theorem states the so-called Hoeffding's inequality. It is a generalized version of the Chernoff bounds. Recall that the Chernoff bounds hold for sums of independent 0-1 trials. When the random variables are not 0-1 valued, Hoeffding's inequality is still useful, since it holds for the sum of any independent random variables with bounded ranges.

Hoeffding's inequality
Let [math]\displaystyle{ X=\sum_{i=1}^nX_i }[/math], where [math]\displaystyle{ X_1,\ldots,X_n }[/math] are independent random variables with [math]\displaystyle{ a_i\le X_i\le b_i }[/math] for each [math]\displaystyle{ 1\le i\le n }[/math]. Let [math]\displaystyle{ \mu=\mathbf{E}[X] }[/math]. Then
[math]\displaystyle{ \Pr[|X-\mu|\ge t]\le 2\exp\left(-\frac{t^2}{2\sum_{i=1}^n(b_i-a_i)^2}\right). }[/math]
Proof.
Define the Doob martingale sequence [math]\displaystyle{ Y_i=\mathbf{E}\left[\sum_{j=1}^n X_j\,\Big|\, X_1,\ldots,X_{i}\right] }[/math]. Obviously [math]\displaystyle{ Y_0=\mu }[/math] and [math]\displaystyle{ Y_n=X }[/math].
[math]\displaystyle{ \begin{align} |Y_i-Y_{i-1}| &= \left|\mathbf{E}\left[\sum_{j=1}^n X_j\,\Big|\, X_1,\ldots,X_{i}\right]-\mathbf{E}\left[\sum_{j=1}^n X_j\,\Big|\, X_1,\ldots,X_{i-1}\right]\right|\\ &=\left|\sum_{j=1}^i X_j+\sum_{j=i+1}^n\mathbf{E}[X_j]-\sum_{j=1}^{i-1} X_j-\sum_{j=i}^n\mathbf{E}[X_j]\right|\\ &=\left|X_i-\mathbf{E}[X_{i}]\right|\\ &\le b_i-a_i \end{align} }[/math]

The last inequality holds because both [math]\displaystyle{ X_i }[/math] and [math]\displaystyle{ \mathbf{E}[X_i] }[/math] lie in the interval [math]\displaystyle{ [a_i,b_i] }[/math]. Applying Azuma's inequality to the martingale [math]\displaystyle{ Y_0,\ldots,Y_n }[/math] with respect to [math]\displaystyle{ X_1,\ldots, X_n }[/math] proves Hoeffding's inequality.

[math]\displaystyle{ \square }[/math]
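A quick simulation can be used to sanity-check the bound. The following sketch (my own, with arbitrarily chosen ranges) draws independent uniform variables on [math]\displaystyle{ [a_i,b_i] }[/math] and compares the empirical tail probability with the bound above; note that this form of Hoeffding's inequality, obtained via Azuma, is valid but rather conservative:

import math
import random

# Sanity-check sketch: sum of independent Uniform[a_i, b_i] variables.
n = 50
a = [0.0] * n
b = [float(i % 3 + 1) for i in range(n)]             # ranges of widths 1, 2, 3
mu = sum((ai + bi) / 2 for ai, bi in zip(a, b))       # E[X] for uniform summands
span2 = sum((bi - ai) ** 2 for ai, bi in zip(a, b))   # sum of (b_i - a_i)^2

t = 20.0
trials = 100000
hits = sum(abs(sum(random.uniform(ai, bi) for ai, bi in zip(a, b)) - mu) >= t
           for _ in range(trials))
print("empirical tail:", hits / trials)
print("Hoeffding bound:", 2 * math.exp(-t * t / (2 * span2)))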


The Bounded Difference Method

Combining Azuma's inequality with the construction of Doob martingales, we obtain the powerful Bounded Difference Method for concentration of measure.

For arbitrary random variables

Given a sequence of random variables [math]\displaystyle{ X_1,\ldots,X_n }[/math] and a function [math]\displaystyle{ f }[/math], the Doob sequence construction yields a martingale. Combining this construction with Azuma's inequality, we get a very powerful theorem called the "method of averaged bounded differences", which bounds the concentration of an arbitrary function of arbitrary random variables (which need not be independent, nor themselves form a martingale).

Theorem (Method of averaged bounded differences)
Let [math]\displaystyle{ \boldsymbol{X}=(X_1,\ldots, X_n) }[/math] be arbitrary random variables and let [math]\displaystyle{ f }[/math] be a function of [math]\displaystyle{ X_1,\ldots, X_n }[/math] satisfying that, for all [math]\displaystyle{ 1\le i\le n }[/math],
[math]\displaystyle{ |\mathbf{E}[f(\boldsymbol{X})\mid X_1,\ldots,X_i]-\mathbf{E}[f(\boldsymbol{X})\mid X_1,\ldots,X_{i-1}]|\le c_i. }[/math]
Then
[math]\displaystyle{ \begin{align} \Pr\left[|f(\boldsymbol{X})-\mathbf{E}[f(\boldsymbol{X})]|\ge t\right]\le 2\exp\left(-\frac{t^2}{2\sum_{i=1}^nc_i^2}\right). \end{align} }[/math]
Proof.
Define the Doob martingale sequence [math]\displaystyle{ Y_0,Y_1,\ldots,Y_n }[/math] by setting [math]\displaystyle{ Y_0=\mathbf{E}[f(X_1,\ldots,X_n)] }[/math] and, for [math]\displaystyle{ 1\le i\le n }[/math], [math]\displaystyle{ Y_i=\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_i] }[/math]. Then the theorem is simply Azuma's inequality applied to the martingale [math]\displaystyle{ Y_0,Y_1,\ldots,Y_n }[/math].
[math]\displaystyle{ \square }[/math]

For independent random variables

The condition of bounded averaged differences is usually hard to check. This severely limits the usefulness of the method. To overcome this, we introduce a property which is much easier to check, called the Lipschitz condition.

Definition (Lipschitz condition)
A function [math]\displaystyle{ f(x_1,\ldots,x_n) }[/math] satisfies the Lipschitz condition, if for any [math]\displaystyle{ x_1,\ldots,x_n }[/math] and any [math]\displaystyle{ y_i }[/math],
[math]\displaystyle{ \begin{align} |f(x_1,\ldots,x_{i-1},x_i,x_{i+1},\ldots,x_n)-f(x_1,\ldots,x_{i-1},y_i,x_{i+1},\ldots,x_n)|\le 1. \end{align} }[/math]

In other words, the function satisfies the Lipschitz condition if an arbitrary change in the value of any one argument does not change the value of the function by more than 1.

The difference bound of 1 can be replaced by arbitrary constants, which gives a generalized version of the Lipschitz condition.

Definition (Lipschitz condition, general version)
A function [math]\displaystyle{ f(x_1,\ldots,x_n) }[/math] satisfies the Lipschitz condition with constants [math]\displaystyle{ c_i }[/math], [math]\displaystyle{ 1\le i\le n }[/math], if for any [math]\displaystyle{ x_1,\ldots,x_n }[/math] and any [math]\displaystyle{ y_i }[/math],
[math]\displaystyle{ \begin{align} |f(x_1,\ldots,x_{i-1},x_i,x_{i+1},\ldots,x_n)-f(x_1,\ldots,x_{i-1},y_i,x_{i+1},\ldots,x_n)|\le c_i. \end{align} }[/math]

The following "method of bounded differences" can be developed for functions satisfying the Lipschitz condition. Unfortunately, in order to imply the condition of averaged bounded differences from the Lipschitz condition, we have to restrict the method to independent random variables.

Corollary (Method of bounded differences)
Let [math]\displaystyle{ \boldsymbol{X}=(X_1,\ldots, X_n) }[/math] be [math]\displaystyle{ n }[/math] independent random variables and let [math]\displaystyle{ f }[/math] be a function satisfying the Lipschitz condition with constants [math]\displaystyle{ c_i }[/math], [math]\displaystyle{ 1\le i\le n }[/math]. Then
[math]\displaystyle{ \begin{align} \Pr\left[|f(\boldsymbol{X})-\mathbf{E}[f(\boldsymbol{X})]|\ge t\right]\le 2\exp\left(-\frac{t^2}{2\sum_{i=1}^nc_i^2}\right). \end{align} }[/math]
Proof.
For convenience, we denote [math]\displaystyle{ \boldsymbol{X}_{[i,j]}=(X_i,X_{i+1},\ldots, X_j) }[/math] for any [math]\displaystyle{ 1\le i\le j\le n }[/math].

We first show that the Lipschitz condition with constants [math]\displaystyle{ c_i }[/math], [math]\displaystyle{ 1\le i\le n }[/math], implies another condition called the averaged Lipschitz condition (ALC): for any [math]\displaystyle{ a_i,b_i }[/math], [math]\displaystyle{ 1\le i\le n }[/math],

[math]\displaystyle{ \left|\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a_i\right]-\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=b_i\right]\right|\le c_i. }[/math]

And this condition implies the averaged bounded difference condition: for all [math]\displaystyle{ 1\le i\le n }[/math],

[math]\displaystyle{ \left|\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}\right]-\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]}\right]\right|\le c_i. }[/math]

Then by applying the method of averaged bounded differences, the corollary can be proved.

For any [math]\displaystyle{ a }[/math], by the law of total expectation,

[math]\displaystyle{ \begin{align} &\quad\, \mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a\right]\\ &=\sum_{a_{i+1},\ldots,a_n}\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a, \boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right]\cdot\Pr\left[\boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\mid \boldsymbol{X}_{[1,i-1]},X_i=a\right]\\ &=\sum_{a_{i+1},\ldots,a_n}\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a, \boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right]\cdot\Pr\left[\boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right] \qquad (\mbox{independence})\\ &= \sum_{a_{i+1},\ldots,a_n} f(\boldsymbol{X}_{[1,i-1]},a,\boldsymbol{a}_{[i+1,n]})\cdot\Pr\left[\boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right]. \end{align} }[/math]

Setting [math]\displaystyle{ a=a_i }[/math] and [math]\displaystyle{ a=b_i }[/math] respectively and taking the difference, we get

[math]\displaystyle{ \begin{align} &\quad\, \left|\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a_i\right]-\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=b_i\right]\right|\\ &=\left|\sum_{a_{i+1},\ldots,a_n}\left(f(\boldsymbol{X}_{[1,i-1]},a_i,\boldsymbol{a}_{[i+1,n]})-f(\boldsymbol{X}_{[1,i-1]},b_i,\boldsymbol{a}_{[i+1,n]})\right)\Pr\left[\boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right]\right|\\ &\le \sum_{a_{i+1},\ldots,a_n}\left|f(\boldsymbol{X}_{[1,i-1]},a_i,\boldsymbol{a}_{[i+1,n]})-f(\boldsymbol{X}_{[1,i-1]},b_i,\boldsymbol{a}_{[i+1,n]})\right|\Pr\left[\boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right]\\ &\le \sum_{a_{i+1},\ldots,a_n}c_i\Pr\left[\boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right] \qquad (\mbox{Lipschitz condition})\\ &=c_i. \end{align} }[/math]

Thus, the Lipschitz condition is transformed to the ALC. We then deduce the averaged bounded difference condition from ALC.

By the law of total expectation,

[math]\displaystyle{ \mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]}\right]=\sum_{a}\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a\right]\cdot\Pr[X_i=a\mid \boldsymbol{X}_{[1,i-1]}]. }[/math]

We can trivially write [math]\displaystyle{ \mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}\right] }[/math] as

[math]\displaystyle{ \mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}\right]=\sum_{a}\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}\right]\cdot\Pr\left[X_i=a\mid \boldsymbol{X}_{[1,i-1]}\right]. }[/math]

Hence, the difference is

[math]\displaystyle{ \begin{align} &\quad \left|\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}\right]-\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]}\right]\right|\\ &=\left|\sum_{a}\left(\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}\right]-\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a\right]\right)\cdot\Pr\left[X_i=a\mid \boldsymbol{X}_{[1,i-1]}\right]\right| \\ &\le \sum_{a}\left|\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}\right]-\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a\right]\right|\cdot\Pr\left[X_i=a\mid \boldsymbol{X}_{[1,i-1]}\right] \\ &\le \sum_a c_i\Pr\left[X_i=a\mid \boldsymbol{X}_{[1,i-1]}\right] \qquad (\mbox{due to ALC})\\ &=c_i. \end{align} }[/math]

The averaged bounded difference condition is implied. Applying the method of averaged bounded differences, the corollary follows.

[math]\displaystyle{ \square }[/math]
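For later convenience, the tail bound of the corollary can be wrapped in a tiny helper (a sketch of my own; the function name is merely illustrative):

import math

# Tail bound from the method of bounded differences:
# Pr[|f(X) - E[f(X)]| >= t] <= 2 exp(-t^2 / (2 * sum_i c_i^2)).
def bounded_difference_tail(c, t):
    return 2 * math.exp(-t * t / (2 * sum(ci * ci for ci in c)))

# e.g. 100 coordinates, each with Lipschitz constant 1, deviation t = 20:
print(bounded_difference_tail([1.0] * 100, 20.0))  # 2 * e^{-2}, about 0.27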

Applications

Occupancy problem

Throwing [math]\displaystyle{ m }[/math] balls uniformly and independently at random into [math]\displaystyle{ n }[/math] bins, we ask about the occupancies of the bins by the balls. In particular, we are interested in the number of empty bins.

This problem can be described equivalently as follows. Let [math]\displaystyle{ f:[m]\rightarrow[n] }[/math] be a uniform random function from [math]\displaystyle{ [m] }[/math] to [math]\displaystyle{ [n] }[/math]. We ask for the number of [math]\displaystyle{ i\in[n] }[/math] such that [math]\displaystyle{ f^{-1}(i) }[/math] is empty.

For any [math]\displaystyle{ i\in[n] }[/math], let [math]\displaystyle{ X_i }[/math] indicate the emptiness of bin [math]\displaystyle{ i }[/math]. Let [math]\displaystyle{ X=\sum_{i=1}^nX_i }[/math] be the number of empty bins.

[math]\displaystyle{ \mathbf{E}[X_i]=\Pr[\mbox{bin }i\mbox{ is empty}]=\left(1-\frac{1}{n}\right)^m. }[/math]

By the linearity of expectation,

[math]\displaystyle{ \mathbf{E}[X]=\sum_{i=1}^n\mathbf{E}[X_i]=n\left(1-\frac{1}{n}\right)^m. }[/math]

We want to know how [math]\displaystyle{ X }[/math] deviates from this expectation. The complication here is that the [math]\displaystyle{ X_i }[/math] are not independent. So we instead look at a sequence of independent random variables [math]\displaystyle{ Y_1,\ldots, Y_m }[/math], where [math]\displaystyle{ Y_j\in[n] }[/math] represents the bin into which the [math]\displaystyle{ j }[/math]th ball falls. Clearly [math]\displaystyle{ X }[/math] is a function of [math]\displaystyle{ Y_1,\ldots, Y_m }[/math].

We then observe that changing the value of any [math]\displaystyle{ Y_j }[/math] can change the value of [math]\displaystyle{ X }[/math] by at most 1: moving a single ball can make at most one bin newly empty and at most one bin newly non-empty, so the number of empty bins changes by at most 1. Thus, as a function of the independent random variables [math]\displaystyle{ Y_1,\ldots, Y_m }[/math], [math]\displaystyle{ X }[/math] satisfies the Lipschitz condition. Applying the method of bounded differences, it holds that

[math]\displaystyle{ \Pr\left[\left|X-n\left(1-\frac{1}{n}\right)^m\right|\ge t\sqrt{m}\right]=\Pr[|X-\mathbf{E}[X]|\ge t\sqrt{m}]\le 2e^{-t^2/2} }[/math]

Thus, for sufficiently large [math]\displaystyle{ n }[/math] and [math]\displaystyle{ m }[/math], the number of empty bins is tightly concentrated around [math]\displaystyle{ n\left(1-\frac{1}{n}\right)^m\approx \frac{n}{e^{m/n}} }[/math].
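The following simulation sketch (my own; the parameters are arbitrary) compares the empirical deviation of the number of empty bins with the bound above:

import math
import random

# Throw m balls into n bins and record the number of empty bins.
n, m = 1000, 1000
mean = n * (1 - 1 / n) ** m    # the expectation computed above
t = 2.0
trials = 2000
deviations = 0
for _ in range(trials):
    occupied = set(random.randrange(n) for _ in range(m))
    empty = n - len(occupied)
    deviations += abs(empty - mean) >= t * math.sqrt(m)
print("empirical tail:", deviations / trials)
print("bound 2*e^{-t^2/2}:", 2 * math.exp(-t * t / 2))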

Pattern Matching

Let [math]\displaystyle{ \boldsymbol{X}=(X_1,\ldots,X_n) }[/math] be a sequence of characters chosen independently and uniformly at random from an alphabet [math]\displaystyle{ \Sigma }[/math], where [math]\displaystyle{ m=|\Sigma| }[/math]. Let [math]\displaystyle{ \pi\in\Sigma^k }[/math] be an arbitrarily fixed string of [math]\displaystyle{ k }[/math] characters from [math]\displaystyle{ \Sigma }[/math], called a pattern. Let [math]\displaystyle{ Y }[/math] be the number of occurrences of the pattern [math]\displaystyle{ \pi }[/math] as a substring of the random string [math]\displaystyle{ X }[/math].

By the linearity of expectation, it is obvious that

[math]\displaystyle{ \mathbf{E}[Y]=(n-k+1)\left(\frac{1}{m}\right)^k. }[/math]

We now look at the concentration of [math]\displaystyle{ Y }[/math]. The complication again lies in the dependencies between the matches. Yet we will see that [math]\displaystyle{ Y }[/math] is tightly concentrated around its expectation if [math]\displaystyle{ k }[/math] is relatively small compared to [math]\displaystyle{ n }[/math].

For a fixed pattern [math]\displaystyle{ \pi }[/math], the random variable [math]\displaystyle{ Y }[/math] is a function of the independent random variables [math]\displaystyle{ (X_1,\ldots,X_n) }[/math]. Any character [math]\displaystyle{ X_i }[/math] participates in no more than [math]\displaystyle{ k }[/math] matches, thus changing the value of any [math]\displaystyle{ X_i }[/math] can affect the value of [math]\displaystyle{ Y }[/math] by at most [math]\displaystyle{ k }[/math], i.e. [math]\displaystyle{ Y }[/math] satisfies the Lipschitz condition with constants [math]\displaystyle{ c_i=k }[/math]. Applying the method of bounded differences,

[math]\displaystyle{ \Pr\left[\left|Y-\frac{n-k+1}{m^k}\right|\ge tk\sqrt{n}\right]=\Pr\left[\left|Y-\mathbf{E}[Y]\right|\ge tk\sqrt{n}\right]\le 2e^{-t^2/2} }[/math]
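Again a small simulation sketch (my own; the alphabet size, pattern, and lengths are arbitrary choices) can be used to compare the empirical tail with the bound:

import math
import random

# Count occurrences of a fixed pattern of length k in a random string of length n
# over an alphabet of size m.
m_alpha, k, n = 4, 3, 10000
pattern = [0] * k                                # an arbitrary fixed pattern
expectation = (n - k + 1) / m_alpha ** k

t = 2.0
trials = 300
tail = 0
for _ in range(trials):
    s = [random.randrange(m_alpha) for _ in range(n)]
    y = sum(s[i:i + k] == pattern for i in range(n - k + 1))
    tail += abs(y - expectation) >= t * k * math.sqrt(n)
print("empirical tail:", tail / trials)
print("bound 2*e^{-t^2/2}:", 2 * math.exp(-t * t / 2))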

Combining unit vectors

Let [math]\displaystyle{ u_1,\ldots,u_n }[/math] be [math]\displaystyle{ n }[/math] unit vectors from some normed space. That is, [math]\displaystyle{ \|u_i\|=1 }[/math] for any [math]\displaystyle{ 1\le i\le n }[/math], where [math]\displaystyle{ \|\cdot\| }[/math] denotes the vector norm (e.g. [math]\displaystyle{ \ell_1,\ell_2,\ell_\infty }[/math]) of the space.

Let [math]\displaystyle{ \epsilon_1,\ldots,\epsilon_n\in\{-1,+1\} }[/math] be independently chosen and [math]\displaystyle{ \Pr[\epsilon_i=-1]=\Pr[\epsilon_i=1]=1/2 }[/math].

Let

[math]\displaystyle{ v=\epsilon_1u_1+\cdots+\epsilon_nu_n, }[/math]

and

[math]\displaystyle{ X=\|v\|. }[/math]

This kind of construction is very useful in combinatorial proofs of metric problems. We will show that by this construction, the random variable [math]\displaystyle{ X }[/math] is well concentrated around its mean.

[math]\displaystyle{ X }[/math] is a function of the independent random variables [math]\displaystyle{ \epsilon_1,\ldots,\epsilon_n }[/math]. By the triangle inequality for norms, flipping the sign of any single [math]\displaystyle{ \epsilon_i }[/math] changes [math]\displaystyle{ v }[/math] by [math]\displaystyle{ \pm 2u_i }[/math] and hence changes the value of [math]\displaystyle{ X }[/math] by at most 2, so [math]\displaystyle{ X }[/math] satisfies the Lipschitz condition with constant 2. The concentration result follows by applying the method of bounded differences:

[math]\displaystyle{ \Pr[|X-\mathbf{E}[X]|\ge 2t\sqrt{n}]\le 2e^{-t^2/2}. }[/math]
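As a final sketch (my own illustration, using the [math]\displaystyle{ \ell_2 }[/math] norm in [math]\displaystyle{ \mathbb{R}^d }[/math] with arbitrarily chosen parameters), one can sample random signs and compare the empirical deviation of [math]\displaystyle{ X=\|v\| }[/math] with the bound:

import math
import random

# Random +-1 combination of fixed unit vectors in R^d, measured in the l2 norm.
def random_unit_vector(d):
    v = [random.gauss(0, 1) for _ in range(d)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

n, d = 200, 10
units = [random_unit_vector(d) for _ in range(n)]

def sample_X():
    signs = [random.choice([-1, 1]) for _ in range(n)]
    v = [sum(s * u[j] for s, u in zip(signs, units)) for j in range(d)]
    return math.sqrt(sum(x * x for x in v))

samples = [sample_X() for _ in range(2000)]
mean = sum(samples) / len(samples)      # empirical stand-in for E[X]
t = 1.5
tail = sum(abs(x - mean) >= 2 * t * math.sqrt(n) for x in samples) / len(samples)
print("empirical tail:", tail)
print("bound 2*e^{-t^2/2}:", 2 * math.exp(-t * t / 2))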