Combinatorics (Fall 2017)/Extremal graph theory and Advanced Algorithms (Fall 2018)/Concentration of measure: Difference between pages

== Forbidden Cliques ==
Extremal graph theory studies problems like: "how many edges can a graph <math>G</math> have, if <math>G</math> has some property?"

=== Mantel's theorem ===
We consider a typical extremal problem for graphs: the largest possible number of edges of '''triangle-free''' graphs, i.e., graphs containing no <math>K_3</math>.

= Conditional Expectations =
The '''conditional expectation''' of a random variable <math>Y</math> with respect to an event <math>\mathcal{E}</math> is defined by
:<math>
\mathbf{E}[Y\mid \mathcal{E}]=\sum_{y}y\Pr[Y=y\mid\mathcal{E}].
</math>
In particular, if the event <math>\mathcal{E}</math> is <math>X=a</math>, the conditional expectation
:<math>
\mathbf{E}[Y\mid X=a]
</math>
defines a function
:<math>
f(a)=\mathbf{E}[Y\mid X=a].
</math>
Thus, <math>\mathbf{E}[Y\mid X]</math> can be regarded as a random variable <math>f(X)</math>.
 
;Example
:Suppose that we uniformly sample a human from all human beings. Let <math>Y</math> be his/her height, and let <math>X</math> be the country where he/she is from. For any country <math>a</math>, <math>\mathbf{E}[Y\mid X=a]</math> gives the average height of that country. And <math>\mathbf{E}[Y\mid X]</math> is the random variable which can be defined in either of the following ways:
:* We choose a human uniformly at random from all human beings, and <math>\mathbf{E}[Y\mid X]</math> is the average height of the country where he/she comes from.
:* We choose a country at random with a probability ''proportional to its population'', and <math>\mathbf{E}[Y\mid X]</math> is the average height of the chosen country.
 
The following proposition states some fundamental facts about conditional expectation.


{{Theorem
|Proposition (fundamental facts about conditional expectation)|
:Let <math>X,Y</math> and <math>Z</math> be arbitrary random variables. Let <math>f</math> and <math>g</math> be arbitrary functions. Then
:# <math>\mathbf{E}[X]=\mathbf{E}[\mathbf{E}[X\mid Y]]</math>.
:# <math>\mathbf{E}[X\mid Z]=\mathbf{E}[\mathbf{E}[X\mid Y,Z]\mid Z]</math>.
:# <math>\mathbf{E}[\mathbf{E}[f(X)g(X,Y)\mid X]]=\mathbf{E}[f(X)\cdot \mathbf{E}[g(X,Y)\mid X]]</math>.
}}

{{Theorem|Theorem (Mantel 1907)|
:Suppose <math>G(V,E)</math> is a graph on <math>n</math> vertices without triangles. Then <math>|E|\le\frac{n^2}{4}</math>.
}}
The proposition can be formally verified by computing these expectations. Although these equations look formal, their intuitive interpretations are very clear.
The first equation:
:<math>\mathbf{E}[X]=\mathbf{E}[\mathbf{E}[X\mid Y]]</math>
says that there are two ways to compute an average. Suppose again that <math>X</math> is the height of a uniform random human and <math>Y</math> is the country where he/she is from. There are two ways to compute the average human height: one is to directly average over the heights of all humans; the other is to first compute the average height for each country, and then average over these heights weighted by the populations of the countries.
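To make the identity concrete, here is a minimal Python sketch (the tiny population data is made up purely for illustration) that computes the average height both ways and checks that they agree:
<syntaxhighlight lang="python">
# Toy data: (country, height) for every "human" in a tiny world.
people = [("A", 170), ("A", 180), ("A", 175),
          ("B", 160), ("B", 165)]

# Way 1: directly average over all humans, i.e., E[X].
direct = sum(h for _, h in people) / len(people)

# Way 2: average height of each country, weighted by the country's
# share of the population, i.e., E[E[X|Y]] with Y the country.
weighted = 0.0
for c in {c for c, _ in people}:
    heights = [h for cc, h in people if cc == c]
    weighted += (len(heights) / len(people)) * (sum(heights) / len(heights))

assert abs(direct - weighted) < 1e-9   # E[X] = E[E[X|Y]]
print(direct, weighted)
</syntaxhighlight>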


We give three different proofs of the theorem. The first one uses induction and an argument based on the pigeonhole principle. The second proof uses the famous Cauchy-Schwarz inequality in analysis. And the third proof uses another famous inequality: the inequality of the arithmetic and geometric mean. A quick brute-force sanity check of the bound is sketched below.
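Before turning to the proofs, the bound can be verified by brute force for very small <math>n</math>. The following Python sketch (exhaustive enumeration, so only feasible for a handful of vertices) reports the maximum number of edges over all triangle-free graphs on <math>n</math> vertices:
<syntaxhighlight lang="python">
from itertools import combinations

def max_triangle_free_edges(n):
    """Brute force: maximum |E| over all triangle-free graphs on n vertices."""
    pairs = list(combinations(range(n), 2))
    best = 0
    for mask in range(1 << len(pairs)):               # every subset of edges
        edges = [pairs[i] for i in range(len(pairs)) if mask >> i & 1]
        adj = {v: set() for v in range(n)}
        for u, v in edges:
            adj[u].add(v); adj[v].add(u)
        # triangle-free <=> the endpoints of every edge share no neighbor
        if all(not (adj[u] & adj[v]) for u, v in edges):
            best = max(best, len(edges))
    return best

for n in range(2, 6):
    print(n, max_triangle_free_edges(n), n * n // 4)  # agrees with floor(n^2/4)
</syntaxhighlight>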
The second equation:
:<math>\mathbf{E}[X\mid Z]=\mathbf{E}[\mathbf{E}[X\mid Y,Z]\mid Z]</math>
is the same as the first one, restricted to a particular subspace. As in the previous example, in addition to the height <math>X</math> and the country <math>Y</math>, let <math>Z</math> be the gender of the individual. Thus, <math>\mathbf{E}[X\mid Z]</math> is the average height of a human being of a given sex. Again, this can be computed either directly or on a country-by-country basis.


The third equation:
:<math>\mathbf{E}[\mathbf{E}[f(X)g(X,Y)\mid X]]=\mathbf{E}[f(X)\cdot \mathbf{E}[g(X,Y)\mid X]]</math>
looks obscure at first glance, especially when considering that <math>X</math> and <math>Y</math> are not necessarily independent. Nevertheless, the equation follows from the simple fact that, conditioning on any <math>X=a</math>, the function value <math>f(X)=f(a)</math> becomes a constant, and thus can be safely taken outside the expectation due to the linearity of expectation. For any value <math>X=a</math>,
:<math>
\mathbf{E}[f(X)g(X,Y)\mid X=a]=\mathbf{E}[f(a)g(X,Y)\mid X=a]=f(a)\cdot \mathbf{E}[g(X,Y)\mid X=a].
</math>

The proposition also holds in more general cases, when <math>X, Y</math> and <math>Z</math> are sequences of random variables.

= Martingales =
"Martingale" originally refers to a betting strategy in which the gambler doubles his bet after every loss. Assuming unlimited wealth, this strategy is guaranteed to eventually have a positive net profit. For example, starting from an initial stake 1, after <math>n</math> losses, if the <math>(n+1)</math>th bet wins, then it gives a net profit of
:<math>
2^n-\sum_{i=1}^{n}2^{i-1}=1,
</math>
which is a positive number.

However, the assumption of unlimited wealth is unrealistic. With limited wealth and a geometrically increasing bet, it is very likely to end up bankrupt. You should never try this strategy in real life.

Suppose that the gambler is allowed to use any strategy. His stake on the next bet is decided based on the results of all the bets so far. This gives us a highly dependent sequence of random variables <math>X_0,X_1,\ldots</math>, where <math>X_0</math> is his initial capital, and <math>X_i</math> represents his capital after the <math>i</math>th bet. Up to different betting strategies, <math>X_i</math> can be arbitrarily dependent on <math>X_0,\ldots,X_{i-1}</math>. However, as long as the game is fair, namely, winning and losing with equal chances, then conditioning on the past variables <math>X_0,\ldots,X_{i-1}</math>, we expect no change in the value of the present variable <math>X_{i}</math> on average. Random variables satisfying this property are called a '''martingale''' sequence.

{{Theorem
|Definition (martingale)|
:A sequence of random variables <math>X_0,X_1,\ldots</math> is a '''martingale''' if for all <math>i> 0</math>,
:: <math>\begin{align}
\mathbf{E}[X_{i}\mid X_0,\ldots,X_{i-1}]=X_{i-1}.
\end{align}</math>
}}

{{Prooftitle|First proof. (pigeonhole principle)|
We prove an equivalent theorem: any <math>G(V,E)</math> with <math>|V|=n</math> and <math>|E|>\frac{n^2}{4}</math> must have a triangle.

Use induction on <math>n</math>. The theorem holds trivially for <math>n\le 3</math>.

Induction hypothesis: assume the theorem holds for <math>|V|\le n-1</math>.

For <math>G</math> with <math>n</math> vertices, without loss of generality, assume that <math>|E|=\frac{n^2}{4}+1</math>; we will show that <math>G</math> must contain a triangle. Take an edge <math>uv\in E</math>, and let <math>H</math> be the subgraph of <math>G</math> induced by <math>V\setminus \{u,v\}</math>. Clearly, <math>H</math> has <math>n-2</math> vertices.
:'''Case.1:''' If <math>H</math> has <math>>\frac{(n-2)^2}{4}</math> edges, then by the induction hypothesis, <math>H</math> has a triangle.
:'''Case.2:''' If <math>H</math> has <math>\le\frac{(n-2)^2}{4}</math> edges, then at least <math>\left(\frac{n^2}{4}+1\right)-\frac{(n-2)^2}{4}-1=n-1</math> edges are between <math>H</math> and <math>\{u,v\}</math>. By the pigeonhole principle, there must be a vertex in <math>H</math> that is adjacent to both <math>u</math> and <math>v</math>. Thus, <math>G</math> has a triangle.
}}


{{Prooftitle|Second proof. (Cauchy-Schwarz inequality)|(Mantel's original proof)
For any edge <math>uv\in E</math>, no vertex can be a neighbor of both <math>u</math> and <math>v</math>, or otherwise there would be a triangle. Thus, for any edge <math>uv\in E</math>, <math>d_u+d_v\le n</math>. It follows that
:<math>\sum_{uv\in E}(d_u+d_v)\le n|E|</math>.
Note that <math>d_v</math> appears exactly <math>d_v</math> times in the sum, so that
:<math>\sum_{uv\in E}(d_u+d_v)=\sum_{v\in V}d_v^2</math>.
Applying the Cauchy-Schwarz inequality,
:<math>
n|E|\ge \sum_{uv\in E}(d_u+d_v)=\sum_{v\in V}d_v^2\ge\frac{\left(\sum_{v\in V}d_v\right)^2}{n}=\frac{4|E|^2}{n},
</math>
where the last equation is due to Euler's equality <math>\sum_{v\in V}d_v=2|E|</math>. The theorem follows.
}}

== Examples ==
;coin flips
:A fair coin is flipped for a number of times. Let <math>Z_j\in\{-1,1\}</math> denote the outcome of the <math>j</math>th flip. Let
::<math>X_0=0\quad \mbox{ and } \quad X_i=\sum_{j\le i}Z_j</math>.
:The random variables <math>X_0,X_1,\ldots</math> define a martingale.
;Proof
:We first observe that <math>\mathbf{E}[X_i\mid X_0,\ldots,X_{i-1}] = \mathbf{E}[X_i\mid X_{i-1}]</math>, which intuitively says that the next value depends only on the current value. This property is also called the '''Markov property''' of stochastic processes.
::<math>
\begin{align}
\mathbf{E}[X_i\mid X_0,\ldots,X_{i-1}]
&= \mathbf{E}[X_i\mid X_{i-1}]\\
&= \mathbf{E}[X_{i-1}+Z_{i}\mid X_{i-1}]\\
&= \mathbf{E}[X_{i-1}\mid X_{i-1}]+\mathbf{E}[Z_{i}\mid X_{i-1}]\\
&= X_{i-1}+\mathbf{E}[Z_{i}\mid X_{i-1}]\\
&= X_{i-1}+\mathbf{E}[Z_{i}] &\quad (\mbox{independence of coin flips})\\
&= X_{i-1}
\end{align}
</math>
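The martingale property of this sequence can also be checked empirically. The following Python sketch samples many paths of the random walk, groups them by the value of <math>X_{i-1}</math>, and observes that the conditional average of <math>X_i</math> stays close to <math>X_{i-1}</math>:
<syntaxhighlight lang="python">
import random
from collections import defaultdict

def conditional_averages(i, trials=100000):
    """Estimate E[X_i | X_{i-1}] for the fair +-1 coin-flip sum."""
    buckets = defaultdict(list)            # value of X_{i-1} -> observed X_i
    for _ in range(trials):
        x = 0
        for _ in range(i - 1):
            x += random.choice([-1, 1])    # X_{i-1} after i-1 fair flips
        buckets[x].append(x + random.choice([-1, 1]))
    return buckets

for prev, nxt in sorted(conditional_averages(6).items()):
    if len(nxt) > 1000:                    # only well-sampled conditioning events
        print(prev, round(sum(nxt) / len(nxt), 2))   # close to prev
</syntaxhighlight>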


;Polya's urn scheme
: Consider an urn (just a container) that initially contains <math>b</math> black balls and <math>w</math> white balls. At each step, we uniformly select a ball from the urn, and replace the ball with <math>c</math> balls of the same color. Let <math>X_0=b/(b+w)</math>, and <math>X_i</math> be the fraction of black balls in the urn after the <math>i</math>th step. The sequence <math>X_0,X_1,\ldots</math> is a martingale.

;edge exposure in a random graph
:Consider a '''random graph''' <math>G</math> generated as follows. Let <math>[n]</math> be the set of vertices, and let <math>[m]={[n]\choose 2}</math> be the set of all possible edges. For convenience, we enumerate these potential edges by <math>e_1,\ldots, e_m</math>. For each potential edge <math>e_j</math>, we independently flip a fair coin to decide whether the edge <math>e_j</math> appears in <math>G</math>. Let <math>I_j</math> be the random variable that indicates whether <math>e_j\in G</math>. We are interested in some graph-theoretical parameter, say the [http://mathworld.wolfram.com/ChromaticNumber.html chromatic number], of the random graph <math>G</math>. Let <math>\chi(G)</math> be the chromatic number of <math>G</math>. Let <math>X_0=\mathbf{E}[\chi(G)]</math>, and for each <math>i\ge 1</math>, let <math>X_i=\mathbf{E}[\chi(G)\mid I_1,\ldots,I_{i}]</math>, namely, the expected chromatic number of the random graph after fixing the first <math>i</math> edges. This process is called the edge exposure of a random graph, as we "expose" the edges one by one in a random graph.

It is nontrivial to formally verify that the edge exposure sequence for a random graph is a martingale. However, we will later see that this construction can be put into a more general context.

{{Prooftitle|Third proof. (inequality of the arithmetic and geometric mean)|
Assume that <math>G(V,E)</math> has <math>|V|=n</math> vertices and is triangle-free.

Let <math>A</math> be a largest independent set in <math>G</math> and let <math>\alpha=|A|</math>.
Since <math>G</math> is triangle-free, for every vertex <math>v</math>, all its neighbors must form an independent set, thus <math>d(v)\le \alpha</math> for all <math>v\in V</math>.

Take <math>B=V\setminus A</math> and let <math>\beta=|B|</math>.
Since <math>A</math> is an independent set, all edges in <math>E</math> must have at least one endpoint in <math>B</math>. Counting the edges in <math>E</math> according to their endpoints in <math>B</math>, we obtain <math>|E|\le\sum_{v\in B}d_v</math>. By the inequality of the arithmetic and geometric mean,
:<math>|E|\le\sum_{v\in B}d_v\le\alpha\beta\le\left(\frac{\alpha+\beta}{2}\right)^2=\frac{n^2}{4}</math>.
}}


== Generalizations ==
The martingale can be generalized to be with respect to another sequence of random variables.
{{Theorem
|Definition (martingale, general version)|
:A sequence of random variables <math>Y_0,Y_1,\ldots</math> is a martingale with respect to the sequence <math>X_0,X_1,\ldots</math> if, for all <math>i\ge 0</math>, the following conditions hold:
:* <math>Y_i</math> is a function of <math>X_0,X_1,\ldots,X_i</math>;
:* <math>\begin{align}
\mathbf{E}[Y_{i+1}\mid X_0,\ldots,X_{i}]=Y_{i}.
\end{align}</math>
}}
Therefore, a sequence <math>X_0,X_1,\ldots</math> is a martingale if it is a martingale with respect to itself.

=== Turán's theorem ===
The famous Turán's theorem generalizes Mantel's theorem from triangles to cliques of any specific size. This theorem is one of the most important results in extremal combinatorics, and it initiated the studies of extremal graph theory.
{{Theorem|Theorem (Turán 1941)|
:Let <math>G(V,E)</math> be a graph with <math>|V|=n</math>. If <math>G</math> has no <math>r</math>-clique, <math>r\ge 2</math>, then
::<math>|E|\le\frac{r-2}{2(r-1)}n^2</math>.
}}
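As a numeric sanity check of the bound (and a preview of the extremal example given next), the following Python sketch compares the number of edges of the complete multipartite graph with <math>r-1</math> parts of (almost) equal sizes against the bound <math>\frac{r-2}{2(r-1)}n^2</math>:
<syntaxhighlight lang="python">
def balanced_multipartite_edges(n, k):
    """Edges of the complete k-partite graph with parts as equal as possible."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    return (n * n - sum(s * s for s in sizes)) // 2   # = sum_{i<j} n_i * n_j

for n in [10, 50, 100]:
    for r in [3, 4, 6]:
        edges = balanced_multipartite_edges(n, r - 1)
        bound = (r - 2) / (2 * (r - 1)) * n * n
        print(n, r, edges, round(bound, 1))           # edges <= bound, nearly attained
</syntaxhighlight>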


We give an example of graphs with many edges which do not contain <math>K_r</math>.

Partition <math>V</math> into <math>r-1</math> disjoint classes <math>V=V_1\cup V_2\cup\cdots\cup V_{r-1}</math>, <math>n_i=|V_i|</math>, <math>n_1+n_2+\cdots+n_{r-1}=n</math>. For every two vertices <math>u,v</math>, <math>uv\in E</math> if and only if <math>u\in V_i</math> and <math>v\in V_j</math> for distinct <math>V_i</math> and <math>V_j</math>. The resulting graph is a '''complete <math>(r-1)</math>-partite graph''', denoted <math>K_{n_1,n_2,\ldots,n_{r-1}}</math>. It is obvious that any <math>(r-1)</math>-partite graph contains no <math>r</math>-clique since only those vertices from different classes can be adjacent.

A <math>K_{n_1,n_2,\ldots,n_{r-1}}</math> has <math>\sum_{i<j}n_i n_j\,</math> edges, which is maximized when the numbers <math>n_i</math> are divided as evenly as possible, that is, if <math>n_i\in\left\{\left\lfloor\frac{n}{r-1}\right\rfloor,\left\lceil\frac{n}{r-1}\right\rceil\right\}</math> for every <math>1\le i\le r-1</math>.

{{Theorem|Definition|
:We call a complete multipartite graph <math>K_{n_1,n_2,\ldots,n_{r-1}}</math> with <math>n_i\in\left\{\left\lfloor\frac{n}{r-1}\right\rfloor,\left\lceil\frac{n}{r-1}\right\rceil\right\}</math> for every <math>i</math> a '''Turán graph''', denoted <math>T(n,r-1)</math>.
}}
;Example:Turán graph <math>T(13,4)</math>
[[File:Turan 13-4.svg|center|260px|Turán graph <math>T(13,4)</math>]]

Turán's theorem has been proved many times by different mathematicians, with different tools. We show just a few.

The first proof uses induction; the second proof uses a technique called "weight shifting"; and the third proof uses the probabilistic method. All of them are very powerful and frequently used proof techniques.

The purpose of this generalization is that we are usually more interested in a function of a sequence of random variables, rather than the sequence itself.

= Azuma's Inequality =
We introduce a martingale tail inequality, called Azuma's inequality.

{{Theorem
|Azuma's Inequality|
:Let <math>X_0,X_1,\ldots</math> be a martingale such that, for all <math>k\ge 1</math>,
::<math>
|X_{k}-X_{k-1}|\le c_k,
</math>
:Then
::<math>\begin{align}
\Pr\left[|X_n-X_0|\ge t\right]\le 2\exp\left(-\frac{t^2}{2\sum_{k=1}^nc_k^2}\right).
\end{align}</math>
}}
Before formally proving this theorem, some comments are in order. First, unlike the Chernoff bounds, there is no assumption of independence. This shows the power of martingale inequalities.

Second, the condition that
:<math>
|X_{k}-X_{k-1}|\le c_k
</math>
is central to the proof. This condition is sometimes called the '''bounded difference condition'''. If we think of the martingale <math>X_0,X_1,\ldots</math> as a process evolving through time, where <math>X_i</math> gives some measurement at time <math>i</math>, the bounded difference condition states that the process does not make big jumps. Azuma's inequality says that if so, then it is unlikely that the process wanders far from its starting point.

A special case is when the differences are bounded by a constant. The following corollary is directly implied by Azuma's inequality.


{{Theorem
|Corollary|
:Let <math>X_0,X_1,\ldots</math> be a martingale such that, for all <math>k\ge 1</math>,
::<math>
|X_{k}-X_{k-1}|\le c,
</math>
:Then
::<math>\begin{align}
\Pr\left[|X_n-X_0|\ge ct\sqrt{n}\right]\le 2 e^{-t^2/2}.
\end{align}</math>
}}

{{Prooftitle|First proof. (induction)|(Turán's original proof)


Induction on <math>n</math>. It is easy to verify that the theorem holds for <math>n<r</math>.  
This corollary states that for any martingale sequence whose differences are bounded by a constant, the probability that it deviates <math>\omega(\sqrt{n})</math> far away from the starting point after <math>n</math> steps is bounded by <math>o(1)</math>.
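A small simulation (a Python sketch; the walk length, number of trials and <math>t</math> are arbitrary choices) illustrates the corollary on the fair <math>\pm 1</math> random walk, where <math>c=1</math>: the empirical probability of a deviation of <math>t\sqrt{n}</math> stays below <math>2e^{-t^2/2}</math>.
<syntaxhighlight lang="python">
import math
import random

n, trials, t = 400, 20000, 2.0
threshold = t * math.sqrt(n)          # deviation c*t*sqrt(n) with c = 1

count = 0
for _ in range(trials):
    walk = sum(random.choice([-1, 1]) for _ in range(n))   # X_n - X_0
    if abs(walk) >= threshold:
        count += 1

print("empirical:", count / trials)                 # roughly 0.05 in this setting
print("Azuma bound:", 2 * math.exp(-t * t / 2))     # ~ 0.27
</syntaxhighlight>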


Let <math>G</math> be a graph on <math>n</math> vertices without <math>r</math>-cliques where <math>n\ge r</math>. Suppose that <math>G</math> has a maximum number of edges among such graphs. <math>G</math> certainly has <math>(r-1)</math>-cliques, since otherwise we could add edges to <math>G</math>. Let <math>A</math> be an <math>(r-1)</math>-clique and let <math>B=V\setminus A</math>. Clearly <math>|A|=r-1</math> and <math>|B|=n-r+1</math>.

By the induction hypothesis, since <math>B</math> has no <math>r</math>-cliques, <math>|E(B)|\le\frac{r-2}{2(r-1)}(n-r+1)^2</math>. And <math>|E(A)|={r-1\choose 2}</math>. Since <math>G</math> has no <math>r</math>-clique, every <math>v\in B</math> is adjacent to at most <math>r-2</math> vertices in <math>A</math>, since otherwise <math>A</math> and <math>v</math> would form an <math>r</math>-clique. We obtain that the number of edges crossing between <math>A</math> and <math>B</math> is <math>|E(A,B)|\le (r-2)|B|=(r-2)(n-r+1)</math>. Combining everything together,
:<math>|E|=|E(A)|+|E(B)|+|E(A,B)|\le {r-1\choose 2}+\frac{r-2}{2(r-1)}(n-r+1)^2+(r-2)(n-r+1)=\frac{r-2}{2(r-1)}n^2</math>.
}}

=== Generalization ===
Azuma's inequality can be generalized to a martingale with respect to another sequence.
{{Theorem
|Azuma's Inequality (general version)|
:Let <math>Y_0,Y_1,\ldots</math> be a martingale with respect to the sequence <math>X_0,X_1,\ldots</math> such that, for all <math>k\ge 1</math>,
::<math>
|Y_{k}-Y_{k-1}|\le c_k,
</math>
:Then
::<math>\begin{align}
\Pr\left[|Y_n-Y_0|\ge t\right]\le 2\exp\left(-\frac{t^2}{2\sum_{k=1}^nc_k^2}\right).
\end{align}</math>
}}


=== The Proof of Azuma's Inequality ===
We will only give the formal proof of the non-generalized version. The proof of the general version is almost identical, with the only difference being that we work on the random sequence <math>Y_i</math> conditioning on the sequence <math>X_i</math>.

The proof of Azuma's Inequality uses several ideas which are used in the proof of the Chernoff bounds. We first observe that the total deviation of the martingale sequence can be represented as the sum of differences in every step. Thus, as with the Chernoff bounds, we are looking for a bound on the deviation of a sum of random variables. The strategy of the proof is almost the same as that of the Chernoff bounds: we first apply Markov's inequality to the moment generating function, then we bound the moment generating function, and at last we optimize the parameter of the moment generating function. However, unlike the Chernoff bounds, the martingale differences are not independent any more. So we replace the use of independence in the Chernoff bound by the martingale property. The proof is detailed as follows.

In order to bound the probability of <math>|X_n-X_0|\ge t</math>, we first bound the upper tail <math>\Pr[X_n-X_0\ge t]</math>. The bound of the lower tail can be symmetrically proved with the <math>X_i</math> replaced by <math>-X_i</math>.

==== Represent the deviation as the sum of differences ====
We define the '''martingale difference sequence''': for <math>i\ge 1</math>, let
:<math>
Y_i=X_i-X_{i-1}.
</math>
It holds that
:<math>
\begin{align}
\mathbf{E}[Y_i\mid X_0,\ldots,X_{i-1}]
&=\mathbf{E}[X_i-X_{i-1}\mid X_0,\ldots,X_{i-1}]\\
&=\mathbf{E}[X_i\mid X_0,\ldots,X_{i-1}]-\mathbf{E}[X_{i-1}\mid X_0,\ldots,X_{i-1}]\\
&=X_{i-1}-X_{i-1}\\
&=0.
\end{align}
</math>
The second-to-last equation is due to the fact that <math>X_0,X_1,\ldots</math> is a martingale and the definition of conditional expectation.

Let <math>Z_n</math> be the accumulated differences
:<math>
Z_n=\sum_{i=1}^n Y_i.
</math>

The deviation <math>(X_n-X_0)</math> can be computed by the accumulated differences:
:<math>
\begin{align}
X_n-X_0
&=(X_1-X_{0})+(X_2-X_1)+\cdots+(X_n-X_{n-1})\\
&=\sum_{i=1}^n Y_i\\
&=Z_n.
\end{align}
</math>

We then only need to upper bound the probability of the event <math>Z_n\ge t</math>.

{{Prooftitle|Second proof. (weight shifting)|(due to Motzkin and Straus)
Assign each vertex <math>v\in V</math> a nonnegative weight <math>w_v\ge 0</math>, and assume that <math>\sum_{v\in V}w_v=1</math>. We try to maximize the quantity
:<math>S=\sum_{uv\in E}w_uw_v</math>.
Let <math>W_u=\sum_{v:v\sim u}w_v\,</math> be the sum of the weights of <math>u</math>'s neighbors.
Note that <math>S</math> can also be computed as <math>S=\frac{1}{2}\sum_{u\in V}w_uW_u</math>.
For any nonadjacent pair of vertices <math>u\not\sim v</math>, suppose that <math>W_u\ge W_v</math>. Then for any <math>\epsilon\ge 0</math>,
:<math>(w_u+\epsilon)W_u+(w_v-\epsilon)W_v\ge w_uW_u+w_vW_v</math>.
This means that we do not decrease <math>S</math> by shifting all of the weight of the vertex <math>v</math> to the vertex <math>u</math>. It follows that <math>S</math> is maximized when all of the weight is concentrated on a complete subgraph, i.e., a clique.

Now if <math>w_u>w_v>0</math>, then choose <math>\epsilon</math> with <math>0<\epsilon<w_u-w_v</math> and change <math>w_u'=w_u-\epsilon</math> and <math>w_v'=w_v+\epsilon</math>. This changes <math>S</math> to <math>S'=S+\epsilon(w_u-w_v)-\epsilon^2>S</math>. Thus, the maximal value of <math>S</math> is attained when all nonzero weights are equal and concentrated on a clique.

<math>G</math> has at most an <math>(r-1)</math>-clique, thus <math>S\le{r-1\choose 2}\frac{1}{(r-1)^2}=\frac{r-2}{2(r-1)}</math>.

As we argued above, this inequality holds for any nonnegative weight assignment with <math>\sum_{v\in V}w_v=1</math>. In particular, for the case that all <math>w_v=\frac{1}{n}</math>,
:<math>S=\sum_{uv\in E}w_uw_v=\frac{|E|}{n^2}</math>.
Thus,
:<math>\frac{|E|}{n^2}\le \frac{r-2}{2(r-1)}</math>,
which implies the theorem.
}}

{{Prooftitle|Third proof. (the probabilistic method)|(due to Alon and Spencer)


Write <math>\omega(G)</math> for the number of vertices in a largest clique, called the '''clique number''' of <math>G</math>.
:'''Claim:''' <math>\omega(G)\ge\sum_{v\in V}\frac{1}{n-d_v}</math>.
We prove this by the probabilistic method. Fix a random ordering of vertices in <math>V</math>, say <math>v_1,v_2,\ldots,v_n</math>. We construct a clique as follows:
*for <math>i=1,2,\ldots, n</math>, add <math>v_i</math> to <math>S</math> iff all vertices in current <math>S</math> are adjacent to <math>v_i</math>.
It is obvious that an <math>S</math> constructed in this way is a clique. We now show that <math>\mathbf{E}[|S|]\ge\sum_{v\in V}\frac{1}{n-d_v}</math>.

Let <math>X_v</math> be the random variable that indicates whether <math>v\in S</math>, i.e.,
:<math>
X_v=\begin{cases}
1 & v\in S,\\
0 & \mbox{otherwise.}
\end{cases}
</math>
Note that a vertex <math>v\in S</math> if <math>v</math> is ranked before all its <math>n-d_v-1</math> non-neighbors in the random ordering. The probability that this event occurs is <math>\frac{1}{n-d_v}</math>. Thus,
:<math>\mathbf{E}[X_v]=\Pr[v\in S]\ge\frac{1}{n-d_v}.</math>
Observe that <math>|S|=\sum_{v\in V}X_v</math>. Due to linearity of expectation,
:<math>\mathbf{E}[|S|]=\sum_{v\in V}\mathbf{E}[X_v]\ge\sum_{v\in V}\frac{1}{n-d_v}</math>.
There must exist a clique of at least this size, so that <math>\omega(G)\ge\sum_{v\in V}\frac{1}{n-d_v}</math>. The claim is proved.

Apply the Cauchy-Schwarz inequality
:<math>\left(\sum_{v\in V}a_vb_v\right)^2\le\left(\sum_{v\in V}a_v^2\right)\left(\sum_{v\in V}b_v^2\right)</math>.
Set <math>a_v=\sqrt{n-d_v}</math> and <math>b_v=\frac{1}{\sqrt{n-d_v}}</math>, then <math>a_vb_v=1</math> and so
:<math>n^2\le\sum_{v\in V}(n-d_v)\sum_{v\in V}\frac{1}{n-d_v}\le\omega(G)\sum_{v\in V}(n-d_v).</math>
By the assumption of Turán's theorem, <math>\omega(G)\le r-1</math>. Recall the handshaking lemma <math>2|E|=\sum_{v\in V}d_v</math>. The above inequality gives us
:<math>n^2\le (r-1)(n^2-2|E|)</math>,
which implies the theorem.
}}

Our last proof uses the idea of vertex duplication. It not only proves the edge bound of Turán's theorem, but also shows that Turán graphs are the <font color=red>only</font> possible extremal graphs.
{{Prooftitle|Fourth proof.|
Let <math>G(V,E)</math> be an <math>r</math>-clique-free graph on <math>n</math> vertices with a maximum number of edges.
:'''Claim:''' <math>G</math> does not contain three vertices <math>u,v,w</math> such that <math>uv\in E</math> but <math>uw\not\in E, vw\not\in E</math>.
Suppose otherwise. There are two cases.
* '''Case.1:''' <math>d(w)<d(u)</math> or <math>d(w)<d(v)</math>. Without loss of generality, suppose that <math>d(w)<d(u)</math>. We duplicate <math>u</math> by creating a new vertex <math>u'</math> which has exactly the same neighbors as <math>u</math> (but <math>uu'</math> is not an edge). Such duplication will not increase the clique size. We then remove <math>w</math>. The resulting graph <math>G'</math> is still <math>r</math>-clique-free, and has <math>n</math> vertices. The number of edges in <math>G'</math> is
::<math>|E(G')|=|E(G)|+d(u)-d(w)>|E(G)|\,</math>,
:which contradicts the assumption that <math>|E(G)|</math> is maximal.
* '''Case.2:''' <math>d(w)\ge d(u)</math> and <math>d(w)\ge d(v)</math>. Duplicate <math>w</math> twice and delete <math>u</math> and <math>v</math>. The new graph <math>G'</math> has no <math>r</math>-clique, and the number of edges is
::<math>|E(G')|=|E(G)|+2d(w)-(d(u)+d(v)+1)>|E(G)|\,</math>.
:Contradiction again.

The claim implies that <math>uv\not\in E</math> defines an equivalence relation on vertices (to be more precise, it guarantees the transitivity of the relation, while the reflexivity and symmetry hold directly). Graph <math>G</math> must be a complete multipartite graph <math>K_{n_1,n_2,\ldots,n_{r-1}}</math> with <math>n_1+n_2+\cdots +n_{r-1}=n</math>. Optimizing the edge number, we obtain the Turán graph.
}}

==== Apply Markov's inequality to the moment generating function ====
The event <math>Z_n\ge t</math> is equivalent to <math>e^{\lambda Z_n}\ge e^{\lambda t}</math> for <math>\lambda>0</math>. Applying Markov's inequality, we have
:<math>
\begin{align}
\Pr\left[Z_n\ge t\right]
&=\Pr\left[e^{\lambda Z_n}\ge e^{\lambda t}\right]\\
&\le \frac{\mathbf{E}\left[e^{\lambda Z_n}\right]}{e^{\lambda t}}.
\end{align}
</math>
This is exactly the same as what we did to prove the Chernoff bound. Next, we need to bound the moment generating function <math>\mathbf{E}\left[e^{\lambda Z_n}\right]</math>.

==== Bound the moment generating functions ====
The moment generating function
:<math>
\begin{align}
\mathbf{E}\left[e^{\lambda Z_n}\right]
&=\mathbf{E}\left[\mathbf{E}\left[e^{\lambda Z_n}\mid X_0,\ldots,X_{n-1}\right]\right]\\
&=\mathbf{E}\left[\mathbf{E}\left[e^{\lambda (Z_{n-1}+Y_n)}\mid X_0,\ldots,X_{n-1}\right]\right]\\
&=\mathbf{E}\left[\mathbf{E}\left[e^{\lambda Z_{n-1}}\cdot e^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}\right]\right]\\
&=\mathbf{E}\left[e^{\lambda Z_{n-1}}\cdot\mathbf{E}\left[e^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}\right]\right]
\end{align}
</math>
The first and the last equations are due to the fundamental facts about conditional expectation which were proved in the first section.

We then upper bound <math>\mathbf{E}\left[e^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}\right]</math> by a constant. To do so, we need the following technical lemma, which is proved by the convexity of <math>e^{\lambda Y_n}</math>.

{{Theorem
|Lemma|
:Let <math>X</math> be a random variable such that <math>\mathbf{E}[X]=0</math> and <math>|X|\le c</math>. Then for <math>\lambda>0</math>,
::<math>
\mathbf{E}[e^{\lambda X}]\le e^{\lambda^2c^2/2}.
</math>
}}
{{Proof| Observe that for <math>\lambda>0</math>, the function <math>e^{\lambda X}</math> of the variable <math>X</math> is convex in the interval <math>[-c,c]</math>. We draw a line between the two endpoints <math>(-c, e^{-\lambda c})</math> and <math>(c, e^{\lambda c})</math>. The curve of <math>e^{\lambda X}</math> lies entirely below this line. Thus,
:<math>
\begin{align}
e^{\lambda X}
&\le \frac{c-X}{2c}e^{-\lambda c}+\frac{c+X}{2c}e^{\lambda c}\\
&=\frac{e^{\lambda c}+e^{-\lambda c}}{2}+\frac{X}{2c}(e^{\lambda c}-e^{-\lambda c}).
\end{align}
</math>

Since <math>\mathbf{E}[X]=0</math>, we have
:<math>
\begin{align}
\mathbf{E}[e^{\lambda X}]
&\le \mathbf{E}\left[\frac{e^{\lambda c}+e^{-\lambda c}}{2}+\frac{X}{2c}(e^{\lambda c}-e^{-\lambda c})\right]\\
&=\frac{e^{\lambda c}+e^{-\lambda c}}{2}+\frac{e^{\lambda c}-e^{-\lambda c}}{2c}\mathbf{E}[X]\\
&=\frac{e^{\lambda c}+e^{-\lambda c}}{2}.
\end{align}
</math>

By expanding both sides as Taylor's series, it can be verified that <math>\frac{e^{\lambda c}+e^{-\lambda c}}{2}\le e^{\lambda^2c^2/2}</math>.
}}
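Before the lemma is applied, here is a one-line numeric sanity check (a Python sketch) for the extreme two-point distribution <math>X\in\{-c,+c\}</math> with equal probabilities, where the inequality reads <math>\cosh(\lambda c)\le e^{\lambda^2c^2/2}</math>:
<syntaxhighlight lang="python">
import math

c = 1.0
for lam in [0.1, 0.5, 1.0, 2.0, 5.0]:
    mgf = 0.5 * (math.exp(lam * c) + math.exp(-lam * c))   # E[e^{lambda X}] = cosh(lambda c)
    bound = math.exp(lam * lam * c * c / 2)
    print(lam, round(mgf, 4), round(bound, 4), mgf <= bound)   # always True
</syntaxhighlight>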


Apply the above lemma to the random variable
:<math>
(Y_n \mid X_0,\ldots,X_{n-1})
</math>
 
We have already shown that its expectation
<math>
\mathbf{E}[(Y_n \mid X_0,\ldots,X_{n-1})]=0,
</math>
and by the bounded difference condition of Azuma's inequality, we have
<math>
|Y_n|=|(X_n-X_{n-1})|\le c_n.
</math>
Thus, due to the above lemma, it holds that
:<math>
\mathbf{E}[e^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}]\le e^{\lambda^2c_n^2/2}.
</math>
 
Back to our analysis of the expectation <math>\mathbf{E}\left[e^{\lambda Z_n}\right]</math>, we have
:<math>
\begin{align}
\mathbf{E}\left[e^{\lambda Z_n}\right]
&=\mathbf{E}\left[e^{\lambda Z_{n-1}}\cdot\mathbf{E}\left[e^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}\right]\right]\\
&\le \mathbf{E}\left[e^{\lambda Z_{n-1}}\cdot e^{\lambda^2c_n^2/2}\right]\\
&= e^{\lambda^2c_n^2/2}\cdot\mathbf{E}\left[e^{\lambda Z_{n-1}}\right] .
\end{align}
</math>


Applying the same analysis to <math>\mathbf{E}\left[e^{\lambda Z_{n-1}}\right]</math>, we can solve the above recursion by
:<math>
\begin{align}
\mathbf{E}\left[e^{\lambda Z_n}\right]
&\le \prod_{k=1}^n e^{\lambda^2c_k^2/2}\\
&= \exp\left(\lambda^2\sum_{k=1}^n c_k^2/2\right).
\end{align}
</math>
 
Going back to Markov's inequality,
:<math>
\begin{align}
\Pr\left[Z_n\ge t\right]
&\le \frac{\mathbf{E}\left[e^{\lambda Z_n}\right]}{e^{\lambda t}}\\
&\le \exp\left(\lambda^2\sum_{k=1}^n c_k^2/2-\lambda t\right).
\end{align}
</math>
 
We then only need to choose a proper <math>\lambda>0</math>.
 
==== Optimization ====
By choosing <math>\lambda=\frac{t}{\sum_{k=1}^n c_k^2}</math>, we have that
:<math>
\exp\left(\lambda^2\sum_{k=1}^n c_k^2/2-\lambda t\right)=\exp\left(-\frac{t^2}{2\sum_{k=1}^n c_k^2}\right).
</math>
Thus, the probability
:<math>
\begin{align}
\Pr\left[X_n-X_0\ge t\right]
&=\Pr\left[Z_n\ge t\right]\\
&\le \exp\left(\lambda^2\sum_{k=1}^n c_k^2/2-\lambda t\right)\\
&= \exp\left(-\frac{t^2}{2\sum_{k=1}^n c_k^2}\right).
\end{align}
</math>
The upper tail of Azuma's inequality is proved. By replacing <math>X_i</math> by <math>-X_i</math>, the lower tail can be treated just as the upper tail. Applying the union bound, Azuma's inequality is proved.
 
=The Doob martingales =
The following definition describes a very general approach for constructing an important type of martingales.
 
{{Theorem
|Definition (The Doob sequence)|
: The Doob sequence of a function <math>f</math> with respect to a sequence of random variables <math>X_1,\ldots,X_n</math> is defined by
::<math>
Y_i=\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_{i}], \quad 0\le i\le n.
</math>
:In particular, <math>Y_0=\mathbf{E}[f(X_1,\ldots,X_n)]</math> and <math>Y_n=f(X_1,\ldots,X_n)</math>.
}}
The Doob sequence of a function defines a martingale. That is
::<math>
\mathbf{E}[Y_i\mid X_1,\ldots,X_{i-1}]=Y_{i-1},
</math>
for any <math>1\le i\le n</math>.
 
To prove this claim, we recall the definition that <math>Y_i=\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_{i}]</math>, thus,
:<math>
\begin{align}
\mathbf{E}[Y_i\mid X_1,\ldots,X_{i-1}]
&=\mathbf{E}[\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_{i}]\mid X_1,\ldots,X_{i-1}]\\
&=\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_{i-1}]\\
&=Y_{i-1},
\end{align}
</math>
where the second equation is due to the fundamental fact about conditional expectation introduced in the first section.
 
The Doob martingale describes a very natural procedure to determine a function value of a sequence of random variables. Suppose that we want to predict the value of a function <math>f(X_1,\ldots,X_n)</math> of random variables <math>X_1,\ldots,X_n</math>. The Doob sequence <math>Y_0,Y_1,\ldots,Y_n</math> represents a sequence of refined estimates of the value of <math>f(X_1,\ldots,X_n)</math>, gradually using more information on the values of the random variables <math>X_1,\ldots,X_n</math>. The first element <math>Y_0</math> is just the expectation of <math>f(X_1,\ldots,X_n)</math>. Element <math>Y_i</math> is the expected value of <math>f(X_1,\ldots,X_n)</math> when the values of <math>X_1,\ldots,X_{i}</math> are known, and <math>Y_n=f(X_1,\ldots,X_n)</math> when <math>f(X_1,\ldots,X_n)</math> is fully determined by <math>X_1,\ldots,X_n</math>.


The following two Doob martingales arise in evaluating the parameters of random graphs.

;edge exposure martingale
:Let <math>G</math> be a random graph on <math>n</math> vertices. Let <math>f</math> be a real-valued function of graphs, such as the chromatic number, the number of triangles, the size of the largest clique or independent set, etc. Denote <math>m={n\choose 2}</math>. Fix an arbitrary numbering of potential edges between the <math>n</math> vertices, and denote the edges as <math>e_1,\ldots,e_m</math>. Let
::<math>
X_i=\begin{cases}
1& \mbox{if }e_i\in G,\\
0& \mbox{otherwise}.
\end{cases}
</math>
:Let <math>Y_0=\mathbf{E}[f(G)]</math> and for <math>i=1,\ldots,m</math>, let <math>Y_i=\mathbf{E}[f(G)\mid X_1,\ldots,X_i]</math>.
:The sequence <math>Y_0,Y_1,\ldots,Y_m</math> gives a Doob martingale that is commonly called the '''edge exposure martingale'''.
 
;vertex exposure martingale
: Instead of revealing edges one at a time, we could reveal the set of edges connected to a given vertex, one vertex at a time. Suppose that the vertex set is <math>[n]</math>. Let <math>X_i</math> be the subgraph of <math>G</math> induced by the vertex set <math>[i]</math>, i.e. the first <math>i</math> vertices.
:Let <math>Y_0=\mathbf{E}[f(G)]</math> and for <math>i=1,\ldots,n</math>, let <math>Y_i=\mathbf{E}[f(G)\mid X_1,\ldots,X_i]</math>.
:The sequence <math>Y_0,Y_1,\ldots,Y_n</math> gives a Doob martingale that is commonly called the '''vertex exposure martingale'''.
 
===Chromatic number===
The random graph <math>G(n,p)</math> is the graph on <math>n</math> vertices <math>[n]</math>, obtained by selecting each pair of vertices to be an edge, randomly and independently, with probability <math>p</math>. We denote <math>G\sim G(n,p)</math> if <math>G</math> is generated in this way.
 
{{Theorem
|Theorem [Shamir and Spencer (1987)]|
:Let <math>G\sim G(n,p)</math>. Let <math>\chi(G)</math> be the chromatic number of <math>G</math>. Then
::<math>\begin{align}
\Pr\left[|\chi(G)-\mathbf{E}[\chi(G)]|\ge t\sqrt{n}\right]\le 2e^{-t^2/2}.
\end{align}</math>
}}
{{Proof| Consider the vertex exposure martingale
:<math>
Y_i=\mathbf{E}[\chi(G)\mid X_1,\ldots,X_i]
</math>
where each <math>X_k</math> exposes the induced subgraph of <math>G</math> on vertex set <math>[k]</math>. A single vertex can always be given a new color so that the graph is properly colored, thus the bounded difference condition
:<math>
|Y_i-Y_{i-1}|\le 1
</math>
is satisfied. Now apply Azuma's inequality to the martingale <math>Y_1,\ldots,Y_n</math> with respect to <math>X_1,\ldots,X_n</math>.
}}
== Forbidden Cycles ==
Another direction to generalize Mantel's theorem, other than Turán's theorem, is to see a triangle as a 3-cycle rather than a 3-clique. We then ask for the extremal bound for graphs without certain cycle structures.
Recall that the '''girth''' of a graph <math>G</math> is the length of the shortest cycle in <math>G</math>. A graph is triangle-free if and only if its girth <math>g(G)\ge 4</math>.
Mantel's theorem can be seen as a bound on the edge number of graphs with girth <math>g(G)\ge 4</math>. The next theorem extends this bound to the graphs with <math>g(G)\ge 5</math>, i.e., graphs without triangles and quadrilaterals ("squares").

{{Theorem|Theorem|
:Let <math>G(V,E)</math> be a graph on <math>n</math> vertices. If girth <math>g(G)\ge 5</math> then <math>|E|\le\frac{1}{2}n\sqrt{n-1}</math>.
}}
{{Proof|
Suppose <math>g(G)\ge 5</math>. Let <math>v_1,v_2,\ldots,v_d</math> be the neighbors of a vertex <math>u</math>, where <math>d=d(u)</math>. Let <math>S_i=\{v\in V\mid v\sim v_i\wedge v\neq u\}</math> be the set of neighbors of <math>v_i</math> other than <math>u</math>.

* For any <math>v_i,v_j</math>, <math>v_iv_j\not\in E</math> since <math>G</math> has no triangle. Thus, <math>S_i\cap\{u,v_1,v_2,\ldots,v_d\}=\emptyset</math> for every <math>i</math>.
* No vertex other than <math>u</math> can be adjacent to more than one vertex among <math>v_1,v_2,\ldots,v_d</math> since there is no <math>C_4</math> in <math>G</math>. Thus, <math>S_i\cap S_j=\emptyset</math> for any distinct <math>i</math> and <math>j</math>.

Therefore, <math>\{u,v_1,v_2,\ldots,v_d\}\cup S_1\cup S_2\cup\cdots\cup S_d\subseteq V</math> implies that
:<math>(d+1)+|S_1|+|S_2|+\cdots+|S_d|=(d+1)+(d(v_1)-1)+(d(v_2)-1)+\cdots+(d(v_d)-1)\le n</math>,
so that <math>\sum_{v:v\sim u}d(v)\le n-1</math>.

By the Cauchy-Schwarz inequality,
:<math>n(n-1)\ge \sum_{u\in V}\sum_{v:v\sim u}d(v)=\sum_{v\in V}d(v)^2\ge\frac{\left(\sum_{v\in V}d(v)\right)^2}{n}=\frac{4|E|^2}{n}</math>,
which implies that <math>|E|\le\frac{1}{2}n\sqrt{n-1}</math>.
}}
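The bound is attained for some <math>n</math>. A minimal Python sketch below checks it on the Petersen graph (<math>n=10</math>, girth 5, 15 edges), using the two facts from the proof: adjacent vertices share no neighbor (no triangle) and no two vertices share two neighbors (no <math>C_4</math>).
<syntaxhighlight lang="python">
import math
from itertools import combinations

# Petersen graph: outer 5-cycle, inner 5-cycle with step 2, plus spokes.
edges = {frozenset((i, (i + 1) % 5)) for i in range(5)}
edges |= {frozenset((5 + i, 5 + (i + 2) % 5)) for i in range(5)}
edges |= {frozenset((i, i + 5)) for i in range(5)}

adj = {v: set() for v in range(10)}
for e in edges:
    u, v = tuple(e)
    adj[u].add(v); adj[v].add(u)

no_triangle = all(not (adj[u] & adj[v]) for u, v in map(tuple, edges))
no_c4 = all(len(adj[u] & adj[v]) <= 1 for u, v in combinations(range(10), 2))

n = 10
print(no_triangle and no_c4, len(edges), 0.5 * n * math.sqrt(n - 1))   # True 15 15.0
</syntaxhighlight>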


== Erdős–Stone theorem ==
We introduce a notation for the number of edges in extremal graphs with a specific forbidden substructure.
{{Theorem|Definition|
:Let <math>\mathrm{ex}(n,H)</math> denote the largest number of edges that a graph <math>G\not\supseteq H</math> on <math>n</math> vertices can have.
}}
With this notation, Turán's theorem can be restated as
{{Theorem|Turán's theorem (restated)|
:<math>\mathrm{ex}(n,K_r)\le\frac{r-2}{2(r-1)}n^2</math>.
}}

Let <math>K_s^r=K_{\underbrace{s,s,\cdots,s}_{r}}</math> be the complete <math>r</math>-partite graph with <math>s</math> vertices in each class, i.e., the Turán graph <math>T(rs,r)</math>.
The Erdős–Stone theorem (also referred to as the '''fundamental theorem of extremal graph theory''') gives an asymptotic bound on <math>\mathrm{ex}(n,K_s^r)</math>, i.e., the largest number of edges that an <math>n</math>-vertex graph can have to not contain <math>K_s^r</math>.

{{Theorem|Fundamental theorem of extremal graph theory (Erdős–Stone 1946)|
:For any integers <math>r\ge 2</math> and <math>s\ge 1</math>, and any <math>\epsilon>0</math>, if <math>n</math> is sufficiently large then every graph on <math>n</math> vertices and with at least <math>\left(\frac{r-2}{2(r-1)}+\epsilon\right)n^2</math> edges contains <math>K_{s}^r</math> as a subgraph, i.e.,
:::<math>\mathrm{ex}(n,K_s^r)= \left(\frac{r-2}{2(r-1)}+o(1)\right)n^2</math>.
}}

The theorem is called fundamental because of its single most important corollary: it relates the extremal bound for an arbitrary subgraph <math>H</math> to a very natural parameter of <math>H</math>, its chromatic number.

Recall that <math>\chi(G)</math> is the '''chromatic number''' of <math>G</math>, the smallest number of colors that one can use to color the vertices so that no adjacent vertices have the same color.

{{Theorem|Corollary|
:For every nonempty graph <math>H</math>,
::<math>\lim_{n\rightarrow\infty}\frac{\mathrm{ex}(n,H)}{{n\choose 2}}=\frac{\chi(H)-2}{\chi(H)-1}</math>.
}}
{{Prooftitle|Proof of corollary|
Let <math>r=\chi(H)</math>.

Note that <math>T(n,r-1)</math> can be colored with <math>r-1</math> colors, one color for each part. Thus, <math>H\not\subseteq T(n,r-1)</math>, since otherwise <math>H</math> could also be colored with <math>r-1</math> colors, contradicting that <math>\chi(H)=r</math>. By definition, <math>\mathrm{ex}(n,H)</math> is the maximum number of edges that an <math>n</math>-vertex graph <math>G\not\supseteq H</math> can have. Thus,
:<math>|T(n,r-1)|\le\mathrm{ex}(n,H)</math>.
It is not hard to see that
:<math>|T(n,r-1)|\ge {r-1\choose 2}\left\lfloor\frac{n}{r-1}\right\rfloor^2\ge{r-1\choose 2}\left(\frac{n}{r-1}-1\right)^2=\left(\frac{r-2}{2(r-1)}-o(1)\right)n^2</math>.

For <math>t=\omega(1)</math>, the Shamir–Spencer theorem above states that the chromatic number of a random graph is tightly concentrated around its mean. The proof gives no clue as to where the mean is. This actually shows how powerful the martingale inequalities are: we can prove that a distribution is concentrated around its expectation without actually knowing the expectation.

=== Hoeffding's Inequality ===
The following theorem states the so-called Hoeffding's inequality. It is a generalized version of the Chernoff bounds. Recall that the Chernoff bounds hold for the sum of independent ''trials''. When the random variables are not trials, Hoeffding's inequality is useful, since it holds for the sum of any independent random variables whose ranges are bounded.
{{Theorem
|Hoeffding's inequality|
: Let <math>X=\sum_{i=1}^nX_i</math>, where <math>X_1,\ldots,X_n</math> are independent random variables with <math>a_i\le X_i\le b_i</math> for each <math>1\le i\le n</math>. Let <math>\mu=\mathbf{E}[X]</math>. Then
::<math>
\Pr[|X-\mu|\ge t]\le 2\exp\left(-\frac{t^2}{2\sum_{i=1}^n(b_i-a_i)^2}\right).
</math>
}}
{{Proof| Define the Doob martingale sequence <math>Y_i=\mathbf{E}\left[\sum_{j=1}^n X_j\,\Big|\, X_1,\ldots,X_{i}\right]</math>. Obviously <math>Y_0=\mu</math> and <math>Y_n=X</math>.
:<math>
\begin{align}
|Y_i-Y_{i-1}|
&=
\left|\mathbf{E}\left[\sum_{j=1}^n X_j\,\Big|\, X_1,\ldots,X_{i}\right]-\mathbf{E}\left[\sum_{j=1}^n X_j\,\Big|\, X_1,\ldots,X_{i-1}\right]\right|\\
&=\left|\sum_{j=1}^i X_j+\sum_{j=i+1}^n\mathbf{E}[X_j]-\sum_{j=1}^{i-1} X_j-\sum_{j=i}^n\mathbf{E}[X_j]\right|\\
&=\left|X_i-\mathbf{E}[X_{i}]\right|\\
&\le b_i-a_i
\end{align}
</math>
Applying Azuma's inequality for the martingale <math>Y_0,\ldots,Y_n</math> with respect to <math>X_1,\ldots, X_n</math>, Hoeffding's inequality is proved.
}}

= The Bounded Difference Method =
Combining Azuma's inequality with the construction of Doob martingales, we have the powerful ''Bounded Difference Method'' for concentration of measures.

== For arbitrary random variables ==
Given a sequence of random variables <math>X_1,\ldots,X_n</math> and a function <math>f</math>, the Doob sequence constructs a martingale from them. Combining this construction with Azuma's inequality, we can get a very powerful theorem called "the method of averaged bounded differences" which bounds the concentration for an arbitrary function of arbitrary random variables (not necessarily a martingale).

{{Theorem
|Theorem (Method of averaged bounded differences)|
:Let <math>\boldsymbol{X}=(X_1,\ldots, X_n)</math> be arbitrary random variables and let <math>f</math> be a function of <math>X_1,\ldots, X_n</math> satisfying that, for all <math>1\le i\le n</math>,
::<math>
|\mathbf{E}[f(\boldsymbol{X})\mid X_1,\ldots,X_i]-\mathbf{E}[f(\boldsymbol{X})\mid X_1,\ldots,X_{i-1}]|\le c_i,
</math>
:Then
::<math>\begin{align}
\Pr\left[|f(\boldsymbol{X})-\mathbf{E}[f(\boldsymbol{X})]|\ge t\right]\le 2\exp\left(-\frac{t^2}{2\sum_{i=1}^nc_i^2}\right).
\end{align}</math>
}}
{{Proof| Define the Doob Martingale sequence <math>Y_0,Y_1,\ldots,Y_n</math> by setting <math>Y_0=\mathbf{E}[f(X_1,\ldots,X_n)]</math> and, for <math>1\le i\le n</math>, <math>Y_i=\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_i]</math>. Then the above theorem is a restatement of Azuma's inequality holding for <math>Y_0,Y_1,\ldots,Y_n</math>.
}}

== For independent random variables ==
The condition of bounded averaged differences is usually hard to check. This severely limits the usefulness of the method. To overcome this, we introduce a property which is much easier to check, called the Lipschitz condition.

{{Theorem
|Definition (Lipschitz condition)|
:A function <math>f(x_1,\ldots,x_n)</math> satisfies the Lipschitz condition, if for any <math>x_1,\ldots,x_n</math> and any <math>y_i</math>,
::<math>\begin{align}
|f(x_1,\ldots,x_{i-1},x_i,x_{i+1},\ldots,x_n)-f(x_1,\ldots,x_{i-1},y_i,x_{i+1},\ldots,x_n)|\le 1.
\end{align}</math>
}}
In other words, the function satisfies the Lipschitz condition if an arbitrary change in the value of any one argument does not change the value of the function by more than 1.

The difference of 1 can be replaced by arbitrary constants, which gives a generalized version of the Lipschitz condition.
{{Theorem
|Definition (Lipschitz condition, general version)|
:A function <math>f(x_1,\ldots,x_n)</math> satisfies the Lipschitz condition with constants <math>c_i</math>, <math>1\le i\le n</math>, if for any <math>x_1,\ldots,x_n</math> and any <math>y_i</math>,
::<math>\begin{align}
|f(x_1,\ldots,x_{i-1},x_i,x_{i+1},\ldots,x_n)-f(x_1,\ldots,x_{i-1},y_i,x_{i+1},\ldots,x_n)|\le c_i.
\end{align}</math>
}}

The following "method of bounded differences" can be developed for functions satisfying the Lipschitz condition. Unfortunately, in order to imply the condition of averaged bounded differences from the Lipschitz condition, we have to restrict the method to independent random variables.
{{Theorem
|Corollary (Method of bounded differences)|
:Let <math>\boldsymbol{X}=(X_1,\ldots, X_n)</math> be <math>n</math> '''independent''' random variables and let <math>f</math> be a function satisfying the Lipschitz condition with constants <math>c_i</math>, <math>1\le i\le n</math>. Then
::<math>\begin{align}
\Pr\left[|f(\boldsymbol{X})-\mathbf{E}[f(\boldsymbol{X})]|\ge t\right]\le 2\exp\left(-\frac{t^2}{2\sum_{i=1}^nc_i^2}\right).
\end{align}</math>
}}
 
{{Proof| For convenience, we denote that <math>\boldsymbol{X}_{[i,j]}=(X_i,X_{i+1},\ldots, X_j)</math> for any <math>1\le i\le j\le n</math>.
 
We first show that the Lipschitz condition with constants <math>c_i</math>, <math>1\le i\le n</math>, implies another condition called the averaged Lipschitz condition (ALC): for any <math>a_i,b_i</math>, <math>1\le i\le n</math>,
:<math>
\left|\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a_i\right]-\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=b_i\right]\right|\le c_i.
</math>
And this condition implies the averaged bounded difference condition: for all <math>1\le i\le n</math>,
::<math>
\left|\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}\right]-\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]}\right]\right|\le c_i.
</math>
Then by applying the method of averaged bounded differences, the corollary can be proved.
 
For any <math>a</math>, by the law of total expectation,
:<math>
\begin{align}
&\quad\, \mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a\right]\\
&=\sum_{a_{i+1},\ldots,a_n}\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a, \boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right]\cdot\Pr\left[\boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\mid \boldsymbol{X}_{[1,i-1]},X_i=a\right]\\
&=\sum_{a_{i+1},\ldots,a_n}\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a, \boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right]\cdot\Pr\left[\boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right] \qquad (\mbox{independence})\\
&= \sum_{a_{i+1},\ldots,a_n} f(\boldsymbol{X}_{[1,i-1]},a,\boldsymbol{a}_{[i+1,n]})\cdot\Pr\left[\boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right].
\end{align}
</math>
 
Let <math>a=a_i</math> and <math>a=b_i</math>, respectively, and take the difference. Then
:<math>
\begin{align}
&\quad\, \left|\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a_i\right]-\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=b_i\right]\right|\\
&=\left|\sum_{a_{i+1},\ldots,a_n}\left(f(\boldsymbol{X}_{[1,i-1]},a_i,\boldsymbol{a}_{[i+1,n]})-f(\boldsymbol{X}_{[1,i-1]},b_i,\boldsymbol{a}_{[i+1,n]})\right)\Pr\left[\boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right]\right|\\
&\le \sum_{a_{i+1},\ldots,a_n}\left|f(\boldsymbol{X}_{[1,i-1]},a_i,\boldsymbol{a}_{[i+1,n]})-f(\boldsymbol{X}_{[1,i-1]},b_i,\boldsymbol{a}_{[i+1,n]})\right|\Pr\left[\boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right]\\
&\le \sum_{a_{i+1},\ldots,a_n}c_i\Pr\left[\boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right] \qquad (\mbox{Lipschitz condition})\\
&=c_i.
\end{align}
</math>
 
Thus, the Lipschitz condition is transformed to the ALC. We then deduce the averaged bounded difference condition from ALC.
 
By the law of total expectation,
:<math>
\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]}\right]=\sum_{a}\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a\right]\cdot\Pr[X_i=a\mid \boldsymbol{X}_{[1,i-1]}].
</math>


We can trivially write <math>\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}\right]</math> as
:<math>
\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}\right]=\sum_{a}\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}\right]\cdot\Pr\left[X_i=a\mid \boldsymbol{X}_{[1,i-1]}\right].
</math>

Hence, the difference is
:<math>
\begin{align}
&\quad \left|\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}\right]-\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]}\right]\right|\\
&=\left|\sum_{a}\left(\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}\right]-\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a\right]\right)\cdot\Pr\left[X_i=a\mid \boldsymbol{X}_{[1,i-1]}\right]\right| \\
&\le \sum_{a}\left|\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}\right]-\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a\right]\right|\cdot\Pr\left[X_i=a\mid \boldsymbol{X}_{[1,i-1]}\right] \\
&\le \sum_a c_i\Pr\left[X_i=a\mid \boldsymbol{X}_{[1,i-1]}\right] \qquad (\mbox{due to ALC})\\
&=c_i.
\end{align}
</math>

The averaged bounded difference condition is implied. Applying the method of averaged bounded differences, the corollary follows.
}}

On the other hand, any finite graph <math>H</math> with chromatic number <math>r</math> satisfies <math>H\subseteq K_s^r</math> for all sufficiently large <math>s</math>. We just connect all pairs of vertices from different color classes. Thus,
:<math>\mathrm{ex}(n,H)\le\mathrm{ex}(n,K_s^r)</math>.
Due to the Erdős–Stone theorem,
:<math>\mathrm{ex}(n,K_s^r)=\left(\frac{r-2}{2(r-1)}+o(1)\right)n^2</math>.
Altogether, we have
:<math>
\frac{r-2}{r-1}-o(1)\le\frac{|T(n,r-1)|}{{n\choose 2}}\le \frac{\mathrm{ex}(n,H)}{{n\choose 2}} \le \frac{\mathrm{ex}(n,K_s^r)}{{n\choose 2}}=\frac{r-2}{r-1}+o(1)
</math>
The theorem follows.
}}


== References ==
* van Lint and Wilson. ''A Course in Combinatorics.'' Cambridge University Press. Chapter 4.
* Aigner and Ziegler. ''Proofs from THE BOOK, 4th Edition.'' Springer-Verlag. [[media:PFTB_chap36.pdf| Chapter 36]].
* Diestel. ''Graph Theory, 3rd Edition''. Springer-Verlag 2000. [[media:Diestel2ed_chap7.pdf|Chapter 7]].

== Applications ==

=== Occupancy problem ===
Throwing <math>m</math> balls uniformly and independently at random to <math>n</math> bins, we ask for the occupancies of bins by the balls. In particular, we are interested in the number of empty bins.
 
This problem can be described equivalently as follows. Let <math>f:[m]\rightarrow[n]</math> be a uniform random function from <math>[m]</math> to <math>[n]</math>. We ask for the number of <math>i\in[n]</math> such that <math>f^{-1}(i)</math> is empty.
 
For any <math>i\in[n]</math>, let <math>X_i</math> indicate the emptiness of bin <math>i</math>. Let <math>X=\sum_{i=1}^nX_i</math> be the number of empty bins.
:<math>
\mathbf{E}[X_i]=\Pr[\mbox{bin }i\mbox{ is empty}]=\left(1-\frac{1}{n}\right)^m.
</math>
By the linearity of expectation,
:<math>
\mathbf{E}[X]=\sum_{i=1}^n\mathbf{E}[X_i]=n\left(1-\frac{1}{n}\right)^m.
</math>
 
We want to know how <math>X</math> deviates from this expectation. The complication here is that the <math>X_i</math> are not independent. So we alternatively look at a sequence of independent random variables <math>Y_1,\ldots, Y_m</math>, where <math>Y_j\in[n]</math> represents the bin into which the <math>j</math>th ball falls. Clearly <math>X</math> is a function of <math>Y_1,\ldots, Y_m</math>.

We then observe that changing the value of any <math>Y_i</math> can change the value of <math>X</math> by at most 1, because one ball can affect the emptiness of at most one bin.
Thus, as a function of the independent random variables <math>Y_1,\ldots, Y_m</math>, <math>X</math> satisfies the Lipschitz condition. Applying the method of bounded differences, it holds that
:<math>
\Pr\left[\left|X-n\left(1-\frac{1}{n}\right)^m\right|\ge t\sqrt{m}\right]=\Pr[|X-\mathbf{E}[X]|\ge t\sqrt{m}]\le 2e^{-t^2/2}
</math>
 
Thus, for sufficiently large <math>n</math> and <math>m</math>, the number of empty bins is tightly concentrated around <math>n\left(1-\frac{1}{n}\right)^m\approx \frac{n}{e^{m/n}}</math>.
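A short simulation (a Python sketch; the parameters are arbitrary) shows this concentration in action, comparing the empirical deviation probability of the number of empty bins with the bound <math>2e^{-t^2/2}</math>:
<syntaxhighlight lang="python">
import math
import random

n, m, t, trials = 100, 150, 2.0, 5000
mean = n * (1 - 1 / n) ** m            # E[X] = n(1 - 1/n)^m
threshold = t * math.sqrt(m)

exceed = 0
for _ in range(trials):
    occupied = {random.randrange(n) for _ in range(m)}   # bins hit by the m balls
    empty_bins = n - len(occupied)
    if abs(empty_bins - mean) >= threshold:
        exceed += 1

print("E[X] ~", round(mean, 2))
print("empirical:", exceed / trials, " bound:", round(2 * math.exp(-t * t / 2), 3))
</syntaxhighlight>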
 
=== Pattern Matching ===
Let <math>\boldsymbol{X}=(X_1,\ldots,X_n)</math> be a sequence of characters chosen independently and uniformly at random from an alphabet <math>\Sigma</math>, where <math>m=|\Sigma|</math>. Let <math>\pi\in\Sigma^k</math> be an arbitrarily fixed string of <math>k</math> characters from <math>\Sigma</math>, called a ''pattern''. Let <math>Y</math> be the number of occurrences of the pattern <math>\pi</math> as a substring of the random string <math>X</math>.
 
By the linearity of expectation, it is obvious that
:<math>
\mathbf{E}[Y]=(n-k+1)\left(\frac{1}{m}\right)^k.
</math>
 
We now look at the concentration of <math>Y</math>. The complication again lies in the dependencies between the matches. Yet we will see that <math>Y</math> is tightly concentrated around its expectation if <math>k</math> is relatively small compared to <math>n</math>.
 
For a fixed pattern <math>\pi</math>, the random variable <math>Y</math> is a function of the independent random variables <math>(X_1,\ldots,X_n)</math>. Any character <math>X_i</math> participates in no more than <math>k</math> matches, thus changing the value of any <math>X_i</math> can affect the value of <math>Y</math> by at most <math>k</math>. So <math>Y</math> satisfies the Lipschitz condition with constant <math>k</math>. Applying the method of bounded differences,
:<math>
\Pr\left[\left|Y-\frac{n-k+1}{m^k}\right|\ge tk\sqrt{n}\right]=\Pr\left[\left|Y-\mathbf{E}[Y]\right|\ge  tk\sqrt{n}\right]\le 2e^{-t^2/2}
</math>
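Both the expectation and the concentration can be seen in a small simulation (a Python sketch; the alphabet, pattern and lengths are arbitrary choices):
<syntaxhighlight lang="python">
import random

sigma = "ab"          # alphabet, m = 2
pattern = "aba"       # k = 3
n, trials = 2000, 2000

def occurrences(s, p):
    """Number of (possibly overlapping) occurrences of p as a substring of s."""
    return sum(1 for i in range(len(s) - len(p) + 1) if s[i:i + len(p)] == p)

counts = [occurrences("".join(random.choice(sigma) for _ in range(n)), pattern)
          for _ in range(trials)]

expected = (n - len(pattern) + 1) / len(sigma) ** len(pattern)
print("E[Y] =", round(expected, 1), " sample mean =", round(sum(counts) / trials, 1))
</syntaxhighlight>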
 
=== Combining unit vectors ===
Let <math>u_1,\ldots,u_n</math> be <math>n</math> unit vectors from some normed space. That is, <math>\|u_i\|=1</math> for any <math>1\le i\le n</math>, where <math>\|\cdot\|</math> denotes the vector norm (e.g. <math>\ell_1,\ell_2,\ell_\infty</math>) of the space.
 
Let <math>\epsilon_1,\ldots,\epsilon_n\in\{-1,+1\}</math> be independently chosen and <math>\Pr[\epsilon_i=-1]=\Pr[\epsilon_i=1]=1/2</math>.
 
Let
:<math>v=\epsilon_1u_1+\cdots+\epsilon_nu_n,
</math>
and
:<math>
X=\|v\|.
</math>
This kind of construction is very useful in combinatorial proofs of metric problems. We will show that by this construction, the random variable <math>X</math> is well concentrated around its mean.
 
<math>X</math> is a function of independent random variables <math>\epsilon_1,\ldots,\epsilon_n</math>.
By the triangle inequality for norms, it is easy to verify that changing the sign of a unit vector <math>u_i</math> can only change the value of <math>X</math> by at most 2, thus <math>X</math> satisfies the Lipschitz condition with constant 2. The concentration result follows by applying the method of bounded differences:
:<math>
\Pr[|X-\mathbf{E}[X]|\ge 2t\sqrt{n}]\le 2e^{-t^2/2}.
</math>


The second equation:

[math]\displaystyle{ \mathbf{E}[X\mid Z]=\mathbf{E}[\mathbf{E}[X\mid Y,Z]\mid Z] }[/math]

is the same as the first one, restricted to a particular subspace. As in the previous example, in addition to the height [math]\displaystyle{ X }[/math] and the country [math]\displaystyle{ Y }[/math], let [math]\displaystyle{ Z }[/math] be the gender of the individual. Thus, [math]\displaystyle{ \mathbf{E}[X\mid Z] }[/math] is the average height of a human being of a given sex. Again, this can be computed either directly or on a country-by-country basis.

The third equation:

[math]\displaystyle{ \mathbf{E}[\mathbf{E}[f(X)g(X,Y)\mid X]]=\mathbf{E}[f(X)\cdot \mathbf{E}[g(X,Y)\mid X]] }[/math].

looks obscure at first glance, especially considering that [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math] are not necessarily independent. Nevertheless, the equation follows from the simple fact that conditioning on any [math]\displaystyle{ X=a }[/math], the function value [math]\displaystyle{ f(X)=f(a) }[/math] becomes a constant, and thus can be safely taken out of the expectation by the linearity of expectation. Formally, for any value [math]\displaystyle{ X=a }[/math],

[math]\displaystyle{ \mathbf{E}[f(X)g(X,Y)\mid X=a]=\mathbf{E}[f(a)g(X,Y)\mid X=a]=f(a)\cdot \mathbf{E}[g(X,Y)\mid X=a]. }[/math]

The proposition also holds in more general cases, for example when each of [math]\displaystyle{ X, Y }[/math] and [math]\displaystyle{ Z }[/math] is a sequence of random variables.
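
The first identity is easy to check numerically. The following minimal Python sketch, with hypothetical toy data, computes the average height both ways: directly over all individuals, and by averaging the per-country means weighted by population share.

 # Toy numerical check of E[Y] = E[E[Y|X]]; the heights and countries
 # are hypothetical illustration data, not taken from the text.
 heights = {
     'A': [168, 172, 170, 171],
     'B': [180, 175, 178],
     'C': [160, 162],
 }
 people = [(c, h) for c, hs in heights.items() for h in hs]
 # Way 1: directly average over all humans.
 e_y = sum(h for _, h in people) / len(people)
 # Way 2: average the per-country means, weighted by population share.
 n = len(people)
 e_tower = sum((len(hs) / n) * (sum(hs) / len(hs)) for hs in heights.values())
 assert abs(e_y - e_tower) < 1e-9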

Martingales

"Martingale" originally refers to a betting strategy in which the gambler doubles his bet after every loss. Assuming unlimited wealth, this strategy is guaranteed to eventually have a positive net profit. For example, starting from an initial stake 1, after [math]\displaystyle{ n }[/math] losses, if the [math]\displaystyle{ (n+1) }[/math]th bet wins, then it gives a net profit of

[math]\displaystyle{ 2^n-\sum_{i=1}^{n}2^{i-1}=1, }[/math]

which is a positive number.

However, the assumption of unlimited wealth is unrealistic. With limited wealth and geometrically increasing bets, the gambler is very likely to go bankrupt. You should never try this strategy in real life.
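
A minimal simulation sketch of this point (the capital value below is an arbitrary choice): with capital [math]\displaystyle{ 2^{10}-1=1023 }[/math] the gambler can cover ten consecutive losing bets, so a round ends in bankruptcy exactly when ten fair flips all lose, which happens with probability [math]\displaystyle{ 2^{-10} }[/math], while a successful round earns only 1.

 import random
 def double_or_bust(capital, rounds=100000):
     # Bet 1, 2, 4, ... on fair coin flips until one win (+1 net profit)
     # or until the next bet can no longer be covered (bankruptcy).
     bust = 0
     for _ in range(rounds):
         wealth, bet = capital, 1
         while True:
             if bet > wealth:
                 bust += 1
                 break
             if random.random() < 0.5:
                 break
             wealth -= bet
             bet *= 2
     return bust / rounds
 print(double_or_bust(1023))   # close to 1/1024, about 0.001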

Suppose that the gambler is allowed to use any strategy. His stake on the next bet is decided based on the results of all the bets so far. This gives us a highly dependent sequence of random variables [math]\displaystyle{ X_0,X_1,\ldots, }[/math], where [math]\displaystyle{ X_0 }[/math] is his initial capital, and [math]\displaystyle{ X_i }[/math] represents his capital after the [math]\displaystyle{ i }[/math]th bet. Depending on the betting strategy, [math]\displaystyle{ X_i }[/math] can depend arbitrarily on [math]\displaystyle{ X_0,\ldots,X_{i-1} }[/math]. However, as long as the game is fair, namely, winning and losing with equal chances, then conditioning on the past variables [math]\displaystyle{ X_0,\ldots,X_{i-1} }[/math], we expect no change in the value of the present variable [math]\displaystyle{ X_{i} }[/math] on average. A sequence of random variables satisfying this property is called a martingale.

Definition (martingale)
A sequence of random variables [math]\displaystyle{ X_0,X_1,\ldots }[/math] is a martingale if for all [math]\displaystyle{ i\gt 0 }[/math],
[math]\displaystyle{ \begin{align} \mathbf{E}[X_{i}\mid X_0,\ldots,X_{i-1}]=X_{i-1}. \end{align} }[/math]

Examples

coin flips
A fair coin is flipped a number of times. Let [math]\displaystyle{ Z_j\in\{-1,1\} }[/math] denote the outcome of the [math]\displaystyle{ j }[/math]th flip. Let
[math]\displaystyle{ X_0=0\quad \mbox{ and } \quad X_i=\sum_{j\le i}Z_j }[/math].
The random variables [math]\displaystyle{ X_0,X_1,\ldots }[/math] define a martingale.
Proof
We first observe that [math]\displaystyle{ \mathbf{E}[X_i\mid X_0,\ldots,X_{i-1}] = \mathbf{E}[X_i\mid X_{i-1}] }[/math], which intuitively says that the distribution of the next value depends only on the current value. This property is called the Markov property in stochastic processes.
[math]\displaystyle{ \begin{align} \mathbf{E}[X_i\mid X_0,\ldots,X_{i-1}] &= \mathbf{E}[X_i\mid X_{i-1}]\\ &= \mathbf{E}[X_{i-1}+Z_{i}\mid X_{i-1}]\\ &= \mathbf{E}[X_{i-1}\mid X_{i-1}]+\mathbf{E}[Z_{i}\mid X_{i-1}]\\ &= X_{i-1}+\mathbf{E}[Z_{i}\mid X_{i-1}]\\ &= X_{i-1}+\mathbf{E}[Z_{i}] &\quad (\mbox{independence of coin flips})\\ &= X_{i-1} \end{align} }[/math]
Polya's urn scheme
Consider an urn (just a container) that initially contains [math]\displaystyle{ b }[/math] black balls and [math]\displaystyle{ w }[/math] white balls. At each step, we uniformly select a ball from the urn, and replace the ball with [math]\displaystyle{ c }[/math] balls of the same color. Let [math]\displaystyle{ X_0=b/(b+w) }[/math], and let [math]\displaystyle{ X_i }[/math] be the fraction of black balls in the urn after the [math]\displaystyle{ i }[/math]th step. The sequence [math]\displaystyle{ X_0,X_1,\ldots }[/math] is a martingale.
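
The martingale property implies [math]\displaystyle{ \mathbf{E}[X_i]=X_0=b/(b+w) }[/math] for every [math]\displaystyle{ i }[/math], which the following simulation sketch illustrates (the parameters are arbitrary choices):

 import random
 def polya_fraction(b, w, c, steps):
     # Each draw removes one ball and puts back c of the same color.
     black, white = b, w
     for _ in range(steps):
         if random.random() < black / (black + white):
             black += c - 1
         else:
             white += c - 1
     return black / (black + white)
 runs = [polya_fraction(b=3, w=7, c=2, steps=200) for _ in range(20000)]
 print(sum(runs) / len(runs))   # close to X_0 = 0.3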
edge exposure in a random graph
Consider a random graph [math]\displaystyle{ G }[/math] generated as follows. Let [math]\displaystyle{ [n] }[/math] be the set of vertices, and let [math]\displaystyle{ m={n\choose 2} }[/math] be the number of all potential edges. For convenience, we enumerate these potential edges by [math]\displaystyle{ e_1,\ldots, e_m }[/math]. For each potential edge [math]\displaystyle{ e_j }[/math], we independently flip a fair coin to decide whether the edge [math]\displaystyle{ e_j }[/math] appears in [math]\displaystyle{ G }[/math]. Let [math]\displaystyle{ I_j }[/math] be the random variable that indicates whether [math]\displaystyle{ e_j\in G }[/math]. We are interested in some graph-theoretic parameter, say the chromatic number, of the random graph [math]\displaystyle{ G }[/math]. Let [math]\displaystyle{ \chi(G) }[/math] be the chromatic number of [math]\displaystyle{ G }[/math]. Let [math]\displaystyle{ X_0=\mathbf{E}[\chi(G)] }[/math], and for each [math]\displaystyle{ i\ge 1 }[/math], let [math]\displaystyle{ X_i=\mathbf{E}[\chi(G)\mid I_1,\ldots,I_{i}] }[/math], namely, the expected chromatic number of the random graph after fixing the first [math]\displaystyle{ i }[/math] edges. This process is called the edge exposure of a random graph, as we "expose" the potential edges one by one.

It is nontrivial to formally verify that the edge exposure sequence for a random graph is a martingale. However, we will later see that this construction can be put into a more general context.

Generalizations

The martingale can be generalized to be with respect to another sequence of random variables.

Definition (martingale, general version)
A sequence of random variables [math]\displaystyle{ Y_0,Y_1,\ldots }[/math] is a martingale with respect to the sequence [math]\displaystyle{ X_0,X_1,\ldots }[/math] if, for all [math]\displaystyle{ i\ge 0 }[/math], the following conditions hold:
  • [math]\displaystyle{ Y_i }[/math] is a function of [math]\displaystyle{ X_0,X_1,\ldots,X_i }[/math];
  • [math]\displaystyle{ \begin{align} \mathbf{E}[Y_{i+1}\mid X_0,\ldots,X_{i}]=Y_{i}. \end{align} }[/math]

Therefore, a sequence [math]\displaystyle{ X_0,X_1,\ldots }[/math] is a martingale if it is a martingale with respect to itself.

The purpose of this generalization is that we are usually more interested in a function of a sequence of random variables, rather than the sequence itself.

Azuma's Inequality

We introduce a martingale tail inequality, called Azuma's inequality.

Azuma's Inequality
Let [math]\displaystyle{ X_0,X_1,\ldots }[/math] be a martingale such that, for all [math]\displaystyle{ k\ge 1 }[/math],
[math]\displaystyle{ |X_{k}-X_{k-1}|\le c_k, }[/math]
Then
[math]\displaystyle{ \begin{align} \Pr\left[|X_n-X_0|\ge t\right]\le 2\exp\left(-\frac{t^2}{2\sum_{k=1}^nc_k^2}\right). \end{align} }[/math]

Before formally proving this theorem, some comments are in order. First, unlike the Chernoff bounds, there is no assumption of independence. This shows the power of martingale inequalities.

Second, the condition that

[math]\displaystyle{ |X_{k}-X_{k-1}|\le c_k }[/math]

is central to the proof. This condition is sometimes called the bounded difference condition. If we think of the martingale [math]\displaystyle{ X_0,X_1,\ldots }[/math] as a process evolving through time, where [math]\displaystyle{ X_i }[/math] gives some measurement at time [math]\displaystyle{ i }[/math], the bounded difference condition states that the process does not make big jumps. Azuma's inequality says that if so, then it is unlikely that the process wanders far from its starting point.

A special case is when the differences are bounded by a constant. The following corollary is directly implied by Azuma's inequality.

Corollary
Let [math]\displaystyle{ X_0,X_1,\ldots }[/math] be a martingale such that, for all [math]\displaystyle{ k\ge 1 }[/math],
[math]\displaystyle{ |X_{k}-X_{k-1}|\le c, }[/math]
Then
[math]\displaystyle{ \begin{align} \Pr\left[|X_n-X_0|\ge ct\sqrt{n}\right]\le 2 e^{-t^2/2}. \end{align} }[/math]

This corollary states that for any martingale sequence whose differences are bounded by a constant, the probability that it deviates [math]\displaystyle{ \omega(\sqrt{n}) }[/math] from its starting point after [math]\displaystyle{ n }[/math] steps is bounded by [math]\displaystyle{ o(1) }[/math].
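
The simplest instance is the [math]\displaystyle{ \pm 1 }[/math] random walk from the coin flips example, a martingale with differences bounded by [math]\displaystyle{ c=1 }[/math]. The following sketch (sample sizes are arbitrary) compares the empirical deviation frequency with the bound [math]\displaystyle{ 2e^{-t^2/2} }[/math]:

 import math, random
 n, trials = 400, 20000
 for t in (1.0, 2.0, 3.0):
     threshold = t * math.sqrt(n)
     hits = sum(
         abs(sum(random.choice((-1, 1)) for _ in range(n))) >= threshold
         for _ in range(trials))
     print(t, hits / trials, 2 * math.exp(-t * t / 2))

For each [math]\displaystyle{ t }[/math], the empirical frequency should come out below the bound.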

Generalization

Azuma's inequality can be generalized to a martingale with respect to another sequence.

Azuma's Inequality (general version)
Let [math]\displaystyle{ Y_0,Y_1,\ldots }[/math] be a martingale with respect to the sequence [math]\displaystyle{ X_0,X_1,\ldots }[/math] such that, for all [math]\displaystyle{ k\ge 1 }[/math],
[math]\displaystyle{ |Y_{k}-Y_{k-1}|\le c_k, }[/math]
Then
[math]\displaystyle{ \begin{align} \Pr\left[|Y_n-Y_0|\ge t\right]\le 2\exp\left(-\frac{t^2}{2\sum_{k=1}^nc_k^2}\right). \end{align} }[/math]

The Proof of Azuma's Inequality

We will only give the formal proof of the non-generalized version. The proof of the general version is almost identical, the only difference being that we work with the sequence [math]\displaystyle{ Y_i }[/math] while conditioning on the sequence [math]\displaystyle{ X_i }[/math].

The proof of Azuma's Inequality uses several ideas from the proof of the Chernoff bounds. We first observe that the total deviation of the martingale sequence can be represented as the sum of the differences at every step. Thus, as with the Chernoff bounds, we are looking for a bound on the deviation of a sum of random variables. The strategy of the proof is almost the same as for the Chernoff bounds: we first apply Markov's inequality to the moment generating function, then we bound the moment generating function, and at last we optimize the parameter of the moment generating function. However, unlike in the Chernoff bounds, the martingale differences are no longer independent. So we replace the use of independence in the Chernoff bound with the martingale property. The proof is detailed as follows.

In order to bound the probability of [math]\displaystyle{ |X_n-X_0|\ge t }[/math], we first bound the upper tail [math]\displaystyle{ \Pr[X_n-X_0\ge t] }[/math]. The bound of the lower tail can be symmetrically proved with the [math]\displaystyle{ X_i }[/math] replaced by [math]\displaystyle{ -X_i }[/math].

Represent the deviation as the sum of differences

We define the martingale difference sequence: for [math]\displaystyle{ i\ge 1 }[/math], let

[math]\displaystyle{ Y_i=X_i-X_{i-1}. }[/math]

It holds that

[math]\displaystyle{ \begin{align} \mathbf{E}[Y_i\mid X_0,\ldots,X_{i-1}] &=\mathbf{E}[X_i-X_{i-1}\mid X_0,\ldots,X_{i-1}]\\ &=\mathbf{E}[X_i\mid X_0,\ldots,X_{i-1}]-\mathbf{E}[X_{i-1}\mid X_0,\ldots,X_{i-1}]\\ &=X_{i-1}-X_{i-1}\\ &=0. \end{align} }[/math]

The second-to-last equation is due to the fact that [math]\displaystyle{ X_0,X_1,\ldots }[/math] is a martingale, together with the definition of conditional expectation.

Let [math]\displaystyle{ Z_n }[/math] be the accumulated differences

[math]\displaystyle{ Z_n=\sum_{i=1}^n Y_i. }[/math]

The deviation [math]\displaystyle{ (X_n-X_0) }[/math] can be computed by the accumulated differences:

[math]\displaystyle{ \begin{align} X_n-X_0 &=(X_1-X_{0})+(X_2-X_1)+\cdots+(X_n-X_{n-1})\\ &=\sum_{i=1}^n Y_i\\ &=Z_n. \end{align} }[/math]

We then only need to upper bound the probability of the event [math]\displaystyle{ Z_n\ge t }[/math].

Apply Markov's inequality to the moment generating function

For any [math]\displaystyle{ \lambda\gt 0 }[/math], the event [math]\displaystyle{ Z_n\ge t }[/math] is equivalent to [math]\displaystyle{ e^{\lambda Z_n}\ge e^{\lambda t} }[/math]. Applying Markov's inequality, we have

[math]\displaystyle{ \begin{align} \Pr\left[Z_n\ge t\right] &=\Pr\left[e^{\lambda Z_n}\ge e^{\lambda t}\right]\\ &\le \frac{\mathbf{E}\left[e^{\lambda Z_n}\right]}{e^{\lambda t}}. \end{align} }[/math]

This is exactly the same as what we did to prove the Chernoff bound. Next, we need to bound the moment generating function [math]\displaystyle{ \mathbf{E}\left[e^{\lambda Z_n}\right] }[/math].

Bound the moment generating functions

The moment generating function

[math]\displaystyle{ \begin{align} \mathbf{E}\left[e^{\lambda Z_n}\right] &=\mathbf{E}\left[\mathbf{E}\left[e^{\lambda Z_n}\mid X_0,\ldots,X_{n-1}\right]\right]\\ &=\mathbf{E}\left[\mathbf{E}\left[e^{\lambda (Z_{n-1}+Y_n)}\mid X_0,\ldots,X_{n-1}\right]\right]\\ &=\mathbf{E}\left[\mathbf{E}\left[e^{\lambda Z_{n-1}}\cdot e^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}\right]\right]\\ &=\mathbf{E}\left[e^{\lambda Z_{n-1}}\cdot\mathbf{E}\left[e^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}\right]\right] \end{align} }[/math]

The first and the last equations are due to the fundamental facts about conditional expectation which we proved in the first section.

We then upper bound [math]\displaystyle{ \mathbf{E}\left[e^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}\right] }[/math] by a constant. To do so, we need the following technical lemma, which is proved using the convexity of the exponential function.

Lemma
Let [math]\displaystyle{ X }[/math] be a random variable such that [math]\displaystyle{ \mathbf{E}[X]=0 }[/math] and [math]\displaystyle{ |X|\le c }[/math]. Then for [math]\displaystyle{ \lambda\gt 0 }[/math],
[math]\displaystyle{ \mathbf{E}[e^{\lambda X}]\le e^{\lambda^2c^2/2}. }[/math]
Proof.
Observe that for [math]\displaystyle{ \lambda\gt 0 }[/math], the function [math]\displaystyle{ e^{\lambda X} }[/math] of the variable [math]\displaystyle{ X }[/math] is convex in the interval [math]\displaystyle{ [-c,c] }[/math]. We draw a line through the two endpoints [math]\displaystyle{ (-c, e^{-\lambda c}) }[/math] and [math]\displaystyle{ (c, e^{\lambda c}) }[/math]. By convexity, the curve of [math]\displaystyle{ e^{\lambda X} }[/math] lies entirely below this line over the interval. Thus,
[math]\displaystyle{ \begin{align} e^{\lambda X} &\le \frac{c-X}{2c}e^{-\lambda c}+\frac{c+X}{2c}e^{\lambda c}\\ &=\frac{e^{\lambda c}+e^{-\lambda c}}{2}+\frac{X}{2c}(e^{\lambda c}-e^{-\lambda c}). \end{align} }[/math]

Since [math]\displaystyle{ \mathbf{E}[X]=0 }[/math], we have

[math]\displaystyle{ \begin{align} \mathbf{E}[e^{\lambda X}] &\le \mathbf{E}[\frac{e^{\lambda c}+e^{-\lambda c}}{2}+\frac{X}{2c}(e^{\lambda c}-e^{-\lambda c})]\\ &=\frac{e^{\lambda c}+e^{-\lambda c}}{2}+\frac{e^{\lambda c}-e^{-\lambda c}}{2c}\mathbf{E}[X]\\ &=\frac{e^{\lambda c}+e^{-\lambda c}}{2}. \end{align} }[/math]

By comparing the Taylor series of both sides term by term, it can be verified that [math]\displaystyle{ \frac{e^{\lambda c}+e^{-\lambda c}}{2}\le e^{\lambda^2c^2/2} }[/math], since [math]\displaystyle{ (2k)!\ge 2^kk! }[/math] for every [math]\displaystyle{ k\ge 0 }[/math].

[math]\displaystyle{ \square }[/math]
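
As a quick numerical sanity check of the lemma before we continue (the values of [math]\displaystyle{ \lambda }[/math] and [math]\displaystyle{ c }[/math] below are arbitrary; the uniform distribution on [math]\displaystyle{ [-c,c] }[/math] is one zero-mean bounded variable):

 import math, random
 lam, c = 0.7, 1.5
 xs = [random.uniform(-c, c) for _ in range(200000)]   # E[X] = 0, |X| <= c
 mgf = sum(math.exp(lam * x) for x in xs) / len(xs)
 chord = (math.exp(lam * c) + math.exp(-lam * c)) / 2
 print(mgf, chord, math.exp(lam ** 2 * c ** 2 / 2))
 # The three values should be increasing (up to sampling error),
 # exactly as the two steps of the proof predict.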

Apply the above lemma to the random variable

[math]\displaystyle{ (Y_n \mid X_0,\ldots,X_{n-1}). }[/math]

We have already shown that its expectation is [math]\displaystyle{ \mathbf{E}[Y_n \mid X_0,\ldots,X_{n-1}]=0, }[/math] and by the bounded difference condition of Azuma's inequality, we have [math]\displaystyle{ |Y_n|=|X_n-X_{n-1}|\le c_n. }[/math] Thus, due to the above lemma, it holds that

[math]\displaystyle{ \mathbf{E}[e^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}]\le e^{\lambda^2c_n^2/2}. }[/math]

Back to our analysis of the expectation [math]\displaystyle{ \mathbf{E}\left[e^{\lambda Z_n}\right] }[/math], we have

[math]\displaystyle{ \begin{align} \mathbf{E}\left[e^{\lambda Z_n}\right] &=\mathbf{E}\left[e^{\lambda Z_{n-1}}\cdot\mathbf{E}\left[e^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}\right]\right]\\ &\le \mathbf{E}\left[e^{\lambda Z_{n-1}}\cdot e^{\lambda^2c_n^2/2}\right]\\ &= e^{\lambda^2c_n^2/2}\cdot\mathbf{E}\left[e^{\lambda Z_{n-1}}\right] . \end{align} }[/math]

Applying the same analysis to [math]\displaystyle{ \mathbf{E}\left[e^{\lambda Z_{n-1}}\right] }[/math] repeatedly, we can solve the recursion and obtain

[math]\displaystyle{ \begin{align} \mathbf{E}\left[e^{\lambda Z_n}\right] &\le \prod_{k=1}^n e^{\lambda^2c_k^2/2}\\ &= \exp\left(\lambda^2\sum_{k=1}^n c_k^2/2\right). \end{align} }[/math]

Going back to Markov's inequality,

[math]\displaystyle{ \begin{align} \Pr\left[Z_n\ge t\right] &\le \frac{\mathbf{E}\left[e^{\lambda Z_n}\right]}{e^{\lambda t}}\\ &\le \exp\left(\lambda^2\sum_{k=1}^n c_k^2/2-\lambda t\right). \end{align} }[/math]

We then only need to choose a proper [math]\displaystyle{ \lambda\gt 0 }[/math].

Optimization

By choosing [math]\displaystyle{ \lambda=\frac{t}{\sum_{k=1}^n c_k^2} }[/math], we have that

[math]\displaystyle{ \exp\left(\lambda^2\sum_{k=1}^n c_k^2/2-\lambda t\right)=\exp\left(-\frac{t^2}{2\sum_{k=1}^n c_k^2}\right). }[/math]

Thus, the probability

[math]\displaystyle{ \begin{align} \Pr\left[X_n-X_0\ge t\right] &=\Pr\left[Z_n\ge t\right]\\ &\le \exp\left(\lambda^2\sum_{k=1}^n c_k^2/2-\lambda t\right)\\ &= \exp\left(-\frac{t^2}{2\sum_{k=1}^n c_k^2}\right). \end{align} }[/math]

The upper tail of Azuma's inequality is proved. By replacing [math]\displaystyle{ X_i }[/math] by [math]\displaystyle{ -X_i }[/math], the lower tail can be treated just as the upper tail. Applying the union bound, Azuma's inequality is proved.

The Doob martingales

The following definition describes a very general approach for constructing an important type of martingales.

Definition (The Doob sequence)
The Doob sequence of a function [math]\displaystyle{ f }[/math] with respect to a sequence of random variables [math]\displaystyle{ X_1,\ldots,X_n }[/math] is defined by
[math]\displaystyle{ Y_i=\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_{i}], \quad 0\le i\le n. }[/math]
In particular, [math]\displaystyle{ Y_0=\mathbf{E}[f(X_1,\ldots,X_n)] }[/math] and [math]\displaystyle{ Y_n=f(X_1,\ldots,X_n) }[/math].

The Doob sequence of a function defines a martingale. That is

[math]\displaystyle{ \mathbf{E}[Y_i\mid X_1,\ldots,X_{i-1}]=Y_{i-1}, }[/math]

for any [math]\displaystyle{ 1\le i\le n }[/math].

To prove this claim, we recall the definition that [math]\displaystyle{ Y_i=\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_{i}] }[/math], thus,

[math]\displaystyle{ \begin{align} \mathbf{E}[Y_i\mid X_1,\ldots,X_{i-1}] &=\mathbf{E}[\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_{i}]\mid X_1,\ldots,X_{i-1}]\\ &=\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_{i-1}]\\ &=Y_{i-1}, \end{align} }[/math]

where the second equation is due to the fundamental fact about conditional expectation introduced in the first section.

The Doob martingale describes a very natural procedure to determine a function value of a sequence of random variables. Suppose that we want to predict the value of a function [math]\displaystyle{ f(X_1,\ldots,X_n) }[/math] of random variables [math]\displaystyle{ X_1,\ldots,X_n }[/math]. The Doob sequence [math]\displaystyle{ Y_0,Y_1,\ldots,Y_n }[/math] represents a sequence of refined estimates of the value of [math]\displaystyle{ f(X_1,\ldots,X_n) }[/math], gradually using more information on the values of the random variables [math]\displaystyle{ X_1,\ldots,X_n }[/math]. The first element [math]\displaystyle{ Y_0 }[/math] is just the expectation of [math]\displaystyle{ f(X_1,\ldots,X_n) }[/math]. Element [math]\displaystyle{ Y_i }[/math] is the expected value of [math]\displaystyle{ f(X_1,\ldots,X_n) }[/math] when the values of [math]\displaystyle{ X_1,\ldots,X_{i} }[/math] are known, and [math]\displaystyle{ Y_n=f(X_1,\ldots,X_n) }[/math], since [math]\displaystyle{ f(X_1,\ldots,X_n) }[/math] is fully determined once all of [math]\displaystyle{ X_1,\ldots,X_n }[/math] are known.
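
A concrete toy instance (three fair dice, with [math]\displaystyle{ f }[/math] taken to be their sum, an illustrative choice): since each unseen die contributes 3.5 in expectation, [math]\displaystyle{ Y_i }[/math] equals the observed partial sum plus 3.5 times the number of unseen dice.

 import random
 rolls = [random.randint(1, 6) for _ in range(3)]
 def doob(i):
     # Y_i = E[sum | first i dice] = seen sum + 3.5 * (dice still hidden)
     return sum(rolls[:i]) + 3.5 * (3 - i)
 print(rolls, [doob(i) for i in range(4)])
 # The sequence starts at Y_0 = 10.5 = E[f] and ends at Y_3 = f.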

The following two Doob martingales arise in evaluating the parameters of random graphs.

edge exposure martingale
Let [math]\displaystyle{ G }[/math] be a random graph on [math]\displaystyle{ n }[/math] vertices. Let [math]\displaystyle{ f }[/math] be a real-valued function of graphs, such as the chromatic number, the number of triangles, or the size of the largest clique or independent set. Denote [math]\displaystyle{ m={n\choose 2} }[/math]. Fix an arbitrary numbering of potential edges between the [math]\displaystyle{ n }[/math] vertices, and denote the edges as [math]\displaystyle{ e_1,\ldots,e_m }[/math]. Let
[math]\displaystyle{ X_i=\begin{cases} 1& \mbox{if }e_i\in G,\\ 0& \mbox{otherwise}. \end{cases} }[/math]
Let [math]\displaystyle{ Y_0=\mathbf{E}[f(G)] }[/math] and for [math]\displaystyle{ i=1,\ldots,m }[/math], let [math]\displaystyle{ Y_i=\mathbf{E}[f(G)\mid X_1,\ldots,X_i] }[/math].
The sequence [math]\displaystyle{ Y_0,Y_1,\ldots,Y_m }[/math] gives a Doob martingale that is commonly called the edge exposure martingale.
vertex exposure martingale
Instead of revealing edges one at a time, we could reveal the set of edges connected to a given vertex, one vertex at a time. Suppose that the vertex set is [math]\displaystyle{ [n] }[/math]. Let [math]\displaystyle{ X_i }[/math] be the subgraph of [math]\displaystyle{ G }[/math] induced by the vertex set [math]\displaystyle{ [i] }[/math], i.e. the first [math]\displaystyle{ i }[/math] vertices.
Let [math]\displaystyle{ Y_0=\mathbf{E}[f(G)] }[/math] and for [math]\displaystyle{ i=1,\ldots,n }[/math], let [math]\displaystyle{ Y_i=\mathbf{E}[f(G)\mid X_1,\ldots,X_i] }[/math].
The sequence [math]\displaystyle{ Y_0,Y_1,\ldots,Y_n }[/math] gives a Doob martingale that is commonly called the vertex exposure martingale.
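
For intuition, here is a sketch of a vertex exposure computation where [math]\displaystyle{ f }[/math] is simply the number of edges of [math]\displaystyle{ G(n,1/2) }[/math] (chosen because its conditional expectations are easy to write down): after exposing the subgraph induced by the first [math]\displaystyle{ i }[/math] vertices, every still-unexposed potential edge contributes [math]\displaystyle{ 1/2 }[/math] in expectation.

 import itertools, random
 n, p = 6, 0.5
 # Sample G(n, p) as a set of edges (u, v) with u < v over vertices 0..n-1.
 edges = {e for e in itertools.combinations(range(n), 2)
          if random.random() < p}
 comb2 = lambda k: k * (k - 1) // 2
 def Y(i):
     seen = sum(1 for (u, v) in edges if v < i)   # edges inside the first i vertices
     return seen + p * (comb2(n) - comb2(i))
 print([Y(i) for i in range(n + 1)])
 # Starts at p * C(n,2) = E[f(G)] and ends at the actual edge count of G.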

Chromatic number

The random graph [math]\displaystyle{ G(n,p) }[/math] is the graph on [math]\displaystyle{ n }[/math] vertices [math]\displaystyle{ [n] }[/math], obtained by selecting each pair of vertices to be an edge, randomly and independently, with probability [math]\displaystyle{ p }[/math]. We denote [math]\displaystyle{ G\sim G(n,p) }[/math] if [math]\displaystyle{ G }[/math] is generated in this way.

Theorem [Shamir and Spencer (1987)]
Let [math]\displaystyle{ G\sim G(n,p) }[/math]. Let [math]\displaystyle{ \chi(G) }[/math] be the chromatic number of [math]\displaystyle{ G }[/math]. Then
[math]\displaystyle{ \begin{align} \Pr\left[|\chi(G)-\mathbf{E}[\chi(G)]|\ge t\sqrt{n}\right]\le 2e^{-t^2/2}. \end{align} }[/math]
Proof.
Consider the vertex exposure martingale
[math]\displaystyle{ Y_i=\mathbf{E}[\chi(G)\mid X_1,\ldots,X_i] }[/math]

where each [math]\displaystyle{ X_k }[/math] exposes the induced subgraph of [math]\displaystyle{ G }[/math] on vertex set [math]\displaystyle{ [k] }[/math]. A single vertex can always be given a new color so that the graph is properly colored, thus the bounded difference condition

[math]\displaystyle{ |Y_i-Y_{i-1}|\le 1 }[/math]

is satisfied. Now apply Azuma's inequality to the martingale [math]\displaystyle{ Y_0,Y_1,\ldots,Y_n }[/math] with respect to [math]\displaystyle{ X_1,\ldots,X_n }[/math].

[math]\displaystyle{ \square }[/math]

For [math]\displaystyle{ t=\omega(1) }[/math], the theorem states that the chromatic number of a random graph is tightly concentrated around its mean. The proof gives no clue as to where the mean is. This actually shows how powerful the martingale inequalities are: we can prove that a distribution is concentrated around its expectation without actually knowing the expectation.

Hoeffding's Inequality

The following theorem states the so-called Hoeffding's inequality. It is a generalized version of the Chernoff bounds. Recall that the Chernoff bounds hold for the sum of independent 0-1 trials. When the random variables are not 0-1 valued, Hoeffding's inequality is useful, since it holds for the sum of any independent random variables whose ranges are bounded.

Hoeffding's inequality
Let [math]\displaystyle{ X=\sum_{i=1}^nX_i }[/math], where [math]\displaystyle{ X_1,\ldots,X_n }[/math] are independent random variables with [math]\displaystyle{ a_i\le X_i\le b_i }[/math] for each [math]\displaystyle{ 1\le i\le n }[/math]. Let [math]\displaystyle{ \mu=\mathbf{E}[X] }[/math]. Then
[math]\displaystyle{ \Pr[|X-\mu|\ge t]\le 2\exp\left(-\frac{t^2}{2\sum_{i=1}^n(b_i-a_i)^2}\right). }[/math]
Proof.
Define the Doob martingale sequence [math]\displaystyle{ Y_i=\mathbf{E}\left[\sum_{j=1}^n X_j\,\Big|\, X_1,\ldots,X_{i}\right] }[/math]. Obviously [math]\displaystyle{ Y_0=\mu }[/math] and [math]\displaystyle{ Y_n=X }[/math].
[math]\displaystyle{ \begin{align} |Y_i-Y_{i-1}| &= \left|\mathbf{E}\left[\sum_{j=1}^n X_j\,\Big|\, X_1,\ldots,X_{i}\right]-\mathbf{E}\left[\sum_{j=1}^n X_j\,\Big|\, X_1,\ldots,X_{i-1}\right]\right|\\ &=\left|\sum_{j=1}^i X_j+\sum_{j=i+1}^n\mathbf{E}[X_j]-\sum_{j=1}^{i-1} X_j-\sum_{j=i}^n\mathbf{E}[X_j]\right| &\quad (\mbox{independence of the }X_j)\\ &=\left|X_i-\mathbf{E}[X_{i}]\right|\\ &\le b_i-a_i. \end{align} }[/math]

Applying Azuma's inequality to the martingale [math]\displaystyle{ Y_0,\ldots,Y_n }[/math] with respect to [math]\displaystyle{ X_1,\ldots, X_n }[/math], Hoeffding's inequality is proved.

[math]\displaystyle{ \square }[/math]
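
A small empirical sketch of the bound, with arbitrarily chosen bounded (here uniform) summands; the bound is valid, though typically loose:

 import math, random
 ranges = [(0, 1), (-2, 2), (0, 3), (-1, 1)] * 25     # n = 100 variables
 mu = sum((a + b) / 2 for a, b in ranges)
 denom = 2 * sum((b - a) ** 2 for a, b in ranges)
 t, trials = 40.0, 20000
 hits = sum(
     abs(sum(random.uniform(a, b) for a, b in ranges) - mu) >= t
     for _ in range(trials))
 print(hits / trials, 2 * math.exp(-t * t / denom))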

The Bounded Difference Method

Combining Azuma's inequality with the construction of Doob martingales, we have the powerful Bounded Difference Method for concentration of measure.

For arbitrary random variables

Given a sequence of random variables [math]\displaystyle{ X_1,\ldots,X_n }[/math] and a function [math]\displaystyle{ f }[/math], the Doob sequence constructs a martingale from them. Combining this construction with Azuma's inequality, we get a very powerful theorem called the "method of averaged bounded differences", which bounds the concentration of an arbitrary function of arbitrary random variables (not necessarily independent).

Theorem (Method of averaged bounded differences)
Let [math]\displaystyle{ \boldsymbol{X}=(X_1,\ldots, X_n) }[/math] be arbitrary random variables and let [math]\displaystyle{ f }[/math] be a function of [math]\displaystyle{ X_1,\ldots, X_n }[/math] satisfying that, for all [math]\displaystyle{ 1\le i\le n }[/math],
[math]\displaystyle{ |\mathbf{E}[f(\boldsymbol{X})\mid X_1,\ldots,X_i]-\mathbf{E}[f(\boldsymbol{X})\mid X_1,\ldots,X_{i-1}]|\le c_i, }[/math]
Then
[math]\displaystyle{ \begin{align} \Pr\left[|f(\boldsymbol{X})-\mathbf{E}[f(\boldsymbol{X})]|\ge t\right]\le 2\exp\left(-\frac{t^2}{2\sum_{i=1}^nc_i^2}\right). \end{align} }[/math]
Proof.
Define the Doob martingale sequence [math]\displaystyle{ Y_0,Y_1,\ldots,Y_n }[/math] by setting [math]\displaystyle{ Y_0=\mathbf{E}[f(X_1,\ldots,X_n)] }[/math] and, for [math]\displaystyle{ 1\le i\le n }[/math], [math]\displaystyle{ Y_i=\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_i] }[/math]. Then the above theorem is a restatement of Azuma's inequality applied to [math]\displaystyle{ Y_0,Y_1,\ldots,Y_n }[/math].
[math]\displaystyle{ \square }[/math]

For independent random variables

The condition of bounded averaged differences is usually hard to check. This severely limits the usefulness of the method. To overcome this, we introduce a property which is much easier to check, called the Lipschitz condition.

Definition (Lipschitz condition)
A function [math]\displaystyle{ f(x_1,\ldots,x_n) }[/math] satisfies the Lipschitz condition, if for any [math]\displaystyle{ x_1,\ldots,x_n }[/math] and any [math]\displaystyle{ y_i }[/math],
[math]\displaystyle{ \begin{align} |f(x_1,\ldots,x_{i-1},x_i,x_{i+1},\ldots,x_n)-f(x_1,\ldots,x_{i-1},y_i,x_{i+1},\ldots,x_n)|\le 1. \end{align} }[/math]

In other words, the function satisfies the Lipschitz condition if an arbitrary change in the value of any one argument does not change the value of the function by more than 1.

The difference of 1 can be replaced by arbitrary constants, which gives a generalized version of the Lipschitz condition.

Definition (Lipschitz condition, general version)
A function [math]\displaystyle{ f(x_1,\ldots,x_n) }[/math] satisfies the Lipschitz condition with constants [math]\displaystyle{ c_i }[/math], [math]\displaystyle{ 1\le i\le n }[/math], if for any [math]\displaystyle{ x_1,\ldots,x_n }[/math] and any [math]\displaystyle{ y_i }[/math],
[math]\displaystyle{ \begin{align} |f(x_1,\ldots,x_{i-1},x_i,x_{i+1},\ldots,x_n)-f(x_1,\ldots,x_{i-1},y_i,x_{i+1},\ldots,x_n)|\le c_i. \end{align} }[/math]

The following "method of bounded differences" can be developed for functions satisfying the Lipschitz condition. Unfortunately, in order to imply the condition of averaged bounded differences from the Lipschitz condition, we have to restrict the method to independent random variables.

Corollary (Method of bounded differences)
Let [math]\displaystyle{ \boldsymbol{X}=(X_1,\ldots, X_n) }[/math] be [math]\displaystyle{ n }[/math] independent random variables and let [math]\displaystyle{ f }[/math] be a function satisfying the Lipschitz condition with constants [math]\displaystyle{ c_i }[/math], [math]\displaystyle{ 1\le i\le n }[/math]. Then
[math]\displaystyle{ \begin{align} \Pr\left[|f(\boldsymbol{X})-\mathbf{E}[f(\boldsymbol{X})]|\ge t\right]\le 2\exp\left(-\frac{t^2}{2\sum_{i=1}^nc_i^2}\right). \end{align} }[/math]
Proof.
For convenience, we denote [math]\displaystyle{ \boldsymbol{X}_{[i,j]}=(X_i,X_{i+1},\ldots, X_j) }[/math] for any [math]\displaystyle{ 1\le i\le j\le n }[/math].

We first show that the Lipschitz condition with constants [math]\displaystyle{ c_i }[/math], [math]\displaystyle{ 1\le i\le n }[/math], implies another condition called the averaged Lipschitz condition (ALC): for any [math]\displaystyle{ a_i,b_i }[/math], [math]\displaystyle{ 1\le i\le n }[/math],

[math]\displaystyle{ \left|\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a_i\right]-\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=b_i\right]\right|\le c_i. }[/math]

And this condition implies the averaged bounded difference condition: for all [math]\displaystyle{ 1\le i\le n }[/math],

[math]\displaystyle{ \left|\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}\right]-\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]}\right]\right|\le c_i. }[/math]

Then by applying the method of averaged bounded differences, the corollary can be proved.

For any [math]\displaystyle{ a }[/math], by the law of total expectation,

[math]\displaystyle{ \begin{align} &\quad\, \mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a\right]\\ &=\sum_{a_{i+1},\ldots,a_n}\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a, \boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right]\cdot\Pr\left[\boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\mid \boldsymbol{X}_{[1,i-1]},X_i=a\right]\\ &=\sum_{a_{i+1},\ldots,a_n}\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a, \boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right]\cdot\Pr\left[\boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right] \qquad (\mbox{independence})\\ &= \sum_{a_{i+1},\ldots,a_n} f(\boldsymbol{X}_{[1,i-1]},a,\boldsymbol{a}_{[i+1,n]})\cdot\Pr\left[\boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right]. \end{align} }[/math]

Setting [math]\displaystyle{ a=a_i }[/math] and [math]\displaystyle{ a=b_i }[/math] respectively, and taking the difference, we obtain

[math]\displaystyle{ \begin{align} &\quad\, \left|\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a_i\right]-\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=b_i\right]\right|\\ &=\left|\sum_{a_{i+1},\ldots,a_n}\left(f(\boldsymbol{X}_{[1,i-1]},a_i,\boldsymbol{a}_{[i+1,n]})-f(\boldsymbol{X}_{[1,i-1]},b_i,\boldsymbol{a}_{[i+1,n]})\right)\Pr\left[\boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right]\right|\\ &\le \sum_{a_{i+1},\ldots,a_n}\left|f(\boldsymbol{X}_{[1,i-1]},a_i,\boldsymbol{a}_{[i+1,n]})-f(\boldsymbol{X}_{[1,i-1]},b_i,\boldsymbol{a}_{[i+1,n]})\right|\Pr\left[\boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right]\\ &\le \sum_{a_{i+1},\ldots,a_n}c_i\Pr\left[\boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right] \qquad (\mbox{Lipschitz condition})\\ &=c_i. \end{align} }[/math]

Thus, the Lipschitz condition is transformed to the ALC. We then deduce the averaged bounded difference condition from ALC.

By the law of total expectation,

[math]\displaystyle{ \mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]}\right]=\sum_{a}\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a\right]\cdot\Pr[X_i=a\mid \boldsymbol{X}_{[1,i-1]}]. }[/math]

We can trivially write [math]\displaystyle{ \mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}\right] }[/math] as

[math]\displaystyle{ \mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}\right]=\sum_{a}\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}\right]\cdot\Pr\left[X_i=a\mid \boldsymbol{X}_{[1,i-1]}\right]. }[/math]

Hence, the difference is

[math]\displaystyle{ \begin{align} &\quad \left|\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}\right]-\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]}\right]\right|\\ &=\left|\sum_{a}\left(\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}\right]-\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a\right]\right)\cdot\Pr\left[X_i=a\mid \boldsymbol{X}_{[1,i-1]}\right]\right| \\ &\le \sum_{a}\left|\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}\right]-\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a\right]\right|\cdot\Pr\left[X_i=a\mid \boldsymbol{X}_{[1,i-1]}\right] \\ &\le \sum_a c_i\Pr\left[X_i=a\mid \boldsymbol{X}_{[1,i-1]}\right] \qquad (\mbox{due to ALC})\\ &=c_i. \end{align} }[/math]

The averaged bounded difference condition is implied. Applying the method of averaged bounded differences, the corollary follows.

[math]\displaystyle{ \square }[/math]
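
In applications one often just evaluates the resulting tail bound. A trivial helper sketch:

 import math
 def bounded_difference_tail(cs, t):
     # Pr[|f(X) - E f(X)| >= t] <= 2 exp(-t^2 / (2 sum c_i^2)),
     # for f Lipschitz with constants cs on independent inputs.
     return min(1.0, 2 * math.exp(-t * t / (2 * sum(c * c for c in cs))))
 print(bounded_difference_tail([1.0] * 100, 30.0))   # 2e^{-4.5}, about 0.022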

Applications

Occupancy problem

Throwing [math]\displaystyle{ m }[/math] balls uniformly and independently at random into [math]\displaystyle{ n }[/math] bins, we ask about the occupancy of the bins. In particular, we are interested in the number of empty bins.

This problem can be described equivalently as follows. Let [math]\displaystyle{ f:[m]\rightarrow[n] }[/math] be a uniform random function from [math]\displaystyle{ [m] }[/math] to [math]\displaystyle{ [n] }[/math]. We ask for the number of [math]\displaystyle{ i\in[n] }[/math] for which [math]\displaystyle{ f^{-1}(i) }[/math] is empty.

For any [math]\displaystyle{ i\in[n] }[/math], let [math]\displaystyle{ X_i }[/math] indicate the emptiness of bin [math]\displaystyle{ i }[/math]. Let [math]\displaystyle{ X=\sum_{i=1}^nX_i }[/math] be the number of empty bins.

[math]\displaystyle{ \mathbf{E}[X_i]=\Pr[\mbox{bin }i\mbox{ is empty}]=\left(1-\frac{1}{n}\right)^m. }[/math]

By the linearity of expectation,

[math]\displaystyle{ \mathbf{E}[X]=\sum_{i=1}^n\mathbf{E}[X_i]=n\left(1-\frac{1}{n}\right)^m. }[/math]

We want to know how [math]\displaystyle{ X }[/math] deviates from this expectation. The complication here is that the [math]\displaystyle{ X_i }[/math] are not independent. So we alternatively look at a sequence of independent random variables [math]\displaystyle{ Y_1,\ldots, Y_m }[/math], where [math]\displaystyle{ Y_j\in[n] }[/math] represents the bin into which the [math]\displaystyle{ j }[/math]th ball falls. Clearly [math]\displaystyle{ X }[/math] is a function of [math]\displaystyle{ Y_1,\ldots, Y_m }[/math].

We then observe that changing the value of any [math]\displaystyle{ Y_i }[/math] can change the value of [math]\displaystyle{ X }[/math] by at most 1, because one ball can affect the emptiness of at most one bin. Thus as a function of the independent random variables [math]\displaystyle{ Y_1,\ldots, Y_m }[/math], [math]\displaystyle{ X }[/math] satisfies the Lipschitz condition. Applying the method of bounded differences, it holds that

[math]\displaystyle{ \Pr\left[\left|X-n\left(1-\frac{1}{n}\right)^m\right|\ge t\sqrt{m}\right]=\Pr[|X-\mathbf{E}[X]|\ge t\sqrt{m}]\le 2e^{-t^2/2}. }[/math]

Thus, for sufficiently large [math]\displaystyle{ n }[/math] and [math]\displaystyle{ m }[/math], the number of empty bins is tightly concentrated around [math]\displaystyle{ n\left(1-\frac{1}{n}\right)^m\approx \frac{n}{e^{m/n}} }[/math].
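
A quick simulation sketch of this concentration (all parameters are arbitrary choices):

 import math, random
 n, m, trials, t = 1000, 2000, 2000, 2.0
 expected = n * (1 - 1 / n) ** m          # about n * e^{-m/n}
 hits = 0
 for _ in range(trials):
     occupied = {random.randrange(n) for _ in range(m)}   # bins hit by m balls
     if abs((n - len(occupied)) - expected) >= t * math.sqrt(m):
         hits += 1
 print(expected, hits / trials, 2 * math.exp(-t * t / 2))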

Pattern Matching

Let [math]\displaystyle{ \boldsymbol{X}=(X_1,\ldots,X_n) }[/math] be a sequence of characters chosen independently and uniformly at random from an alphabet [math]\displaystyle{ \Sigma }[/math], where [math]\displaystyle{ m=|\Sigma| }[/math]. Let [math]\displaystyle{ \pi\in\Sigma^k }[/math] be an arbitrarily fixed string of [math]\displaystyle{ k }[/math] characters from [math]\displaystyle{ \Sigma }[/math], called a pattern. Let [math]\displaystyle{ Y }[/math] be the number of occurrences of the pattern [math]\displaystyle{ \pi }[/math] as a substring of the random string [math]\displaystyle{ X }[/math].

By the linearity of expectation, it is obvious that

[math]\displaystyle{ \mathbf{E}[Y]=(n-k+1)\left(\frac{1}{m}\right)^k. }[/math]

We now look at the concentration of [math]\displaystyle{ Y }[/math]. The complication again lies in the dependencies between the matches. Yet we will see that [math]\displaystyle{ Y }[/math] is tightly concentrated around its expectation if [math]\displaystyle{ k }[/math] is relatively small compared to [math]\displaystyle{ n }[/math].

For a fixed pattern [math]\displaystyle{ \pi }[/math], the random variable [math]\displaystyle{ Y }[/math] is a function of the independent random variables [math]\displaystyle{ (X_1,\ldots,X_n) }[/math]. Any character [math]\displaystyle{ X_i }[/math] participates in no more than [math]\displaystyle{ k }[/math] matches, thus changing the value of any [math]\displaystyle{ X_i }[/math] can affect the value of [math]\displaystyle{ Y }[/math] by at most [math]\displaystyle{ k }[/math], so [math]\displaystyle{ Y }[/math] satisfies the Lipschitz condition with constant [math]\displaystyle{ k }[/math]. Applying the method of bounded differences,

[math]\displaystyle{ \Pr\left[\left|Y-\frac{n-k+1}{m^k}\right|\ge tk\sqrt{n}\right]=\Pr\left[\left|Y-\mathbf{E}[Y]\right|\ge tk\sqrt{n}\right]\le 2e^{-t^2/2}. }[/math]
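A simulation sketch with a binary alphabet (the alphabet, pattern, and sample sizes are arbitrary choices):

 import math, random
 sigma, pattern, n, trials, t = 'ab', 'ab', 500, 5000, 2.0
 k, m = len(pattern), len(sigma)
 expected = (n - k + 1) / m ** k
 hits = 0
 for _ in range(trials):
     s = ''.join(random.choice(sigma) for _ in range(n))
     y = sum(s[i:i + k] == pattern for i in range(n - k + 1))   # occurrences
     if abs(y - expected) >= t * k * math.sqrt(n):
         hits += 1
 print(hits / trials, 2 * math.exp(-t * t / 2))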

Combining unit vectors

Let [math]\displaystyle{ u_1,\ldots,u_n }[/math] be [math]\displaystyle{ n }[/math] unit vectors from some normed space. That is, [math]\displaystyle{ \|u_i\|=1 }[/math] for any [math]\displaystyle{ 1\le i\le n }[/math], where [math]\displaystyle{ \|\cdot\| }[/math] denotes the vector norm (e.g. [math]\displaystyle{ \ell_1,\ell_2,\ell_\infty }[/math]) of the space.

Let [math]\displaystyle{ \epsilon_1,\ldots,\epsilon_n\in\{-1,+1\} }[/math] be independently chosen and [math]\displaystyle{ \Pr[\epsilon_i=-1]=\Pr[\epsilon_i=1]=1/2 }[/math].

Let

[math]\displaystyle{ v=\epsilon_1u_1+\cdots+\epsilon_nu_n, }[/math]

and

[math]\displaystyle{ X=\|v\|. }[/math]

This kind of construction is very useful in combinatorial proofs of metric problems. We will show that by this construction, the random variable [math]\displaystyle{ X }[/math] is well concentrated around its mean.

[math]\displaystyle{ X }[/math] is a function of the independent random variables [math]\displaystyle{ \epsilon_1,\ldots,\epsilon_n }[/math]. By the triangle inequality for norms, it is easy to verify that flipping the sign of a unit vector [math]\displaystyle{ u_i }[/math] can change the value of [math]\displaystyle{ X }[/math] by at most 2, thus [math]\displaystyle{ X }[/math] satisfies the Lipschitz condition with constant 2. The concentration result follows by applying the method of bounded differences:

[math]\displaystyle{ \Pr[|X-\mathbf{E}[X]|\ge 2t\sqrt{n}]\le 2e^{-t^2/2}. }[/math]
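
To illustrate, the following sketch takes hypothetical [math]\displaystyle{ \ell_2 }[/math] unit vectors (points on the unit circle, an arbitrary choice) and estimates the deviation probability; the empirical mean stands in for [math]\displaystyle{ \mathbf{E}[X] }[/math]:

 import math, random
 n, trials, t = 100, 5000, 2.0
 us = [(math.cos(2 * math.pi * i / n), math.sin(2 * math.pi * i / n))
       for i in range(n)]
 def norm_of_signed_sum():
     eps = [random.choice((-1, 1)) for _ in range(n)]
     vx = sum(e * x for e, (x, y) in zip(eps, us))
     vy = sum(e * y for e, (x, y) in zip(eps, us))
     return math.hypot(vx, vy)   # the l2 norm of v
 xs = [norm_of_signed_sum() for _ in range(trials)]
 mean = sum(xs) / trials
 print(sum(abs(x - mean) >= 2 * t * math.sqrt(n) for x in xs) / trials,
       2 * math.exp(-t * t / 2))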