Combinatorics (Fall 2017)/Extremal set theory and Advanced Algorithms (Fall 2018)/Concentration of measure

== Sunflowers ==
A set system is a '''sunflower''' if all its member sets intersect at the same set of elements.
{{Theorem|Definition (sunflower)|
: A set family <math>\mathcal{F}\subseteq 2^X</math> is a '''sunflower''' of size <math>r</math> with a '''core''' <math>C\subseteq X</math> if
::<math>\forall S,T\in\mathcal{F}</math> with <math>S\neq T</math>, <math>S\cap T=C</math>.
}}
Note that we do not require the core to be nonempty, thus a family of disjoint sets is also a sunflower (with the core <math>\emptyset</math>).

= Conditional Expectations =
The '''conditional expectation''' of a random variable <math>Y</math> with respect to an event <math>\mathcal{E}</math> is defined by
:<math>
\mathbf{E}[Y\mid \mathcal{E}]=\sum_{y}y\Pr[Y=y\mid\mathcal{E}].
</math>
In particular, if the event <math>\mathcal{E}</math> is <math>X=a</math>, the conditional expectation
:<math>
\mathbf{E}[Y\mid X=a]
</math>
defines a function
:<math>
f(a)=\mathbf{E}[Y\mid X=a].
</math>
Thus, <math>\mathbf{E}[Y\mid X]</math> can be regarded as a random variable <math>f(X)</math>.


;Example
:Suppose that we uniformly sample a human from all human beings. Let <math>Y</math> be his/her height, and let <math>X</math> be the country where he/she is from. For any country <math>a</math>, <math>\mathbf{E}[Y\mid X=a]</math> gives the average height of that country. And <math>\mathbf{E}[Y\mid X]</math> is the random variable which can be defined in either of the following ways:
:* We choose a human uniformly at random from all human beings, and <math>\mathbf{E}[Y\mid X]</math> is the average height of the country where he/she comes from.
:* We choose a country at random with a probability ''proportional to its population'', and <math>\mathbf{E}[Y\mid X]</math> is the average height of the chosen country.

The following proposition states some fundamental facts about conditional expectation.

{{Theorem
|Proposition (fundamental facts about conditional expectation)|
:Let <math>X,Y</math> and <math>Z</math> be arbitrary random variables. Let <math>f</math> and <math>g</math> be arbitrary functions. Then
:# <math>\mathbf{E}[X]=\mathbf{E}[\mathbf{E}[X\mid Y]]</math>.
:# <math>\mathbf{E}[X\mid Z]=\mathbf{E}[\mathbf{E}[X\mid Y,Z]\mid Z]</math>.
:# <math>\mathbf{E}[\mathbf{E}[f(X)g(X,Y)\mid X]]=\mathbf{E}[f(X)\cdot \mathbf{E}[g(X,Y)\mid X]]</math>.
}}
The proposition can be formally verified by computing these expectations. Although these equations look formal, their intuitive interpretations are very clear.

The next result, due to Erdős and Rado and called the sunflower lemma, is a famous result in extremal set theory, and has some important applications in Boolean circuit complexity.
{{Theorem|Sunflower Lemma (Erdős-Rado)|
:Let <math>\mathcal{F}\subseteq {X\choose k}</math>. If <math>|\mathcal{F}|>k!(r-1)^k</math>, then <math>\mathcal{F}</math> contains a sunflower of size  <math>r</math>.
}}
{{Proof|
We proceed by induction on <math>k</math>. For <math>k=1</math>, <math>\mathcal{F}\subseteq{X\choose 1}</math>, thus all sets in <math>\mathcal{F}</math> are disjoint. And since <math>|\mathcal{F}|>r-1</math>, we can choose <math>r</math> of these sets and form a sunflower.

Now let <math>k\ge 2</math> and assume the lemma holds for all smaller <math>k</math>. Take a maximal family <math>\mathcal{G}\subseteq \mathcal{F}</math> whose members are disjoint, i.e. for any <math>S,T\in \mathcal{G}</math> with <math>S\neq T</math>, <math>S\cap T=\emptyset</math>.

If <math>|\mathcal{G}|\ge r</math>, then <math>\mathcal{G}</math> is a sunflower of size at least <math>r</math> and we are done.

Assume that <math>|\mathcal{G}|\le r-1</math>, and let <math>Y=\bigcup_{S\in\mathcal{G}}S</math>. Then <math>|Y|=k|\mathcal{G}|\le k(r-1)</math> (since all members of <math>\mathcal{G}</math> are disjoint). We claim that <math>Y</math> intersects all members of <math>\mathcal{F}</math>: otherwise, there exists an <math>S\in\mathcal{F}</math> such that <math>S\cap Y=\emptyset</math>, and then we can enlarge <math>\mathcal{G}</math> by adding <math>S</math> into <math>\mathcal{G}</math> and still have all members of <math>\mathcal{G}</math> disjoint, which contradicts the assumption that <math>\mathcal{G}</math> is maximal among such families.

By the pigeonhole principle, some element <math>y\in Y</math> must be contained in at least
:<math>\frac{|\mathcal{F}|}{|Y|}>\frac{k!(r-1)^k}{k(r-1)}=(k-1)!(r-1)^{k-1}</math>
members of <math>\mathcal{F}</math>. We delete this <math>y</math> from these sets and consider the family
:<math>\mathcal{H}=\{S\setminus\{y\}\mid S\in\mathcal{F}\wedge y\in S\}</math>.
We have <math>\mathcal{H}\subseteq {X\choose k-1}</math> and <math>|\mathcal{H}|>(k-1)!(r-1)^{k-1}</math>, thus by the induction hypothesis, <math>\mathcal{H}</math> contains a sunflower of size <math>r</math>. Adding <math>y</math> to the members of this sunflower, we get the desired sunflower in the original family <math>\mathcal{F}</math>.
}}


==The Erdős–Ko–Rado Theorem ==
A set family <math>\mathcal{F}\subseteq 2^X</math> is called '''intersecting''', if for any <math>S,T\in\mathcal{F}</math>, <math>S\cap T\neq\emptyset</math>. A natural question of extremal flavor is: "how large can an intersecting family be?"

Assume <math>|X|=n</math>. When <math>n<2k</math>, every pair of <math>k</math>-subsets of <math>X</math> intersects. So the non-trivial case is when <math>n\ge 2k</math>. The famous Erdős–Ko–Rado theorem gives the largest possible cardinality of a nontrivially intersecting family.

According to Erdős, the theorem itself was proved in 1938, but was not published until 23 years later.
{{Theorem|Erdős–Ko–Rado theorem (proved in 1938, published in 1961)|
:Let <math>\mathcal{F}\subseteq {X\choose k}</math> where <math>|X|=n</math> and <math>n\ge 2k</math>. If <math>\mathcal{F}</math> is intersecting, then
::<math>|\mathcal{F}|\le{n-1\choose k-1}</math>.
}}

The first equation:
:<math>\mathbf{E}[X]=\mathbf{E}[\mathbf{E}[X\mid Y]]</math>
says that there are two ways to compute an average. Suppose again that <math>X</math> is the height of a uniform random human and <math>Y</math> is the country where he/she is from. There are two ways to compute the average human height: one is to directly average over the heights of all humans; the other is to first compute the average height for each country, and then average over these heights weighted by the populations of the countries.

The second equation:
:<math>\mathbf{E}[X\mid Z]=\mathbf{E}[\mathbf{E}[X\mid Y,Z]\mid Z]</math>
is the same as the first one, restricted to a particular subspace. As in the previous example, in addition to the height <math>X</math> and the country <math>Y</math>, let <math>Z</math> be the gender of the individual. Thus, <math>\mathbf{E}[X\mid Z]</math> is the average height of a human being of a given sex. Again, this can be computed either directly or on a country-by-country basis.

The third equation:
:<math>\mathbf{E}[\mathbf{E}[f(X)g(X,Y)\mid X]]=\mathbf{E}[f(X)\cdot \mathbf{E}[g(X,Y)\mid X]]</math>
looks obscure at first glance, especially when considering that <math>X</math> and <math>Y</math> are not necessarily independent. Nevertheless, the equation follows from the simple fact that conditioning on any <math>X=a</math>, the function value <math>f(X)=f(a)</math> becomes a constant, and thus can be safely taken outside the expectation due to the linearity of expectation. For any value <math>X=a</math>,
:<math>
\mathbf{E}[f(X)g(X,Y)\mid X=a]=\mathbf{E}[f(a)g(X,Y)\mid X=a]=f(a)\cdot \mathbf{E}[g(X,Y)\mid X=a].
</math>
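The three identities can also be checked numerically. The following is a small Python sketch (not part of the original notes; the joint distribution is an arbitrary illustrative example) that verifies the first identity <math>\mathbf{E}[X]=\mathbf{E}[\mathbf{E}[X\mid Y]]</math> on a hand-made finite distribution.
<pre>
# Numerical sanity check of E[X] = E[E[X | Y]] on a small, hand-made joint
# distribution.  The probabilities below are arbitrary illustrative numbers.
joint = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.15, (2, 0): 0.05, (2, 1): 0.2}

xs = sorted({x for x, _ in joint})
ys = sorted({y for _, y in joint})

# Direct computation of E[X].
e_x = sum(x * p for (x, _), p in joint.items())

# E[E[X | Y]]: average the conditional expectations weighted by Pr[Y = y].
e_of_cond = 0.0
for y in ys:
    p_y = sum(joint[(x, y)] for x in xs)
    cond = sum(x * joint[(x, y)] for x in xs) / p_y
    e_of_cond += cond * p_y

print(e_x, e_of_cond)      # both print the same value
assert abs(e_x - e_of_cond) < 1e-12
</pre>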


The proposition also holds in more general cases, when <math>X, Y</math> and <math>Z</math> are sequences of random variables.

= Martingales =
"Martingale" originally refers to a betting strategy in which the gambler doubles his bet after every loss. Assuming unlimited wealth, this strategy is guaranteed to eventually have a positive net profit. For example, starting from an initial stake 1, after <math>n</math> losses, if the <math>(n+1)</math>th bet wins, then it gives a net profit of
:<math>
2^n-\sum_{i=1}^{n}2^{i-1}=1,
</math>
which is a positive number.

=== Katona's proof ===
We first introduce a proof discovered by Katona in 1972. The proof uses double counting.

Let <math>\pi</math> be a '''cyclic permutation''' of <math>X</math>, that is, we think of arranging <math>X</math> in a circle and ignore the rotations of the circle. It is easy to see that there are <math>(n-1)!</math> cyclic permutations of an <math>n</math>-set (each cyclic permutation corresponds to <math>n</math> permutations). Let
:<math>\mathcal{G}_\pi=\{\{\pi_{(i+j)\bmod n}\mid j\in[k]\}\mid i\in [n]\}</math>.


The next lemma states the following observation: in a circle of <math>n</math> points, supposing <math>n\ge 2k</math>, there can be at most <math>k</math> arcs, each consisting of <math>k</math> points, such that every pair of arcs share at least one point.
{{Theorem|Lemma|
:Let <math>\mathcal{F}\subseteq {X\choose k}</math> where <math>|X|=n</math> and <math>n\ge 2k</math>. If <math>\mathcal{F}</math> is intersecting, then for any cyclic permutation <math>\pi</math> of <math>X</math>, it holds that <math>|\mathcal{G}_\pi\cap\mathcal{F}|\le k</math>.
}}
{{Proof|
Fix a cyclic permutation <math>\pi</math> of <math>X</math>. Let <math>A_i=\{\pi_{(i+j+n)\bmod n}\mid j\in[k]\}</math>. Then <math>\mathcal{G}_\pi</math> can be written as <math>\mathcal{G}_\pi=\{A_i\mid i\in [n]\}</math>.

Suppose that <math>A_t\in\mathcal{F}</math>. Since <math>\mathcal{F}</math> is intersecting, the only sets <math>A_i</math> that can be in <math>\mathcal{F}</math> other than <math>A_t</math> itself are the <math>2k-2</math> sets <math>A_i</math> with <math>t-(k-1)\le i\le t+k-1, i\neq t</math>. We partition these sets into <math>k-1</math> pairs <math>\{A_i,A_{i+k}\}</math>, where <math>t-(k-1)\le i\le t-1</math>.

Note that for <math>n\ge 2k</math>, it holds that <math>A_i\cap A_{i+k}=\emptyset</math>. Since <math>\mathcal{F}</math> is intersecting, <math>\mathcal{F}</math> can contain at most one set of each such pair. The lemma follows.
}}

However, the assumption of unlimited wealth is unrealistic. For limited wealth, with geometrically increasing bets, it is very likely to end up bankrupt. You should never try this strategy in real life.

Suppose that the gambler is allowed to use any strategy. His stake on the next bet is decided based on the results of all the bets so far. This gives us a highly dependent sequence of random variables <math>X_0,X_1,\ldots</math>, where <math>X_0</math> is his initial capital, and <math>X_i</math> represents his capital after the <math>i</math>th bet. Up to different betting strategies, <math>X_i</math> can be arbitrarily dependent on <math>X_0,\ldots,X_{i-1}</math>. However, as long as the game is fair, namely, winning and losing with equal chances, conditioning on the past variables <math>X_0,\ldots,X_{i-1}</math>, we will expect no change in the value of the present variable <math>X_{i}</math> on average. Random variables satisfying this property are called a '''martingale''' sequence.

{{Theorem
|Definition (martingale)|
:A sequence of random variables <math>X_0,X_1,\ldots</math> is a '''martingale''' if for all <math>i> 0</math>,
:: <math>\begin{align}
\mathbf{E}[X_{i}\mid X_0,\ldots,X_{i-1}]=X_{i-1}.
\end{align}</math>
}}


Katona's proof of the Erdős–Ko–Rado theorem proceeds by counting in two ways the pairs of a member <math>S</math> of <math>\mathcal{F}</math> and a cyclic permutation <math>\pi</math> which contains <math>S</math> as a continuous path on the circle (i.e., an arc).
==Examples ==
;coin flips
:A fair coin is flipped for a number of times. Let <math>Z_j\in\{-1,1\}</math> denote the outcome of the <math>j</math>th flip. Let
::<math>X_0=0\quad \mbox{ and } \quad X_i=\sum_{j\le i}Z_j</math>.
:The random variables <math>X_0,X_1,\ldots</math> define a martingale.
;Proof
:We first observe that <math>\mathbf{E}[X_i\mid X_0,\ldots,X_{i-1}] = \mathbf{E}[X_i\mid X_{i-1}]</math>, which intuitively says that the distribution of the next partial sum depends only on the current partial sum. This property is also called the '''Markov property''' in stochastic processes.
::<math>
\begin{align}
\mathbf{E}[X_i\mid X_0,\ldots,X_{i-1}]
&= \mathbf{E}[X_i\mid X_{i-1}]\\
&= \mathbf{E}[X_{i-1}+Z_{i}\mid X_{i-1}]\\
&= \mathbf{E}[X_{i-1}\mid X_{i-1}]+\mathbf{E}[Z_{i}\mid X_{i-1}]\\
&= X_{i-1}+\mathbf{E}[Z_{i}\mid X_{i-1}]\\
&= X_{i-1}+\mathbf{E}[Z_{i}] &\quad (\mbox{independence of coin flips})\\
&= X_{i-1}
\end{align}
</math>
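The same computation can be verified by brute-force enumeration. The following Python sketch (illustrative only; the number of flips <math>n</math> is an arbitrary small choice) checks exactly that <math>\mathbf{E}[X_i\mid X_{i-1}]=X_{i-1}</math> for the coin-flip sums.
<pre>
# Brute-force check that E[X_i | X_{i-1}] = X_{i-1} for the coin-flip sums:
# enumerate all equally likely flip sequences, group by the value of X_{i-1},
# and compare the group average of X_i with that value.
from itertools import product
from collections import defaultdict

n = 10   # number of flips; small enough to enumerate all 2^n sequences

for i in range(1, n + 1):
    groups = defaultdict(list)          # value of X_{i-1} -> observed values of X_i
    for seq in product([-1, 1], repeat=i):
        groups[sum(seq[:i - 1])].append(sum(seq))
    for v, vals in groups.items():
        assert abs(sum(vals) / len(vals) - v) < 1e-12
print("E[X_i | X_{i-1}] = X_{i-1} verified for all i up to", n)
</pre>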


;Polya's urn scheme
: Consider an urn (just a container) that initially contains <math>b</math> black balls and <math>w</math> white balls. At each step, we uniformly select a ball from the urn, and replace the ball with <math>c</math> balls of the same color. Let <math>X_0=b/(b+w)</math>, and <math>X_i</math> be the fraction of black balls in the urn after the <math>i</math>th step. The sequence <math>X_0,X_1,\ldots</math> is a martingale.

;edge exposure in a random graph
:Consider a '''random graph''' <math>G</math> generated as follows. Let <math>[n]</math> be the set of vertices, and let <math>[m]={[n]\choose 2}</math> be the set of all possible edges. For convenience, we enumerate these potential edges by <math>e_1,\ldots, e_m</math>. For each potential edge <math>e_j</math>, we independently flip a fair coin to decide whether the edge <math>e_j</math> appears in <math>G</math>. Let <math>I_j</math> be the random variable that indicates whether <math>e_j\in G</math>. We are interested in some graph-theoretical parameter, say [http://mathworld.wolfram.com/ChromaticNumber.html chromatic number], of the random graph <math>G</math>. Let <math>\chi(G)</math> be the chromatic number of <math>G</math>. Let <math>X_0=\mathbf{E}[\chi(G)]</math>, and for each <math>i\ge 1</math>, let <math>X_i=\mathbf{E}[\chi(G)\mid I_1,\ldots,I_{i}]</math>, namely, the expected chromatic number of the random graph after fixing the first <math>i</math> edges. This process is called the edge exposure of a random graph, as we "expose" the edges one by one in a random graph.

It is nontrivial to formally verify that the edge exposure sequence for a random graph is a martingale. However, we will later see that this construction can be put into a more general context.

== Generalizations ==
The martingale can be generalized to be with respect to another sequence of random variables.
{{Theorem
|Definition (martingale, general version)|
:A sequence of random variables <math>Y_0,Y_1,\ldots</math> is a martingale with respect to the sequence <math>X_0,X_1,\ldots</math> if, for all <math>i\ge 0</math>, the following conditions hold:
:* <math>Y_i</math> is a function of <math>X_0,X_1,\ldots,X_i</math>;
:* <math>\begin{align}
\mathbf{E}[Y_{i+1}\mid X_0,\ldots,X_{i}]=Y_{i}.
\end{align}</math>
}}
Therefore, a sequence <math>X_0,X_1,\ldots</math> is a martingale if it is a martingale with respect to itself.

{{Prooftitle|Katona's proof of Erdős–Ko–Rado theorem|(double counting)
Let
:<math>\mathcal{R}=\{(S,\pi)\mid \pi \text{ is a cyclic permutation of }X, \text{and }S\in\mathcal{F}\cap\mathcal{G}_\pi\}</math>.
We count <math>\mathcal{R}</math> in two ways.

First, due to the lemma, <math>|\mathcal{F}\cap\mathcal{G}_\pi|\le k</math> for any cyclic permutation <math>\pi</math>. There are <math>(n-1)!</math> cyclic permutations in total. Thus,
:<math>|\mathcal{R}|=\sum_{\text{cyclic }\pi}|\mathcal{F}\cap\mathcal{G}_\pi|\le k(n-1)!</math>.

Next, for each <math>S\in\mathcal{F}</math>, the number of cyclic permutations <math>\pi</math> in which <math>S</math> is continuous is <math>|S|!(n-|S|)!=k!(n-k)!</math>. Thus,
:<math>|\mathcal{R}|=\sum_{S\in\mathcal{F}}k!(n-k)!=|\mathcal{F}|k!(n-k)!</math>.

Altogether, we have
:<math>|\mathcal{F}|\le\frac{k(n-1)!}{k!(n-k)!}=\frac{(n-1)!}{(k-1)!(n-k)!}={n-1\choose k-1}</math>.
}}
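For small parameters, the lemma used in Katona's proof can be checked by brute force. The following Python sketch (an illustrative check under arbitrarily chosen small <math>n,k</math>, not part of the original proof) enumerates all subfamilies of the <math>n</math> arcs and confirms that any pairwise-intersecting subfamily has at most <math>k</math> members.
<pre>
# Brute-force check of the lemma: among the n arcs of k consecutive points on a
# cycle of n >= 2k points, every pairwise-intersecting subfamily has at most k
# members.  Small, arbitrarily chosen parameters; illustrative only.
from itertools import combinations

def largest_intersecting_arc_family(n, k):
    arcs = [frozenset((i + j) % n for j in range(k)) for i in range(n)]
    best = 0
    for r in range(1, n + 1):
        for sub in combinations(arcs, r):
            if all(a & b for a, b in combinations(sub, 2)):
                best = max(best, r)
    return best

for n, k in [(6, 3), (8, 3), (8, 4)]:
    m = largest_intersecting_arc_family(n, k)
    assert m <= k
    print(f"n={n}, k={k}: largest intersecting family of arcs has {m} members")
</pre>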


=== Erdős' shifting technique ===
We now introduce the original proof of the Erdős–Ko–Rado theorem, which uses a technique called '''shifting''' (originally called '''compression''').

Without loss of generality, we assume <math>X=[n]</math>, and restate the Erdős–Ko–Rado theorem as follows.
{{Theorem|Erdős–Ko–Rado theorem|
:Let <math>\mathcal{F}\subseteq {[n]\choose k}</math> and <math>n\ge 2k</math>. If <math>\mathcal{F}</math> is intersecting, then <math>|\mathcal{F}|\le{n-1\choose k-1}</math>.
}}

The purpose of this generalization is that we are usually more interested in a function of a sequence of random variables than in the sequence itself.

=Azuma's Inequality=
We introduce a martingale tail inequality, called Azuma's inequality.

We define a '''shift operator''' for the set family.
{{Theorem|Definition (shift operator)|
: Assume <math>\mathcal{F}\subseteq 2^{[n]}</math>, and <math>0\le i<j\le n-1</math>. Define the '''<math>(i,j)</math>-shift''' <math>S_{ij}</math> as an operator on <math>\mathcal{F}</math> as follows:
:*for each <math>T\in\mathcal{F}</math>, write <math>T_{ij}=(T\setminus\{j\})\cup\{i\} </math>, and let
::<math>S_{ij}(T)=
\begin{cases}
T_{ij} & \mbox{if }j\in T, i\not\in T, \mbox{ and }T_{ij} \not\in\mathcal{F},\\
T & \mbox{otherwise;}
\end{cases}</math>
:* let <math>S_{ij}(\mathcal{F})=\{S_{ij}(T)\mid T\in \mathcal{F}\}</math>.
}}
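The shift operator is straightforward to implement. The following Python sketch (a direct transcription of the definition; the example family is an arbitrary small one) applies a single <math>(i,j)</math>-shift to a family of subsets of <math>\{0,\ldots,n-1\}</math> and spot-checks that the result is still intersecting, as the proposition below asserts.
<pre>
# A direct transcription of the (i,j)-shift S_ij acting on a family F of
# subsets of {0, ..., n-1}, represented as a set of frozensets.
from itertools import combinations

def shift(F, i, j):
    result = set()
    for T in F:
        T_ij = (T - {j}) | {i}
        if j in T and i not in T and T_ij not in F:
            result.add(T_ij)
        else:
            result.add(T)
    return result

# Example: a small intersecting family inside the 2-subsets of {0,...,4}.
F = {frozenset(s) for s in [{0, 4}, {1, 4}, {2, 4}, {3, 4}]}
G = shift(F, 0, 4)
print(sorted(map(sorted, G)))
# Spot check: the shifted family is still intersecting.
assert all(a & b for a, b in combinations(G, 2))
</pre>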


It is easy to verify the following propositions of shifts.
{{Theorem|Proposition|
# <math>|S_{ij}(T)|=|T|\,</math> and <math>|S_{ij}(\mathcal{F})|=|\mathcal{F}|</math>;
# if <math>\mathcal{F}</math> is intersecting, then so is <math>S_{ij}(\mathcal{F})</math>.
}}
{{Proof|
(1) is quite obvious. Now we prove (2).

{{Theorem
|Azuma's Inequality|
:Let <math>X_0,X_1,\ldots</math> be a martingale such that, for all <math>k\ge 1</math>,
::<math>
|X_{k}-X_{k-1}|\le c_k,
</math>
:Then
::<math>\begin{align}
\Pr\left[|X_n-X_0|\ge t\right]\le 2\exp\left(-\frac{t^2}{2\sum_{k=1}^nc_k^2}\right).
\end{align}</math>
}}
Before formally proving this theorem, some comments are in order. First, unlike the Chernoff bounds, there is no assumption of independence. This shows the power of martingale inequalities.


Second, the condition that
:<math>
|X_{k}-X_{k-1}|\le c_k
</math>
is central to the proof. This condition is sometimes called the '''bounded difference condition'''. If we think of the martingale <math>X_0,X_1,\ldots</math> as a process evolving through time, where <math>X_i</math> gives some measurement at time <math>i</math>, the bounded difference condition states that the process does not make big jumps. Azuma's inequality says that if so, then it is unlikely that the process wanders far from its starting point.

A special case is when the differences are bounded by a constant. The following corollary is directly implied by Azuma's inequality.

Consider any <math>A,B\in\mathcal{F}</math>. If <math>A\cap B</math> includes any element other than <math>j</math>, then <math>A</math> and <math>B</math> are still intersecting after the <math>(i,j)</math>-shift. Thus without loss of generality, we may consider the only unsafe case where <math>A\cap B=\{j\}</math>, and <math>B</math> is successfully shifted to <math>B_{ij}=(B\setminus\{j\})\cup\{i\}</math> but <math>A</math> fails to shift to <math>A_{ij}=(A\setminus\{j\})\cup\{i\}</math>.

Since <math>B</math> is successfully shifted to <math>B_{ij}</math>, we know that it must hold that <math>i\not\in B</math> and <math>B_{ij}\not\in\mathcal{F}</math>. And the only two reasons for which <math>A</math> may fail to shift to <math>A_{ij}</math> are: (1) <math>i\in A</math>, and (2) <math>i\not\in A</math> but <math>A_{ij}\in\mathcal{F}</math>.


Case 1: <math>i\in A</math>. In this case, since <math>i\in B_{ij}</math>, we have <math>S_{ij}(A)\cap S_{ij}(B)=A\cap B_{ij}=\{i\}</math>, i.e. <math>B</math> is still intersecting with <math>A</math> after shifting.

Case 2: <math>i\not\in A</math> but <math>A_{ij}\in\mathcal{F}</math>. In this case, it is easy to verify that <math>A_{ij}\cap B=(A\cap B)\setminus\{j\}=\emptyset</math>. Recall that we assume <math>A_{ij}\in\mathcal{F}</math>. This contradicts the assumption that <math>\mathcal{F}</math> is intersecting.

In conclusion, in all cases, <math>S_{ij}(\mathcal{F})</math> remains intersecting.
}}

Repeatedly applying <math>S_{ij}</math> for all <math>0\le i<j\le n-1</math>, since we only replace elements by smaller elements, eventually <math>\mathcal{F}</math> will stop changing, that is, <math>S_{ij}(\mathcal{F})=\mathcal{F}</math> for all <math>0\le i<j\le n-1</math>. We call such an <math>\mathcal{F}</math> '''shifted'''.

{{Theorem
|Corollary|
:Let <math>X_0,X_1,\ldots</math> be a martingale such that, for all <math>k\ge 1</math>,
::<math>
|X_{k}-X_{k-1}|\le c,
</math>
:Then
::<math>\begin{align}
\Pr\left[|X_n-X_0|\ge ct\sqrt{n}\right]\le 2 e^{-t^2/2}.
\end{align}</math>
}}

This corollary states that for any martingale sequence whose differences are bounded by a constant, the probability that it deviates <math>\omega(\sqrt{n})</math> far away from the starting point after <math>n</math> steps is bounded by <math>o(1)</math>.
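The corollary can be illustrated numerically. The following Python sketch (a Monte Carlo experiment with arbitrarily chosen <math>n</math> and number of trials) compares the empirical tail probability of the coin-flip martingale, whose differences are bounded by <math>c=1</math>, with the bound <math>2e^{-t^2/2}</math>.
<pre>
# Monte Carlo comparison of the empirical tail of the coin-flip martingale
# (differences bounded by c = 1) with the bound 2*exp(-t^2/2) from the
# corollary.  The number of steps and trials are arbitrary choices.
import math
import random

random.seed(0)
n, trials = 400, 10000
for t in (1.0, 2.0, 3.0):
    threshold = t * math.sqrt(n)
    hits = sum(
        1
        for _ in range(trials)
        if abs(sum(random.choice((-1, 1)) for _ in range(n))) >= threshold
    )
    print(f"t={t}: empirical {hits / trials:.4f} <= bound {2 * math.exp(-t * t / 2):.4f}")
</pre>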


The idea behind the shifting technique is very natural: by applying shifting, all intersecting families are transformed to some ''special forms'', and we only need to prove the theorem for these special forms of intersecting families.

{{Prooftitle|Proof of Erdős-Ko-Rado theorem| (The original proof of Erdős-Ko-Rado by shifting)
By the above proposition, it is sufficient to prove that the Erdős-Ko-Rado theorem holds for shifted <math>\mathcal{F}</math>. We assume that <math>\mathcal{F}</math> is shifted.

=== Generalization ===
Azuma's inequality can be generalized to a martingale with respect to another sequence.
{{Theorem
|Azuma's Inequality (general version)|
:Let <math>Y_0,Y_1,\ldots</math> be a martingale with respect to the sequence <math>X_0,X_1,\ldots</math> such that, for all <math>k\ge 1</math>,
::<math>
|Y_{k}-Y_{k-1}|\le c_k,
</math>
:Then
::<math>\begin{align}
\Pr\left[|Y_n-Y_0|\ge t\right]\le 2\exp\left(-\frac{t^2}{2\sum_{k=1}^nc_k^2}\right).
\end{align}</math>
}}


=== The Proof of Azuma's Inequality===
We will only give the formal proof of the non-generalized version. The proof of the general version is almost identical, with the only difference being that we work with the random sequence <math>Y_i</math> conditioning on the sequence <math>X_i</math>.

The proof of Azuma's inequality uses several ideas which are used in the proof of the Chernoff bounds. We first observe that the total deviation of the martingale sequence can be represented as the sum of differences in every step. Thus, as with the Chernoff bounds, we are looking for a bound on the deviation of a sum of random variables. The strategy of the proof is almost the same as the proof of the Chernoff bounds: we first apply Markov's inequality to the moment generating function, then we bound the moment generating function, and at last we optimize the parameter of the moment generating function. However, unlike the Chernoff bounds, the martingale differences are not independent any more. So we replace the use of independence in the Chernoff bound by the martingale property. The proof is detailed as follows.

In order to bound the probability of <math>|X_n-X_0|\ge t</math>, we first bound the upper tail <math>\Pr[X_n-X_0\ge t]</math>. The bound of the lower tail can be symmetrically proved with the <math>X_i</math> replaced by <math>-X_i</math>.

First, it is trivial to see that the theorem holds for <math>k=1</math> (no matter whether shifted).

Next, we show that the theorem holds when <math>n=2k</math>  (no matter whether shifted). For any <math>S\in{X\choose k}</math>, both <math>S</math> and <math>X\setminus S</math> are in <math>{X\choose k}</math>, but at most one of them can be in <math>\mathcal{F}</math>. Thus,
:<math>|\mathcal{F}|\le\frac{1}{2}{n\choose k}=\frac{n!}{2k!(n-k)!}=\frac{(n-1)!}{(k-1)!(n-k)!}={n-1\choose k-1}</math>.

We then apply induction on <math>n</math>. For <math>n> 2k</math>, the induction hypothesis is stated as:
* the Erdős-Ko-Rado theorem holds for any smaller <math>n</math>.
Define
:<math>\mathcal{F}_0=\{S\in\mathcal{F}\mid n\not\in S\}</math> and <math>\mathcal{F}_1=\{S\in\mathcal{F}\mid n\in S\}</math>.
Clearly, <math>\mathcal{F}_0\subseteq{[n-1]\choose k}</math> and <math>\mathcal{F}_0</math> is intersecting. Due to the induction hypothesis, <math>|\mathcal{F}_0|\le{n-2\choose k-1}</math>.


In order to apply the induction, we let
:<math>\mathcal{F}_1'=\{S\setminus\{n\}\mid S\in\mathcal{F}_1\}</math>.
Clearly, <math>\mathcal{F}_1'\subseteq{[n-1]\choose k-1}</math>. If only it is also intersecting, we can apply the induction hypothesis, and indeed it is. To see this, by contradiction we assume that <math>\mathcal{F}_1'</math> is not intersecting. Then there must exist <math>A,B\in\mathcal{F}</math> such that <math>A\cap B=\{n\}</math>, which means that <math>|A\cup B|\le 2k-1<n-1</math>. Thus, there is some <math>0\le i\le n-1</math> such that <math>i\not\in A\cup B</math>. Since <math>\mathcal{F}</math> is shifted, <math>A_{in}=A\setminus\{n\}\cup\{i\}\in\mathcal{F}</math>. On the other hand, it can be verified that <math>A_{in}\cap B=\emptyset</math>, which contradicts that <math>\mathcal{F}</math> is intersecting.

==== Represent the deviation as the sum of differences ====
We define the '''martingale difference sequence''': for <math>i\ge 1</math>, let
:<math>
Y_i=X_i-X_{i-1}.
</math>
It holds that
:<math>
\begin{align}
\mathbf{E}[Y_i\mid X_0,\ldots,X_{i-1}]
&=\mathbf{E}[X_i-X_{i-1}\mid X_0,\ldots,X_{i-1}]\\
&=\mathbf{E}[X_i\mid X_0,\ldots,X_{i-1}]-\mathbf{E}[X_{i-1}\mid X_0,\ldots,X_{i-1}]\\
&=X_{i-1}-X_{i-1}\\
&=0.
\end{align}
</math>
The second to the last equation is due to the fact that <math>X_0,X_1,\ldots</math> is a martingale and the definition of conditional expectation.


Let <math>Z_n</math> be the accumulated differences
:<math>
Z_n=\sum_{i=1}^n Y_i.
</math>

The deviation <math>(X_n-X_0)</math> can be computed by the accumulated differences:
:<math>
\begin{align}
X_n-X_0
&=(X_1-X_{0})+(X_2-X_1)+\cdots+(X_n-X_{n-1})\\
&=\sum_{i=1}^n Y_i\\
&=Z_n.
\end{align}
</math>

Thus, <math>\mathcal{F}_1'\subseteq{[n-1]\choose k-1}</math> and <math>\mathcal{F}_1'</math> is intersecting. Due to the induction hypothesis, <math>|\mathcal{F}_1'|\le{n-2\choose k-2}</math>.

Combining these together,
:<math>|\mathcal{F}|=|\mathcal{F}_0|+|\mathcal{F}_1|=|\mathcal{F}_0|+|\mathcal{F}_1'|\le {n-2\choose k-1}+{n-2\choose k-2}={n-1\choose k-1}</math>.
}}


== Sperner system ==
A set family <math>\mathcal{F}\subseteq 2^X</math> with the relation <math>\subseteq</math> defines a poset. Thus, a '''chain''' is a sequence <math>S_1\subseteq S_2\subseteq\cdots\subseteq S_k</math>.

A set family <math>\mathcal{F}\subseteq 2^X</math> is an '''antichain''' (also called a '''Sperner system''') if for all <math>S,T\in\mathcal{F}</math> with <math>S\neq T</math>, we have <math>S\not\subseteq T</math>.

We then only need to upper bound the probability of the event <math>Z_n\ge t</math>.
==== Apply Markov's inequality to the moment generating function ====
The event <math>Z_n\ge t</math> is equivalent to that <math>e^{\lambda Z_n}\ge e^{\lambda t}</math> for <math>\lambda>0</math>. Applying Markov's inequality, we have
:<math>
\begin{align}
\Pr\left[Z_n\ge t\right]
&=\Pr\left[e^{\lambda Z_n}\ge e^{\lambda t}\right]\\
&\le \frac{\mathbf{E}\left[e^{\lambda Z_n}\right]}{e^{\lambda t}}.
\end{align}
</math>
This is exactly the same as what we did to prove the Chernoff bound. Next, we need to bound the moment generating function <math>\mathbf{E}\left[e^{\lambda Z_n}\right]</math>.


The <math>k</math>-uniform family <math>{X\choose k}</math> is an antichain. Let <math>n=|X|</math>. The size of <math>{X\choose k}</math> is maximized when <math>k=\lfloor n/2\rfloor</math>. We wonder whether this is also the largest possible size of any antichain <math>\mathcal{F}\subseteq 2^X</math>.
==== Bound the moment generating functions ====
The moment generating function
:<math>
\begin{align}
\mathbf{E}\left[e^{\lambda Z_n}\right]
&=\mathbf{E}\left[\mathbf{E}\left[e^{\lambda Z_n}\mid X_0,\ldots,X_{n-1}\right]\right]\\
&=\mathbf{E}\left[\mathbf{E}\left[e^{\lambda (Z_{n-1}+Y_n)}\mid X_0,\ldots,X_{n-1}\right]\right]\\
&=\mathbf{E}\left[\mathbf{E}\left[e^{\lambda Z_{n-1}}\cdot e^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}\right]\right]\\
&=\mathbf{E}\left[e^{\lambda Z_{n-1}}\cdot\mathbf{E}\left[e^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}\right]\right]
\end{align}
</math>
The first and the last equations are due to the fundamental facts about conditional expectation which are proved by us in the first section.


In 1928, Emanuel Sperner proved a theorem saying that it is indeed the largest possible antichain. This result, called Sperner's theorem today, initiated the studies of extremal set theory.

{{Theorem|Theorem (Sperner 1928)|
:Let <math>\mathcal{F}\subseteq 2^X</math> where <math>|X|=n</math>. If <math>\mathcal{F}</math> is an antichain, then
::<math>|\mathcal{F}|\le{n\choose \lfloor n/2\rfloor}</math>.
}}

We then upper bound the <math>\mathbf{E}\left[e^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}\right]</math> by a constant. To do so, we need the following technical lemma which is proved by the convexity of <math>e^{\lambda Y_n}</math>.

{{Theorem
|Lemma|
:Let <math>X</math> be a random variable such that <math>\mathbf{E}[X]=0</math> and <math>|X|\le c</math>. Then for <math>\lambda>0</math>,
::<math>
\mathbf{E}[e^{\lambda X}]\le e^{\lambda^2c^2/2}.
</math>
}}
{{Proof| Observe that for <math>\lambda>0</math>, the function <math>e^{\lambda X}</math> of the variable <math>X</math> is convex in the interval <math>[-c,c]</math>. We draw a line between the two endpoints <math>(-c, e^{-\lambda c})</math> and <math>(c, e^{\lambda c})</math>. The curve of <math>e^{\lambda X}</math> lies entirely below this line. Thus,
:<math>
\begin{align}
e^{\lambda X}
&\le \frac{c-X}{2c}e^{-\lambda c}+\frac{c+X}{2c}e^{\lambda c}\\
&=\frac{e^{\lambda c}+e^{-\lambda c}}{2}+\frac{X}{2c}(e^{\lambda c}-e^{-\lambda c}).
\end{align}
</math>


=== First proof (shadows)===
We first introduce the original proof by Sperner, which uses concepts called '''shadows''' and '''shades''' of set systems.

Since <math>\mathbf{E}[X]=0</math>, we have
:<math>
\begin{align}
\mathbf{E}[e^{\lambda X}]
&\le \mathbf{E}[\frac{e^{\lambda c}+e^{-\lambda c}}{2}+\frac{X}{2c}(e^{\lambda c}-e^{-\lambda c})]\\
&=\frac{e^{\lambda c}+e^{-\lambda c}}{2}+\frac{e^{\lambda c}-e^{-\lambda c}}{2c}\mathbf{E}[X]\\
&=\frac{e^{\lambda c}+e^{-\lambda c}}{2}.
\end{align}
</math>


By expanding both sides as Taylor's series, it can be verified that <math>\frac{e^{\lambda c}+e^{-\lambda c}}{2}\le e^{\lambda^2c^2/2}</math>.
}}

{{Theorem|Definition|
:Let <math>|X|=n\,</math> and <math>\mathcal{F}\subseteq {X\choose k}</math>, <math>k<n\,</math>.
:The '''shade''' of <math>\mathcal{F}</math> is defined to be
::<math>\nabla\mathcal{F}=\left\{T\in {X\choose k+1}\,\,\bigg|\,\, \exists S\in\mathcal{F}\mbox{ such that } S\subset T\right\}</math>.
:Thus the shade <math>\nabla\mathcal{F}</math> of <math>\mathcal{F}</math> consists of all subsets of <math>X</math> which can be obtained by adding an element to a set in <math>\mathcal{F}</math>.
:Similarly, the '''shadow''' of <math>\mathcal{F}</math> is defined to be
::<math>\Delta\mathcal{F}=\left\{T\in {X\choose k-1}\,\,\bigg|\,\, \exists S\in\mathcal{F}\mbox{ such that } T\subset S\right\}</math>.
:Thus the shadow <math>\Delta\mathcal{F}</math> of <math>\mathcal{F}</math> consists of all subsets of <math>X</math> which can be obtained by removing an element from a set in <math>\mathcal{F}</math>.
}}


The next lemma bounds the effects of shadows and shades on the sizes of set systems.
Apply the above lemma to the random variable
:<math>
(Y_n \mid X_0,\ldots,X_{n-1})
</math>


We have already shown that its expectation
:<math>
\mathbf{E}[(Y_n \mid X_0,\ldots,X_{n-1})]=0,
</math>
and by the bounded difference condition of Azuma's inequality, we have
:<math>
|Y_n|=|(X_n-X_{n-1})|\le c_n.
</math>
Thus, due to the above lemma, it holds that
:<math>
\mathbf{E}[e^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}]\le e^{\lambda^2c_n^2/2}.
</math>

Back to our analysis of the expectation <math>\mathbf{E}\left[e^{\lambda Z_n}\right]</math>, we have
:<math>
\begin{align}
\mathbf{E}\left[e^{\lambda Z_n}\right]
&=\mathbf{E}\left[e^{\lambda Z_{n-1}}\cdot\mathbf{E}\left[e^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}\right]\right]\\
&\le \mathbf{E}\left[e^{\lambda Z_{n-1}}\cdot e^{\lambda^2c_n^2/2}\right]\\
&= e^{\lambda^2c_n^2/2}\cdot\mathbf{E}\left[e^{\lambda Z_{n-1}}\right] .
\end{align}
</math>

{{Theorem|Lemma (Sperner)|
:Let <math>|X|=n\,</math> and <math>\mathcal{F}\subseteq {X\choose k}</math>. Then
::<math>
\begin{align}
&|\nabla\mathcal{F}|\ge\frac{n-k}{k+1}|\mathcal{F}| &\text{ if } k<n\\
&|\Delta\mathcal{F}|\ge\frac{k}{n-k+1}|\mathcal{F}| &\text{ if } k>0.
\end{align}
</math>
}}


{{Proof|
The lemma is proved by double counting. We prove the inequality for <math>|\nabla\mathcal{F}|</math>. Assume that <math>0\le k<n</math>.

Applying the same analysis to <math>\mathbf{E}\left[e^{\lambda Z_{n-1}}\right]</math>, we can solve the above recursion:
:<math>
\begin{align}
\mathbf{E}\left[e^{\lambda Z_n}\right]
&\le \prod_{k=1}^n e^{\lambda^2c_k^2/2}\\
&= \exp\left(\lambda^2\sum_{k=1}^n c_k^2/2\right).
\end{align}
</math>


Define
:<math>\mathcal{R}=\{(S,T)\mid S\in\mathcal{F}, T\in\nabla\mathcal{F}, S\subset T\}</math>.
We estimate <math>|\mathcal{R}|</math> in two ways.

Going back to Markov's inequality,
:<math>
\begin{align}
\Pr\left[Z_n\ge t\right]
&\le \frac{\mathbf{E}\left[e^{\lambda Z_n}\right]}{e^{\lambda t}}\\
&\le \exp\left(\lambda^2\sum_{k=1}^n c_k^2/2-\lambda t\right).
\end{align}
</math>


For each <math>S\in\mathcal{F}</math>, there are <math>n-k</math> different <math>T\in\nabla\mathcal{F}</math> such that <math>S\subset T</math>, thus
:<math>|\mathcal{R}|=(n-k)|\mathcal{F}|</math>.
For each <math>T\in\nabla\mathcal{F}</math>, there are <math>k+1</math> ways to choose an <math>S\subset T</math> with <math>|S|=k</math>, some of which may not be in <math>\mathcal{F}</math>, thus
:<math>|\mathcal{R}|\le (k+1)|\nabla\mathcal{F}|</math>.

Altogether, we show that <math>|\nabla\mathcal{F}|\ge\frac{n-k}{k+1}|\mathcal{F}|</math>.

We then only need to choose a proper <math>\lambda>0</math>.
==== Optimization ====
By choosing <math>\lambda=\frac{t}{\sum_{k=1}^n c_k^2}</math>, we have that  
:<math>
\exp\left(\lambda^2\sum_{k=1}^n c_k^2/2-\lambda t\right)=\exp\left(-\frac{t^2}{2\sum_{k=1}^n c_k^2}\right).
</math>
Thus, the probability
:<math>
\begin{align}
\Pr\left[X_n-X_0\ge t\right]
&=\Pr\left[Z_n\ge t\right]\\
&\le \exp\left(\lambda^2\sum_{k=1}^n c_k^2/2-\lambda t\right)\\
&= \exp\left(-\frac{t^2}{2\sum_{k=1}^n c_k^2}\right).
\end{align}
</math>
The upper tail of Azuma's inequality is proved. By replacing <math>X_i</math> by <math>-X_i</math>, the lower tail can be treated just as the upper tail. Applying the union bound, Azuma's inequality is proved.
 
=The Doob martingales =
The following definition describes a very general approach for constructing an important type of martingales.


The inequality for <math>|\Delta\mathcal{F}|</math> can be proved in the same way.
}}

An immediate corollary of the previous lemma is as follows.

{{Theorem
|Definition (The Doob sequence)|
: The Doob sequence of a function <math>f</math> with respect to a sequence of random variables <math>X_1,\ldots,X_n</math> is defined by
::<math>
Y_i=\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_{i}], \quad 0\le i\le n.
</math>
:In particular, <math>Y_0=\mathbf{E}[f(X_1,\ldots,X_n)]</math> and <math>Y_n=f(X_1,\ldots,X_n)</math>.
}}
The Doob sequence of a function defines a martingale. That is
::<math>
\mathbf{E}[Y_i\mid X_1,\ldots,X_{i-1}]=Y_{i-1},
</math>
for any <math>0\le i\le n</math>.
 
To prove this claim, we recall the definition that <math>Y_i=\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_{i}]</math>, thus,
:<math>
\begin{align}
\mathbf{E}[Y_i\mid X_1,\ldots,X_{i-1}]
&=\mathbf{E}[\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_{i}]\mid X_1,\ldots,X_{i-1}]\\
&=\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_{i-1}]\\
&=Y_{i-1},
\end{align}
</math>
where the second equation is due to the fundamental fact about conditional expectation introduced in the first section.
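The martingale property of the Doob sequence can also be confirmed by direct enumeration. The following Python sketch (the function <math>f</math> and the uniform independent bits are arbitrary illustrative choices) computes every <math>Y_i</math> exactly and checks that <math>\mathbf{E}[Y_i\mid X_1,\ldots,X_{i-1}]=Y_{i-1}</math>.
<pre>
# Exact check, by enumeration, that the Doob sequence of a function f of
# independent uniform bits is a martingale: E[Y_i | X_1..X_{i-1}] = Y_{i-1}.
# The function f below is an arbitrary illustrative choice.
from itertools import product

n = 4
f = lambda bits: sum(bits) ** 2 + bits[0]

def doob(prefix):
    """Y_i = E[f(X_1,...,X_n) | X_1..X_i = prefix] for uniform independent bits."""
    rest = list(product((0, 1), repeat=n - len(prefix)))
    return sum(f(prefix + r) for r in rest) / len(rest)

for i in range(1, n + 1):
    for prefix in product((0, 1), repeat=i - 1):
        avg_next = sum(doob(prefix + (b,)) for b in (0, 1)) / 2
        assert abs(avg_next - doob(prefix)) < 1e-12
print("Doob martingale property verified for every prefix")
</pre>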


{{Theorem|Proposition 1|
:If <math>k\le \frac{n-1}{2}</math>, then <math>|\nabla\mathcal{F}|\ge|\mathcal{F}|</math>.
:If <math>k\ge \frac{n-1}{2}</math>, then <math>|\Delta\mathcal{F}|\ge|\mathcal{F}|</math>.
}}

The idea of Sperner's proof is pretty clear:
* we "push up" all the sets in <math>\mathcal{F}</math> of size <math><\frac{n-1}{2}</math>, replacing them by their shades;
* and also "push down" all the sets in <math>\mathcal{F}</math> of size <math>\ge\frac{n+1}{2}</math>, replacing them by their shadows.
Repeating this process, we end up with a set system <math>\mathcal{F}\subseteq{X\choose \lfloor n/2\rfloor}</math>. We need to show that this process does not decrease the size of <math>\mathcal{F}</math>.

The Doob martingale describes a very natural procedure to determine a function value of a sequence of random variables. Suppose that we want to predict the value of a function <math>f(X_1,\ldots,X_n)</math> of random variables <math>X_1,\ldots,X_n</math>. The Doob sequence <math>Y_0,Y_1,\ldots,Y_n</math> represents a sequence of refined estimates of the value of <math>f(X_1,\ldots,X_n)</math>, gradually using more information on the values of the random variables <math>X_1,\ldots,X_n</math>. The first element <math>Y_0</math> is just the expectation of <math>f(X_1,\ldots,X_n)</math>. Element <math>Y_i</math> is the expected value of <math>f(X_1,\ldots,X_n)</math> when the values of <math>X_1,\ldots,X_{i}</math> are known, and <math>Y_n=f(X_1,\ldots,X_n)</math> when <math>f(X_1,\ldots,X_n)</math> is fully determined by <math>X_1,\ldots,X_n</math>.

The following two Doob martingales arise in evaluating the parameters of random graphs.


;edge exposure martingale
:Let <math>G</math> be a random graph on <math>n</math> vertices. Let <math>f</math> be a real-valued function of graphs, such as chromatic number, number of triangles, the size of the largest clique or independent set, etc. Denote that <math>m={n\choose 2}</math>. Fix an arbitrary numbering of potential edges between the <math>n</math> vertices, and denote the edges as <math>e_1,\ldots,e_m</math>. Let
::<math>
X_i=\begin{cases}
1& \mbox{if }e_i\in G,\\
0& \mbox{otherwise}.
\end{cases}
</math>
:Let <math>Y_0=\mathbf{E}[f(G)]</math> and for <math>i=1,\ldots,m</math>, let <math>Y_i=\mathbf{E}[f(G)\mid X_1,\ldots,X_i]</math>.
:The sequence <math>Y_0,Y_1,\ldots,Y_m</math> gives a Doob martingale that is commonly called the '''edge exposure martingale'''.

;vertex exposure martingale
: Instead of revealing edges one at a time, we could reveal the set of edges connected to a given vertex, one vertex at a time. Suppose that the vertex set is <math>[n]</math>. Let <math>X_i</math> be the subgraph of <math>G</math> induced by the vertex set <math>[i]</math>, i.e. the first <math>i</math> vertices.
:Let <math>Y_0=\mathbf{E}[f(G)]</math> and for <math>i=1,\ldots,n</math>, let <math>Y_i=\mathbf{E}[f(G)\mid X_1,\ldots,X_i]</math>.
:The sequence <math>Y_0,Y_1,\ldots,Y_n</math> gives a Doob martingale that is commonly called the '''vertex exposure martingale'''.

===Chromatic number===
The random graph <math>G(n,p)</math> is the graph on <math>n</math> vertices <math>[n]</math>, obtained by selecting each pair of vertices to be an edge, randomly and independently, with probability <math>p</math>. We denote <math>G\sim G(n,p)</math> if <math>G</math> is generated in this way.

{{Theorem
|Theorem [Shamir and Spencer (1987)]|
:Let <math>G\sim G(n,p)</math>. Let <math>\chi(G)</math> be the chromatic number of <math>G</math>. Then
::<math>\begin{align}
\Pr\left[|\chi(G)-\mathbf{E}[\chi(G)]|\ge t\sqrt{n}\right]\le 2e^{-t^2/2}.
\end{align}</math>
}}
{{Proof| Consider the vertex exposure martingale
:<math>
Y_i=\mathbf{E}[\chi(G)\mid X_1,\ldots,X_i]
</math>
where each <math>X_k</math> exposes the induced subgraph of <math>G</math> on vertex set <math>[k]</math>. A single vertex can always be given a new color so that the graph is properly colored, thus the bounded difference condition
:<math>
|Y_i-Y_{i-1}|\le 1
</math>
is satisfied. Now apply Azuma's inequality for the martingale <math>Y_1,\ldots,Y_n</math> with respect to <math>X_1,\ldots,X_n</math>.
}}

{{Theorem|Proposition 2|
:Suppose that <math>\mathcal{F}\subseteq2^X</math> where <math>|X|=n</math>. Let <math>\mathcal{F}_k=\mathcal{F}\cap{X\choose k}</math>. Let <math>k_\min</math> be the smallest <math>k</math> such that <math>|\mathcal{F}_k|>0</math>, and let
::<math>
\mathcal{F}'=\begin{cases}
\mathcal{F}\setminus\mathcal{F}_{k_\min}\cup \nabla\mathcal{F}_{k_\min} & \mbox{if }k_\min<\frac{n-1}{2},\\
\mathcal{F} & \mbox{otherwise.}
\end{cases}
</math>
:Similarly, let <math>k_\max</math> be the largest <math>k</math> such that <math>|\mathcal{F}_k|>0</math>, and let
::<math>
\mathcal{F}''=\begin{cases}
\mathcal{F}\setminus\mathcal{F}_{k_\max}\cup \Delta\mathcal{F}_{k_\max} & \mbox{if }k_\max\ge\frac{n+1}{2},\\
\mathcal{F} & \mbox{otherwise.}
\end{cases}
</math>
:If <math>\mathcal{F}</math> is an antichain, then <math>\mathcal{F}'</math> and <math>\mathcal{F}''</math> are antichains, and we have <math>|\mathcal{F}'|\ge|\mathcal{F}|</math> and <math>|\mathcal{F}''|\ge|\mathcal{F}|</math>.
}}
{{Proof|
We show that <math>\mathcal{F}'</math> is an antichain and <math>|\mathcal{F}'|\ge|\mathcal{F}|</math>.


First, observe that <math>\nabla\mathcal{F}_k\cap\mathcal{F}=\emptyset</math>, otherwise <math>\mathcal{F}</math> cannot be an antichain, and due to Proposition 1, <math>|\nabla\mathcal{F}_k|\ge|\mathcal{F}_k|</math> when <math>k\le \frac{n-1}{2}</math>, so <math>|\mathcal{F}'|=|\mathcal{F}|-|\mathcal{F}_k|+|\nabla\mathcal{F}_k|\ge |\mathcal{F}|</math>.

Now we prove that <math>\mathcal{F}'</math> is an antichain. By contradiction, assume that there are <math>S, T\in \mathcal{F}'</math> such that <math>S\subset T</math>. One of <math>S,T</math> must be in <math>\nabla\mathcal{F}_{k_\min}</math>, or otherwise <math>\mathcal{F}</math> cannot be an antichain. Recall that <math>k_\min</math> is the smallest <math>k</math> such that <math>|\mathcal{F}_k|>0</math>, thus it must be that <math>S\in \nabla\mathcal{F}_{k_\min}</math> and <math>T\in\mathcal{F}</math>. This implies that there is an <math>R\in \mathcal{F}_{k_\min}\subseteq \mathcal{F}</math> such that <math>R\subset S\subset T</math>, which contradicts that <math>\mathcal{F}</math> is an antichain.

For <math>t=\omega(1)</math>, the theorem states that the chromatic number of a random graph is tightly concentrated around its mean. The proof gives no clue as to where the mean is. This actually shows how powerful the martingale inequalities are: we can prove that a distribution is concentrated around its expectation without actually knowing the expectation.
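This concentration phenomenon can be observed in simulation. Since computing <math>\chi(G)</math> exactly is expensive, the following Python sketch uses the number of colors used by a fixed greedy coloring as a cheap stand-in statistic (it is only an upper bound on <math>\chi(G)</math>, and the parameters are arbitrary); the point is merely that the observed values cluster tightly around their mean.
<pre>
# Simulation sketch: sample G(n, 1/2) repeatedly and look at the spread of a
# coloring statistic around its mean.  Exact chi(G) is expensive to compute,
# so the number of colors used by a fixed greedy coloring is used as a cheap
# stand-in (an upper bound on chi(G), not the chromatic number itself).
import random
import statistics

def greedy_color_count(n, p, rng):
    adj = [set() for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)
    color = {}
    for v in range(n):                        # color vertices in a fixed order
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return max(color.values()) + 1

rng = random.Random(1)
samples = [greedy_color_count(60, 0.5, rng) for _ in range(300)]
print("mean:", statistics.mean(samples), "stdev:", statistics.stdev(samples))
</pre>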
=== Hoeffding's Inequality===
The following theorem states the so-called Hoeffding's inequality. It is a generalized version of the Chernoff bounds. Recall that the Chernoff bounds hold for the sum of independent ''trials''. When the random variables are not trials, the Hoeffding's inequality is useful, since it holds for the sum of any independent random variables whose ranges are bounded.
{{Theorem
|Hoeffding's inequality|
: Let <math>X=\sum_{i=1}^nX_i</math>, where <math>X_1,\ldots,X_n</math> are independent random variables with <math>a_i\le X_i\le b_i</math> for each <math>1\le i\le n</math>. Let <math>\mu=\mathbf{E}[X]</math>. Then
::<math>
\Pr[|X-\mu|\ge t]\le 2\exp\left(-\frac{t^2}{2\sum_{i=1}^n(b_i-a_i)^2}\right).
</math>
}}
{{Proof| Define the Doob martingale sequence <math>Y_i=\mathbf{E}\left[\sum_{j=1}^n X_j\,\Big|\, X_1,\ldots,X_{i}\right]</math>. Obviously <math>Y_0=\mu</math> and <math>Y_n=X</math>.


:<math>
\begin{align}
|Y_i-Y_{i-1}|
&=
\left|\mathbf{E}\left[\sum_{j=1}^n X_j\,\Big|\, X_1,\ldots,X_{i}\right]-\mathbf{E}\left[\sum_{j=1}^n X_j\,\Big|\, X_1,\ldots,X_{i-1}\right]\right|\\
&=\left|\sum_{j=1}^i X_j+\sum_{j=i+1}^n\mathbf{E}[X_j]-\sum_{j=1}^{i-1} X_j-\sum_{j=i}^n\mathbf{E}[X_j]\right|\\
&=\left|X_i-\mathbf{E}[X_{i}]\right|\\
&\le b_i-a_i
\end{align}
</math>
Apply Azuma's inequality to the martingale <math>Y_0,\ldots,Y_n</math> with respect to <math>X_1,\ldots, X_n</math>, and Hoeffding's inequality is proved.
}}

The statement for <math>\mathcal{F}''</math> can be proved in the same way.
}}


Applying the above process, we prove Sperner's theorem.
{{Prooftitle|Proof of Sperner's theorem | (original proof of Sperner)
Let <math>\mathcal{F}_k=\{S\in\mathcal{F}\mid |S|=k\}</math>, where <math>0\le k\le n</math>.

We change <math>\mathcal{F}</math> as follows:
* for the smallest <math>k</math> such that <math>|\mathcal{F}_k|>0</math>, if <math>k<\frac{n-1}{2}</math>, replace <math>\mathcal{F}_k</math> by <math>\nabla\mathcal{F}_k</math>.

Due to Proposition 2, this procedure preserves <math>\mathcal{F}</math> as an antichain and does not decrease <math>|\mathcal{F}|</math>. Repeat this procedure until <math>|\mathcal{F}_k|=0</math> for all <math>k<\frac{n-1}{2}</math>, that is, until no member set of <math>\mathcal{F}</math> has size less than <math>\frac{n-1}{2}</math>.

=The Bounded Difference Method=
Combining Azuma's inequality with the construction of Doob martingales, we have the powerful ''Bounded Difference Method'' for concentration of measure.

== For arbitrary random variables ==
Given a sequence of random variables <math>X_1,\ldots,X_n</math> and a function <math>f</math>, the Doob sequence constructs a martingale from them. Combining this construction with Azuma's inequality, we can get a very powerful theorem called "the method of averaged bounded differences" which bounds the concentration of an arbitrary function of arbitrary random variables (not necessarily a martingale).
{{Theorem
|Theorem (Method of averaged bounded differences)|
:Let <math>\boldsymbol{X}=(X_1,\ldots, X_n)</math> be arbitrary random variables and let <math>f</math> be a function of <math>X_1,\ldots, X_n</math> satisfying that, for all <math>1\le i\le n</math>,
::<math>
|\mathbf{E}[f(\boldsymbol{X})\mid X_1,\ldots,X_i]-\mathbf{E}[f(\boldsymbol{X})\mid X_1,\ldots,X_{i-1}]|\le c_i,
</math>
:Then
::<math>\begin{align}
\Pr\left[|f(\boldsymbol{X})-\mathbf{E}[f(\boldsymbol{X})]|\ge t\right]\le 2\exp\left(-\frac{t^2}{2\sum_{i=1}^nc_i^2}\right).
\end{align}</math>
}}
{{Proof| Define the Doob Martingale sequence <math>Y_0,Y_1,\ldots,Y_n</math> by setting <math>Y_0=\mathbf{E}[f(X_1,\ldots,X_n)]</math> and, for <math>1\le i\le n</math>, <math>Y_i=\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_i]</math>. Then the above theorem is a restatement of the Azuma's inequality holding for <math>Y_0,Y_1,\ldots,Y_n</math>.
}}
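As a quick numerical illustration of this kind of bound, the following Python sketch (with arbitrarily chosen parameters) checks Hoeffding's inequality, the special case proved earlier, for a sum of independent uniform <math>[0,1]</math> variables.
<pre>
# Numerical illustration of Hoeffding's inequality for a sum of independent
# uniform [0,1] variables (a_i = 0, b_i = 1, so sum (b_i - a_i)^2 = n).
# Parameters are arbitrary; this is only a sanity check of the bound.
import math
import random

random.seed(0)
n, trials = 100, 20000
mu = n * 0.5
for t in (5, 10, 15):
    hits = sum(
        1
        for _ in range(trials)
        if abs(sum(random.random() for _ in range(n)) - mu) >= t
    )
    bound = 2 * math.exp(-t * t / (2 * n))
    print(f"t={t}: empirical {hits / trials:.4f} <= bound {bound:.4f}")
</pre>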


We then define another symmetric procedure:
* for the largest <math>k</math> such that <math>|\mathcal{F}_k|>0</math>, if <math>k\ge\frac{n+1}{2}</math>, replace <math>\mathcal{F}_k</math> by <math>\Delta\mathcal{F}_k</math>.
Also due to Proposition 2, this procedure preserves <math>\mathcal{F}</math> as an antichain and does not decrease <math>|\mathcal{F}|</math>. After repeatedly applying this procedure, <math>|\mathcal{F}_k|=0</math> for all <math>k\ge\frac{n+1}{2}</math>.

The resulting <math>\mathcal{F}</math> satisfies <math>\mathcal{F}\subseteq{X\choose \lfloor n/2\rfloor}</math>, and since <math>|\mathcal{F}|</math> is never decreased, for the original <math>\mathcal{F}</math> we have
:<math>|\mathcal{F}|\le {n\choose \lfloor n/2\rfloor}</math>.
}}

== For independent random variables ==
The condition of bounded averaged differences is usually hard to check. This severely limits the usefulness of the method. To overcome this, we introduce a property which is much easier to check, called the Lipschitz condition.

{{Theorem
|Definition (Lipschitz condition)|
:A function <math>f(x_1,\ldots,x_n)</math> satisfies the Lipschitz condition, if for any <math>x_1,\ldots,x_n</math> and any <math>y_i</math>,
::<math>\begin{align}
|f(x_1,\ldots,x_{i-1},x_i,x_{i+1},\ldots,x_n)-f(x_1,\ldots,x_{i-1},y_i,x_{i+1},\ldots,x_n)|\le 1.
\end{align}</math>
}}
In other words, the function satisfies the Lipschitz condition if an arbitrary change in the value of any one argument does not change the value of the function by more than 1.


The difference of 1 can be replaced by arbitrary constants, which gives a generalized version of the Lipschitz condition.
{{Theorem
|Definition (Lipschitz condition, general version)|
:A function <math>f(x_1,\ldots,x_n)</math> satisfies the Lipschitz condition with constants <math>c_i</math>, <math>1\le i\le n</math>, if for any <math>x_1,\ldots,x_n</math> and any <math>y_i</math>,
::<math>\begin{align}
|f(x_1,\ldots,x_{i-1},x_i,x_{i+1},\ldots,x_n)-f(x_1,\ldots,x_{i-1},y_i,x_{i+1},\ldots,x_n)|\le c_i.
\end{align}</math>
}}

=== Second proof (counting)===
We now introduce an elegant proof due to Lubell. The proof uses a counting argument, and tells more information than just the size of the set system.

{{Prooftitle|Proof of Sperner's theorem | (Lubell 1966)
Let <math>\pi</math> be a permutation of <math>X</math>. We say that an <math>S\subseteq X</math> '''prefixes''' <math>\pi</math>, if <math>S=\{\pi_1,\pi_2,\ldots, \pi_{|S|}\}</math>, that is, <math>S</math> is precisely the set of the first <math>|S|</math> elements in the permutation <math>\pi</math>.


Fix an <math>S\subseteq X</math>. It is easy to see that the number of permutations <math>\pi</math> of <math>X</math> prefixed by <math>S</math> is <math>|S|!(n-|S|)!</math>.  Also, since <math>\mathcal{F}</math> is an antichain, no permutation <math>\pi</math> of <math>X</math> can be prefixed by more than one member of <math>\mathcal{F}</math>, otherwise one of the member sets must contain the other, which contradicts that <math>\mathcal{F}</math> is an antichain. Thus, the number of permutations <math>\pi</math> prefixed by some <math>S\in\mathcal{F}</math> is
:<math>\sum_{S\in\mathcal{F}}|S|!(n-|S|)!</math>,
which cannot be larger than the total number of permutations, <math>n!</math>, therefore,
:<math>\sum_{S\in\mathcal{F}}|S|!(n-|S|)!\le n!</math>.
Dividing both sides by <math>n!</math>, we have
:<math>\sum_{S\in\mathcal{F}}\frac{1}{{n\choose |S|}}=\sum_{S\in\mathcal{F}}\frac{|S|!(n-|S|)!}{n!}\le 1</math>,
where <math>{n\choose |S|}\le {n\choose \lfloor n/2\rfloor}</math>, so
:<math>\sum_{S\in\mathcal{F}}\frac{1}{{n\choose |S|}}\ge \frac{|\mathcal{F}|}{{n\choose \lfloor n/2\rfloor}}</math>.
Combining this with the above inequality, we prove Sperner's theorem.
}}

The following "method of bounded differences" can be developed for functions satisfying the Lipschitz condition. Unfortunately, in order to imply the condition of averaged bounded differences from the Lipschitz condition, we have to restrict the method to independent random variables.
{{Theorem
|Corollary (Method of bounded differences)|
:Let <math>\boldsymbol{X}=(X_1,\ldots, X_n)</math> be <math>n</math> '''independent''' random variables and let <math>f</math> be a function satisfying the Lipschitz condition with constants <math>c_i</math>, <math>1\le i\le n</math>. Then
::<math>\begin{align}
\Pr\left[|f(\boldsymbol{X})-\mathbf{E}[f(\boldsymbol{X})]|\ge t\right]\le 2\exp\left(-\frac{t^2}{2\sum_{i=1}^nc_i^2}\right).
\end{align}</math>
}}


{{Proof| For convenience, we denote that <math>\boldsymbol{X}_{[i,j]}=(X_i,X_{i+1},\ldots, X_j)</math> for any <math>1\le i\le j\le n</math>.

=== The LYM inequality ===
Lubell's proof establishes the following inequality:
:<math>\sum_{S\in\mathcal{F}}\frac{1}{{n\choose |S|}}\le 1</math>
which is actually stronger than Sperner's original statement that <math>|\mathcal{F}|\le{n\choose \lfloor n/2\rfloor}</math>.


This inequality was discovered independently by Lubell, Yamamoto, Meschalkin, and Bollobás, and is called the LYM inequality today.
We first show that the Lipschitz condition with constants <math>c_i</math>, <math>1\le i\le n</math>, implies another condition called the averaged Lipschitz condition (ALC): for any <math>a_i,b_i</math>, <math>1\le i\le n</math>,
:<math>
\left|\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a_i\right]-\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=b_i\right]\right|\le c_i.
</math>
And this condition implies the averaged bounded difference condition: for all <math>1\le i\le n</math>,
::<math>
\left|\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}\right]-\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]}\right]\right|\le c_i.
</math>
Then by applying the method of averaged bounded differences, the corollary can be proved.


For any <math>a</math>, by the law of total expectation,
:<math>
\begin{align}
&\quad\, \mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a\right]\\
&=\sum_{a_{i+1},\ldots,a_n}\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a, \boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right]\cdot\Pr\left[\boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\mid \boldsymbol{X}_{[1,i-1]},X_i=a\right]\\
&=\sum_{a_{i+1},\ldots,a_n}\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a, \boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right]\cdot\Pr\left[\boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right] \qquad (\mbox{independence})\\
&= \sum_{a_{i+1},\ldots,a_n} f(\boldsymbol{X}_{[1,i-1]},a,\boldsymbol{a}_{[i+1,n]})\cdot\Pr\left[\boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right].
\end{align}
</math>

{{Theorem|Theorem (Lubell, Yamamoto 1954; Meschalkin 1963)|
:Let <math>\mathcal{F}\subseteq 2^X</math> where <math>|X|=n</math>. If <math>\mathcal{F}</math> is an antichain, then
::<math>\sum_{S\in\mathcal{F}}\frac{1}{{n\choose |S|}}\le 1</math>.
}}


Lubell's counting argument proves the LYM inequality, which implies Sperner's theorem. Here we give another proof of the LYM inequality by the probabilistic method, due to Noga Alon.

{{Prooftitle|Third proof (the probabilistic method)| (Due to Alon.)
Let <math>\pi</math> be a uniformly random permutation of <math>X</math>. Define a random maximal chain by
:<math>\mathcal{C}_\pi=\{\{\pi_i\mid 1\le i\le k\}\mid 0\le k\le n\}</math>.
For any <math>S\in\mathcal{F}</math>, let <math>X_S</math> be the 0-1 random variable which indicates whether <math>S\in\mathcal{C}_\pi</math>, that is
:<math>
X_S=\begin{cases}
1 & \mbox{if }S\in\mathcal{C}_\pi,\\
0 & \mbox{otherwise.}
\end{cases}
</math>
Note that for a uniformly random <math>\pi</math>, <math>\mathcal{C}_\pi</math> has exactly one member set of size <math>|S|</math>, uniformly distributed over <math>{X\choose |S|}</math>, thus
:<math>\mathbf{E}[X_S]=\Pr[S\in\mathcal{C}_\pi]=\frac{1}{{n\choose |S|}}</math>.
Let <math>X=\sum_{S\in\mathcal{F}}X_S</math>. Note that <math>X=|\mathcal{F}\cap\mathcal{C}_\pi|</math>. By the linearity of expectation,
:<math>\mathbf{E}[X]=\sum_{S\in\mathcal{F}}\mathbf{E}[X_S]=\sum_{S\in\mathcal{F}}\frac{1}{{n\choose |S|}}</math>.
On the other hand, since <math>\mathcal{F}</math> is an antichain, it can never intersect a chain at more than one element, thus we always have <math>X=|\mathcal{F}\cap\mathcal{C}_\pi|\le 1</math>. Therefore,
:<math>\sum_{S\in\mathcal{F}}\frac{1}{{n\choose |S|}}\le \mathbf{E}[X] \le 1</math>.
}}

Taking <math>a=a_i</math> and <math>a=b_i</math> and taking the difference, we have
:<math>
\begin{align}
&\quad\, \left|\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a_i\right]-\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=b_i\right]\right|\\
&=\left|\sum_{a_{i+1},\ldots,a_n}\left(f(\boldsymbol{X}_{[1,i-1]},a_i,\boldsymbol{a}_{[i+1,n]})-f(\boldsymbol{X}_{[1,i-1]},b_i,\boldsymbol{a}_{[i+1,n]})\right)\Pr\left[\boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right]\right|\\
&\le \sum_{a_{i+1},\ldots,a_n}\left|f(\boldsymbol{X}_{[1,i-1]},a_i,\boldsymbol{a}_{[i+1,n]})-f(\boldsymbol{X}_{[1,i-1]},b_i,\boldsymbol{a}_{[i+1,n]})\right|\Pr\left[\boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right]\\
&\le \sum_{a_{i+1},\ldots,a_n}c_i\Pr\left[\boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right] \qquad (\mbox{Lipschitz condition})\\
&=c_i.
\end{align}
</math>


The Sperner's theorem is an immediate consequence of the LYM inequality.
Thus, the Lipschitz condition is transformed to the ALC. We then deduce the averaged bounded difference condition from ALC.


{{Theorem|Proposition|
By the law of total expectation,
:<math>\sum_{S\in\mathcal{F}}\frac{1}{{n\choose |S|}}\le 1</math> implies that <math>|\mathcal{F}|\le{n\choose \lfloor n/2\rfloor}</math>.
:<math>
}}
\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]}\right]=\sum_{a}\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a\right]\cdot\Pr[X_i=a\mid \boldsymbol{X}_{[1,i-1]}].
{{Proof|
</math>
It holds that <math>{n\choose k}\le {n\choose \lfloor n/2\rfloor}</math> for any <math>k</math>. Thus,
:<math>1\ge \sum_{S\in\mathcal{F}}\frac{1}{{n\choose |S|}}\ge \frac{|\mathcal{F}|}{{n\choose \lfloor n/2\rfloor}}</math>,
which implies that <math>|\mathcal{F}|\le {n\choose \lfloor n/2\rfloor}</math>.
}}


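To see Alon's argument in action, here is a minimal Python sketch (the ground set size and the particular antichain below are assumed toy choices, not part of the lecture): it samples random maximal chains and checks that the empirical mean of <math>|\mathcal{F}\cap\mathcal{C}_\pi|</math> agrees with <math>\sum_{S\in\mathcal{F}}1/{n\choose |S|}</math> and stays at most 1.

import random
from math import comb

n = 6
X = list(range(n))
# An assumed toy antichain on X: no member contains another.
F = [frozenset(s) for s in ({0, 1}, {2, 3}, {1, 4, 5}, {0, 3, 5})]

lym_sum = sum(1 / comb(n, len(S)) for S in F)   # sum over S of 1/binom(n,|S|)

def random_maximal_chain(ground):
    """Prefixes of a uniformly random permutation: one set of each size 0..n."""
    pi = ground[:]
    random.shuffle(pi)
    return {frozenset(pi[:k]) for k in range(len(ground) + 1)}

trials = 200000
hits = sum(len(random_maximal_chain(X) & set(F)) for _ in range(trials))
print(lym_sum, hits / trials)   # the two numbers agree up to sampling error; both are at most 1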
== Sauer's lemma and VC-dimension ==

=== Shattering and the VC-dimension ===
{{Theorem|Definition (shatter)|
:Let <math>\mathcal{F}\subseteq 2^X</math> be a set family and let <math>R\subseteq X</math> be a subset. The '''trace''' of <math>\mathcal{F}</math> on <math>R</math>, denoted <math>\mathcal{F}|_R</math>, is defined as
::<math>\mathcal{F}|_R=\{S\cap R\mid S\in\mathcal{F}\}</math>.
:We say that <math>\mathcal{F}</math> '''shatters''' <math>R</math> if <math>\mathcal{F}|_R=2^R</math>, i.e. for all <math>T\subseteq R</math>, there exists an <math>S\in\mathcal{F}</math> such that <math>T=S\cap R</math>.
}}

The [http://en.wikipedia.org/wiki/VC_dimension '''VC dimension'''] is defined by the power of a family to shatter a set.

{{Theorem|Definition (VC-dimension)|
:The '''Vapnik–Chervonenkis dimension''' ('''VC-dimension''') of a set family <math>\mathcal{F}\subseteq 2^X</math>, denoted <math>\text{VC-dim}(\mathcal{F})</math>, is the size of the largest <math>R\subseteq X</math> shattered by <math>\mathcal{F}</math>.
}}

It is a core concept in [http://en.wikipedia.org/wiki/Computational_learning_theory computational learning theory].

Each subset <math>S\subseteq X</math> can be equivalently represented by its characteristic function <math>f_S:X\rightarrow\{0,1\}</math>, such that for each <math>x\in X</math>,
:<math>f_S(x)=\begin{cases}
1 & x\in S\\
0 & x\not\in S.
\end{cases}</math>

Then a set family <math>\mathcal{F}\subseteq2^X</math> corresponds to a collection of Boolean functions <math>\{f_S\mid S\in\mathcal{F}\}</math>, which is a subset of all Boolean functions of the form <math>f:X\rightarrow\{0,1\}</math>. We ask on how large a subdomain <math>Y\subseteq X</math> the family <math>\mathcal{F}</math> realizes all the <math>2^{|Y|}</math> mappings <math>Y\rightarrow\{0,1\}</math>. The largest size of such a subdomain is the VC-dimension. It measures how complicated a collection of Boolean functions (or equivalently a set family) is.

=== Sauer's Lemma ===
The definition of the VC-dimension involves enumerating all subsets, and thus is difficult to analyze in general. The following famous result states a very simple sufficient condition for lower bounding the VC-dimension, regarding only the size of the family. The lemma is due to Sauer, and independently to Shelah and Perles. A slightly weaker version was found by Vapnik and Chervonenkis, who used the framework to develop a theory of classification.

{{Theorem|Sauer's Lemma (Sauer; Shelah-Perles; Vapnik-Chervonenkis)|
:Let <math>\mathcal{F}\subseteq 2^X</math> where <math>|X|=n</math>. If <math>|\mathcal{F}|>\sum_{0\le i<k}{n\choose i}</math>, then there exists an <math>R\in{X\choose k}</math> such that <math>\mathcal{F}</math> shatters <math>R</math>.
}}
In other words, for any set family <math>\mathcal{F}</math> with <math>|\mathcal{F}|>\sum_{0\le i<k}{n\choose i}</math>, its VC-dimension <math>\text{VC-dim}(\mathcal{F})\ge k</math>.

=== Hereditary family ===
We note that Sauer's lemma is especially easy to prove for a special type of set families, called the '''hereditary''' families.
{{Theorem|Definition (hereditary family)|
:A set system <math>\mathcal{F}\subseteq 2^X</math> is said to be '''hereditary''' (also called an '''ideal''' or an '''abstract simplicial complex'''), if
::<math>S\subseteq T\in\mathcal{F}</math> implies <math>S\in\mathcal{F}</math>.
}}
In other words, for a hereditary family <math>\mathcal{F}</math>, if <math>R\in\mathcal{F}</math>, then all subsets of <math>R</math> are also in <math>\mathcal{F}</math>. An immediate consequence is the following proposition.
{{Theorem|Proposition|
:Let <math>\mathcal{F}</math> be a hereditary family. If <math>R\in\mathcal{F}</math> then <math>\mathcal{F}</math> shatters <math>R</math>.
}}

Therefore, it is very easy to prove Sauer's lemma for hereditary families:
{{Theorem|Lemma|
:For <math>\mathcal{F}\subseteq 2^X</math>, <math>|X|=n</math>, if <math>\mathcal{F}</math> is hereditary and <math>|\mathcal{F}|>\sum_{0\le i<k}{n\choose i}</math> then there exists an <math>R\in{X\choose k}</math> such that <math>\mathcal{F}</math> shatters <math>R</math>.
}}
{{Proof|
Since <math>\mathcal{F}</math> is hereditary, we only need to show that there exists an <math>R\in\mathcal{F}</math> of size <math>|R|\ge k</math>, which must be true, because if all members of <math>\mathcal{F}</math> were of sizes <math><k</math>, then <math>|\mathcal{F}|\le\left|\bigcup_{0\le i<k}{X\choose i}\right|=\sum_{0\le i<k}{n\choose i}</math>, a contradiction.
}}

To prove Sauer's lemma for general, non-hereditary families, we need a way to reduce arbitrary families to hereditary ones. Here we apply the shifting technique to achieve this.

=== Down-shifts ===
Note that we work on <math>\mathcal{F}\subseteq2^X</math>, instead of <math>\mathcal{F}\subseteq{X\choose k}</math> as in the Erdős–Ko–Rado theorem, so we do not need to preserve the size of member sets. Instead, we need to reduce an arbitrary family to a hereditary one, thus we use a shift operator which replaces a member set by a subset of it.
{{Theorem|Definition (down-shifts)|
: Assume <math>\mathcal{F}\subseteq 2^{[n]}</math>, and <math>i\in[n]</math>. Define the '''down-shift''' operator <math>S_{i}</math> as follows:
:* for each <math>T\in\mathcal{F}</math>, let
::<math>S_{i}(T)=
\begin{cases}
T\setminus\{i\} & \mbox{if }i\in T \mbox{ and }T\setminus\{i\} \not\in\mathcal{F},\\
T & \mbox{otherwise;}
\end{cases}</math>
:* let <math>S_{i}(\mathcal{F})=\{S_{i}(T)\mid T\in \mathcal{F}\}</math>.
}}

Repeatedly applying <math>S_i</math> to <math>\mathcal{F}</math> for all <math>i\in[n]</math>, due to finiteness, eventually <math>\mathcal{F}</math> is not changed by any <math>S_i</math>. We call such a family '''down-shifted'''. A family <math>\mathcal{F}</math> is down-shifted if and only if <math>S_i(\mathcal{F})=\mathcal{F}</math> for all <math>i\in[n]</math>. It is then easy to see that a down-shifted <math>\mathcal{F}</math> must be hereditary.

{{Theorem|Theorem|
:If <math>\mathcal{F}\subseteq2^X</math> is down-shifted, then <math>\mathcal{F}</math> is hereditary.
}}

In order to use down-shifts to prove Sauer's lemma, we need to make sure that a down-shift does not decrease <math>|\mathcal{F}|</math> and does not increase the VC-dimension <math>\text{VC-dim}(\mathcal{F})</math>.

{{Theorem|Proposition|
# <math>|S_{i}(\mathcal{F})|=|\mathcal{F}|</math>;
# <math>|S_i(\mathcal{F})|_R|\le |\mathcal{F}|_R|</math>, thus if <math>S_{i}(\mathcal{F})</math> shatters an <math>R</math>, so does <math>\mathcal{F}</math>.
}}
(1) is immediate. (2) is proved by case analysis. We omit the proof.

;Proof of Sauer's lemma
Now we can prove Sauer's lemma for arbitrary <math>\mathcal{F}\subseteq 2^{[n]}</math>.

For any <math>\mathcal{F}\subseteq 2^{[n]}</math>, repeatedly apply <math>S_i</math> for all <math>i\in[n]</math> till the family is down-shifted; denote the resulting family by <math>\mathcal{F}'</math>. We have proved that <math>|\mathcal{F}'|=|\mathcal{F}|>\sum_{0\le i<k}{n\choose i}</math> and that <math>\mathcal{F}'</math> is hereditary, thus as argued before, there exists an <math>R</math> of size <math>k</math> shattered by <math>\mathcal{F}'</math>. By the above proposition, <math>2^{|R|}=|\mathcal{F}'|_R|\le |\mathcal{F}|_R|</math>, thus <math>\mathcal{F}</math> also shatters <math>R</math>. The lemma is proved.

<math>\square</math>
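On small ground sets the lemma can be checked by brute force. The following Python sketch (the family size and random seed are arbitrary toy choices) computes the VC-dimension of a random family <math>\mathcal{F}\subseteq 2^{[6]}</math> and verifies the equivalent packaging of Sauer's lemma, <math>|\mathcal{F}|\le\sum_{0\le i\le d}{n\choose i}</math> where <math>d=\text{VC-dim}(\mathcal{F})</math>:

import random
from itertools import combinations
from math import comb

def shatters(F, R):
    """Does the family F (a collection of frozensets) shatter the set R?"""
    return len({S & R for S in F}) == 2 ** len(R)

def vc_dim(F, n):
    d = 0
    for k in range(1, n + 1):
        if any(shatters(F, frozenset(R)) for R in combinations(range(n), k)):
            d = k
    return d

n = 6
subsets = [frozenset(s) for k in range(n + 1) for s in combinations(range(n), k)]
random.seed(0)
F = random.sample(subsets, 25)        # a random family of 25 subsets of [6]

d = vc_dim(F, n)
bound = sum(comb(n, i) for i in range(d + 1))
print(d, len(F), bound)
assert len(F) <= bound                # the Sauer-Shelah bound holds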

Conditional Expectations

The conditional expectation of a random variable [math]\displaystyle{ Y }[/math] with respect to an event [math]\displaystyle{ \mathcal{E} }[/math] is defined by

[math]\displaystyle{ \mathbf{E}[Y\mid \mathcal{E}]=\sum_{y}y\Pr[Y=y\mid\mathcal{E}]. }[/math]

In particular, if the event [math]\displaystyle{ \mathcal{E} }[/math] is [math]\displaystyle{ X=a }[/math], the conditional expectation

[math]\displaystyle{ \mathbf{E}[Y\mid X=a] }[/math]

defines a function

[math]\displaystyle{ f(a)=\mathbf{E}[Y\mid X=a]. }[/math]

Thus, [math]\displaystyle{ \mathbf{E}[Y\mid X] }[/math] can be regarded as a random variable [math]\displaystyle{ f(X) }[/math].

Example
Suppose that we uniformly sample a human from all human beings. Let [math]\displaystyle{ Y }[/math] be his/her height, and let [math]\displaystyle{ X }[/math] be the country where he/she is from. For any country [math]\displaystyle{ a }[/math], [math]\displaystyle{ \mathbf{E}[Y\mid X=a] }[/math] gives the average height of that country. And [math]\displaystyle{ \mathbf{E}[Y\mid X] }[/math] is the random variable which can be defined in either of the following ways:
  • We choose a human uniformly at random from all human beings, and [math]\displaystyle{ \mathbf{E}[Y\mid X] }[/math] is the average height of the country where he/she comes from.
  • We choose a country at random with a probability proportional to its population, and [math]\displaystyle{ \mathbf{E}[Y\mid X] }[/math] is the average height of the chosen country.

The following proposition states some fundamental facts about conditional expectation.

Proposition (fundamental facts about conditional expectation)
Let [math]\displaystyle{ X,Y }[/math] and [math]\displaystyle{ Z }[/math] be arbitrary random variables. Let [math]\displaystyle{ f }[/math] and [math]\displaystyle{ g }[/math] be arbitrary functions. Then
  1. [math]\displaystyle{ \mathbf{E}[X]=\mathbf{E}[\mathbf{E}[X\mid Y]] }[/math].
  2. [math]\displaystyle{ \mathbf{E}[X\mid Z]=\mathbf{E}[\mathbf{E}[X\mid Y,Z]\mid Z] }[/math].
  3. [math]\displaystyle{ \mathbf{E}[\mathbf{E}[f(X)g(X,Y)\mid X]]=\mathbf{E}[f(X)\cdot \mathbf{E}[g(X,Y)\mid X]] }[/math].

The proposition can be formally verified by computing these expectations. Although these equations look formal, their intuitive interpretations are very clear.

The first equation:

[math]\displaystyle{ \mathbf{E}[X]=\mathbf{E}[\mathbf{E}[X\mid Y]] }[/math]

says that there are two ways to compute an average. Suppose again that [math]\displaystyle{ X }[/math] is the height of a uniform random human and [math]\displaystyle{ Y }[/math] is the country where he/she is from. There are two ways to compute the average human height: one is to directly average over the heights of all humans; the other is to first compute the average height for each country, and then average these country averages weighted by the populations of the countries.
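As a concrete toy illustration (all numbers below are made up), both ways of averaging give the same result:

# Heights grouped by country (toy data).
heights = {"A": [150, 160, 170], "B": [180, 190]}

# Direct average over all individuals.
all_h = [h for hs in heights.values() for h in hs]
direct = sum(all_h) / len(all_h)

# Country-by-country: average within each country, then weight by population.
total = len(all_h)
weighted = sum(len(hs) / total * (sum(hs) / len(hs)) for hs in heights.values())

print(direct, weighted)   # both equal 170.0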

The second equation:

[math]\displaystyle{ \mathbf{E}[X\mid Z]=\mathbf{E}[\mathbf{E}[X\mid Y,Z]\mid Z] }[/math]

is the same as the first one, restricted to a particular subspace. As in the previous example, in addition to the height [math]\displaystyle{ X }[/math] and the country [math]\displaystyle{ Y }[/math], let [math]\displaystyle{ Z }[/math] be the gender of the individual. Thus, [math]\displaystyle{ \mathbf{E}[X\mid Z] }[/math] is the average height of a human being of a given gender. Again, this can be computed either directly or on a country-by-country basis.

The third equation:

[math]\displaystyle{ \mathbf{E}[\mathbf{E}[f(X)g(X,Y)\mid X]]=\mathbf{E}[f(X)\cdot \mathbf{E}[g(X,Y)\mid X]] }[/math].

looks obscure at first glance, especially considering that [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math] are not necessarily independent. Nevertheless, the equation follows from the simple fact that, conditioning on any [math]\displaystyle{ X=a }[/math], the function value [math]\displaystyle{ f(X)=f(a) }[/math] becomes a constant, and thus can be safely taken outside the expectation due to the linearity of expectation. For any value [math]\displaystyle{ X=a }[/math],

[math]\displaystyle{ \mathbf{E}[f(X)g(X,Y)\mid X=a]=\mathbf{E}[f(a)g(X,Y)\mid X=a]=f(a)\cdot \mathbf{E}[g(X,Y)\mid X=a]. }[/math]

The proposition holds in more general cases, e.g. when [math]\displaystyle{ X, Y }[/math] and [math]\displaystyle{ Z }[/math] are themselves sequences of random variables.

Martingales

"Martingale" originally refers to a betting strategy in which the gambler doubles his bet after every loss. Assuming unlimited wealth, this strategy is guaranteed to eventually have a positive net profit. For example, starting from an initial stake 1, after [math]\displaystyle{ n }[/math] losses, if the [math]\displaystyle{ (n+1) }[/math]th bet wins, then it gives a net profit of

[math]\displaystyle{ 2^n-\sum_{i=1}^{n}2^{i-1}=1, }[/math]

which is a positive number.

However, the assumption of unlimited wealth is unrealistic. With limited wealth and geometrically increasing bets, the gambler is very likely to end up bankrupt. You should never try this strategy in real life.

Suppose that the gambler is allowed to use any strategy. His stake on the next bet is decided based on the results of all the bets so far. This gives us a highly dependent sequence of random variables [math]\displaystyle{ X_0,X_1,\ldots }[/math], where [math]\displaystyle{ X_0 }[/math] is his initial capital, and [math]\displaystyle{ X_i }[/math] represents his capital after the [math]\displaystyle{ i }[/math]th bet. Depending on the betting strategy, [math]\displaystyle{ X_i }[/math] can be arbitrarily dependent on [math]\displaystyle{ X_0,\ldots,X_{i-1} }[/math]. However, as long as the game is fair, namely, winning and losing with equal chances, then conditioning on the past variables [math]\displaystyle{ X_0,\ldots,X_{i-1} }[/math], we expect no change in the value of the present variable [math]\displaystyle{ X_{i} }[/math] on average. A sequence of random variables satisfying this property is called a martingale.

Definition (martingale)
A sequence of random variables [math]\displaystyle{ X_0,X_1,\ldots }[/math] is a martingale if for all [math]\displaystyle{ i\gt 0 }[/math],
[math]\displaystyle{ \begin{align} \mathbf{E}[X_{i}\mid X_0,\ldots,X_{i-1}]=X_{i-1}. \end{align} }[/math]

Examples

coin flips
A fair coin is flipped for a number of times. Let [math]\displaystyle{ Z_j\in\{-1,1\} }[/math] denote the outcome of the [math]\displaystyle{ j }[/math]th flip. Let
[math]\displaystyle{ X_0=0\quad \mbox{ and } \quad X_i=\sum_{j\le i}Z_j }[/math].
The random variables [math]\displaystyle{ X_0,X_1,\ldots }[/math] define a martingale.
Proof
We first observe that [math]\displaystyle{ \mathbf{E}[X_i\mid X_0,\ldots,X_{i-1}] = \mathbf{E}[X_i\mid X_{i-1}] }[/math], which intuitively says that the distribution of the next partial sum depends only on the current partial sum. This property is also called the Markov property in stochastic processes.
[math]\displaystyle{ \begin{align} \mathbf{E}[X_i\mid X_0,\ldots,X_{i-1}] &= \mathbf{E}[X_i\mid X_{i-1}]\\ &= \mathbf{E}[X_{i-1}+Z_{i}\mid X_{i-1}]\\ &= \mathbf{E}[X_{i-1}\mid X_{i-1}]+\mathbf{E}[Z_{i}\mid X_{i-1}]\\ &= X_{i-1}+\mathbf{E}[Z_{i}\mid X_{i-1}]\\ &= X_{i-1}+\mathbf{E}[Z_{i}] &\quad (\mbox{independence of coin flips})\\ &= X_{i-1} \end{align} }[/math]
Polya's urn scheme
Consider an urn (just a container) that initially contains [math]\displaystyle{ b }[/math] black balls and [math]\displaystyle{ w }[/math] white balls. At each step, we uniformly select a ball from the urn, and replace the ball with [math]\displaystyle{ c }[/math] balls of the same color. Let [math]\displaystyle{ X_0=b/(b+w) }[/math], and [math]\displaystyle{ X_i }[/math] be the fraction of black balls in the urn after the [math]\displaystyle{ i }[/math]th step. The sequence [math]\displaystyle{ X_0,X_1,\ldots }[/math] is a martingale (a short sanity check in code is given after these examples).
edge exposure in a random graph
Consider a random graph [math]\displaystyle{ G }[/math] generated as follows. Let [math]\displaystyle{ [n] }[/math] be the set of vertices, and let [math]\displaystyle{ m={n\choose 2} }[/math] be the number of all possible edges. For convenience, we enumerate these potential edges by [math]\displaystyle{ e_1,\ldots, e_m }[/math]. For each potential edge [math]\displaystyle{ e_j }[/math], we independently flip a fair coin to decide whether the edge [math]\displaystyle{ e_j }[/math] appears in [math]\displaystyle{ G }[/math]. Let [math]\displaystyle{ I_j }[/math] be the random variable that indicates whether [math]\displaystyle{ e_j\in G }[/math]. We are interested in some graph-theoretical parameter, say the chromatic number, of the random graph [math]\displaystyle{ G }[/math]. Let [math]\displaystyle{ \chi(G) }[/math] be the chromatic number of [math]\displaystyle{ G }[/math]. Let [math]\displaystyle{ X_0=\mathbf{E}[\chi(G)] }[/math], and for each [math]\displaystyle{ i\ge 1 }[/math], let [math]\displaystyle{ X_i=\mathbf{E}[\chi(G)\mid I_1,\ldots,I_{i}] }[/math], namely, the expected chromatic number of the random graph after fixing the first [math]\displaystyle{ i }[/math] edges. This process is called edge exposure of a random graph, as we are "exposing" the potential edges one by one.
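For the Polya's urn example above, the martingale property can be checked exactly with a few lines of Python (the initial urn contents and the parameter c are assumed toy values): conditioned on the current urn, the expected fraction of black balls after one more step equals the current fraction.

from fractions import Fraction

def one_step_mean(black, white, c):
    """Exact E[fraction of black balls after one step | current urn contents]."""
    n = black + white
    frac_if_black = Fraction(black + c - 1, n + c - 1)  # drew black: it is replaced by c black balls
    frac_if_white = Fraction(black, n + c - 1)          # drew white: black count unchanged
    return Fraction(black, n) * frac_if_black + Fraction(white, n) * frac_if_white

black, white, c = 3, 5, 2
print(one_step_mean(black, white, c) == Fraction(black, black + white))   # True: the martingale property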

It is nontrivial to formally verify that the edge exposure sequence for a random graph is a martingale. However, we will later see that this construction can be put into a more general context.

Generalizations

The martingale can be generalized to be with respect to another sequence of random variables.

Definition (martingale, general version)
A sequence of random variables [math]\displaystyle{ Y_0,Y_1,\ldots }[/math] is a martingale with respect to the sequence [math]\displaystyle{ X_0,X_1,\ldots }[/math] if, for all [math]\displaystyle{ i\ge 0 }[/math], the following conditions hold:
  • [math]\displaystyle{ Y_i }[/math] is a function of [math]\displaystyle{ X_0,X_1,\ldots,X_i }[/math];
  • [math]\displaystyle{ \begin{align} \mathbf{E}[Y_{i+1}\mid X_0,\ldots,X_{i}]=Y_{i}. \end{align} }[/math]

Therefore, a sequence [math]\displaystyle{ X_0,X_1,\ldots }[/math] is a martingale if it is a martingale with respect to itself.

The purpose of this generalization is that we are usually more interested in a function of a sequence of random variables, rather than the sequence itself.

Azuma's Inequality

We introduce a martingale tail inequality, called Azuma's inequality.

Azuma's Inequality
Let [math]\displaystyle{ X_0,X_1,\ldots }[/math] be a martingale such that, for all [math]\displaystyle{ k\ge 1 }[/math],
[math]\displaystyle{ |X_{k}-X_{k-1}|\le c_k, }[/math]
Then
[math]\displaystyle{ \begin{align} \Pr\left[|X_n-X_0|\ge t\right]\le 2\exp\left(-\frac{t^2}{2\sum_{k=1}^nc_k^2}\right). \end{align} }[/math]

Before formally proving this theorem, some comments are in order. First, unlike the Chernoff bounds, there is no assumption of independence. This shows the power of martingale inequalities.

Second, the condition that

[math]\displaystyle{ |X_{k}-X_{k-1}|\le c_k }[/math]

is central to the proof. This condition is sometimes called the bounded difference condition. If we think of the martingale [math]\displaystyle{ X_0,X_1,\ldots }[/math] as a process evolving through time, where [math]\displaystyle{ X_i }[/math] gives some measurement at time [math]\displaystyle{ i }[/math], the bounded difference condition states that the process does not make big jumps. Azuma's inequality says that if so, then it is unlikely that the process wanders far from its starting point.

A special case is when the differences are bounded by a constant. The following corollary is directly implied by Azuma's inequality.

Corollary
Let [math]\displaystyle{ X_0,X_1,\ldots }[/math] be a martingale such that, for all [math]\displaystyle{ k\ge 1 }[/math],
[math]\displaystyle{ |X_{k}-X_{k-1}|\le c, }[/math]
Then
[math]\displaystyle{ \begin{align} \Pr\left[|X_n-X_0|\ge ct\sqrt{n}\right]\le 2 e^{-t^2/2}. \end{align} }[/math]

This corollary states that for any martingale sequence whose differences are bounded by a constant, the probability that it deviates [math]\displaystyle{ \omega(\sqrt{n}) }[/math] away from the starting point after [math]\displaystyle{ n }[/math] steps is bounded by [math]\displaystyle{ o(1) }[/math].
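The corollary is easy to probe numerically. Here is a minimal Python sketch (the walk length, thresholds and number of trials are arbitrary choices) that compares the empirical tail of a fair ±1 random walk, a martingale whose differences are bounded by 1, against the bound 2e^{-t^2/2}:

import random
from math import exp, sqrt

n, trials = 400, 20000
random.seed(1)

def walk(n):
    return sum(random.choice((-1, 1)) for _ in range(n))   # X_n - X_0 with |X_k - X_{k-1}| = 1

ends = [walk(n) for _ in range(trials)]
for t in (1.0, 2.0, 3.0):
    empirical = sum(abs(z) >= t * sqrt(n) for z in ends) / trials
    print(t, empirical, 2 * exp(-t * t / 2))   # empirical tail vs. the Azuma bound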

Generalization

Azuma's inequality can be generalized to a martingale with respect to another sequence.

Azuma's Inequality (general version)
Let [math]\displaystyle{ Y_0,Y_1,\ldots }[/math] be a martingale with respect to the sequence [math]\displaystyle{ X_0,X_1,\ldots }[/math] such that, for all [math]\displaystyle{ k\ge 1 }[/math],
[math]\displaystyle{ |Y_{k}-Y_{k-1}|\le c_k, }[/math]
Then
[math]\displaystyle{ \begin{align} \Pr\left[|Y_n-Y_0|\ge t\right]\le 2\exp\left(-\frac{t^2}{2\sum_{k=1}^nc_k^2}\right). \end{align} }[/math]

The Proof of Azuma's Inequality

We will only give the formal proof of the non-generalized version. The proof of the general version is almost identical, with the only difference that we work with the random sequence [math]\displaystyle{ Y_i }[/math] conditioned on the sequence [math]\displaystyle{ X_i }[/math].

The proof of Azuma's Inequality uses several ideas from the proof of the Chernoff bounds. We first observe that the total deviation of the martingale sequence can be represented as the sum of the differences at every step. Thus, as with the Chernoff bounds, we are looking for a bound on the deviation of a sum of random variables. The strategy of the proof is almost the same as that of the Chernoff bounds: we first apply Markov's inequality to the moment generating function, then we bound the moment generating function, and at last we optimize its parameter. However, unlike the Chernoff bounds, the martingale differences are no longer independent, so we replace the use of independence in the Chernoff bound by the martingale property. The proof is detailed as follows.

In order to bound the probability of [math]\displaystyle{ |X_n-X_0|\ge t }[/math], we first bound the upper tail [math]\displaystyle{ \Pr[X_n-X_0\ge t] }[/math]. The bound of the lower tail can be symmetrically proved with the [math]\displaystyle{ X_i }[/math] replaced by [math]\displaystyle{ -X_i }[/math].

Represent the deviation as the sum of differences

We define the martingale difference sequence: for [math]\displaystyle{ i\ge 1 }[/math], let

[math]\displaystyle{ Y_i=X_i-X_{i-1}. }[/math]

It holds that

[math]\displaystyle{ \begin{align} \mathbf{E}[Y_i\mid X_0,\ldots,X_{i-1}] &=\mathbf{E}[X_i-X_{i-1}\mid X_0,\ldots,X_{i-1}]\\ &=\mathbf{E}[X_i\mid X_0,\ldots,X_{i-1}]-\mathbf{E}[X_{i-1}\mid X_0,\ldots,X_{i-1}]\\ &=X_{i-1}-X_{i-1}\\ &=0. \end{align} }[/math]

The second-to-last equation is due to the fact that [math]\displaystyle{ X_0,X_1,\ldots }[/math] is a martingale, together with the definition of conditional expectation.

Let [math]\displaystyle{ Z_n }[/math] be the accumulated differences

[math]\displaystyle{ Z_n=\sum_{i=1}^n Y_i. }[/math]

The deviation [math]\displaystyle{ (X_n-X_0) }[/math] can be computed by the accumulated differences:

[math]\displaystyle{ \begin{align} X_n-X_0 &=(X_1-X_{0})+(X_2-X_1)+\cdots+(X_n-X_{n-1})\\ &=\sum_{i=1}^n Y_i\\ &=Z_n. \end{align} }[/math]

We then only need to upper bound the probability of the event [math]\displaystyle{ Z_n\ge t }[/math].

Apply Markov's inequality to the moment generating function

For [math]\displaystyle{ \lambda\gt 0 }[/math], the event [math]\displaystyle{ Z_n\ge t }[/math] is equivalent to the event [math]\displaystyle{ e^{\lambda Z_n}\ge e^{\lambda t} }[/math]. Applying Markov's inequality, we have

[math]\displaystyle{ \begin{align} \Pr\left[Z_n\ge t\right] &=\Pr\left[e^{\lambda Z_n}\ge e^{\lambda t}\right]\\ &\le \frac{\mathbf{E}\left[e^{\lambda Z_n}\right]}{e^{\lambda t}}. \end{align} }[/math]

This is exactly the same as what we did to prove the Chernoff bound. Next, we need to bound the moment generating function [math]\displaystyle{ \mathbf{E}\left[e^{\lambda Z_n}\right] }[/math].

Bound the moment generating functions

The moment generating function

[math]\displaystyle{ \begin{align} \mathbf{E}\left[e^{\lambda Z_n}\right] &=\mathbf{E}\left[\mathbf{E}\left[e^{\lambda Z_n}\mid X_0,\ldots,X_{n-1}\right]\right]\\ &=\mathbf{E}\left[\mathbf{E}\left[e^{\lambda (Z_{n-1}+Y_n)}\mid X_0,\ldots,X_{n-1}\right]\right]\\ &=\mathbf{E}\left[\mathbf{E}\left[e^{\lambda Z_{n-1}}\cdot e^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}\right]\right]\\ &=\mathbf{E}\left[e^{\lambda Z_{n-1}}\cdot\mathbf{E}\left[e^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}\right]\right] \end{align} }[/math]

The first and the last equations are due to the fundamental facts about conditional expectation which we proved in the first section.

We then upper bound [math]\displaystyle{ \mathbf{E}\left[e^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}\right] }[/math] by a constant. To do so, we need the following technical lemma, which is proved using the convexity of [math]\displaystyle{ e^{\lambda Y_n} }[/math].

Lemma
Let [math]\displaystyle{ X }[/math] be a random variable such that [math]\displaystyle{ \mathbf{E}[X]=0 }[/math] and [math]\displaystyle{ |X|\le c }[/math]. Then for [math]\displaystyle{ \lambda\gt 0 }[/math],
[math]\displaystyle{ \mathbf{E}[e^{\lambda X}]\le e^{\lambda^2c^2/2}. }[/math]
Proof.
Observe that for [math]\displaystyle{ \lambda\gt 0 }[/math], the function [math]\displaystyle{ e^{\lambda X} }[/math] of the variable [math]\displaystyle{ X }[/math] is convex in the interval [math]\displaystyle{ [-c,c] }[/math]. We draw a line between the two endpoints [math]\displaystyle{ (-c, e^{-\lambda c}) }[/math] and [math]\displaystyle{ (c, e^{\lambda c}) }[/math]. By convexity, the curve of [math]\displaystyle{ e^{\lambda X} }[/math] lies entirely below this line on [math]\displaystyle{ [-c,c] }[/math]. Thus,
[math]\displaystyle{ \begin{align} e^{\lambda X} &\le \frac{c-X}{2c}e^{-\lambda c}+\frac{c+X}{2c}e^{\lambda c}\\ &=\frac{e^{\lambda c}+e^{-\lambda c}}{2}+\frac{X}{2c}(e^{\lambda c}-e^{-\lambda c}). \end{align} }[/math]

Since [math]\displaystyle{ \mathbf{E}[X]=0 }[/math], we have

[math]\displaystyle{ \begin{align} \mathbf{E}[e^{\lambda X}] &\le \mathbf{E}[\frac{e^{\lambda c}+e^{-\lambda c}}{2}+\frac{X}{2c}(e^{\lambda c}-e^{-\lambda c})]\\ &=\frac{e^{\lambda c}+e^{-\lambda c}}{2}+\frac{e^{\lambda c}-e^{-\lambda c}}{2c}\mathbf{E}[X]\\ &=\frac{e^{\lambda c}+e^{-\lambda c}}{2}. \end{align} }[/math]

By expanding both sides as Taylor series and comparing them term by term (using that [math]\displaystyle{ (2k)!\ge 2^kk! }[/math]), it can be verified that [math]\displaystyle{ \frac{e^{\lambda c}+e^{-\lambda c}}{2}\le e^{\lambda^2c^2/2} }[/math].

[math]\displaystyle{ \square }[/math]
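The last step can also be double-checked mechanically: the coefficient of [math]\displaystyle{ x^{2k} }[/math] in [math]\displaystyle{ \cosh(x)=\frac{e^{x}+e^{-x}}{2} }[/math] is [math]\displaystyle{ 1/(2k)! }[/math], while in [math]\displaystyle{ e^{x^2/2} }[/math] it is [math]\displaystyle{ 1/(2^kk!) }[/math]. A short Python sketch (the ranges below are arbitrary) confirming both the coefficient comparison and the resulting inequality on a grid:

from math import factorial, cosh, exp

# Coefficient-wise: the coefficient of x^(2k) in cosh(x) is 1/(2k)!,
# that in exp(x^2/2) is 1/(2^k k!), and (2k)! >= 2^k k! for every k.
for k in range(20):
    assert factorial(2 * k) >= 2 ** k * factorial(k)

# Direct numerical check of cosh(x) <= exp(x^2/2) on a grid of points.
assert all(cosh(x) <= exp(x * x / 2) for x in (i / 100 for i in range(-2000, 2001)))
print("checks passed")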

Apply the above lemma to the random variable

[math]\displaystyle{ (Y_n \mid X_0,\ldots,X_{n-1}) }[/math]

We have already shown that its expectation [math]\displaystyle{ \mathbf{E}[(Y_n \mid X_0,\ldots,X_{n-1})]=0, }[/math] and by the bounded difference condition of Azuma's inequality, we have [math]\displaystyle{ |Y_n|=|(X_n-X_{n-1})|\le c_n. }[/math] Thus, due to the above lemma, it holds that

[math]\displaystyle{ \mathbf{E}[e^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}]\le e^{\lambda^2c_n^2/2}. }[/math]

Back to our analysis of the expectation [math]\displaystyle{ \mathbf{E}\left[e^{\lambda Z_n}\right] }[/math], we have

[math]\displaystyle{ \begin{align} \mathbf{E}\left[e^{\lambda Z_n}\right] &=\mathbf{E}\left[e^{\lambda Z_{n-1}}\cdot\mathbf{E}\left[e^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}\right]\right]\\ &\le \mathbf{E}\left[e^{\lambda Z_{n-1}}\cdot e^{\lambda^2c_n^2/2}\right]\\ &= e^{\lambda^2c_n^2/2}\cdot\mathbf{E}\left[e^{\lambda Z_{n-1}}\right] . \end{align} }[/math]

Applying the same analysis to [math]\displaystyle{ \mathbf{E}\left[e^{\lambda Z_{n-1}}\right] }[/math] repeatedly, we can solve the above recursion to obtain

[math]\displaystyle{ \begin{align} \mathbf{E}\left[e^{\lambda Z_n}\right] &\le \prod_{k=1}^n e^{\lambda^2c_k^2/2}\\ &= \exp\left(\lambda^2\sum_{k=1}^n c_k^2/2\right). \end{align} }[/math]

Going back to Markov's inequality,

[math]\displaystyle{ \begin{align} \Pr\left[Z_n\ge t\right] &\le \frac{\mathbf{E}\left[e^{\lambda Z_n}\right]}{e^{\lambda t}}\\ &\le \exp\left(\lambda^2\sum_{k=1}^n c_k^2/2-\lambda t\right). \end{align} }[/math]

We then only need to choose a proper [math]\displaystyle{ \lambda\gt 0 }[/math].

Optimization

By choosing [math]\displaystyle{ \lambda=\frac{t}{\sum_{k=1}^n c_k^2} }[/math], we have that

[math]\displaystyle{ \exp\left(\lambda^2\sum_{k=1}^n c_k^2/2-\lambda t\right)=\exp\left(-\frac{t^2}{2\sum_{k=1}^n c_k^2}\right). }[/math]

Thus, the probability

[math]\displaystyle{ \begin{align} \Pr\left[X_n-X_0\ge t\right] &=\Pr\left[Z_n\ge t\right]\\ &\le \exp\left(\lambda^2\sum_{k=1}^n c_k^2/2-\lambda t\right)\\ &= \exp\left(-\frac{t^2}{2\sum_{k=1}^n c_k^2}\right). \end{align} }[/math]

The upper tail of Azuma's inequality is proved. By replacing [math]\displaystyle{ X_i }[/math] by [math]\displaystyle{ -X_i }[/math], the lower tail can be treated just as the upper tail. Applying the union bound, Azuma's inequality is proved.
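The choice of [math]\displaystyle{ \lambda }[/math] in the optimization step can be recovered symbolically. A minimal sympy sketch (the symbol S is shorthand for [math]\displaystyle{ \sum_{k=1}^n c_k^2 }[/math]) minimizes the exponent and confirms the value [math]\displaystyle{ -\frac{t^2}{2\sum_{k=1}^nc_k^2} }[/math]:

import sympy as sp

lam, t, S = sp.symbols('lambda t S', positive=True)   # S stands for sum_k c_k^2
exponent = lam ** 2 * S / 2 - lam * t

lam_star = sp.solve(sp.diff(exponent, lam), lam)[0]
print(lam_star)                                   # t/S, i.e. lambda = t / sum_k c_k^2
print(sp.simplify(exponent.subs(lam, lam_star)))  # -t**2/(2*S)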

The Doob martingales

The following definition describes a very general approach for constructing an important type of martingales.

Definition (The Doob sequence)
The Doob sequence of a function [math]\displaystyle{ f }[/math] with respect to a sequence of random variables [math]\displaystyle{ X_1,\ldots,X_n }[/math] is defined by
[math]\displaystyle{ Y_i=\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_{i}], \quad 0\le i\le n. }[/math]
In particular, [math]\displaystyle{ Y_0=\mathbf{E}[f(X_1,\ldots,X_n)] }[/math] and [math]\displaystyle{ Y_n=f(X_1,\ldots,X_n) }[/math].

The Doob sequence of a function defines a martingale. That is

[math]\displaystyle{ \mathbf{E}[Y_i\mid X_1,\ldots,X_{i-1}]=Y_{i-1}, }[/math]

for any [math]\displaystyle{ 1\le i\le n }[/math].

To prove this claim, we recall the definition that [math]\displaystyle{ Y_i=\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_{i}] }[/math], thus,

[math]\displaystyle{ \begin{align} \mathbf{E}[Y_i\mid X_1,\ldots,X_{i-1}] &=\mathbf{E}[\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_{i}]\mid X_1,\ldots,X_{i-1}]\\ &=\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_{i-1}]\\ &=Y_{i-1}, \end{align} }[/math]

where the second equation is due to the fundamental fact about conditional expectation introduced in the first section.

The Doob martingale describes a very natural procedure to determine a function value of a sequence of random variables. Suppose that we want to predict the value of a function [math]\displaystyle{ f(X_1,\ldots,X_n) }[/math] of random variables [math]\displaystyle{ X_1,\ldots,X_n }[/math]. The Doob sequence [math]\displaystyle{ Y_0,Y_1,\ldots,Y_n }[/math] represents a sequence of refined estimates of the value of [math]\displaystyle{ f(X_1,\ldots,X_n) }[/math], gradually using more information on the values of the random variables [math]\displaystyle{ X_1,\ldots,X_n }[/math]. The first element [math]\displaystyle{ Y_0 }[/math] is just the expectation of [math]\displaystyle{ f(X_1,\ldots,X_n) }[/math]. Element [math]\displaystyle{ Y_i }[/math] is the expected value of [math]\displaystyle{ f(X_1,\ldots,X_n) }[/math] when the values of [math]\displaystyle{ X_1,\ldots,X_{i} }[/math] are known, and [math]\displaystyle{ Y_n=f(X_1,\ldots,X_n) }[/math] when [math]\displaystyle{ f(X_1,\ldots,X_n) }[/math] is fully determined by [math]\displaystyle{ X_1,\ldots,X_n }[/math].

The following two Doob martingales arise in evaluating the parameters of random graphs.

edge exposure martingale
Let [math]\displaystyle{ G }[/math] be a random graph on [math]\displaystyle{ n }[/math] vertices. Let [math]\displaystyle{ f }[/math] be a real-valued function of graphs, such as the chromatic number, the number of triangles, or the size of the largest clique or independent set. Denote [math]\displaystyle{ m={n\choose 2} }[/math]. Fix an arbitrary numbering of the potential edges between the [math]\displaystyle{ n }[/math] vertices, and denote the edges as [math]\displaystyle{ e_1,\ldots,e_m }[/math]. Let
[math]\displaystyle{ X_i=\begin{cases} 1& \mbox{if }e_i\in G,\\ 0& \mbox{otherwise}. \end{cases} }[/math]
Let [math]\displaystyle{ Y_0=\mathbf{E}[f(G)] }[/math] and for [math]\displaystyle{ i=1,\ldots,m }[/math], let [math]\displaystyle{ Y_i=\mathbf{E}[f(G)\mid X_1,\ldots,X_i] }[/math].
The sequence [math]\displaystyle{ Y_0,Y_1,\ldots,Y_m }[/math] gives a Doob martingale that is commonly called the edge exposure martingale (a concrete sketch in code follows these examples).
vertex exposure martingale
Instead of revealing edges one at a time, we could reveal the set of edges connected to a given vertex, one vertex at a time. Suppose that the vertex set is [math]\displaystyle{ [n] }[/math]. Let [math]\displaystyle{ X_i }[/math] be the subgraph of [math]\displaystyle{ G }[/math] induced by the vertex set [math]\displaystyle{ [i] }[/math], i.e. the first [math]\displaystyle{ i }[/math] vertices.
Let [math]\displaystyle{ Y_0=\mathbf{E}[f(G)] }[/math] and for [math]\displaystyle{ i=1,\ldots,n }[/math], let [math]\displaystyle{ Y_i=\mathbf{E}[f(G)\mid X_1,\ldots,X_i] }[/math].
The sequence [math]\displaystyle{ Y_0,Y_1,\ldots,Y_n }[/math] gives a Doob martingale that is commonly called the vertex exposure martingale.
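As a concrete instance of a Doob martingale, the following Python sketch exposes the potential edges of a small [math]\displaystyle{ G(n,p) }[/math] one by one and computes [math]\displaystyle{ Y_i=\mathbf{E}[f(G)\mid X_1,\ldots,X_i] }[/math] exactly for [math]\displaystyle{ f }[/math] being the number of triangles (the values of n, p and the choice of triangle counting are assumptions for illustration): each triple of vertices contributes the product of its decided edge indicators, with a factor p for every still-undecided edge.

import random
from itertools import combinations

n, p = 8, 0.5
random.seed(2)
edges = list(combinations(range(n), 2))            # fixed numbering e_1, ..., e_m
present = {e: random.random() < p for e in edges}  # the realized graph G

def expected_triangles(num_exposed):
    """E[#triangles | indicators of the first num_exposed edges], computed exactly."""
    exposed = set(edges[:num_exposed])
    total = 0.0
    for tri in combinations(range(n), 3):
        prob = 1.0
        for e in combinations(tri, 2):
            prob *= (1.0 if present[e] else 0.0) if e in exposed else p
        total += prob
    return total

Y = [expected_triangles(i) for i in range(len(edges) + 1)]
print(Y[0])    # the unconditional expectation: C(n,3) * p^3
print(Y[-1])   # the realized number of triangles in G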

Chromatic number

The random graph [math]\displaystyle{ G(n,p) }[/math] is the graph on [math]\displaystyle{ n }[/math] vertices [math]\displaystyle{ [n] }[/math], obtained by selecting each pair of vertices to be an edge, randomly and independently, with probability [math]\displaystyle{ p }[/math]. We denote [math]\displaystyle{ G\sim G(n,p) }[/math] if [math]\displaystyle{ G }[/math] is generated in this way.

Theorem [Shamir and Spencer (1987)]
Let [math]\displaystyle{ G\sim G(n,p) }[/math]. Let [math]\displaystyle{ \chi(G) }[/math] be the chromatic number of [math]\displaystyle{ G }[/math]. Then
[math]\displaystyle{ \begin{align} \Pr\left[|\chi(G)-\mathbf{E}[\chi(G)]|\ge t\sqrt{n}\right]\le 2e^{-t^2/2}. \end{align} }[/math]
Proof.
Consider the vertex exposure martingale
[math]\displaystyle{ Y_i=\mathbf{E}[\chi(G)\mid X_1,\ldots,X_i] }[/math]

where each [math]\displaystyle{ X_k }[/math] exposes the induced subgraph of [math]\displaystyle{ G }[/math] on vertex set [math]\displaystyle{ [k] }[/math]. A single vertex can always be given a new color so that the graph is properly colored, thus the bounded difference condition

[math]\displaystyle{ |Y_i-Y_{i-1}|\le 1 }[/math]

is satisfied. Now apply Azuma's inequality to the martingale [math]\displaystyle{ Y_0,Y_1,\ldots,Y_n }[/math] with respect to [math]\displaystyle{ X_1,\ldots,X_n }[/math].

[math]\displaystyle{ \square }[/math]

For [math]\displaystyle{ t=\omega(1) }[/math], the theorem states that the chromatic number of a random graph is tightly concentrated around its mean. The proof gives no clue as to where the mean is. This actually shows how powerful the martingale inequalities are: we can prove that a distribution is concentrated around its expectation without actually knowing the expectation.

Hoeffding's Inequality

The following theorem states the so-called Hoeffding's inequality. It is a generalized version of the Chernoff bounds. Recall that the Chernoff bounds hold for sums of independent 0-1 trials. When the random variables are not 0-1 valued, Hoeffding's inequality is useful, since it holds for the sum of any independent random variables whose ranges are bounded.

Hoeffding's inequality
Let [math]\displaystyle{ X=\sum_{i=1}^nX_i }[/math], where [math]\displaystyle{ X_1,\ldots,X_n }[/math] are independent random variables with [math]\displaystyle{ a_i\le X_i\le b_i }[/math] for each [math]\displaystyle{ 1\le i\le n }[/math]. Let [math]\displaystyle{ \mu=\mathbf{E}[X] }[/math]. Then
[math]\displaystyle{ \Pr[|X-\mu|\ge t]\le 2\exp\left(-\frac{t^2}{2\sum_{i=1}^n(b_i-a_i)^2}\right). }[/math]
Proof.
Define the Doob martingale sequence [math]\displaystyle{ Y_i=\mathbf{E}\left[\sum_{j=1}^n X_j\,\Big|\, X_1,\ldots,X_{i}\right] }[/math]. Obviously [math]\displaystyle{ Y_0=\mu }[/math] and [math]\displaystyle{ Y_n=X }[/math].
[math]\displaystyle{ \begin{align} |Y_i-Y_{i-1}| &= \left|\mathbf{E}\left[\sum_{j=1}^n X_j\,\Big|\, X_1,\ldots,X_{i}\right]-\mathbf{E}\left[\sum_{j=1}^n X_j\,\Big|\, X_1,\ldots,X_{i-1}\right]\right|\\ &=\left|\sum_{j=1}^i X_j+\sum_{j=i+1}^n\mathbf{E}[X_j]-\sum_{j=1}^{i-1} X_j-\sum_{j=i}^n\mathbf{E}[X_j]\right|\\ &=\left|X_i-\mathbf{E}[X_{i}]\right|\\ &\le b_i-a_i \end{align} }[/math]

Applying Azuma's inequality to the martingale [math]\displaystyle{ Y_0,\ldots,Y_n }[/math] with respect to [math]\displaystyle{ X_1,\ldots, X_n }[/math], Hoeffding's inequality is proved.

[math]\displaystyle{ \square }[/math]
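A quick numerical illustration of Hoeffding's inequality (the distributions, ranges and thresholds below are arbitrary toy choices): sum independent uniform random variables on [a_i, b_i] and compare the empirical tail with the bound.

import random
from math import exp

random.seed(3)
bounds = [(0.0, 1.0)] * 50 + [(-2.0, 2.0)] * 20            # [a_i, b_i] for each variable
mu = sum((a + b) / 2 for a, b in bounds)                    # E[X] for uniform summands
span2 = sum((b - a) ** 2 for a, b in bounds)                # sum_i (b_i - a_i)^2

def sample():
    return sum(random.uniform(a, b) for a, b in bounds)

trials = 20000
xs = [sample() for _ in range(trials)]
for t in (10.0, 20.0, 30.0):
    empirical = sum(abs(x - mu) >= t for x in xs) / trials
    print(t, empirical, 2 * exp(-t * t / (2 * span2)))      # empirical tail vs. the Hoeffding bound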

The Bounded Difference Method

Combining Azuma's inequality with the construction of Doob martingales, we have the powerful Bounded Difference Method for concentration of measures.

For arbitrary random variables

Given a sequence of random variables [math]\displaystyle{ X_1,\ldots,X_n }[/math] and a function [math]\displaystyle{ f }[/math], the Doob sequence constructs a martingale from them. Combining this construction with Azuma's inequality, we get a very powerful theorem called "the method of averaged bounded differences", which bounds the concentration of an arbitrary function of arbitrary random variables (which need not themselves form a martingale).

Theorem (Method of averaged bounded differences)
Let [math]\displaystyle{ \boldsymbol{X}=(X_1,\ldots, X_n) }[/math] be arbitrary random variables and let [math]\displaystyle{ f }[/math] be a function of [math]\displaystyle{ X_1,\ldots, X_n }[/math] satisfying that, for all [math]\displaystyle{ 1\le i\le n }[/math],
[math]\displaystyle{ |\mathbf{E}[f(\boldsymbol{X})\mid X_1,\ldots,X_i]-\mathbf{E}[f(\boldsymbol{X})\mid X_1,\ldots,X_{i-1}]|\le c_i, }[/math]
Then
[math]\displaystyle{ \begin{align} \Pr\left[|f(\boldsymbol{X})-\mathbf{E}[f(\boldsymbol{X})]|\ge t\right]\le 2\exp\left(-\frac{t^2}{2\sum_{i=1}^nc_i^2}\right). \end{align} }[/math]
Proof.
Define the Doob martingale sequence [math]\displaystyle{ Y_0,Y_1,\ldots,Y_n }[/math] by setting [math]\displaystyle{ Y_0=\mathbf{E}[f(X_1,\ldots,X_n)] }[/math] and, for [math]\displaystyle{ 1\le i\le n }[/math], [math]\displaystyle{ Y_i=\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_i] }[/math]. Then the above theorem is a restatement of Azuma's inequality holding for [math]\displaystyle{ Y_0,Y_1,\ldots,Y_n }[/math].
[math]\displaystyle{ \square }[/math]

For independent random variables

The condition of bounded averaged differences is usually hard to check. This severely limits the usefulness of the method. To overcome this, we introduce a property which is much easier to check, called the Lipschitz condition.

Definition (Lipschitz condition)
A function [math]\displaystyle{ f(x_1,\ldots,x_n) }[/math] satisfies the Lipschitz condition, if for any [math]\displaystyle{ x_1,\ldots,x_n }[/math] and any [math]\displaystyle{ y_i }[/math],
[math]\displaystyle{ \begin{align} |f(x_1,\ldots,x_{i-1},x_i,x_{i+1},\ldots,x_n)-f(x_1,\ldots,x_{i-1},y_i,x_{i+1},\ldots,x_n)|\le 1. \end{align} }[/math]

In other words, the function satisfies the Lipschitz condition if an arbitrary change in the value of any one argument does not change the value of the function by more than 1.

The difference of 1 can be replaced by arbitrary constants, which gives a generalized version of the Lipschitz condition.

Definition (Lipschitz condition, general version)
A function [math]\displaystyle{ f(x_1,\ldots,x_n) }[/math] satisfies the Lipschitz condition with constants [math]\displaystyle{ c_i }[/math], [math]\displaystyle{ 1\le i\le n }[/math], if for any [math]\displaystyle{ x_1,\ldots,x_n }[/math] and any [math]\displaystyle{ y_i }[/math],
[math]\displaystyle{ \begin{align} |f(x_1,\ldots,x_{i-1},x_i,x_{i+1},\ldots,x_n)-f(x_1,\ldots,x_{i-1},y_i,x_{i+1},\ldots,x_n)|\le c_i. \end{align} }[/math]
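To make the constants concrete, here is a small Python sketch (the example function and its finite domain are assumptions for illustration) that finds the smallest constants c_i for a function on a finite domain by brute force:

from itertools import product

def lipschitz_constants(f, domains):
    """Smallest c_i with |f(..., x_i, ...) - f(..., y_i, ...)| <= c_i, by brute force."""
    cs = []
    for i, dom in enumerate(domains):
        c = 0
        for x in product(*domains):
            for y_i in dom:
                y = list(x); y[i] = y_i
                c = max(c, abs(f(*x) - f(*y)))
        cs.append(c)
    return cs

# Example: number of empty "bins" among 3 bins when 4 balls land in bins 0..2.
def empty_bins(*balls):
    return sum(1 for b in range(3) if b not in balls)

print(lipschitz_constants(empty_bins, [range(3)] * 4))   # [1, 1, 1, 1]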

The following "method of bounded differences" can be developed for functions satisfying the Lipschitz condition. Unfortunately, in order to imply the condition of averaged bounded differences from the Lipschitz condition, we have to restrict the method to independent random variables.

Corollary (Method of bounded differences)
Let [math]\displaystyle{ \boldsymbol{X}=(X_1,\ldots, X_n) }[/math] be [math]\displaystyle{ n }[/math] independent random variables and let [math]\displaystyle{ f }[/math] be a function satisfying the Lipschitz condition with constants [math]\displaystyle{ c_i }[/math], [math]\displaystyle{ 1\le i\le n }[/math]. Then
[math]\displaystyle{ \begin{align} \Pr\left[|f(\boldsymbol{X})-\mathbf{E}[f(\boldsymbol{X})]|\ge t\right]\le 2\exp\left(-\frac{t^2}{2\sum_{i=1}^nc_i^2}\right). \end{align} }[/math]
Proof.
For convenience, we denote that [math]\displaystyle{ \boldsymbol{X}_{[i,j]}=(X_i,X_{i+1},\ldots, X_j) }[/math] for any [math]\displaystyle{ 1\le i\le j\le n }[/math].

We first show that the Lipschitz condition with constants [math]\displaystyle{ c_i }[/math], [math]\displaystyle{ 1\le i\le n }[/math], implies another condition called the averaged Lipschitz condition (ALC): for any [math]\displaystyle{ a_i,b_i }[/math], [math]\displaystyle{ 1\le i\le n }[/math],

[math]\displaystyle{ \left|\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a_i\right]-\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=b_i\right]\right|\le c_i. }[/math]

And this condition implies the averaged bounded difference condition: for all [math]\displaystyle{ 1\le i\le n }[/math],

[math]\displaystyle{ \left|\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}\right]-\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]}\right]\right|\le c_i. }[/math]

Then by applying the method of averaged bounded differences, the corollary can be proved.

For any [math]\displaystyle{ a }[/math], by the law of total expectation,

[math]\displaystyle{ \begin{align} &\quad\, \mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a\right]\\ &=\sum_{a_{i+1},\ldots,a_n}\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a, \boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right]\cdot\Pr\left[\boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\mid \boldsymbol{X}_{[1,i-1]},X_i=a\right]\\ &=\sum_{a_{i+1},\ldots,a_n}\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a, \boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right]\cdot\Pr\left[\boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right] \qquad (\mbox{independence})\\ &= \sum_{a_{i+1},\ldots,a_n} f(\boldsymbol{X}_{[1,i-1]},a,\boldsymbol{a}_{[i+1,n]})\cdot\Pr\left[\boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right]. \end{align} }[/math]

Applying the above equation with [math]\displaystyle{ a=a_i }[/math] and with [math]\displaystyle{ a=b_i }[/math], and taking the difference, we have

[math]\displaystyle{ \begin{align} &\quad\, \left|\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a_i\right]-\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=b_i\right]\right|\\ &=\left|\sum_{a_{i+1},\ldots,a_n}\left(f(\boldsymbol{X}_{[1,i-1]},a_i,\boldsymbol{a}_{[i+1,n]})-f(\boldsymbol{X}_{[1,i-1]},b_i,\boldsymbol{a}_{[i+1,n]})\right)\Pr\left[\boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right]\right|\\ &\le \sum_{a_{i+1},\ldots,a_n}\left|f(\boldsymbol{X}_{[1,i-1]},a_i,\boldsymbol{a}_{[i+1,n]})-f(\boldsymbol{X}_{[1,i-1]},b_i,\boldsymbol{a}_{[i+1,n]})\right|\Pr\left[\boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right]\\ &\le \sum_{a_{i+1},\ldots,a_n}c_i\Pr\left[\boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right] \qquad (\mbox{Lipschitz condition})\\ &=c_i. \end{align} }[/math]

Thus, the Lipschitz condition is transformed to the ALC. We then deduce the averaged bounded difference condition from ALC.

By the law of total expectation,

[math]\displaystyle{ \mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]}\right]=\sum_{a}\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a\right]\cdot\Pr[X_i=a\mid \boldsymbol{X}_{[1,i-1]}]. }[/math]

We can trivially write [math]\displaystyle{ \mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}\right] }[/math] as

[math]\displaystyle{ \mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}\right]=\sum_{a}\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}\right]\cdot\Pr\left[X_i=a\mid \boldsymbol{X}_{[1,i-1]}\right]. }[/math]

Hence, the difference is

[math]\displaystyle{ \begin{align} &\quad \left|\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}\right]-\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]}\right]\right|\\ &=\left|\sum_{a}\left(\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}\right]-\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a\right]\right)\cdot\Pr\left[X_i=a\mid \boldsymbol{X}_{[1,i-1]}\right]\right| \\ &\le \sum_{a}\left|\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}\right]-\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a\right]\right|\cdot\Pr\left[X_i=a\mid \boldsymbol{X}_{[1,i-1]}\right] \\ &\le \sum_a c_i\Pr\left[X_i=a\mid \boldsymbol{X}_{[1,i-1]}\right] \qquad (\mbox{due to ALC})\\ &=c_i. \end{align} }[/math]

The averaged bounded difference condition is implied. Applying the method of averaged bounded differences, the corollary follows.

[math]\displaystyle{ \square }[/math]

Applications

Occupancy problem

Throwing [math]\displaystyle{ m }[/math] balls uniformly and independently at random to [math]\displaystyle{ n }[/math] bins, we ask for the occupancies of bins by the balls. In particular, we are interested in the number of empty bins.

This problem can be described equivalently as follows. Let [math]\displaystyle{ f:[m]\rightarrow[n] }[/math] be a uniform random function from [math]\displaystyle{ [m]\rightarrow[n] }[/math]. We ask for the number of [math]\displaystyle{ i\in[n] }[/math] such that [math]\displaystyle{ f^{-1}(i) }[/math] is empty.

For any [math]\displaystyle{ i\in[n] }[/math], let [math]\displaystyle{ X_i }[/math] indicate the emptiness of bin [math]\displaystyle{ i }[/math]. Let [math]\displaystyle{ X=\sum_{i=1}^nX_i }[/math] be the number of empty bins.

[math]\displaystyle{ \mathbf{E}[X_i]=\Pr[\mbox{bin }i\mbox{ is empty}]=\left(1-\frac{1}{n}\right)^m. }[/math]

By the linearity of expectation,

[math]\displaystyle{ \mathbf{E}[X]=\sum_{i=1}^n\mathbf{E}[X_i]=n\left(1-\frac{1}{n}\right)^m. }[/math]

We want to know how [math]\displaystyle{ X }[/math] deviates from this expectation. The complication here is that the [math]\displaystyle{ X_i }[/math] are not independent. So we instead look at a sequence of independent random variables [math]\displaystyle{ Y_1,\ldots, Y_m }[/math], where [math]\displaystyle{ Y_j\in[n] }[/math] represents the bin into which the [math]\displaystyle{ j }[/math]th ball falls. Clearly [math]\displaystyle{ X }[/math] is a function of [math]\displaystyle{ Y_1,\ldots, Y_m }[/math].

We then observe that changing the value of any [math]\displaystyle{ Y_j }[/math] can change the value of [math]\displaystyle{ X }[/math] by at most 1: moving one ball can make at most one bin (its old bin) newly empty and at most one bin (its new bin) newly nonempty, so the number of empty bins changes by at most 1. Thus, as a function of the independent random variables [math]\displaystyle{ Y_1,\ldots, Y_m }[/math], [math]\displaystyle{ X }[/math] satisfies the Lipschitz condition. Applying the method of bounded differences, it holds that

[math]\displaystyle{ \Pr\left[\left|X-n\left(1-\frac{1}{n}\right)^m\right|\ge t\sqrt{m}\right]=\Pr[|X-\mathbf{E}[X]|\ge t\sqrt{m}]\le 2e^{-t^2/2} }[/math]

Thus, for sufficiently large [math]\displaystyle{ n }[/math] and [math]\displaystyle{ m }[/math], the number of empty bins is tightly concentrated around [math]\displaystyle{ n\left(1-\frac{1}{n}\right)^m\approx \frac{n}{e^{m/n}} }[/math].
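A simulation matching this analysis (the values of n, m, the thresholds and the number of trials are arbitrary):

import random
from math import exp, sqrt

n, m, trials = 50, 100, 10000
random.seed(4)
mean = n * (1 - 1 / n) ** m

def empty_bins():
    hit = set(random.randrange(n) for _ in range(m))
    return n - len(hit)

xs = [empty_bins() for _ in range(trials)]
for t in (1.0, 2.0, 3.0):
    empirical = sum(abs(x - mean) >= t * sqrt(m) for x in xs) / trials
    print(t, empirical, 2 * exp(-t * t / 2))   # empirical tail vs. the bounded-differences bound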

Pattern Matching

Let [math]\displaystyle{ \boldsymbol{X}=(X_1,\ldots,X_n) }[/math] be a sequence of characters chosen independently and uniformly at random from an alphabet [math]\displaystyle{ \Sigma }[/math], where [math]\displaystyle{ m=|\Sigma| }[/math]. Let [math]\displaystyle{ \pi\in\Sigma^k }[/math] be an arbitrarily fixed string of [math]\displaystyle{ k }[/math] characters from [math]\displaystyle{ \Sigma }[/math], called a pattern. Let [math]\displaystyle{ Y }[/math] be the number of occurrences of the pattern [math]\displaystyle{ \pi }[/math] as a substring of the random string [math]\displaystyle{ X }[/math].

By the linearity of expectation, it is obvious that

[math]\displaystyle{ \mathbf{E}[Y]=(n-k+1)\left(\frac{1}{m}\right)^k. }[/math]

We now look at the concentration of [math]\displaystyle{ Y }[/math]. The complication again lies in the dependencies between the matches. Yet we will see that [math]\displaystyle{ Y }[/math] is still tightly concentrated around its expectation if [math]\displaystyle{ k }[/math] is relatively small compared to [math]\displaystyle{ n }[/math].

For a fixed pattern [math]\displaystyle{ \pi }[/math], the random variable [math]\displaystyle{ Y }[/math] is a function of the independent random variables [math]\displaystyle{ (X_1,\ldots,X_n) }[/math]. Any character [math]\displaystyle{ X_i }[/math] participates in no more than [math]\displaystyle{ k }[/math] matches, thus changing the value of any [math]\displaystyle{ X_i }[/math] can affect the value of [math]\displaystyle{ Y }[/math] by at most [math]\displaystyle{ k }[/math], i.e. [math]\displaystyle{ Y }[/math] satisfies the Lipschitz condition with constant [math]\displaystyle{ k }[/math]. Applying the method of bounded differences,

[math]\displaystyle{ \Pr\left[\left|Y-\frac{n-k+1}{m^k}\right|\ge tk\sqrt{n}\right]=\Pr\left[\left|Y-\mathbf{E}[Y]\right|\ge tk\sqrt{n}\right]\le 2e^{-t^2/2} }[/math]
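A simulation over a small alphabet (the alphabet, pattern, string length and thresholds are arbitrary toy choices):

import random
from math import exp, sqrt

sigma, k, n, trials = "ab", 3, 200, 5000
m = len(sigma)
pattern = "aba"
mean = (n - k + 1) / m ** k
random.seed(5)

def occurrences():
    s = "".join(random.choice(sigma) for _ in range(n))
    return sum(s[i:i + k] == pattern for i in range(n - k + 1))

ys = [occurrences() for _ in range(trials)]
for t in (0.5, 1.0, 2.0):
    empirical = sum(abs(y - mean) >= t * k * sqrt(n) for y in ys) / trials
    print(t, empirical, 2 * exp(-t * t / 2))   # empirical tail vs. the bound above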

Combining unit vectors

Let [math]\displaystyle{ u_1,\ldots,u_n }[/math] be [math]\displaystyle{ n }[/math] unit vectors from some normed space. That is, [math]\displaystyle{ \|u_i\|=1 }[/math] for any [math]\displaystyle{ 1\le i\le n }[/math], where [math]\displaystyle{ \|\cdot\| }[/math] denotes the vector norm (e.g. [math]\displaystyle{ \ell_1,\ell_2,\ell_\infty }[/math]) of the space.

Let [math]\displaystyle{ \epsilon_1,\ldots,\epsilon_n\in\{-1,+1\} }[/math] be independently chosen and [math]\displaystyle{ \Pr[\epsilon_i=-1]=\Pr[\epsilon_i=1]=1/2 }[/math].

Let

[math]\displaystyle{ v=\epsilon_1u_1+\cdots+\epsilon_nu_n, }[/math]

and

[math]\displaystyle{ X=\|v\|. }[/math]

This kind of construction is very useful in combinatorial proofs of metric problems. We will show that by this construction, the random variable [math]\displaystyle{ X }[/math] is well concentrated around its mean.

[math]\displaystyle{ X }[/math] is a function of the independent random variables [math]\displaystyle{ \epsilon_1,\ldots,\epsilon_n }[/math]. By the triangle inequality for norms, it is easy to verify that changing the sign of a unit vector [math]\displaystyle{ u_i }[/math] can change the value of [math]\displaystyle{ X }[/math] by at most 2, thus [math]\displaystyle{ X }[/math] satisfies the Lipschitz condition with constant 2. The concentration result follows by applying the method of bounded differences:

[math]\displaystyle{ \Pr[|X-\mathbf{E}[X]|\ge 2t\sqrt{n}]\le 2e^{-t^2/2}. }[/math]
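A simulation in the Euclidean plane (the dimension, the random directions and the parameters are assumed toy choices; the empirical mean stands in for E[X]):

import math
import random

random.seed(6)
n = 100
# n unit vectors in the plane with arbitrary directions (an assumed toy choice).
us = [(math.cos(a), math.sin(a)) for a in (random.uniform(0, 2 * math.pi) for _ in range(n))]

def signed_norm():
    eps = [random.choice((-1, 1)) for _ in range(n)]
    vx = sum(e * u[0] for e, u in zip(eps, us))
    vy = sum(e * u[1] for e, u in zip(eps, us))
    return math.hypot(vx, vy)

trials = 20000
xs = [signed_norm() for _ in range(trials)]
mean = sum(xs) / trials                      # empirical stand-in for E[X]
for t in (1.0, 2.0, 3.0):
    empirical = sum(abs(x - mean) >= 2 * t * math.sqrt(n) for x in xs) / trials
    print(t, empirical, 2 * math.exp(-t * t / 2))   # empirical tail vs. the bound above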