高级算法 (Advanced Algorithms, Fall 2019)/Concentration of measure


Chernoff Bound

Suppose that we have a fair coin. If we toss it once, then the outcome is completely unpredictable. But if we toss it, say for 1000 times, then the number of HEADs is very likely to be around 500. This phenomenon is called the concentration of measure. The Chernoff bound is an inequality that characterizes the concentration phenomenon for the sum of independent trials.
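
A minimal Python sketch of our own (not part of the original notes) makes this concrete: it repeats the 1000-toss experiment many times and reports how often the number of HEADs lands within 50 of its mean 500.

<pre>
import random

def count_heads(n):
    """Toss a fair coin n times and count the HEADs."""
    return sum(random.randint(0, 1) for _ in range(n))

n, experiments = 1000, 2000
counts = [count_heads(n) for _ in range(experiments)]
close = sum(1 for c in counts if abs(c - n // 2) <= 50)
print("fraction of experiments with |#HEADs - 500| <= 50:", close / experiments)
# typically prints a value around 0.998
</pre>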

Before formally stating the Chernoff bound, let's introduce the moment generating function.

Moment generating functions

The more we know about the moments of a random variable X, the more information we would have about X. There is a so-called moment generating function, which "packs" all the information about the moments of X into one function.

Definition
The moment generating function of a random variable <math>X</math> is defined as <math>\mathbf{E}\left[\mathrm{e}^{\lambda X}\right]</math> where <math>\lambda</math> is the parameter of the function.

By Taylor's expansion and the linearity of expectations,

<math>\mathbf{E}\left[\mathrm{e}^{\lambda X}\right]=\mathbf{E}\left[\sum_{k=0}^\infty\frac{\lambda^k}{k!}X^k\right]=\sum_{k=0}^\infty\frac{\lambda^k}{k!}\mathbf{E}\left[X^k\right]</math>

The moment generating function <math>\mathbf{E}\left[\mathrm{e}^{\lambda X}\right]</math> is a function of <math>\lambda</math>.
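
As a small concrete example of our own (not from the notes), a Bernoulli random variable with parameter <math>p</math> has the closed-form moment generating function <math>\mathbf{E}\left[\mathrm{e}^{\lambda X}\right]=1-p+p\mathrm{e}^{\lambda}</math>; the sketch below compares this with a Monte Carlo estimate.

<pre>
import math, random

def mgf_bernoulli_exact(p, lam):
    # E[e^{lambda X}] for X ~ Bernoulli(p)
    return (1 - p) + p * math.exp(lam)

def mgf_bernoulli_mc(p, lam, samples=200000):
    # Monte Carlo estimate of E[e^{lambda X}]
    return sum(math.exp(lam * (random.random() < p)) for _ in range(samples)) / samples

p, lam = 0.3, 0.8
print("closed form :", mgf_bernoulli_exact(p, lam))
print("Monte Carlo :", mgf_bernoulli_mc(p, lam))
</pre>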

The Chernoff bound

The Chernoff bounds are exponentially sharp tail inequalities for the sum of independent trials. The bounds are obtained by applying Markov's inequality to the moment generating function of the sum of independent trials, with some appropriate choice of the parameter λ.

Chernoff bound (the upper tail)
Let <math>X=\sum_{i=1}^n X_i</math>, where <math>X_1,X_2,\ldots,X_n</math> are independent Poisson trials. Let <math>\mu=\mathbf{E}[X]</math>.
Then for any <math>\delta>0</math>,
<math>\Pr[X\ge (1+\delta)\mu]\le\left(\frac{\mathrm{e}^{\delta}}{(1+\delta)^{(1+\delta)}}\right)^{\mu}.</math>
Proof.
For any <math>\lambda>0</math>, <math>X\ge (1+\delta)\mu</math> is equivalent to <math>\mathrm{e}^{\lambda X}\ge \mathrm{e}^{\lambda (1+\delta)\mu}</math>, thus
<math>\Pr[X\ge (1+\delta)\mu]=\Pr\left[\mathrm{e}^{\lambda X}\ge \mathrm{e}^{\lambda (1+\delta)\mu}\right]\le\frac{\mathbf{E}\left[\mathrm{e}^{\lambda X}\right]}{\mathrm{e}^{\lambda (1+\delta)\mu}},</math>

where the last step follows by Markov's inequality.

Computing the moment generating function <math>\mathbf{E}\left[\mathrm{e}^{\lambda X}\right]</math>:

<math>\mathbf{E}\left[\mathrm{e}^{\lambda X}\right]=\mathbf{E}\left[\mathrm{e}^{\lambda \sum_{i=1}^n X_i}\right]=\mathbf{E}\left[\prod_{i=1}^n \mathrm{e}^{\lambda X_i}\right]=\prod_{i=1}^n \mathbf{E}\left[\mathrm{e}^{\lambda X_i}\right].</math> (for independent random variables)

Let <math>p_i=\Pr[X_i=1]</math> for <math>i=1,2,\ldots,n</math>. Then,

<math>\mu=\mathbf{E}[X]=\mathbf{E}\left[\sum_{i=1}^n X_i\right]=\sum_{i=1}^n\mathbf{E}[X_i]=\sum_{i=1}^n p_i.</math>

We bound the moment generating function for each individual <math>X_i</math> as follows.

<math>\mathbf{E}\left[\mathrm{e}^{\lambda X_i}\right]=p_i\cdot \mathrm{e}^{\lambda\cdot 1}+(1-p_i)\cdot \mathrm{e}^{\lambda\cdot 0}=1+p_i(\mathrm{e}^{\lambda}-1)\le \mathrm{e}^{p_i(\mathrm{e}^{\lambda}-1)},</math>

where in the last step we apply Taylor's expansion so that <math>\mathrm{e}^{y}\ge 1+y</math> with <math>y=p_i(\mathrm{e}^{\lambda}-1)\ge 0</math>. (By doing this, we can transform the product into a sum of <math>p_i</math>, which is <math>\mu</math>.)

Therefore,

<math>\mathbf{E}\left[\mathrm{e}^{\lambda X}\right]=\prod_{i=1}^n\mathbf{E}\left[\mathrm{e}^{\lambda X_i}\right]\le\prod_{i=1}^n \mathrm{e}^{p_i(\mathrm{e}^{\lambda}-1)}=\exp\left(\sum_{i=1}^n p_i(\mathrm{e}^{\lambda}-1)\right)=\mathrm{e}^{(\mathrm{e}^{\lambda}-1)\mu}.</math>

Thus, we have shown that for any <math>\lambda>0</math>,

<math>\Pr[X\ge (1+\delta)\mu]\le\frac{\mathbf{E}\left[\mathrm{e}^{\lambda X}\right]}{\mathrm{e}^{\lambda (1+\delta)\mu}}\le\frac{\mathrm{e}^{(\mathrm{e}^{\lambda}-1)\mu}}{\mathrm{e}^{\lambda (1+\delta)\mu}}=\left(\frac{\mathrm{e}^{(\mathrm{e}^{\lambda}-1)}}{\mathrm{e}^{\lambda (1+\delta)}}\right)^{\mu}.</math>

For any <math>\delta>0</math>, we can let <math>\lambda=\ln(1+\delta)>0</math> to get

<math>\Pr[X\ge (1+\delta)\mu]\le\left(\frac{\mathrm{e}^{\delta}}{(1+\delta)^{(1+\delta)}}\right)^{\mu}.</math>

The idea of the proof is actually quite clear: we apply Markov's inequality to <math>\mathrm{e}^{\lambda X}</math> and for the rest, we just estimate the moment generating function <math>\mathbf{E}\left[\mathrm{e}^{\lambda X}\right]</math>. To make the bound as tight as possible, we minimized <math>\frac{\mathrm{e}^{(\mathrm{e}^{\lambda}-1)}}{\mathrm{e}^{\lambda(1+\delta)}}</math> by setting <math>\lambda=\ln(1+\delta)</math>, which can be justified by taking derivatives of <math>\frac{\mathrm{e}^{(\mathrm{e}^{\lambda}-1)}}{\mathrm{e}^{\lambda(1+\delta)}}</math>.
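
As a sanity check of the bound just proved, here is a Monte Carlo sketch of our own (the parameters are arbitrary): it estimates the upper tail of a sum of independent Poisson trials and compares it with <math>\left(\frac{\mathrm{e}^{\delta}}{(1+\delta)^{(1+\delta)}}\right)^{\mu}</math>.

<pre>
import math, random

def chernoff_upper(mu, delta):
    # the bound (e^delta / (1+delta)^(1+delta))^mu
    return (math.exp(delta) / (1 + delta) ** (1 + delta)) ** mu

n, p, delta, trials = 100, 0.2, 0.5, 50000
mu = n * p
hits = 0
for _ in range(trials):
    x = sum(random.random() < p for _ in range(n))   # a sum of independent Poisson trials
    if x >= (1 + delta) * mu:
        hits += 1
print("empirical Pr[X >= (1+delta)*mu]:", hits / trials)
print("Chernoff bound                 :", chernoff_upper(mu, delta))
</pre>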


We then proceed to the lower tail, the probability that the random variable deviates below the mean value:

Chernoff bound (the lower tail)
Let <math>X=\sum_{i=1}^n X_i</math>, where <math>X_1,X_2,\ldots,X_n</math> are independent Poisson trials. Let <math>\mu=\mathbf{E}[X]</math>.
Then for any <math>0<\delta<1</math>,
<math>\Pr[X\le (1-\delta)\mu]\le\left(\frac{\mathrm{e}^{-\delta}}{(1-\delta)^{(1-\delta)}}\right)^{\mu}.</math>
Proof.
For any <math>\lambda<0</math>, by the same analysis as in the upper tail version,
<math>\Pr[X\le (1-\delta)\mu]=\Pr\left[\mathrm{e}^{\lambda X}\ge \mathrm{e}^{\lambda (1-\delta)\mu}\right]\le\frac{\mathbf{E}\left[\mathrm{e}^{\lambda X}\right]}{\mathrm{e}^{\lambda (1-\delta)\mu}}\le\left(\frac{\mathrm{e}^{(\mathrm{e}^{\lambda}-1)}}{\mathrm{e}^{\lambda (1-\delta)}}\right)^{\mu}.</math>

For any <math>0<\delta<1</math>, we can let <math>\lambda=\ln(1-\delta)<0</math> to get

<math>\Pr[X\le (1-\delta)\mu]\le\left(\frac{\mathrm{e}^{-\delta}}{(1-\delta)^{(1-\delta)}}\right)^{\mu}.</math>

Useful forms of the Chernoff bounds

Some useful special forms of the bounds can be derived directly from the above general forms of the bounds. We now know better why we say that the bounds are exponentially sharp.

Useful forms of the Chernoff bound
Let <math>X=\sum_{i=1}^n X_i</math>, where <math>X_1,X_2,\ldots,X_n</math> are independent Poisson trials. Let <math>\mu=\mathbf{E}[X]</math>. Then
1. for <math>0<\delta\le 1</math>,
<math>\Pr[X\ge (1+\delta)\mu]<\exp\left(-\frac{\mu\delta^2}{3}\right);</math>
<math>\Pr[X\le (1-\delta)\mu]<\exp\left(-\frac{\mu\delta^2}{2}\right);</math>
2. for <math>t\ge 2\mathrm{e}\mu</math>,
<math>\Pr[X\ge t]\le 2^{-t}.</math>
Proof.
To obtain the bounds in (1), we need to show that for <math>0<\delta<1</math>, <math>\frac{\mathrm{e}^{\delta}}{(1+\delta)^{(1+\delta)}}\le \mathrm{e}^{-\delta^2/3}</math> and <math>\frac{\mathrm{e}^{-\delta}}{(1-\delta)^{(1-\delta)}}\le \mathrm{e}^{-\delta^2/2}</math>. We can verify both inequalities by standard analysis techniques.

To obtain the bound in (2), let <math>t=(1+\delta)\mu</math>. Then <math>\delta=t/\mu-1\ge 2\mathrm{e}-1</math>. Hence,

<math>\Pr[X\ge(1+\delta)\mu]\le\left(\frac{\mathrm{e}^{\delta}}{(1+\delta)^{(1+\delta)}}\right)^{\mu}\le\left(\frac{\mathrm{e}}{1+\delta}\right)^{(1+\delta)\mu}\le\left(\frac{\mathrm{e}}{2\mathrm{e}}\right)^{t}=2^{-t}.</math>
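
The two analytic inequalities used for (1) can also be checked numerically on a grid of <math>\delta</math>; the following sketch of ours is such a check, not a proof.

<pre>
import math

# Check e^d/(1+d)^(1+d) <= e^{-d^2/3} and e^{-d}/(1-d)^(1-d) <= e^{-d^2/2}
# on a grid of delta in (0,1).
ok = True
for k in range(1, 1000):
    d = k / 1000.0
    upper = math.exp(d) / (1 + d) ** (1 + d)
    lower = math.exp(-d) / (1 - d) ** (1 - d)
    ok = ok and upper <= math.exp(-d * d / 3) and lower <= math.exp(-d * d / 2)
print("both inequalities hold on the grid:", ok)
</pre>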

Applications to balls-into-bins

Throwing m balls uniformly and independently into n bins, what is the maximum load of all bins with high probability? In the last class, we gave an analysis of this problem by using a counting argument.

Now we give a more "advanced" analysis by using Chernoff bounds.


For any <math>i\in[n]</math> and <math>j\in[m]</math>, let <math>X_{ij}</math> be the indicator variable for the event that ball <math>j</math> is thrown to bin <math>i</math>. Obviously

<math>\mathbf{E}[X_{ij}]=\Pr[\mbox{ball }j\mbox{ is thrown to bin }i]=\frac{1}{n}.</math>

Let <math>Y_i=\sum_{j\in[m]}X_{ij}</math> be the load of bin <math>i</math>.


Then the expected load of bin <math>i</math> is

<math>(*)\qquad \mu=\mathbf{E}[Y_i]=\mathbf{E}\left[\sum_{j\in[m]}X_{ij}\right]=\sum_{j\in[m]}\mathbf{E}[X_{ij}]=m/n.</math>

For the case <math>m=n</math>, it holds that <math>\mu=1</math>.

Note that <math>Y_i</math> is a sum of <math>m</math> mutually independent indicator variables. Applying the Chernoff bound, for any particular bin <math>i\in[n]</math>,

<math>\Pr[Y_i>(1+\delta)\mu]\le\left(\frac{\mathrm{e}^{\delta}}{(1+\delta)^{(1+\delta)}}\right)^{\mu}.</math>

The m=n case

When m=n, μ=1. Write c=1+δ. The above bound can be written as

<math>\Pr[Y_i>c]\le\frac{\mathrm{e}^{c-1}}{c^c}.</math>

Let <math>c=\frac{\mathrm{e}\ln n}{\ln\ln n}</math>. We evaluate <math>\frac{\mathrm{e}^{c-1}}{c^c}</math> by taking the logarithm of its reciprocal:

<math>
\begin{align}
\ln\left(\frac{c^c}{\mathrm{e}^{c-1}}\right)
&=c\ln c-c+1\\
&=c(\ln c-1)+1\\
&=\frac{\mathrm{e}\ln n}{\ln\ln n}\left(\ln\ln n-\ln\ln\ln n\right)+1\\
&\ge\frac{\mathrm{e}\ln n}{\ln\ln n}\cdot\frac{2}{\mathrm{e}}\ln\ln n+1\\
&\ge 2\ln n.
\end{align}
</math>

Thus,

<math>\Pr\left[Y_i>\frac{\mathrm{e}\ln n}{\ln\ln n}\right]\le\frac{1}{n^2}.</math>

Applying the union bound, the probability that there exists a bin with load <math>>\frac{\mathrm{e}\ln n}{\ln\ln n}</math> is

<math>n\cdot\Pr\left[Y_1>\frac{\mathrm{e}\ln n}{\ln\ln n}\right]\le\frac{1}{n}.</math>

Therefore, for <math>m=n</math>, with high probability, the maximum load is <math>O\left(\frac{\mathrm{e}\ln n}{\ln\ln n}\right)</math>.
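
A quick simulation of the <math>m=n</math> case (our own sketch; the parameters are arbitrary) compares the observed maximum load with the threshold <math>\frac{\mathrm{e}\ln n}{\ln\ln n}</math>.

<pre>
import math, random

def max_load(m, n):
    """Throw m balls into n bins uniformly and independently; return the maximum load."""
    loads = [0] * n
    for _ in range(m):
        loads[random.randrange(n)] += 1
    return max(loads)

n = 10000
threshold = math.e * math.log(n) / math.log(math.log(n))
print("e*ln(n)/ln(ln(n)) =", round(threshold, 2))
print("observed max loads:", [max_load(n, n) for _ in range(20)])
</pre>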

The <math>m\ge n\ln n</math> case

When <math>m\ge n\ln n</math>, then according to <math>(*)</math>, <math>\mu=\frac{m}{n}\ge \ln n</math>.

We can apply an easier form of the Chernoff bounds,

<math>\Pr[Y_i\ge 2\mathrm{e}\mu]\le 2^{-2\mathrm{e}\mu}\le 2^{-2\mathrm{e}\ln n}<\frac{1}{n^2}.</math>

By the union bound, the probability that there exists a bin with load <math>\ge 2\mathrm{e}\frac{m}{n}</math> is

<math>n\cdot\Pr\left[Y_1>2\mathrm{e}\frac{m}{n}\right]=n\cdot\Pr\left[Y_1>2\mathrm{e}\mu\right]\le\frac{1}{n}.</math>

Therefore, for <math>m\ge n\ln n</math>, with high probability, the maximum load is <math>O\left(\frac{m}{n}\right)</math>.
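
Similarly, a sketch of ours for the heavily loaded case, with <math>m=n\ln n</math>, compares the observed maximum load with <math>2\mathrm{e}\frac{m}{n}</math>.

<pre>
import math, random

def max_load(m, n):
    loads = [0] * n
    for _ in range(m):
        loads[random.randrange(n)] += 1
    return max(loads)

n = 2000
m = int(n * math.log(n))          # m = n*ln(n), so mu = m/n = ln(n)
print("2e*m/n =", round(2 * math.e * m / n, 2))
print("observed max loads:", [max_load(m, n) for _ in range(10)])
</pre>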

Martingales

"Martingale" originally refers to a betting strategy in which the gambler doubles his bet after every loss. Assuming unlimited wealth, this strategy is guaranteed to eventually have a positive net profit. For example, starting from an initial stake 1, after n losses, if the (n+1)th bet wins, then it gives a net profit of

<math>2^n-\sum_{i=1}^{n}2^{i-1}=1,</math>

which is a positive number.

However, the assumption of unlimited wealth is unrealistic. For limited wealth, with geometrically increasing bet, it is very likely to end up bankrupt. You should never try this strategy in real life.

Suppose that the gambler is allowed to use any strategy. His stake on the next bet is decided based on the results of all the bets so far. This gives us a highly dependent sequence of random variables <math>X_0,X_1,\ldots</math>, where <math>X_0</math> is his initial capital, and <math>X_i</math> represents his capital after the <math>i</math>th bet. Up to different betting strategies, <math>X_i</math> can be arbitrarily dependent on <math>X_0,\ldots,X_{i-1}</math>. However, as long as the game is fair, namely, winning and losing with equal chances, conditioning on the past variables <math>X_0,\ldots,X_{i-1}</math>, we will expect no change in the value of the present variable <math>X_i</math> on average. A sequence of random variables satisfying this property is called a martingale sequence.

Definition (martingale)
A sequence of random variables <math>X_0,X_1,\ldots</math> is a martingale if for all <math>i>0</math>,
<math>\mathbf{E}[X_i\mid X_0,\ldots,X_{i-1}]=X_{i-1}.</math>
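
The defining property can be observed numerically. The sketch below (our own illustration, using the doubling strategy as the betting rule) fixes an arbitrary history of wins and losses and estimates <math>\mathbf{E}[X_i\mid X_0,\ldots,X_{i-1}]</math> by averaging over the next fair coin flip; the estimate matches <math>X_{i-1}</math>.

<pre>
import random

def capital(x0, history):
    """Capital after playing the given win/loss history with the doubling strategy."""
    x, bet = x0, 1
    for won in history:
        x += bet if won else -bet
        bet = 1 if won else 2 * bet
    return x

random.seed(0)
x0 = 100
past = [random.random() < 0.5 for _ in range(6)]   # an arbitrary fixed history of 6 rounds
x_prev = capital(x0, past)                         # this is X_{i-1}

# Estimate E[X_i | X_0,...,X_{i-1}] by averaging over the next (fair) coin flip.
samples = 100000
avg = sum(capital(x0, past + [random.random() < 0.5]) for _ in range(samples)) / samples
print("X_{i-1} =", x_prev, "   estimated E[X_i | past] =", round(avg, 3))
</pre>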

The martingale can be generalized to be with respect to another sequence of random variables.

Definition (martingale, general version)
A sequence of random variables <math>Y_0,Y_1,\ldots</math> is a martingale with respect to the sequence <math>X_0,X_1,\ldots</math> if, for all <math>i\ge 0</math>, the following conditions hold:
  • <math>Y_i</math> is a function of <math>X_0,X_1,\ldots,X_i</math>;
  • <math>\mathbf{E}[Y_{i+1}\mid X_0,\ldots,X_{i}]=Y_{i}</math>.

Therefore, a sequence <math>X_0,X_1,\ldots</math> is a martingale if it is a martingale with respect to itself.

The purpose of this generalization is that we are usually more interested in a function of a sequence of random variables, rather than the sequence itself.

Azuma's Inequality

Azuma's inequality is a martingale tail inequality.

Azuma's Inequality
Let <math>X_0,X_1,\ldots</math> be a martingale such that, for all <math>k\ge 1</math>,
<math>|X_k-X_{k-1}|\le c_k.</math>
Then
<math>\Pr\left[|X_n-X_0|\ge t\right]\le 2\exp\left(-\frac{t^2}{2\sum_{k=1}^n c_k^2}\right).</math>

Unlike the Chernoff bounds, there is no assumption of independence, which makes the martingale inequalities more useful.

The following bounded difference condition

<math>|X_k-X_{k-1}|\le c_k</math>

says that the martingale <math>X_0,X_1,\ldots</math>, as a process evolving over time, never makes a big change in a single step.

Azuma's inequality says that for any martingale satisfying the bounded difference condition, it is unlikely that the process wanders far from its starting point.

A special case is when the differences are bounded by a constant. The following corollary is directly implied by Azuma's inequality.

Corollary
Let <math>X_0,X_1,\ldots</math> be a martingale such that, for all <math>k\ge 1</math>,
<math>|X_k-X_{k-1}|\le c.</math>
Then
<math>\Pr\left[|X_n-X_0|\ge ct\sqrt{n}\right]\le 2\mathrm{e}^{-t^2/2}.</math>

This corollary states that for any martingale sequence whose differences are bounded by a constant, the probability that it deviates <math>\omega(\sqrt{n})</math> far away from the starting point after <math>n</math> steps is bounded by <math>o(1)</math>.
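
For a concrete illustration (our own sketch), a fair <math>\pm 1</math> random walk is a martingale with differences bounded by <math>c=1</math>; the simulation below compares the empirical deviation probability with the bound <math>2\mathrm{e}^{-t^2/2}</math>.

<pre>
import math, random

n, t, trials = 400, 2.0, 20000
threshold = t * math.sqrt(n)                    # here c = 1
hits = 0
for _ in range(trials):
    walk = sum(random.choice((-1, 1)) for _ in range(n))   # X_n - X_0 of a fair +-1 walk
    if abs(walk) >= threshold:
        hits += 1
print("empirical Pr[|X_n - X_0| >= t*sqrt(n)]:", hits / trials)
print("corollary bound 2*exp(-t^2/2)         :", 2 * math.exp(-t * t / 2))
</pre>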

Generalization

Azuma's inequality can be generalized to a martingale with respect to another sequence.

Azuma's Inequality (general version)
Let <math>Y_0,Y_1,\ldots</math> be a martingale with respect to the sequence <math>X_0,X_1,\ldots</math> such that, for all <math>k\ge 1</math>,
<math>|Y_k-Y_{k-1}|\le c_k.</math>
Then
<math>\Pr\left[|Y_n-Y_0|\ge t\right]\le 2\exp\left(-\frac{t^2}{2\sum_{k=1}^n c_k^2}\right).</math>

The Proof of Azuma's Inequality

We will only give the formal proof of the non-generalized version. The proof of the general version is almost identical, with the only difference being that we work on the random sequence <math>Y_i</math> conditioning on the sequence <math>X_i</math>.

The proof of Azuma's inequality uses several ideas which are also used in the proof of the Chernoff bounds. We first observe that the total deviation of the martingale sequence can be represented as the sum of differences in every step. Thus, as with the Chernoff bounds, we are looking for a bound on the deviation of a sum of random variables. The strategy of the proof is almost the same as the proof of the Chernoff bounds: we first apply Markov's inequality to the moment generating function, then we bound the moment generating function, and at last we optimize the parameter of the moment generating function. However, unlike the Chernoff bounds, the martingale differences are not independent any more, so we replace the use of independence in the Chernoff bound by the martingale property. The proof is detailed as follows.

In order to bound the probability of <math>|X_n-X_0|\ge t</math>, we first bound the upper tail <math>\Pr[X_n-X_0\ge t]</math>. The bound of the lower tail can be symmetrically proved with <math>X_i</math> replaced by <math>-X_i</math>.

Represent the deviation as the sum of differences

We define the martingale difference sequence: for <math>i\ge 1</math>, let

<math>Y_i=X_i-X_{i-1}.</math>

It holds that

<math>
\begin{align}
\mathbf{E}[Y_i\mid X_0,\ldots,X_{i-1}]
&=\mathbf{E}[X_i-X_{i-1}\mid X_0,\ldots,X_{i-1}]\\
&=\mathbf{E}[X_i\mid X_0,\ldots,X_{i-1}]-\mathbf{E}[X_{i-1}\mid X_0,\ldots,X_{i-1}]\\
&=X_{i-1}-X_{i-1}\\
&=0.
\end{align}
</math>

The second to the last equation is due to the fact that <math>X_0,X_1,\ldots</math> is a martingale and the definition of conditional expectation.

Let <math>Z_n</math> be the accumulated differences

<math>Z_n=\sum_{i=1}^n Y_i.</math>

The deviation <math>(X_n-X_0)</math> can be computed by the accumulated differences:

<math>
\begin{align}
X_n-X_0
&=(X_1-X_0)+(X_2-X_1)+\cdots+(X_n-X_{n-1})\\
&=\sum_{i=1}^n Y_i\\
&=Z_n.
\end{align}
</math>

We then only need to upper bound the probability of the event <math>Z_n\ge t</math>.

Apply Markov's inequality to the moment generating function

The event <math>Z_n\ge t</math> is equivalent to <math>\mathrm{e}^{\lambda Z_n}\ge \mathrm{e}^{\lambda t}</math> for <math>\lambda>0</math>. Applying Markov's inequality, we have

<math>\Pr[Z_n\ge t]=\Pr\left[\mathrm{e}^{\lambda Z_n}\ge \mathrm{e}^{\lambda t}\right]\le\frac{\mathbf{E}\left[\mathrm{e}^{\lambda Z_n}\right]}{\mathrm{e}^{\lambda t}}.</math>

This is exactly the same as what we did to prove the Chernoff bound. Next, we need to bound the moment generating function <math>\mathbf{E}\left[\mathrm{e}^{\lambda Z_n}\right]</math>.

Bound the moment generating functions

The moment generating function

<math>
\begin{align}
\mathbf{E}\left[\mathrm{e}^{\lambda Z_n}\right]
&=\mathbf{E}\left[\mathbf{E}\left[\mathrm{e}^{\lambda Z_n}\mid X_0,\ldots,X_{n-1}\right]\right]\\
&=\mathbf{E}\left[\mathbf{E}\left[\mathrm{e}^{\lambda (Z_{n-1}+Y_n)}\mid X_0,\ldots,X_{n-1}\right]\right]\\
&=\mathbf{E}\left[\mathbf{E}\left[\mathrm{e}^{\lambda Z_{n-1}}\cdot \mathrm{e}^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}\right]\right]\\
&=\mathbf{E}\left[\mathrm{e}^{\lambda Z_{n-1}}\cdot\mathbf{E}\left[\mathrm{e}^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}\right]\right]
\end{align}
</math>

The first and the last equations are due to the fundamental facts about conditional expectation which were proved in the first section.

We then upper bound <math>\mathbf{E}\left[\mathrm{e}^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}\right]</math> by a constant. To do so, we need the following technical lemma, which is proved by the convexity of <math>\mathrm{e}^{\lambda Y_n}</math>.

Lemma
Let <math>X</math> be a random variable such that <math>\mathbf{E}[X]=0</math> and <math>|X|\le c</math>. Then for <math>\lambda>0</math>,
<math>\mathbf{E}\left[\mathrm{e}^{\lambda X}\right]\le \mathrm{e}^{\lambda^2c^2/2}.</math>
Proof.
Observe that for <math>\lambda>0</math>, the function <math>\mathrm{e}^{\lambda X}</math> of the variable <math>X</math> is convex in the interval <math>[-c,c]</math>. We draw a line between the two endpoints <math>(-c, \mathrm{e}^{-\lambda c})</math> and <math>(c, \mathrm{e}^{\lambda c})</math>. The curve of <math>\mathrm{e}^{\lambda X}</math> lies entirely below this line. Thus,
<math>\mathrm{e}^{\lambda X}\le\frac{c-X}{2c}\mathrm{e}^{-\lambda c}+\frac{c+X}{2c}\mathrm{e}^{\lambda c}=\frac{\mathrm{e}^{\lambda c}+\mathrm{e}^{-\lambda c}}{2}+\frac{X}{2c}\left(\mathrm{e}^{\lambda c}-\mathrm{e}^{-\lambda c}\right).</math>

Since <math>\mathbf{E}[X]=0</math>, we have

<math>
\begin{align}
\mathbf{E}\left[\mathrm{e}^{\lambda X}\right]
&\le\mathbf{E}\left[\frac{\mathrm{e}^{\lambda c}+\mathrm{e}^{-\lambda c}}{2}+\frac{X}{2c}\left(\mathrm{e}^{\lambda c}-\mathrm{e}^{-\lambda c}\right)\right]\\
&=\frac{\mathrm{e}^{\lambda c}+\mathrm{e}^{-\lambda c}}{2}+\frac{\mathrm{e}^{\lambda c}-\mathrm{e}^{-\lambda c}}{2c}\cdot\mathbf{E}[X]\\
&=\frac{\mathrm{e}^{\lambda c}+\mathrm{e}^{-\lambda c}}{2}.
\end{align}
</math>

By expanding both sides as Taylor's series, it can be verified that <math>\frac{\mathrm{e}^{\lambda c}+\mathrm{e}^{-\lambda c}}{2}\le \mathrm{e}^{\lambda^2c^2/2}</math>.
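
The last step can also be checked numerically: the random variable taking values <math>\pm c</math> with probability <math>1/2</math> each is the extreme case of the lemma, and its moment generating function is exactly <math>\frac{\mathrm{e}^{\lambda c}+\mathrm{e}^{-\lambda c}}{2}</math>. The sketch below (ours) verifies the final inequality on a grid of <math>\lambda</math>.

<pre>
import math

# Check cosh(lambda*c) <= exp(lambda^2 * c^2 / 2) on a grid of lambda.
c = 1.0
ok = all(math.cosh(lam * c) <= math.exp((lam * c) ** 2 / 2)
         for lam in (k / 100.0 for k in range(1, 501)))
print("cosh(lambda*c) <= exp(lambda^2*c^2/2) on the grid:", ok)
</pre>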

Apply the above lemma to the random variable

<math>(Y_n\mid X_0,\ldots,X_{n-1})</math>

We have already shown that its expectation <math>\mathbf{E}[(Y_n\mid X_0,\ldots,X_{n-1})]=0</math>, and by the bounded difference condition of Azuma's inequality, we have <math>|Y_n|=|(X_n-X_{n-1})|\le c_n</math>. Thus, due to the above lemma, it holds that

<math>\mathbf{E}\left[\mathrm{e}^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}\right]\le \mathrm{e}^{\lambda^2c_n^2/2}.</math>

Back to our analysis of the expectation <math>\mathbf{E}\left[\mathrm{e}^{\lambda Z_n}\right]</math>, we have

<math>
\begin{align}
\mathbf{E}\left[\mathrm{e}^{\lambda Z_n}\right]
&=\mathbf{E}\left[\mathrm{e}^{\lambda Z_{n-1}}\cdot\mathbf{E}\left[\mathrm{e}^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}\right]\right]\\
&\le\mathbf{E}\left[\mathrm{e}^{\lambda Z_{n-1}}\cdot \mathrm{e}^{\lambda^2c_n^2/2}\right]\\
&=\mathrm{e}^{\lambda^2c_n^2/2}\cdot\mathbf{E}\left[\mathrm{e}^{\lambda Z_{n-1}}\right].
\end{align}
</math>

Applying the same analysis to <math>\mathbf{E}\left[\mathrm{e}^{\lambda Z_{n-1}}\right]</math>, we can solve the above recursion to get

<math>\mathbf{E}\left[\mathrm{e}^{\lambda Z_n}\right]\le\prod_{k=1}^n \mathrm{e}^{\lambda^2c_k^2/2}=\exp\left(\lambda^2\sum_{k=1}^n c_k^2/2\right).</math>

Going back to Markov's inequality,

<math>\Pr[Z_n\ge t]\le\frac{\mathbf{E}\left[\mathrm{e}^{\lambda Z_n}\right]}{\mathrm{e}^{\lambda t}}\le\exp\left(\lambda^2\sum_{k=1}^n c_k^2/2-\lambda t\right).</math>

We then only need to choose a proper <math>\lambda>0</math>.

Optimization

By choosing <math>\lambda=\frac{t}{\sum_{k=1}^n c_k^2}</math>, we have that

<math>\exp\left(\lambda^2\sum_{k=1}^n c_k^2/2-\lambda t\right)=\exp\left(-\frac{t^2}{2\sum_{k=1}^n c_k^2}\right).</math>

Thus, the probability

<math>\Pr\left[X_n-X_0\ge t\right]=\Pr[Z_n\ge t]\le\exp\left(\lambda^2\sum_{k=1}^n c_k^2/2-\lambda t\right)=\exp\left(-\frac{t^2}{2\sum_{k=1}^n c_k^2}\right).</math>

The upper tail of Azuma's inequality is proved. By replacing <math>X_i</math> by <math>-X_i</math>, the lower tail can be treated just as the upper tail. Applying the union bound, Azuma's inequality is proved.

The Doob martingales

The following definition describes a very general approach for constructing an important type of martingales.

Definition (The Doob sequence)
The Doob sequence of a function <math>f</math> with respect to a sequence of random variables <math>X_1,\ldots,X_n</math> is defined by
<math>Y_i=\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_{i}], \quad 0\le i\le n.</math>
In particular, <math>Y_0=\mathbf{E}[f(X_1,\ldots,X_n)]</math> and <math>Y_n=f(X_1,\ldots,X_n)</math>.

The Doob sequence of a function defines a martingale. That is

<math>\mathbf{E}[Y_i\mid X_1,\ldots,X_{i-1}]=Y_{i-1},</math>

for any <math>0\le i\le n</math>.

To prove this claim, we recall the definition that <math>Y_i=\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_{i}]</math>, thus,

<math>
\begin{align}
\mathbf{E}[Y_i\mid X_1,\ldots,X_{i-1}]
&=\mathbf{E}[\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_{i}]\mid X_1,\ldots,X_{i-1}]\\
&=\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_{i-1}]\\
&=Y_{i-1},
\end{align}
</math>

where the second equation is due to the fundamental fact about conditional expectation introduced in the first section.

The Doob martingale describes a very natural procedure to determine a function value of a sequence of random variables. Suppose that we want to predict the value of a function <math>f(X_1,\ldots,X_n)</math> of random variables <math>X_1,\ldots,X_n</math>. The Doob sequence <math>Y_0,Y_1,\ldots,Y_n</math> represents a sequence of refined estimates of the value of <math>f(X_1,\ldots,X_n)</math>, gradually using more information on the values of the random variables <math>X_1,\ldots,X_n</math>. The first element <math>Y_0</math> is just the expectation of <math>f(X_1,\ldots,X_n)</math>. Element <math>Y_i</math> is the expected value of <math>f(X_1,\ldots,X_n)</math> when the values of <math>X_1,\ldots,X_{i}</math> are known, and <math>Y_n=f(X_1,\ldots,X_n)</math> when <math>f(X_1,\ldots,X_n)</math> is fully determined by <math>X_1,\ldots,X_n</math>.

The following two Doob martingales arise in evaluating the parameters of random graphs.

edge exposure martingale
Let <math>G</math> be a random graph on <math>n</math> vertices. Let <math>f</math> be a real-valued function of graphs, such as the chromatic number, the number of triangles, or the size of the largest clique or independent set. Denote <math>m={n\choose 2}</math>. Fix an arbitrary numbering of the potential edges between the <math>n</math> vertices, and denote the edges as <math>e_1,\ldots,e_m</math>. Let
<math>X_i=\begin{cases}1 & \mbox{if }e_i\in G,\\ 0 & \mbox{otherwise}.\end{cases}</math>
Let <math>Y_0=\mathbf{E}[f(G)]</math> and for <math>i=1,\ldots,m</math>, let <math>Y_i=\mathbf{E}[f(G)\mid X_1,\ldots,X_i]</math>.
The sequence <math>Y_0,Y_1,\ldots,Y_m</math> gives a Doob martingale that is commonly called the edge exposure martingale (a small concrete example is sketched after these definitions).
vertex exposure martingale
Instead of revealing edges one at a time, we could reveal the set of edges connected to a given vertex, one vertex at a time. Suppose that the vertex set is <math>[n]</math>. Let <math>X_i</math> be the subgraph of <math>G</math> induced by the vertex set <math>[i]</math>, i.e. the first <math>i</math> vertices.
Let <math>Y_0=\mathbf{E}[f(G)]</math> and for <math>i=1,\ldots,n</math>, let <math>Y_i=\mathbf{E}[f(G)\mid X_1,\ldots,X_i]</math>.
The sequence <math>Y_0,Y_1,\ldots,Y_n</math> gives a Doob martingale that is commonly called the vertex exposure martingale.
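
Here is a small concrete example of our own (not part of the notes): take <math>f(G)</math> to be the number of triangles in a random graph on 4 vertices with edge probability <math>1/2</math>. Since the edges are independent fair coin flips, each <math>Y_i</math> can be computed exactly by averaging <math>f</math> over all possibilities of the unrevealed edges, and the martingale property can be checked directly.

<pre>
from itertools import combinations, product

n = 4
edges = list(combinations(range(n), 2))        # the m = C(4,2) = 6 potential edges
m = len(edges)

def triangles(present):
    """Number of triangles in the graph whose edges are selected by the 0/1 vector `present`."""
    es = {e for e, b in zip(edges, present) if b}
    return sum(1 for a, b, c in combinations(range(n), 3)
               if {(a, b), (a, c), (b, c)} <= es)

def doob(i, revealed):
    """Y_i = E[f(G) | X_1,...,X_i] for G(4,1/2): average f over all unrevealed edges."""
    rest = list(product((0, 1), repeat=m - i))
    return sum(triangles(tuple(revealed) + r) for r in rest) / len(rest)

outcome = (1, 1, 0, 1, 0, 1)                   # one fixed outcome of X_1,...,X_m
print("Doob sequence:", [round(doob(i, outcome[:i]), 3) for i in range(m + 1)])
# starts at Y_0 = E[f(G)] = 4*(1/2)^3 = 0.5 and ends at Y_m = f(G) = 1 for this outcome

# Martingale property at step i: E[Y_i | X_1,...,X_{i-1}] = Y_{i-1}
i = 3
lhs = (doob(i, outcome[:i - 1] + (0,)) + doob(i, outcome[:i - 1] + (1,))) / 2
print("E[Y_3 | X_1,X_2] =", lhs, "  Y_2 =", doob(i - 1, outcome[:i - 1]))
</pre>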

Chromatic number

The random graph <math>G(n,p)</math> is the graph on <math>n</math> vertices <math>[n]</math>, obtained by selecting each pair of vertices to be an edge, randomly and independently, with probability <math>p</math>. We denote <math>G\sim G(n,p)</math> if <math>G</math> is generated in this way.

Theorem [Shamir and Spencer (1987)]
Let <math>G\sim G(n,p)</math>. Let <math>\chi(G)</math> be the chromatic number of <math>G</math>. Then
<math>\Pr\left[|\chi(G)-\mathbf{E}[\chi(G)]|\ge t\sqrt{n}\right]\le 2\mathrm{e}^{-t^2/2}.</math>
Proof.
Consider the vertex exposure martingale
<math>Y_i=\mathbf{E}[\chi(G)\mid X_1,\ldots,X_i]</math>

where each <math>X_k</math> exposes the induced subgraph of <math>G</math> on vertex set <math>[k]</math>. A single vertex can always be given a new color so that the graph is properly colored, thus the bounded difference condition

<math>|Y_i-Y_{i-1}|\le 1</math>

is satisfied. Now apply Azuma's inequality for the martingale <math>Y_1,\ldots,Y_n</math> with respect to <math>X_1,\ldots,X_n</math>.

For t=ω(1), the theorem states that the chromatic number of a random graph is tightly concentrated around its mean. The proof gives no clue as to where the mean is. This actually shows how powerful the martingale inequalities are: we can prove that a distribution is concentrated to its expectation without actually knowing the expectation.
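
For very small graphs the concentration can be observed directly. The sketch below is our own illustration: it computes the exact chromatic number by exhaustive search (feasible only for tiny <math>n</math>), samples random graphs, and reports the empirical mean and standard deviation of <math>\chi(G)</math>; the spread is noticeably smaller than <math>\sqrt{n}</math> even at this tiny size.

<pre>
import random
from itertools import combinations, product
from statistics import mean, pstdev

def chromatic_number(n, edges):
    """Smallest k such that the graph is properly k-colorable (brute force, tiny n only)."""
    for k in range(1, n + 1):
        for coloring in product(range(k), repeat=n):
            if all(coloring[u] != coloring[v] for u, v in edges):
                return k
    return n

def random_graph(n, p):
    return [e for e in combinations(range(n), 2) if random.random() < p]

n, p, samples = 7, 0.5, 200
chis = [chromatic_number(n, random_graph(n, p)) for _ in range(samples)]
print("mean of chi   :", round(mean(chis), 3))
print("std dev of chi:", round(pstdev(chis), 3), "   sqrt(n):", round(n ** 0.5, 3))
</pre>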

Hoeffding's Inequality

The following theorem states the so-called Hoeffding's inequality. It is a generalized version of the Chernoff bounds. Recall that the Chernoff bounds hold for the sum of independent trials. When the random variables are not trials, Hoeffding's inequality is useful, since it holds for the sum of any independent random variables whose ranges are bounded.

Hoeffding's inequality
Let <math>X=\sum_{i=1}^n X_i</math>, where <math>X_1,\ldots,X_n</math> are independent random variables with <math>a_i\le X_i\le b_i</math> for each <math>1\le i\le n</math>. Let <math>\mu=\mathbf{E}[X]</math>. Then
<math>\Pr[|X-\mu|\ge t]\le 2\exp\left(-\frac{t^2}{2\sum_{i=1}^n(b_i-a_i)^2}\right).</math>
Proof.
Define the Doob martingale sequence <math>Y_i=\mathbf{E}\left[\sum_{j=1}^n X_j\,\Big|\, X_1,\ldots,X_{i}\right]</math>. Obviously <math>Y_0=\mu</math> and <math>Y_n=X</math>.
<math>
\begin{align}
|Y_i-Y_{i-1}|
&=\left|\mathbf{E}\left[\sum_{j=1}^n X_j\,\Big|\, X_1,\ldots,X_{i}\right]-\mathbf{E}\left[\sum_{j=1}^n X_j\,\Big|\, X_1,\ldots,X_{i-1}\right]\right|\\
&=\left|\sum_{j=1}^i X_j+\sum_{j=i+1}^n\mathbf{E}[X_j]-\sum_{j=1}^{i-1} X_j-\sum_{j=i}^n\mathbf{E}[X_j]\right|\\
&=\left|X_i-\mathbf{E}[X_{i}]\right|\\
&\le b_i-a_i
\end{align}
</math>

Applying Azuma's inequality to the martingale <math>Y_0,\ldots,Y_n</math> with respect to <math>X_1,\ldots, X_n</math>, Hoeffding's inequality is proved.
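
A final numerical sketch of our own: for a sum of <math>n</math> independent uniform random variables on <math>[0,1]</math> we have <math>a_i=0</math> and <math>b_i=1</math>, and the empirical deviation probability can be compared with the bound above (which, being derived via Azuma's inequality, is valid but far from tight).

<pre>
import math, random

n, t, trials = 100, 12.0, 200000
mu = n * 0.5                                 # X_i uniform on [0,1]: a_i = 0, b_i = 1
hits = 0
for _ in range(trials):
    x = sum(random.random() for _ in range(n))
    if abs(x - mu) >= t:
        hits += 1
bound = 2 * math.exp(-t * t / (2 * n))       # 2*exp(-t^2 / (2*sum_i (b_i - a_i)^2))
print("empirical Pr[|X - mu| >= t]:", hits / trials)
print("Hoeffding bound            :", bound)
</pre>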