Randomized Algorithms (Spring 2013)/Concentration of Measure
Conditional Expectations
The conditional expectation of a random variable [math]\displaystyle{ Y }[/math] with respect to an event [math]\displaystyle{ \mathcal{E} }[/math] is defined by
- [math]\displaystyle{ \mathbf{E}[Y\mid \mathcal{E}]=\sum_{y}y\Pr[Y=y\mid\mathcal{E}]. }[/math]
In particular, if the event [math]\displaystyle{ \mathcal{E} }[/math] is [math]\displaystyle{ X=a }[/math], the conditional expectation
- [math]\displaystyle{ \mathbf{E}[Y\mid X=a] }[/math]
defines a function
- [math]\displaystyle{ f(a)=\mathbf{E}[Y\mid X=a]. }[/math]
Thus, [math]\displaystyle{ \mathbf{E}[Y\mid X] }[/math] can be regarded as a random variable [math]\displaystyle{ f(X) }[/math].
- Example
- Suppose that we uniformly sample a human from all human beings. Let [math]\displaystyle{ Y }[/math] be his/her height, and let [math]\displaystyle{ X }[/math] be the country where he/she is from. For any country [math]\displaystyle{ a }[/math], [math]\displaystyle{ \mathbf{E}[Y\mid X=a] }[/math] gives the average height of that country. And [math]\displaystyle{ \mathbf{E}[Y\mid X] }[/math] is the random variable which can be defined in either of the following ways:
- We choose a human uniformly at random from all human beings, and [math]\displaystyle{ \mathbf{E}[Y\mid X] }[/math] is the average height of the country where he/she comes from.
- We choose a country at random with a probability proportional to its population, and [math]\displaystyle{ \mathbf{E}[Y\mid X] }[/math] is the average height of the chosen country.
The following proposition states some fundamental facts about conditional expectation.
Proposition (fundamental facts about conditional expectation) - Let [math]\displaystyle{ X,Y }[/math] and [math]\displaystyle{ Z }[/math] be arbitrary random variables. Let [math]\displaystyle{ f }[/math] and [math]\displaystyle{ g }[/math] be arbitrary functions. Then
- [math]\displaystyle{ \mathbf{E}[X]=\mathbf{E}[\mathbf{E}[X\mid Y]] }[/math].
- [math]\displaystyle{ \mathbf{E}[X\mid Z]=\mathbf{E}[\mathbf{E}[X\mid Y,Z]\mid Z] }[/math].
- [math]\displaystyle{ \mathbf{E}[\mathbf{E}[f(X)g(X,Y)\mid X]]=\mathbf{E}[f(X)\cdot \mathbf{E}[g(X,Y)\mid X]] }[/math].
The proposition can be formally verified by computing these expectations. Although these equations look formal, their intuitive interpretations are quite clear.
The first equation:
- [math]\displaystyle{ \mathbf{E}[X]=\mathbf{E}[\mathbf{E}[X\mid Y]] }[/math]
says that there are two ways to compute an average. Suppose again that [math]\displaystyle{ X }[/math] is the height of a uniformly random human and [math]\displaystyle{ Y }[/math] is the country where he/she is from. There are two ways to compute the average human height: one is to directly average over the heights of all humans; the other is to first compute the average height for each country, and then average these per-country heights weighted by the populations of the countries.
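To see the two computations side by side, here is a minimal Python sketch on a made-up data set (all country names, populations, and heights below are hypothetical); both ways of averaging produce the same number because the weights are exactly the population shares.

```python
# A toy check that E[X] = E[ E[X | Y] ]: averaging all heights directly
# equals averaging the per-country averages weighted by population.
# All country names, populations, and heights below are hypothetical.

heights = {                      # one (made-up) height per person, grouped by country
    "A": [170, 172, 168, 171, 169],
    "B": [180, 182, 181],
    "C": [160, 162],
}
population = {c: len(people) for c, people in heights.items()}
total_pop = sum(population.values())

# Way 1: average directly over all people.
all_heights = [h for people in heights.values() for h in people]
direct = sum(all_heights) / len(all_heights)

# Way 2: average the per-country averages, weighted by population.
weighted = sum(
    (population[c] / total_pop) * (sum(heights[c]) / len(heights[c]))
    for c in heights
)

print(direct, weighted)          # the two numbers coincide
```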
The second equation:
- [math]\displaystyle{ \mathbf{E}[X\mid Z]=\mathbf{E}[\mathbf{E}[X\mid Y,Z]\mid Z] }[/math]
is the same as the first one, restricted to a particular subspace. As in the previous example, in addition to the height [math]\displaystyle{ X }[/math] and the country [math]\displaystyle{ Y }[/math], let [math]\displaystyle{ Z }[/math] be the gender of the individual. Thus, [math]\displaystyle{ \mathbf{E}[X\mid Z] }[/math] is the average height of a human of a given gender. Again, this can be computed either directly or on a country-by-country basis.
The third equation:
- [math]\displaystyle{ \mathbf{E}[\mathbf{E}[f(X)g(X,Y)\mid X]]=\mathbf{E}[f(X)\cdot \mathbf{E}[g(X,Y)\mid X]] }[/math].
looks obscure at first glance, especially considering that [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math] are not necessarily independent. Nevertheless, the equation follows from the simple fact that, conditioning on any [math]\displaystyle{ X=a }[/math], the function value [math]\displaystyle{ f(X)=f(a) }[/math] becomes a constant and can thus be safely taken outside the expectation by the linearity of expectation. For any value [math]\displaystyle{ X=a }[/math],
- [math]\displaystyle{ \mathbf{E}[f(X)g(X,Y)\mid X=a]=\mathbf{E}[f(a)g(X,Y)\mid X=a]=f(a)\cdot \mathbf{E}[g(X,Y)\mid X=a]. }[/math]
The proposition holds in more general cases when [math]\displaystyle{ X, Y }[/math] and [math]\displaystyle{ Z }[/math] are each a sequence of random variables.
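The following Python sketch checks the second and third equations numerically on a small, arbitrarily generated joint distribution of [math]\displaystyle{ (X,Y,Z) }[/math] (the distribution and the test functions [math]\displaystyle{ f,g }[/math] below are hypothetical choices); all conditional expectations are computed by direct summation over the joint probability mass function.

```python
import itertools
import random

random.seed(0)
# A small arbitrary joint distribution of (X, Y, Z), each taking values in {0, 1, 2}.
support = list(itertools.product(range(3), repeat=3))
weights = [random.random() for _ in support]
norm = sum(weights)
pmf = {xyz: w / norm for xyz, w in zip(support, weights)}

def cond_exp(h, cond=lambda x, y, z: True):
    """E[h(X,Y,Z) | cond(X,Y,Z)], computed by direct summation over the joint pmf."""
    num = sum(p * h(x, y, z) for (x, y, z), p in pmf.items() if cond(x, y, z))
    den = sum(p for (x, y, z), p in pmf.items() if cond(x, y, z))
    return num / den

f = lambda x: x * x + 1          # arbitrary test functions
g = lambda x, y: x + 2 * y

# Second equation: E[X | Z=z] = E[ E[X | Y,Z] | Z=z ] for every value z.
for z0 in range(3):
    lhs = cond_exp(lambda x, y, z: x, lambda x, y, z: z == z0)
    rhs = cond_exp(lambda x, y, z: cond_exp(lambda a, b, c: a,
                                            lambda a, b, c: b == y and c == z),
                   lambda x, y, z: z == z0)
    assert abs(lhs - rhs) < 1e-9

# Third equation: E[ E[f(X)g(X,Y) | X] ] = E[ f(X) * E[g(X,Y) | X] ].
lhs = cond_exp(lambda x, y, z: cond_exp(lambda a, b, c: f(a) * g(a, b),
                                        lambda a, b, c: a == x))
rhs = cond_exp(lambda x, y, z: f(x) * cond_exp(lambda a, b, c: g(a, b),
                                               lambda a, b, c: a == x))
assert abs(lhs - rhs) < 1e-9
print("second and third equations verified on this toy distribution")
```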
Martingales
"Martingale" originally refers to a betting strategy in which the gambler doubles his bet after every loss. Assuming unlimited wealth, this strategy is guaranteed to eventually have a positive net profit. For example, starting from an initial stake 1, after [math]\displaystyle{ n }[/math] losses, if the [math]\displaystyle{ (n+1) }[/math]th bet wins, then it gives a net profit of
- [math]\displaystyle{ 2^n-\sum_{i=1}^{n}2^{i-1}=1, }[/math]
which is a positive number.
However, the assumption of unlimited wealth is unrealistic. With limited wealth and geometrically increasing bets, the gambler is very likely to end up bankrupt. You should never try this strategy in real life. And remember: gambling is bad!
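To illustrate this point, here is a minimal simulation sketch of the doubling strategy with a finite bankroll; the initial capital of 1000, the horizon of 10000 bets, and the number of runs are arbitrary hypothetical choices. In the vast majority of simulated runs the gambler eventually hits a losing streak whose required stake exceeds his remaining capital.

```python
import random

random.seed(42)

def doubling_session(capital, max_bets):
    """Play the doubling strategy on a fair coin: stake 1, double after each loss,
    reset to 1 after each win.  Returns None if the gambler goes bankrupt, i.e.
    cannot cover the next required stake."""
    stake = 1
    for _ in range(max_bets):
        if stake > capital:
            return None                 # cannot place the required bet
        if random.random() < 0.5:       # win: recover all previous losses plus 1
            capital += stake
            stake = 1
        else:                           # loss: double the stake
            capital -= stake
            stake *= 2
    return capital

runs = 1000
bankrupt = sum(doubling_session(capital=1000, max_bets=10000) is None
               for _ in range(runs))
print(f"bankrupt in {bankrupt} out of {runs} runs")
```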
Suppose that the gambler is allowed to use any strategy. His stake on the next bet is decided based on the results of all the bets so far. This gives us a highly dependent sequence of random variables [math]\displaystyle{ X_0,X_1,\ldots }[/math], where [math]\displaystyle{ X_0 }[/math] is his initial capital, and [math]\displaystyle{ X_i }[/math] represents his capital after the [math]\displaystyle{ i }[/math]th bet. Depending on the betting strategy, [math]\displaystyle{ X_i }[/math] can be arbitrarily dependent on [math]\displaystyle{ X_0,\ldots,X_{i-1} }[/math]. However, as long as the game is fair, namely, winning and losing with equal chances, then conditioning on the past variables [math]\displaystyle{ X_0,\ldots,X_{i-1} }[/math], we expect no change in the value of the present variable [math]\displaystyle{ X_{i} }[/math] on average. A sequence of random variables satisfying this property is called a martingale sequence.
Definition (martingale) - A sequence of random variables [math]\displaystyle{ X_0,X_1,\ldots }[/math] is a martingale if for all [math]\displaystyle{ i\gt 0 }[/math],
- [math]\displaystyle{ \begin{align} \mathbf{E}[X_{i}\mid X_0,\ldots,X_{i-1}]=X_{i-1}. \end{align} }[/math]
Examples
- coin flips
- A fair coin is flipped for a number of times. Let [math]\displaystyle{ Z_j\in\{-1,1\} }[/math] denote the outcome of the [math]\displaystyle{ j }[/math]th flip. Let
- [math]\displaystyle{ X_0=0\quad \mbox{ and } \quad X_i=\sum_{j\le i}Z_j }[/math].
- The random variables [math]\displaystyle{ X_0,X_1,\ldots }[/math] define a martingale.
- Proof
- We first observe that [math]\displaystyle{ \mathbf{E}[X_i\mid X_0,\ldots,X_{i-1}] = \mathbf{E}[X_i\mid X_{i-1}] }[/math], which intuitively says that the distribution of the next partial sum depends only on the current partial sum. This property is also called the Markov property in stochastic processes.
- [math]\displaystyle{ \begin{align} \mathbf{E}[X_i\mid X_0,\ldots,X_{i-1}] &= \mathbf{E}[X_i\mid X_{i-1}]\\ &= \mathbf{E}[X_{i-1}+Z_{i}\mid X_{i-1}]\\ &= \mathbf{E}[X_{i-1}\mid X_{i-1}]+\mathbf{E}[Z_{i}\mid X_{i-1}]\\ &= X_{i-1}+\mathbf{E}[Z_{i}\mid X_{i-1}]\\ &= X_{i-1}+\mathbf{E}[Z_{i}] &\quad (\mbox{independence of coin flips})\\ &= X_{i-1} \end{align} }[/math]
- Polya's urn scheme
- Consider an urn (just a container) that initially contains [math]\displaystyle{ b }[/math] black balls and [math]\displaystyle{ w }[/math] white balls. At each step, we uniformly select a ball from the urn, and replace the ball with [math]\displaystyle{ c }[/math] balls of the same color. Let [math]\displaystyle{ X_0=b/(b+w) }[/math], and [math]\displaystyle{ X_i }[/math] be the fraction of black balls in the urn after the [math]\displaystyle{ i }[/math]th step. The sequence [math]\displaystyle{ X_0,X_1,\ldots }[/math] is a martingale.
- edge exposure in a random graph
- Consider a random graph [math]\displaystyle{ G }[/math] generated as follows. Let [math]\displaystyle{ [n] }[/math] be the set of vertices, and let [math]\displaystyle{ m={n\choose 2} }[/math] be the number of all possible edges. For convenience, we enumerate these potential edges by [math]\displaystyle{ e_1,\ldots, e_m }[/math]. For each potential edge [math]\displaystyle{ e_j }[/math], we independently flip a fair coin to decide whether the edge [math]\displaystyle{ e_j }[/math] appears in [math]\displaystyle{ G }[/math]. Let [math]\displaystyle{ I_j }[/math] be the random variable that indicates whether [math]\displaystyle{ e_j\in G }[/math]. We are interested in some graph-theoretic parameter, say the chromatic number, of the random graph [math]\displaystyle{ G }[/math]. Let [math]\displaystyle{ \chi(G) }[/math] be the chromatic number of [math]\displaystyle{ G }[/math]. Let [math]\displaystyle{ X_0=\mathbf{E}[\chi(G)] }[/math], and for each [math]\displaystyle{ i\ge 1 }[/math], let [math]\displaystyle{ X_i=\mathbf{E}[\chi(G)\mid I_1,\ldots,I_{i}] }[/math], namely, the expected chromatic number of the random graph after fixing the first [math]\displaystyle{ i }[/math] edges. This process is called the edge exposure of a random graph, as we "expose" the potential edges one by one.
- The sequence [math]\displaystyle{ X_0,X_1,\ldots,X_m }[/math] is a martingale. In particular, [math]\displaystyle{ X_0=\mathbf{E}[\chi(G)] }[/math], and [math]\displaystyle{ X_m=\chi(G) }[/math]. The martingale [math]\displaystyle{ X_0,X_1,\ldots,X_m }[/math] moves from no information to full information (of the random graph [math]\displaystyle{ G }[/math]) in small steps.
It is nontrivial to formally verify that the edge exposure sequence for a random graph is a martingale. However, we will later see that this construction can be put into a more general context.
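To make the edge exposure sequence concrete, the following Monte Carlo sketch (an illustration, not a formal verification) fixes one realization of the indicators [math]\displaystyle{ I_1,\ldots,I_m }[/math] for a tiny graph with [math]\displaystyle{ n=5 }[/math] vertices and estimates each [math]\displaystyle{ X_i=\mathbf{E}[\chi(G)\mid I_1,\ldots,I_i] }[/math] by resampling the unexposed edges; the choice [math]\displaystyle{ n=5 }[/math] and the sample size are arbitrary. The printed sequence starts near [math]\displaystyle{ \mathbf{E}[\chi(G)] }[/math] and ends at [math]\displaystyle{ \chi(G) }[/math] of the realized graph.

```python
import itertools
import random

random.seed(1)
n = 5                                   # tiny graph, so chi(G) is cheap to compute
potential = list(itertools.combinations(range(n), 2))
m = len(potential)

def chromatic_number(edges):
    """Smallest k admitting a proper k-coloring (brute force, fine for n = 5)."""
    for k in range(1, n + 1):
        for coloring in itertools.product(range(k), repeat=n):
            if all(coloring[u] != coloring[v] for u, v in edges):
                return k
    return n

# Fix one realization of the edge indicators I_1, ..., I_m.
I = [random.random() < 0.5 for _ in range(m)]

def estimate_X(i, samples=300):
    """Monte Carlo estimate of X_i = E[chi(G) | I_1, ..., I_i]:
    keep the first i indicators fixed and resample the remaining ones."""
    total = 0
    for _ in range(samples):
        indicators = I[:i] + [random.random() < 0.5 for _ in range(m - i)]
        edges = [e for e, present in zip(potential, indicators) if present]
        total += chromatic_number(edges)
    return total / samples

# X_0 approximates E[chi(G)]; X_m is exactly chi(G) for the fixed realization.
print([round(estimate_X(i), 2) for i in range(m + 1)])
```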
Generalizations
The martingale can be generalized to be with respect to another sequence of random variables.
Definition (martingale, general version) - A sequence of random variables [math]\displaystyle{ Y_0,Y_1,\ldots }[/math] is a martingale with respect to the sequence [math]\displaystyle{ X_0,X_1,\ldots }[/math] if, for all [math]\displaystyle{ i\ge 0 }[/math], the following conditions hold:
- [math]\displaystyle{ Y_i }[/math] is a function of [math]\displaystyle{ X_0,X_1,\ldots,X_i }[/math];
- [math]\displaystyle{ \begin{align} \mathbf{E}[Y_{i+1}\mid X_0,\ldots,X_{i}]=Y_{i}. \end{align} }[/math]
Therefore, a sequence [math]\displaystyle{ X_0,X_1,\ldots }[/math] is a martingale if it is a martingale with respect to itself.
The point of this generalization is that we are usually more interested in some function of a sequence of random variables than in the sequence itself.
Azuma's Inequality
We introduce a martingale tail inequality, called Azuma's inequality.
Azuma's Inequality - Let [math]\displaystyle{ X_0,X_1,\ldots }[/math] be a martingale such that, for all [math]\displaystyle{ k\ge 1 }[/math],
- [math]\displaystyle{ |X_{k}-X_{k-1}|\le c_k, }[/math]
- Then
- [math]\displaystyle{ \begin{align} \Pr\left[|X_n-X_0|\ge t\right]\le 2\exp\left(-\frac{t^2}{2\sum_{k=1}^nc_k^2}\right). \end{align} }[/math]
Before formally proving this theorem, some comments are in order. First, unlike the Chernoff bounds, there is no assumption of independence. This shows the power of martingale inequalities.
Second, the condition that
- [math]\displaystyle{ |X_{k}-X_{k-1}|\le c_k }[/math]
is central to the proof. This condition is sometimes called the bounded difference condition. If we think of the martingale [math]\displaystyle{ X_0,X_1,\ldots }[/math] as a process evolving through time, where [math]\displaystyle{ X_i }[/math] gives some measurement at time [math]\displaystyle{ i }[/math], the bounded difference condition states that the process does not make big jumps. Azuma's inequality says that if so, then it is unlikely that the process wanders far from its starting point.
A special case is when the differences are bounded by a constant. The following corollary is directly implied by Azuma's inequality.
Corollary - Let [math]\displaystyle{ X_0,X_1,\ldots }[/math] be a martingale such that, for all [math]\displaystyle{ k\ge 1 }[/math],
- [math]\displaystyle{ |X_{k}-X_{k-1}|\le c, }[/math]
- Then
- [math]\displaystyle{ \begin{align} \Pr\left[|X_n-X_0|\ge ct\sqrt{n}\right]\le 2 e^{-t^2/2}. \end{align} }[/math]
This corollary states that for any martingale sequence whose differences are bounded by a constant, the probability that it deviates [math]\displaystyle{ \omega(\sqrt{n}) }[/math] away from its starting point after [math]\displaystyle{ n }[/math] steps is bounded by [math]\displaystyle{ o(1) }[/math].
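As a quick numerical illustration of the corollary (with arbitrarily chosen parameters [math]\displaystyle{ n=400 }[/math], [math]\displaystyle{ t=2 }[/math], and 20000 runs), the following Python sketch compares the empirical tail of a fair [math]\displaystyle{ \pm 1 }[/math] random walk, which is a martingale with [math]\displaystyle{ c=1 }[/math], against the bound [math]\displaystyle{ 2e^{-t^2/2} }[/math].

```python
import math
import random

random.seed(7)
n, runs, t = 400, 20000, 2.0       # arbitrary parameters for the experiment

# X_n = sum of n fair +/-1 steps: a martingale with |X_k - X_{k-1}| = 1, i.e. c = 1.
exceed = 0
for _ in range(runs):
    walk = sum(random.choice((-1, 1)) for _ in range(n))
    if abs(walk) >= t * math.sqrt(n):
        exceed += 1

empirical = exceed / runs
bound = 2 * math.exp(-t * t / 2)   # the corollary with c = 1
print(f"empirical tail {empirical:.4f} <= bound {bound:.4f}")
```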
The proof of Azuma's inequality uses several ideas from the proof of the Chernoff bounds. We first observe that the total deviation of the martingale sequence can be represented as the sum of the differences in each step. Thus, as with the Chernoff bounds, we are looking for a bound on the deviation of a sum of random variables. The strategy of the proof is almost the same as for the Chernoff bounds: we first apply Markov's inequality to the moment generating function, then we bound the moment generating function, and at last we optimize the parameter of the moment generating function. However, unlike in the Chernoff bounds, the martingale differences are no longer independent. So we replace the use of independence in the Chernoff bound by the martingale property. The proof is detailed as follows.
In order to bound the probability of [math]\displaystyle{ |X_n-X_0|\ge t }[/math], we first bound the upper tail [math]\displaystyle{ \Pr[X_n-X_0\ge t] }[/math]. The bound of the lower tail can be symmetrically proved with the [math]\displaystyle{ X_i }[/math] replaced by [math]\displaystyle{ -X_i }[/math].
Represent the deviation as the sum of differences
We define the martingale difference sequence: for [math]\displaystyle{ i\ge 1 }[/math], let
- [math]\displaystyle{ Y_i=X_i-X_{i-1}. }[/math]
It holds that
- [math]\displaystyle{ \begin{align} \mathbf{E}[Y_i\mid X_0,\ldots,X_{i-1}] &=\mathbf{E}[X_i-X_{i-1}\mid X_0,\ldots,X_{i-1}]\\ &=\mathbf{E}[X_i\mid X_0,\ldots,X_{i-1}]-\mathbf{E}[X_{i-1}\mid X_0,\ldots,X_{i-1}]\\ &=X_{i-1}-X_{i-1}\\ &=0. \end{align} }[/math]
The second-to-last equation is due to the fact that [math]\displaystyle{ X_0,X_1,\ldots }[/math] is a martingale and to the definition of conditional expectation.
Let [math]\displaystyle{ Z_n }[/math] be the accumulated differences
- [math]\displaystyle{ Z_n=\sum_{i=1}^n Y_i. }[/math]
The deviation [math]\displaystyle{ (X_n-X_0) }[/math] can be computed by the accumulated differences:
- [math]\displaystyle{ \begin{align} X_n-X_0 &=(X_1-X_{0})+(X_2-X_1)+\cdots+(X_n-X_{n-1})\\ &=\sum_{i=1}^n Y_i\\ &=Z_n. \end{align} }[/math]
We then only need to upper bound the probability of the event [math]\displaystyle{ Z_n\ge t }[/math].
Apply Markov's inequality to the moment generating function
For any [math]\displaystyle{ \lambda\gt 0 }[/math], the event [math]\displaystyle{ Z_n\ge t }[/math] is equivalent to the event [math]\displaystyle{ e^{\lambda Z_n}\ge e^{\lambda t} }[/math]. Applying Markov's inequality, we have
- [math]\displaystyle{ \begin{align} \Pr\left[Z_n\ge t\right] &=\Pr\left[e^{\lambda Z_n}\ge e^{\lambda t}\right]\\ &\le \frac{\mathbf{E}\left[e^{\lambda Z_n}\right]}{e^{\lambda t}}. \end{align} }[/math]
This is exactly the same as what we did to prove the Chernoff bound. Next, we need to bound the moment generating function [math]\displaystyle{ \mathbf{E}\left[e^{\lambda Z_n}\right] }[/math].
Bound the moment generating functions
The moment generating function can be computed as follows:
- [math]\displaystyle{ \begin{align} \mathbf{E}\left[e^{\lambda Z_n}\right] &=\mathbf{E}\left[\mathbf{E}\left[e^{\lambda Z_n}\mid X_0,\ldots,X_{n-1}\right]\right]\\ &=\mathbf{E}\left[\mathbf{E}\left[e^{\lambda (Z_{n-1}+Y_n)}\mid X_0,\ldots,X_{n-1}\right]\right]\\ &=\mathbf{E}\left[\mathbf{E}\left[e^{\lambda Z_{n-1}}\cdot e^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}\right]\right]\\ &=\mathbf{E}\left[e^{\lambda Z_{n-1}}\cdot\mathbf{E}\left[e^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}\right]\right] \end{align} }[/math]
The first and the last equations are due to the fundamental facts about conditional expectation which we proved in the first section.
We then upper bound [math]\displaystyle{ \mathbf{E}\left[e^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}\right] }[/math] by a constant. To do so, we need the following technical lemma, which is proved using the convexity of the function [math]\displaystyle{ e^{\lambda x} }[/math].
Lemma - Let [math]\displaystyle{ X }[/math] be a random variable such that [math]\displaystyle{ \mathbf{E}[X]=0 }[/math] and [math]\displaystyle{ |X|\le c }[/math]. Then for [math]\displaystyle{ \lambda\gt 0 }[/math],
- [math]\displaystyle{ \mathbf{E}[e^{\lambda X}]\le e^{\lambda^2c^2/2}. }[/math]
Proof. Observe that for [math]\displaystyle{ \lambda\gt 0 }[/math], the function [math]\displaystyle{ e^{\lambda X} }[/math] of the variable [math]\displaystyle{ X }[/math] is convex in the interval [math]\displaystyle{ [-c,c] }[/math]. We draw a line between the two endpoints [math]\displaystyle{ (-c, e^{-\lambda c}) }[/math] and [math]\displaystyle{ (c, e^{\lambda c}) }[/math]. The curve of [math]\displaystyle{ e^{\lambda X} }[/math] lies entirely below this line. Thus, - [math]\displaystyle{ \begin{align} e^{\lambda X} &\le \frac{c-X}{2c}e^{-\lambda c}+\frac{c+X}{2c}e^{\lambda c}\\ &=\frac{e^{\lambda c}+e^{-\lambda c}}{2}+\frac{X}{2c}(e^{\lambda c}-e^{-\lambda c}). \end{align} }[/math]
Since [math]\displaystyle{ \mathbf{E}[X]=0 }[/math], we have
- [math]\displaystyle{ \begin{align} \mathbf{E}[e^{\lambda X}] &\le \mathbf{E}[\frac{e^{\lambda c}+e^{-\lambda c}}{2}+\frac{X}{2c}(e^{\lambda c}-e^{-\lambda c})]\\ &=\frac{e^{\lambda c}+e^{-\lambda c}}{2}+\frac{e^{\lambda c}-e^{-\lambda c}}{2c}\mathbf{E}[X]\\ &=\frac{e^{\lambda c}+e^{-\lambda c}}{2}. \end{align} }[/math]
By expanding both sides as Taylor series, it can be verified that [math]\displaystyle{ \frac{e^{\lambda c}+e^{-\lambda c}}{2}\le e^{\lambda^2c^2/2} }[/math].
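To spell out this last step (a routine verification): writing [math]\displaystyle{ x=\lambda c }[/math], the two Taylor expansions are

- [math]\displaystyle{ \frac{e^{x}+e^{-x}}{2}=\sum_{k\ge 0}\frac{x^{2k}}{(2k)!} \quad\mbox{and}\quad e^{x^2/2}=\sum_{k\ge 0}\frac{x^{2k}}{2^k\,k!}, }[/math]

and the left series is dominated term by term by the right one, since [math]\displaystyle{ (2k)!\ge 2^k\,k! }[/math] for every [math]\displaystyle{ k\ge 0 }[/math].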
- [math]\displaystyle{ \square }[/math]
Apply the above lemma to the random variable
- [math]\displaystyle{ (Y_n \mid X_0,\ldots,X_{n-1}) }[/math]
We have already shown that its expectation [math]\displaystyle{ \mathbf{E}[(Y_n \mid X_0,\ldots,X_{n-1})]=0, }[/math] and by the bounded difference condition of Azuma's inequality, we have [math]\displaystyle{ |Y_n|=|(X_n-X_{n-1})|\le c_n. }[/math] Thus, due to the above lemma, it holds that
- [math]\displaystyle{ \mathbf{E}[e^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}]\le e^{\lambda^2c_n^2/2}. }[/math]
Back to our analysis of the expectation [math]\displaystyle{ \mathbf{E}\left[e^{\lambda Z_n}\right] }[/math], we have
- [math]\displaystyle{ \begin{align} \mathbf{E}\left[e^{\lambda Z_n}\right] &=\mathbf{E}\left[e^{\lambda Z_{n-1}}\cdot\mathbf{E}\left[e^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}\right]\right]\\ &\le \mathbf{E}\left[e^{\lambda Z_{n-1}}\cdot e^{\lambda^2c_n^2/2}\right]\\ &= e^{\lambda^2c_n^2/2}\cdot\mathbf{E}\left[e^{\lambda Z_{n-1}}\right] . \end{align} }[/math]
Applying the same analysis to [math]\displaystyle{ \mathbf{E}\left[e^{\lambda Z_{n-1}}\right] }[/math] repeatedly, we can solve the above recursion to obtain
- [math]\displaystyle{ \begin{align} \mathbf{E}\left[e^{\lambda Z_n}\right] &\le \prod_{k=1}^n e^{\lambda^2c_k^2/2}\\ &= \exp\left(\lambda^2\sum_{k=1}^n c_k^2/2\right). \end{align} }[/math]
Going back to Markov's inequality,
- [math]\displaystyle{ \begin{align} \Pr\left[Z_n\ge t\right] &\le \frac{\mathbf{E}\left[e^{\lambda Z_n}\right]}{e^{\lambda t}}\\ &\le \exp\left(\lambda^2\sum_{k=1}^n c_k^2/2-\lambda t\right). \end{align} }[/math]
We then only need to choose a proper [math]\displaystyle{ \lambda\gt 0 }[/math].
Optimization
By choosing [math]\displaystyle{ \lambda=\frac{t}{\sum_{k=1}^n c_k^2} }[/math], we have that
- [math]\displaystyle{ \exp\left(\lambda^2\sum_{k=1}^n c_k^2/2-\lambda t\right)=\exp\left(-\frac{t^2}{2\sum_{k=1}^n c_k^2}\right). }[/math]
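This choice of [math]\displaystyle{ \lambda }[/math] minimizes the exponent: viewed as a quadratic function of [math]\displaystyle{ \lambda }[/math], the exponent [math]\displaystyle{ \lambda^2\sum_{k=1}^n c_k^2/2-\lambda t }[/math] attains its minimum where its derivative vanishes, namely at

- [math]\displaystyle{ \lambda\sum_{k=1}^n c_k^2-t=0, \quad\mbox{i.e.,}\quad \lambda=\frac{t}{\sum_{k=1}^n c_k^2}\gt 0. }[/math]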
Thus, the probability
- [math]\displaystyle{ \begin{align} \Pr\left[X_n-X_0\ge t\right] &=\Pr\left[Z_n\ge t\right]\\ &\le \exp\left(\lambda^2\sum_{k=1}^n c_k^2/2-\lambda t\right)\\ &= \exp\left(-\frac{t^2}{2\sum_{k=1}^n c_k^2}\right). \end{align} }[/math]
This proves the upper tail of Azuma's inequality. By replacing [math]\displaystyle{ X_i }[/math] with [math]\displaystyle{ -X_i }[/math], the lower tail can be bounded in exactly the same way. Applying the union bound to the two tails proves Azuma's inequality.