随机算法 (Fall 2011)/Azuma's Inequality

We introduce a martingale tail inequality, called Azuma's inequality.

Azuma's Inequality
Let [math]\displaystyle{ X_0,X_1,\ldots }[/math] be a martingale such that, for all [math]\displaystyle{ k\ge 1 }[/math],
[math]\displaystyle{ |X_{k}-X_{k-1}|\le c_k. }[/math]
Then
[math]\displaystyle{ \begin{align} \Pr\left[|X_n-X_0|\ge t\right]\le 2\exp\left(-\frac{t^2}{2\sum_{k=1}^nc_k^2}\right). \end{align} }[/math]

Before formally proving this theorem, some comments are in order. First, unlike the Chernoff bounds, there is no assumption of independence. This shows the power of martingale inequalities.

Second, the condition that

[math]\displaystyle{ |X_{k}-X_{k-1}|\le c_k }[/math]

is central to the proof. This condition is sometimes called the bounded difference condition. If we think of the martingale [math]\displaystyle{ X_0,X_1,\ldots }[/math] as a process evolving through time, where [math]\displaystyle{ X_i }[/math] gives some measurement at time [math]\displaystyle{ i }[/math], the bounded difference condition states that the process does not make big jumps. Azuma's inequality says that if so, then it is unlikely that the process wanders far from its starting point.

A special case is when the differences are bounded by a constant. The following corollary is directly implied by Azuma's inequality.

Corollary
Let [math]\displaystyle{ X_0,X_1,\ldots }[/math] be a martingale such that, for all [math]\displaystyle{ k\ge 1 }[/math],
[math]\displaystyle{ |X_{k}-X_{k-1}|\le c. }[/math]
Then
[math]\displaystyle{ \begin{align} \Pr\left[|X_n-X_0|\ge ct\sqrt{n}\right]\le 2 e^{-t^2/2}. \end{align} }[/math]

This corollary states that for any martingale sequence whose differences are bounded by a constant, the probability that it deviates [math]\displaystyle{ \omega(\sqrt{n}) }[/math] far away from the starting point after [math]\displaystyle{ n }[/math] steps is bounded by [math]\displaystyle{ o(1) }[/math].
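
As a quick sanity check, the corollary can also be tested by simulation. The following Python sketch (a minimal illustration with names of our own choosing, not part of the course material) uses a fair [math]\displaystyle{ \pm 1 }[/math] random walk, which is a martingale with all differences bounded by [math]\displaystyle{ c=1 }[/math]:

import math
import random

def empirical_tail(n, t, trials=100_000):
    # Estimate Pr[|X_n - X_0| >= t * sqrt(n)] for a fair +/-1 random walk,
    # a martingale whose differences are bounded by c = 1.
    threshold = t * math.sqrt(n)
    hits = 0
    for _ in range(trials):
        x_n = sum(random.choice((-1, 1)) for _ in range(n))  # X_n - X_0
        if abs(x_n) >= threshold:
            hits += 1
    return hits / trials

n, t = 100, 2.0
print("empirical tail:", empirical_tail(n, t))     # about 0.046 empirically
print("Azuma bound:   ", 2 * math.exp(-t**2 / 2))  # 2e^{-2}, about 0.271

The empirical tail stays below the bound [math]\displaystyle{ 2e^{-t^2/2} }[/math], as the corollary guarantees; the gap reflects the fact that the inequality holds for arbitrary martingales with bounded differences.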

The proof of Azuma's inequality uses several ideas that also appear in the proof of the Chernoff bounds. We first observe that the total deviation of the martingale sequence can be represented as the sum of the differences at every step. Thus, as with the Chernoff bounds, we are looking for a bound on the deviation of a sum of random variables. The strategy of the proof is almost the same as that of the Chernoff bounds: we first apply Markov's inequality to the moment generating function, then we bound the moment generating function, and at last we optimize its parameter. However, unlike in the Chernoff bounds, the martingale differences are no longer independent, so we replace the use of independence with the martingale property. The proof is detailed as follows.

In order to bound the probability of [math]\displaystyle{ |X_n-X_0|\ge t }[/math], we first bound the upper tail [math]\displaystyle{ \Pr[X_n-X_0\ge t] }[/math]. The bound of the lower tail can be symmetrically proved with the [math]\displaystyle{ X_i }[/math] replaced by [math]\displaystyle{ -X_i }[/math].

Represent the deviation as the sum of differences

We define the martingale difference sequence: for [math]\displaystyle{ i\ge 1 }[/math], let

[math]\displaystyle{ Y_i=X_i-X_{i-1}. }[/math]

It holds that

[math]\displaystyle{ \begin{align} \mathbf{E}[Y_i\mid X_0,\ldots,X_{i-1}] &=\mathbf{E}[X_i-X_{i-1}\mid X_0,\ldots,X_{i-1}]\\ &=\mathbf{E}[X_i\mid X_0,\ldots,X_{i-1}]-\mathbf{E}[X_{i-1}\mid X_0,\ldots,X_{i-1}]\\ &=X_{i-1}-X_{i-1}\\ &=0. \end{align} }[/math]

The second-to-last equation is due to the fact that [math]\displaystyle{ X_0,X_1,\ldots }[/math] is a martingale, together with the definition of conditional expectation.

Let [math]\displaystyle{ Z_n }[/math] be the accumulated differences

[math]\displaystyle{ Z_n=\sum_{i=1}^n Y_i. }[/math]

The deviation [math]\displaystyle{ (X_n-X_0) }[/math] can be computed from the accumulated differences:

[math]\displaystyle{ \begin{align} X_n-X_0 &=(X_1-X_{0})+(X_2-X_1)+\cdots+(X_n-X_{n-1})\\ &=\sum_{i=1}^n Y_i\\ &=Z_n. \end{align} }[/math]
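
For concreteness, the telescoping identity is easy to check mechanically. A minimal Python sketch (the path xs is arbitrary illustrative data, not part of the proof):

xs = [0, 1, 0, -1, -2, -1]                           # a sample path X_0, ..., X_n
ys = [xs[i] - xs[i - 1] for i in range(1, len(xs))]  # differences Y_i = X_i - X_{i-1}
assert sum(ys) == xs[-1] - xs[0]                     # Z_n = sum of Y_i = X_n - X_0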

We then only need to upper bound the probability of the event [math]\displaystyle{ Z_n\ge t }[/math].

Apply Markov's inequality to the moment generating function

The event [math]\displaystyle{ Z_n\ge t }[/math] is equivalent to the event [math]\displaystyle{ e^{\lambda Z_n}\ge e^{\lambda t} }[/math] for any [math]\displaystyle{ \lambda\gt 0 }[/math]. Applying Markov's inequality, we have

[math]\displaystyle{ \begin{align} \Pr\left[Z_n\ge t\right] &=\Pr\left[e^{\lambda Z_n}\ge e^{\lambda t}\right]\\ &\le \frac{\mathbf{E}\left[e^{\lambda Z_n}\right]}{e^{\lambda t}}. \end{align} }[/math]

This is exactly the same as what we did to prove the Chernoff bound. Next, we need to bound the moment generating function [math]\displaystyle{ \mathbf{E}\left[e^{\lambda Z_n}\right] }[/math].

Bound the moment generating functions

The moment generating function

[math]\displaystyle{ \begin{align} \mathbf{E}\left[e^{\lambda Z_n}\right] &=\mathbf{E}\left[\mathbf{E}\left[e^{\lambda Z_n}\mid X_0,\ldots,X_{n-1}\right]\right]\\ &=\mathbf{E}\left[\mathbf{E}\left[e^{\lambda (Z_{n-1}+Y_n)}\mid X_0,\ldots,X_{n-1}\right]\right]\\ &=\mathbf{E}\left[\mathbf{E}\left[e^{\lambda Z_{n-1}}\cdot e^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}\right]\right]\\ &=\mathbf{E}\left[e^{\lambda Z_{n-1}}\cdot\mathbf{E}\left[e^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}\right]\right] \end{align} }[/math]

The first and the last equations are due to the fundamental facts about conditional expectation which we proved in the first section.

We then upper bound [math]\displaystyle{ \mathbf{E}\left[e^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}\right] }[/math] by a constant. To do so, we need the following technical lemma, which is proved using the convexity of [math]\displaystyle{ e^{\lambda X} }[/math].

Lemma
Let [math]\displaystyle{ X }[/math] be a random variable such that [math]\displaystyle{ \mathbf{E}[X]=0 }[/math] and [math]\displaystyle{ |X|\le c }[/math]. Then for [math]\displaystyle{ \lambda\gt 0 }[/math],
[math]\displaystyle{ \mathbf{E}[e^{\lambda X}]\le e^{\lambda^2c^2/2}. }[/math]
Proof.
Observe that for [math]\displaystyle{ \lambda\gt 0 }[/math], the function [math]\displaystyle{ e^{\lambda X} }[/math] of the variable [math]\displaystyle{ X }[/math] is convex on the interval [math]\displaystyle{ [-c,c] }[/math]. We draw the line segment between the two endpoints [math]\displaystyle{ (-c, e^{-\lambda c}) }[/math] and [math]\displaystyle{ (c, e^{\lambda c}) }[/math]. By convexity, the curve of [math]\displaystyle{ e^{\lambda X} }[/math] lies entirely below this line segment. Thus,
[math]\displaystyle{ \begin{align} e^{\lambda X} &\le \frac{c-X}{2c}e^{-\lambda c}+\frac{c+X}{2c}e^{\lambda c}\\ &=\frac{e^{\lambda c}+e^{-\lambda c}}{2}+\frac{X}{2c}(e^{\lambda c}-e^{-\lambda c}). \end{align} }[/math]

Since [math]\displaystyle{ \mathbf{E}[X]=0 }[/math], we have

[math]\displaystyle{ \begin{align} \mathbf{E}[e^{\lambda X}] &\le \mathbf{E}[\frac{e^{\lambda c}+e^{-\lambda c}}{2}+\frac{X}{2c}(e^{\lambda c}-e^{-\lambda c})]\\ &=\frac{e^{\lambda c}+e^{-\lambda c}}{2}+\frac{e^{\lambda c}-e^{-\lambda c}}{2c}\mathbf{E}[X]\\ &=\frac{e^{\lambda c}+e^{-\lambda c}}{2}. \end{align} }[/math]

By expanding both sides as Taylor series, it can be verified that [math]\displaystyle{ \frac{e^{\lambda c}+e^{-\lambda c}}{2}\le e^{\lambda^2c^2/2} }[/math].
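
Indeed, comparing the two series term by term,

[math]\displaystyle{ \frac{e^{\lambda c}+e^{-\lambda c}}{2}=\sum_{k=0}^\infty\frac{(\lambda c)^{2k}}{(2k)!}\le\sum_{k=0}^\infty\frac{1}{k!}\left(\frac{\lambda^2c^2}{2}\right)^{k}=e^{\lambda^2c^2/2}, }[/math]

where the inequality holds termwise because [math]\displaystyle{ (2k)!\ge 2^kk! }[/math] for all [math]\displaystyle{ k\ge 0 }[/math].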

[math]\displaystyle{ \square }[/math]

Apply the above lemma to the random variable

[math]\displaystyle{ (Y_n \mid X_0,\ldots,X_{n-1}) }[/math]

We have already shown that its expectation [math]\displaystyle{ \mathbf{E}[Y_n \mid X_0,\ldots,X_{n-1}]=0, }[/math] and by the bounded difference condition of Azuma's inequality, we have [math]\displaystyle{ |Y_n|=|X_n-X_{n-1}|\le c_n. }[/math] Thus, by the above lemma, it holds that

[math]\displaystyle{ \mathbf{E}[e^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}]\le e^{\lambda^2c_n^2/2}. }[/math]

Back to our analysis of the expectation [math]\displaystyle{ \mathbf{E}\left[e^{\lambda Z_n}\right] }[/math], we have

[math]\displaystyle{ \begin{align} \mathbf{E}\left[e^{\lambda Z_n}\right] &=\mathbf{E}\left[e^{\lambda Z_{n-1}}\cdot\mathbf{E}\left[e^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}\right]\right]\\ &\le \mathbf{E}\left[e^{\lambda Z_{n-1}}\cdot e^{\lambda^2c_n^2/2}\right]\\ &= e^{\lambda^2c_n^2/2}\cdot\mathbf{E}\left[e^{\lambda Z_{n-1}}\right] . \end{align} }[/math]

Applying the same analysis to [math]\displaystyle{ \mathbf{E}\left[e^{\lambda Z_{n-1}}\right] }[/math] and iterating, we can solve the above recursion and obtain

[math]\displaystyle{ \begin{align} \mathbf{E}\left[e^{\lambda Z_n}\right] &\le \prod_{k=1}^n e^{\lambda^2c_k^2/2}\\ &= \exp\left(\lambda^2\sum_{k=1}^n c_k^2/2\right). \end{align} }[/math]
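
The bound on the moment generating function can itself be checked by simulation. A minimal Python sketch, again assuming a fair [math]\displaystyle{ \pm 1 }[/math] random walk (so [math]\displaystyle{ c_k=1 }[/math] for all [math]\displaystyle{ k }[/math]; the function name mgf_estimate is ours):

import math
import random

def mgf_estimate(n, lam, trials=100_000):
    # Monte Carlo estimate of E[exp(lam * Z_n)] for a fair +/-1 random walk.
    total = 0.0
    for _ in range(trials):
        z_n = sum(random.choice((-1, 1)) for _ in range(n))  # Z_n = X_n - X_0
        total += math.exp(lam * z_n)
    return total / trials

n, lam = 50, 0.2
print("estimated MGF:", mgf_estimate(n, lam))      # true value cosh(0.2)**50, about 2.70
print("proved bound: ", math.exp(lam**2 * n / 2))  # exp(1), about 2.718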

Going back to Markov's inequality,

[math]\displaystyle{ \begin{align} \Pr\left[Z_n\ge t\right] &\le \frac{\mathbf{E}\left[e^{\lambda Z_n}\right]}{e^{\lambda t}}\\ &\le \exp\left(\lambda^2\sum_{k=1}^n c_k^2/2-\lambda t\right). \end{align} }[/math]

We then only need to choose a proper [math]\displaystyle{ \lambda\gt 0 }[/math].

Optimization

By choosing [math]\displaystyle{ \lambda=\frac{t}{\sum_{k=1}^n c_k^2} }[/math], which minimizes the exponent [math]\displaystyle{ \lambda^2\sum_{k=1}^n c_k^2/2-\lambda t }[/math], we have that

[math]\displaystyle{ \exp\left(\lambda^2\sum_{k=1}^n c_k^2/2-\lambda t\right)=\exp\left(-\frac{t^2}{2\sum_{k=1}^n c_k^2}\right). }[/math]

Thus, the probability

[math]\displaystyle{ \begin{align} \Pr\left[X_n-X_0\ge t\right] &=\Pr\left[Z_n\ge t\right]\\ &\le \exp\left(\lambda^2\sum_{k=1}^n c_k^2/2-\lambda t\right)\\ &= \exp\left(-\frac{t^2}{2\sum_{k=1}^n c_k^2}\right). \end{align} }[/math]
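
For instance (an illustrative choice of parameters, not from the original statement), if [math]\displaystyle{ c_k=1 }[/math] for every [math]\displaystyle{ k }[/math] and [math]\displaystyle{ n=100 }[/math], then for [math]\displaystyle{ t=30 }[/math] the upper tail is at most [math]\displaystyle{ \exp\left(-\frac{900}{200}\right)=e^{-4.5}\approx 0.011 }[/math].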

This proves the upper tail of Azuma's inequality. By replacing [math]\displaystyle{ X_i }[/math] with [math]\displaystyle{ -X_i }[/math], the lower tail can be treated in exactly the same way. Applying the union bound to the two tails completes the proof of Azuma's inequality.