Advanced Algorithms (Fall 2019)/Concentration of measure


= Chernoff Bound =

Suppose that we have a fair coin. If we toss it once, then the outcome is completely unpredictable. But if we toss it, say, 1000 times, then the number of HEADs is very likely to be around 500. This phenomenon is called the '''concentration of measure'''. The Chernoff bound is an inequality that characterizes the concentration phenomenon for the sum of independent trials.
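To see the concentration numerically before proving anything, here is a minimal simulation sketch (in Python; the parameters and 5% window are my own choice, not part of the notes):
<pre>
import random

def count_heads(n):
    """Toss a fair coin n times and count the HEADs."""
    return sum(random.randint(0, 1) for _ in range(n))

n, trials = 1000, 1000
# Fraction of experiments in which the count lands within 5% of n/2 = 500.
within = sum(1 for _ in range(trials)
             if abs(count_heads(n) - n / 2) <= 0.05 * n) / trials
print("fraction within 5% of 500:", within)
</pre>
Almost every run lands in the narrow window around 500; this is exactly the phenomenon the Chernoff bound quantifies.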

Before formally stating the Chernoff bound, let's introduce the moment generating function.

== Moment generating functions ==

The more we know about the moments of a random variable <math>X</math>, the more information we have about <math>X</math>. There is a so-called moment generating function, which "packs" all the information about the moments of <math>X</math> into one function.

{{Theorem
|Definition|
:The '''moment generating function''' of a random variable <math>X</math> is defined as <math>\mathbf{E}\left[e^{\lambda X}\right]</math>, where <math>\lambda</math> is the parameter of the function.
}}

By Taylor's expansion and the linearity of expectations,
:<math>\begin{align}
\mathbf{E}\left[e^{\lambda X}\right]
&=\mathbf{E}\left[\sum_{k=0}^\infty\frac{\lambda^k}{k!}X^k\right]\\
&=\sum_{k=0}^\infty\frac{\lambda^k}{k!}\mathbf{E}\left[X^k\right].
\end{align}</math>

The moment generating function <math>\mathbf{E}\left[e^{\lambda X}\right]</math> is a function of <math>\lambda</math>.
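As a quick sanity check (my own example, not from the notes): for a Bernoulli variable with parameter <math>p</math>, every moment <math>\mathbf{E}[X^k]=p</math> for <math>k\ge 1</math>, so the series above sums to <math>1+p(e^\lambda-1)</math>, matching the direct computation of <math>\mathbf{E}[e^{\lambda X}]</math>:
<pre>
import math

p, lam = 0.3, 0.7
# Direct computation: X is 1 with probability p and 0 otherwise.
direct = (1 - p) * math.exp(lam * 0) + p * math.exp(lam * 1)
# Taylor series: E[X^k] = p for every k >= 1 (and the k = 0 term is 1).
series = 1 + sum(lam**k / math.factorial(k) * p for k in range(1, 50))
print(direct, series)  # the two values agree up to truncation error
</pre>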

== The Chernoff bound ==

The Chernoff bounds are exponentially sharp tail inequalities for the sum of independent trials. The bounds are obtained by applying Markov's inequality to the moment generating function of the sum of independent trials, with some appropriate choice of the parameter <math>\lambda</math>.

{{Theorem
|Chernoff bound (the upper tail)|
:Let <math>X=\sum_{i=1}^n X_i</math>, where <math>X_1,X_2,\ldots,X_n</math> are independent Poisson trials. Let <math>\mu=\mathbf{E}[X]</math>.
:Then for any <math>\delta>0</math>,
::<math>\Pr[X\ge (1+\delta)\mu]\le\left(\frac{e^{\delta}}{(1+\delta)^{(1+\delta)}}\right)^{\mu}.</math>
}}
{{Proof|
For any <math>\lambda>0</math>, <math>X\ge (1+\delta)\mu</math> is equivalent to <math>e^{\lambda X}\ge e^{\lambda (1+\delta)\mu}</math>, thus
:<math>\begin{align}
\Pr[X\ge (1+\delta)\mu]
&=\Pr\left[e^{\lambda X}\ge e^{\lambda (1+\delta)\mu}\right]\\
&\le \frac{\mathbf{E}\left[e^{\lambda X}\right]}{e^{\lambda (1+\delta)\mu}},
\end{align}</math>
where the last step follows by Markov's inequality.

Computing the moment generating function <math>\mathbf{E}[e^{\lambda X}]</math>:
:<math>\begin{align}
\mathbf{E}\left[e^{\lambda X}\right]
&=\mathbf{E}\left[e^{\lambda \sum_{i=1}^n X_i}\right]\\
&=\mathbf{E}\left[\prod_{i=1}^n e^{\lambda X_i}\right]\\
&=\prod_{i=1}^n \mathbf{E}\left[e^{\lambda X_i}\right]. \quad (\mbox{for independent random variables})
\end{align}</math>

Let <math>p_i=\Pr[X_i=1]</math> for <math>i=1,2,\ldots,n</math>. Then,
:<math>\mu=\mathbf{E}[X]=\mathbf{E}\left[\sum_{i=1}^n X_i\right]=\sum_{i=1}^n\mathbf{E}[X_i]=\sum_{i=1}^n p_i.</math>

We bound the moment generating function for each individual <math>X_i</math> as follows.
:<math>\begin{align}
\mathbf{E}\left[e^{\lambda X_i}\right]
&=p_i\cdot e^{\lambda\cdot 1}+(1-p_i)\cdot e^{\lambda\cdot 0}\\
&=1+p_i(e^\lambda-1)\\
&\le e^{p_i(e^\lambda-1)},
\end{align}</math>
where in the last step we apply Taylor's expansion so that <math>e^y\ge 1+y</math> with <math>y=p_i(e^\lambda-1)\ge 0</math>. (By doing this, we can transform the product into a sum of the <math>p_i</math>'s in the exponent, which is <math>\mu</math>.)

Therefore,
:<math>\begin{align}
\mathbf{E}\left[e^{\lambda X}\right]
&=\prod_{i=1}^n \mathbf{E}\left[e^{\lambda X_i}\right]\\
&\le \prod_{i=1}^n e^{p_i(e^\lambda-1)}\\
&=\exp\left(\sum_{i=1}^n p_i(e^{\lambda}-1)\right)\\
&=e^{(e^\lambda-1)\mu}.
\end{align}</math>

Thus, we have shown that for any <math>\lambda>0</math>,
:<math>\begin{align}
\Pr[X\ge (1+\delta)\mu]
&\le \frac{\mathbf{E}\left[e^{\lambda X}\right]}{e^{\lambda (1+\delta)\mu}}\\
&\le \frac{e^{(e^\lambda-1)\mu}}{e^{\lambda (1+\delta)\mu}}\\
&=\left(\frac{e^{e^\lambda-1}}{e^{\lambda (1+\delta)}}\right)^\mu.
\end{align}</math>

For any <math>\delta>0</math>, we can let <math>\lambda=\ln(1+\delta)>0</math> to get
:<math>\Pr[X\ge (1+\delta)\mu]\le\left(\frac{e^{\delta}}{(1+\delta)^{(1+\delta)}}\right)^{\mu}.</math>
}}

The idea of the proof is actually quite clear: we apply Markov's inequality to <math>e^{\lambda X}</math> and for the rest, we just estimate the moment generating function <math>\mathbf{E}[e^{\lambda X}]</math>. To make the bound as tight as possible, we minimized <math>\frac{e^{e^\lambda-1}}{e^{\lambda(1+\delta)}}</math> by setting <math>\lambda=\ln(1+\delta)</math>, which can be justified by taking derivatives of <math>\frac{e^{e^\lambda-1}}{e^{\lambda(1+\delta)}}</math>.
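To see how this plays out numerically, here is a small sketch (assuming, as an example of my own, that <math>X</math> is a sum of <math>n=1000</math> fair coin flips, i.e. Binomial): it compares the exact upper tail with the Chernoff bound, and checks that <math>\lambda=\ln(1+\delta)</math> indeed minimizes <math>\frac{e^{e^\lambda-1}}{e^{\lambda(1+\delta)}}</math> over a grid of <math>\lambda</math>:
<pre>
import math

n, p, delta = 1000, 0.5, 0.2
mu = n * p

# Exact tail Pr[X >= (1+delta)mu] for X ~ Binomial(n, p).
t = math.ceil((1 + delta) * mu)
exact = sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t, n + 1))

# The Chernoff bound (e^delta / (1+delta)^(1+delta))^mu.
bound = (math.exp(delta) / (1 + delta)**(1 + delta))**mu
print(exact, bound)  # the bound is valid: exact <= bound

# The quantity minimized in the proof, as a function of lambda > 0.
f = lambda lam: math.exp(math.exp(lam) - 1) / math.exp(lam * (1 + delta))
best = min((i / 1000 for i in range(1, 2000)), key=f)
print(best, math.log(1 + delta))  # numeric minimizer is close to ln(1+delta)
</pre>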


We then proceed to the lower tail, the probability that the random variable deviates below the mean value:

{{Theorem
|Chernoff bound (the lower tail)|
:Let <math>X=\sum_{i=1}^n X_i</math>, where <math>X_1,X_2,\ldots,X_n</math> are independent Poisson trials. Let <math>\mu=\mathbf{E}[X]</math>.
:Then for any <math>0<\delta<1</math>,
::<math>\Pr[X\le (1-\delta)\mu]\le\left(\frac{e^{-\delta}}{(1-\delta)^{(1-\delta)}}\right)^{\mu}.</math>
}}
{{Proof|
For any <math>\lambda<0</math>, by the same analysis as in the upper tail version,
:<math>\begin{align}
\Pr[X\le (1-\delta)\mu]
&=\Pr\left[e^{\lambda X}\ge e^{\lambda (1-\delta)\mu}\right]\\
&\le \frac{\mathbf{E}\left[e^{\lambda X}\right]}{e^{\lambda (1-\delta)\mu}}\\
&\le \left(\frac{e^{e^\lambda-1}}{e^{\lambda (1-\delta)}}\right)^\mu.
\end{align}</math>

For any <math>0<\delta<1</math>, we can let <math>\lambda=\ln(1-\delta)<0</math> to get
:<math>\Pr[X\le (1-\delta)\mu]\le\left(\frac{e^{-\delta}}{(1-\delta)^{(1-\delta)}}\right)^{\mu}.</math>
}}
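The analogous numeric check for the lower tail (same assumed Binomial example as above):
<pre>
import math

n, p, delta = 1000, 0.5, 0.2
mu = n * p

# Exact tail Pr[X <= (1-delta)mu] for X ~ Binomial(n, p).
t = math.floor((1 - delta) * mu)
exact = sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(0, t + 1))

# The Chernoff lower-tail bound.
bound = (math.exp(-delta) / (1 - delta)**(1 - delta))**mu
print(exact, bound)  # exact <= bound
</pre>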

== Useful forms of the Chernoff bounds ==

Some useful special forms can be derived directly from the general bounds above. They also show more clearly why we say the bounds are exponentially sharp.

{{Theorem
|Useful forms of the Chernoff bound|
:Let <math>X=\sum_{i=1}^n X_i</math>, where <math>X_1,X_2,\ldots,X_n</math> are independent Poisson trials. Let <math>\mu=\mathbf{E}[X]</math>. Then
:1. for <math>0<\delta\le 1</math>,
::<math>\Pr[X\ge (1+\delta)\mu]<\exp\left(-\frac{\mu\delta^2}{3}\right);</math>
::<math>\Pr[X\le (1-\delta)\mu]<\exp\left(-\frac{\mu\delta^2}{2}\right);</math>
:2. for <math>t\ge 2e\mu</math>,
::<math>\Pr[X\ge t]\le 2^{-t}.</math>
}}
{{Proof|
To obtain the bounds in (1), we need to show that for <math>0<\delta<1</math>, <math>\frac{e^{\delta}}{(1+\delta)^{(1+\delta)}}\le e^{-\delta^2/3}</math> and <math>\frac{e^{-\delta}}{(1-\delta)^{(1-\delta)}}\le e^{-\delta^2/2}</math>. We can verify both inequalities by standard analysis techniques.

To obtain the bound in (2), let <math>t=(1+\delta)\mu</math>. Then <math>\delta=t/\mu-1\ge 2e-1</math>. Hence,
:<math>\begin{align}
\Pr[X\ge(1+\delta)\mu]
&\le \left(\frac{e^\delta}{(1+\delta)^{(1+\delta)}}\right)^\mu\\
&\le \left(\frac{e}{1+\delta}\right)^{(1+\delta)\mu}\\
&\le \left(\frac{e}{2e}\right)^t\\
&= 2^{-t}.
\end{align}</math>
}}
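The "standard analysis techniques" step in (1) can at least be spot-checked numerically; this sketch (mine, not the notes') sweeps <math>\delta</math> over a grid in <math>(0,1)</math> and confirms both inequalities:
<pre>
import math

ok = True
for i in range(1, 1000):
    d = i / 1000  # delta on a grid in (0, 1)
    upper = math.exp(d) / (1 + d)**(1 + d)
    lower = math.exp(-d) / (1 - d)**(1 - d)
    ok = ok and upper <= math.exp(-d * d / 3) and lower <= math.exp(-d * d / 2)
print("both inequalities hold on the grid:", ok)
</pre>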

== Applications to balls-into-bins ==

Throwing <math>m</math> balls uniformly and independently into <math>n</math> bins, what is the maximum load of all bins with high probability? In the last class, we gave an analysis of this problem by using a counting argument.

Now we give a more "advanced" analysis by using Chernoff bounds.


For any <math>i\in[n]</math> and <math>j\in[m]</math>, let <math>X_{ij}</math> be the indicator variable for the event that ball <math>j</math> is thrown to bin <math>i</math>. Obviously
:<math>\mathbf{E}[X_{ij}]=\Pr[\mbox{ball }j\mbox{ is thrown to bin }i]=\frac{1}{n}.</math>

Let <math>Y_i=\sum_{j\in[m]}X_{ij}</math> be the load of bin <math>i</math>.


Then the expected load of bin <math>i</math> is
:<math>(*)\qquad \mu=\mathbf{E}[Y_i]=\mathbf{E}\left[\sum_{j\in[m]}X_{ij}\right]=\sum_{j\in[m]}\mathbf{E}[X_{ij}]=m/n.</math>
For the case <math>m=n</math>, it holds that <math>\mu=1</math>.

Note that <math>Y_i</math> is a sum of <math>m</math> mutually independent indicator variables. Applying the Chernoff bound, for any particular bin <math>i\in[n]</math>,
:<math>\Pr[Y_i>(1+\delta)\mu]\le \left(\frac{e^{\delta}}{(1+\delta)^{(1+\delta)}}\right)^{\mu}.</math>
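A quick Monte Carlo sketch of this bound (the parameters <math>m=1000</math>, <math>n=100</math>, <math>\delta=1</math> are my own choice): estimate <math>\Pr[Y_1>2\mu]</math> empirically and compare it with the Chernoff bound.
<pre>
import math
import random

m, n, delta, trials = 1000, 100, 1.0, 5000
mu = m / n

def load_of_fixed_bin():
    """Throw m balls into n bins uniformly; return the load of one fixed bin."""
    return sum(1 for _ in range(m) if random.randrange(n) == 0)

empirical = sum(1 for _ in range(trials)
                if load_of_fixed_bin() > (1 + delta) * mu) / trials
bound = (math.exp(delta) / (1 + delta)**(1 + delta))**mu
print("empirical tail:", empirical, " Chernoff bound:", bound)
</pre>
The empirical tail sits comfortably below the bound, as it must.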

=== The <math>m=n</math> case ===

When <math>m=n</math>, <math>\mu=1</math>. Write <math>c=1+\delta</math>. The above bound can be written as
:<math>\Pr[Y_i>c]\le \frac{e^{c-1}}{c^c}.</math>

Let <math>c=\frac{e\ln n}{\ln\ln n}</math>. We evaluate <math>\frac{e^{c-1}}{c^c}</math> by taking the logarithm of its reciprocal:
:<math>\begin{align}
\ln\left(\frac{c^c}{e^{c-1}}\right)
&=c\ln c-c+1\\
&=c(\ln c-1)+1\\
&=\frac{e\ln n}{\ln\ln n}\left(\ln\ln n-\ln\ln\ln n\right)+1\\
&\ge \frac{e\ln n}{\ln\ln n}\cdot\frac{2}{e}\ln\ln n+1\\
&\ge 2\ln n,
\end{align}</math>
where the first inequality holds for sufficiently large <math>n</math>, since <math>\ln\ln\ln n=o(\ln\ln n)</math>.

Thus,
:<math>\Pr\left[Y_i>\frac{e\ln n}{\ln\ln n}\right]\le\frac{1}{n^2}.</math>

Applying the union bound, the probability that there exists a bin with load <math>>\frac{e\ln n}{\ln\ln n}</math> is
:<math>n\cdot \Pr\left[Y_1>\frac{e\ln n}{\ln\ln n}\right]\le\frac{1}{n}.</math>

Therefore, for <math>m=n</math>, with high probability, the maximum load is <math>O\left(\frac{e\ln n}{\ln\ln n}\right)</math>.
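A simulation sketch of this case (with an assumed <math>n=100000</math>): the observed maximum load indeed stays below <math>\frac{e\ln n}{\ln\ln n}</math>.
<pre>
import math
import random
from collections import Counter

n = 100000
loads = Counter(random.randrange(n) for _ in range(n))  # m = n balls into n bins
print("max load:", max(loads.values()),
      " e*ln(n)/ln(ln(n)):", math.e * math.log(n) / math.log(math.log(n)))
</pre>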

=== The <math>m\ge n\ln n</math> case ===

When <math>m\ge n\ln n</math>, then according to <math>(*)</math>, <math>\mu=\frac{m}{n}\ge \ln n</math>.

We can apply an easier form of the Chernoff bounds,
:<math>\Pr[Y_i\ge 2e\mu]\le 2^{-2e\mu}\le 2^{-2e\ln n}<\frac{1}{n^2}.</math>

By the union bound, the probability that there exists a bin with load <math>\ge 2e\frac{m}{n}</math> is
:<math>n\cdot \Pr\left[Y_1>2e\frac{m}{n}\right] = n\cdot \Pr\left[Y_1>2e\mu\right]\le \frac{1}{n}.</math>

Therefore, for <math>m\ge n\ln n</math>, with high probability, the maximum load is <math>O\left(\frac{m}{n}\right)</math>.
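And a sketch of this heavily loaded case (assumed <math>n=1000</math> and <math>m=n\ln n</math>): the maximum load stays within a constant factor of <math>\mu=m/n</math>.
<pre>
import math
import random
from collections import Counter

n = 1000
m = int(n * math.log(n))  # m = n ln n, so mu = m/n = ln n
loads = Counter(random.randrange(n) for _ in range(m))
print("mu:", m / n, " max load:", max(loads.values()),
      " 2e*mu:", 2 * math.e * m / n)
</pre>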

= Martingales =

"Martingale" originally refers to a betting strategy in which the gambler doubles his bet after every loss. Assuming unlimited wealth, this strategy is guaranteed to eventually have a positive net profit. For example, starting from an initial stake 1, after n losses, if the (n+1)th bet wins, then it gives a net profit of

2ni=1n2i1=1,

which is a positive number.

However, the assumption of unlimited wealth is unrealistic. For limited wealth, with geometrically increasing bet, it is very likely to end up bankrupt. You should never try this strategy in real life.
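A simulation sketch of the doubling strategy with a finite bankroll (the initial capital of 1000 is my own assumption) shows both effects at once: every non-bankrupt run nets exactly +1, yet the average profit is 0, because rare bankruptcies lose a lot.
<pre>
import random

def double_or_bust(capital):
    """Double the bet after each loss, until a win or until the bet is unaffordable."""
    bet, profit = 1, 0
    while bet <= capital + profit:       # can the current wealth cover the next bet?
        if random.random() < 0.5:        # fair coin: the bet wins
            return profit + bet          # net profit is always exactly +1
        profit -= bet                    # the bet loses
        bet *= 2                         # double the stake
    return profit                        # bankrupt before the first win

results = [double_or_bust(1000) for _ in range(100000)]
print("average profit:", sum(results) / len(results))  # about 0: the game is fair
print("worst outcome:", min(results))                  # a large loss
</pre>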

Suppose that the gambler is allowed to use any strategy. His stake on the next bet is decided based on the results of all the bets so far. This gives us a highly dependent sequence of random variables <math>X_0,X_1,\ldots</math>, where <math>X_0</math> is his initial capital, and <math>X_i</math> represents his capital after the <math>i</math>th bet. Up to different betting strategies, <math>X_i</math> can be arbitrarily dependent on <math>X_0,\ldots,X_{i-1}</math>. However, as long as the game is fair, namely, winning and losing with equal chances, conditioning on the past variables <math>X_0,\ldots,X_{i-1}</math>, we expect no change in the value of the present variable <math>X_{i}</math> on average. Random variables satisfying this property are called a '''martingale''' sequence.

{{Theorem
|Definition (martingale)|
:A sequence of random variables <math>X_0,X_1,\ldots</math> is a '''martingale''' if for all <math>i> 0</math>,
:: <math>\begin{align}
\mathbf{E}[X_{i}\mid X_0,\ldots,X_{i-1}]=X_{i-1}.
\end{align}</math>
}}
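Here is an empirical sketch of the martingale condition (the particular history-dependent staking rule is an arbitrary choice of mine): fix a history <math>X_0,\ldots,X_{i-1}</math>, resample only the next fair bet many times, and check that <math>X_i</math> averages to <math>X_{i-1}</math>.
<pre>
import random

def stake(history):
    """An arbitrary strategy: the stake depends on the whole history so far."""
    return 1 + abs(history[-1]) % 3

# Fix a history X_0, ..., X_{i-1} by playing a few fair rounds.
history = [10]
for _ in range(5):
    s = stake(history)
    history.append(history[-1] + (s if random.random() < 0.5 else -s))

# Estimate E[X_i | X_0, ..., X_{i-1}] by resampling only the next fair bet.
s = stake(history)
samples = [history[-1] + (s if random.random() < 0.5 else -s)
           for _ in range(200000)]
print("X_{i-1}:", history[-1],
      " estimated E[X_i | history]:", sum(samples) / len(samples))
# The two agree up to sampling error, as the martingale property requires.
</pre>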