Combinatorics (Fall 2010)/Random graphs
== The Moment Methods ==
=== Markov's inequality ===
One of the most natural pieces of information about a random variable is its expectation, which is its first moment. Markov's inequality derives a tail bound for a random variable from its expectation alone.
{{Theorem
|Theorem (Markov's Inequality)|
:Let <math>X</math> be a random variable assuming only nonnegative values. Then, for all <math>t>0</math>,
::<math>\begin{align}
\Pr[X\ge t]\le \frac{\mathbf{E}[X]}{t}.
\end{align}</math>
}}
{{Proof| Let <math>Y</math> be the indicator such that
:<math>\begin{align}
Y &=
\begin{cases}
1 & \mbox{if }X\ge t,\\
0 & \mbox{otherwise.}
\end{cases}
\end{align}</math>
It holds that <math>Y\le\frac{X}{t}</math>: when <math>Y=1</math> we have <math>X\ge t</math>, and when <math>Y=0</math> the right-hand side is still nonnegative. Since <math>Y</math> is 0-1 valued, <math>\mathbf{E}[Y]=\Pr[Y=1]=\Pr[X\ge t]</math>. Therefore,
:<math>
\Pr[X\ge t]
=
\mathbf{E}[Y]
\le
\mathbf{E}\left[\frac{X}{t}\right]
=\frac{\mathbf{E}[X]}{t}.
</math>
}}
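As a toy sanity check (not part of the original notes), the following short Python computation compares the true tail probability of a fair six-sided die with the Markov bound:
<pre>
from fractions import Fraction

# Fair six-sided die: values 1..6, each with probability 1/6.
pmf = {k: Fraction(1, 6) for k in range(1, 7)}
E = sum(k * p for k, p in pmf.items())                 # E[X] = 7/2
t = 5
true_tail = sum(p for k, p in pmf.items() if k >= t)   # Pr[X >= 5] = 1/3
markov_bound = E / t                                   # 7/10
print(true_tail, markov_bound, true_tail <= markov_bound)
</pre>
The bound 7/10 is far from the true value 1/3, which is typical: Markov's inequality uses only the first moment.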
;Example (from Las Vegas to Monte Carlo)
Let <math>A</math> be a Las Vegas randomized algorithm for a decision problem <math>f</math>, whose expected running time is at most <math>T(n)</math> on any input of size <math>n</math>. We transform <math>A</math> into a Monte Carlo randomized algorithm <math>B</math> with bounded one-sided error as follows:
:<math>B(x)</math>:
:*Run <math>A(x)</math> for at most <math>2T(n)</math> time, where <math>n</math> is the size of <math>x</math>.
:*If <math>A(x)</math> returned within <math>2T(n)</math> time, then return what <math>A(x)</math> returned; otherwise return 1.
Since <math>A</math> is Las Vegas, its output is always correct, so <math>B(x)</math> can err only when it returns 1; the error is therefore one-sided. The error probability is bounded by the probability that <math>A(x)</math> runs longer than <math>2T(n)</math>. Since the expected running time of <math>A(x)</math> is at most <math>T(n)</math>, by Markov's inequality,
:<math>
\Pr[\mbox{the running time of }A(x)\ge2T(n)]\le\frac{\mathbf{E}[\mbox{running time of }A(x)]}{2T(n)}\le\frac{1}{2},
</math>
so the error probability is at most <math>1/2</math>.
This easy reduction implies that '''ZPP'''<math>\subseteq</math>'''RP'''.
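A minimal Python sketch of this reduction (the Las Vegas algorithm and its cost model below are hypothetical stand-ins; only the wrapping logic matters):
<pre>
import random

def T(n):
    # Hypothetical expected-time bound T(n); any concrete bound works.
    return 10 * n

def las_vegas_A(x):
    """Stand-in Las Vegas algorithm: the answer is always correct,
    but the number of steps it takes is random, with mean T(|x|)."""
    n = len(x)
    steps = random.expovariate(1.0 / T(n))   # E[steps] = T(n)
    answer = (sum(x) % 2 == 0)               # placeholder for the correct answer f(x)
    return answer, steps

def monte_carlo_B(x):
    """The reduction: run A(x) with a budget of 2*T(n) steps;
    if it does not finish in time, give up and answer 1 (True)."""
    n = len(x)
    answer, steps = las_vegas_A(x)
    if steps <= 2 * T(n):
        return answer      # A finished in time, so this answer is correct
    return True            # timeout: possibly wrong, but errs in one direction only

# By Markov's inequality, Pr[steps > 2*T(n)] <= 1/2, so B's one-sided error is at most 1/2.
print(monte_carlo_B([1, 0, 1, 1]))
</pre>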
==== Generalization ====
For any random variable <math>X</math> and any non-negative real function <math>h</math>, <math>h(X)</math> is a non-negative random variable. Applying Markov's inequality to <math>h(X)</math> directly gives
:<math>
\Pr[h(X)\ge t]\le\frac{\mathbf{E}[h(X)]}{t}.
</math>
This simple application of Markov's inequality is a powerful tool for proving tail inequalities: by choosing a function <math>h</math> that extracts more information about the random variable, we can prove sharper tail bounds.
=== Variance ===
{{Theorem
|Definition (variance)|
:The '''variance''' of a random variable <math>X</math> is defined as
::<math>\begin{align}
\mathbf{Var}[X]=\mathbf{E}\left[(X-\mathbf{E}[X])^2\right]=\mathbf{E}\left[X^2\right]-(\mathbf{E}[X])^2.
\end{align}</math>
:The '''standard deviation''' of random variable <math>X</math> is
::<math>
\delta[X]=\sqrt{\mathbf{Var}[X]}.
</math>
}}
We have seen that, by linearity of expectation, the expectation of a sum of random variables is the sum of their expectations. It is natural to ask whether the same holds for variances. It turns out that the variance of a sum has extra terms, called covariances.
{{Theorem
|Definition (covariance)|
:The '''covariance''' of two random variables <math>X</math> and <math>Y</math> is
::<math>\begin{align}
\mathbf{Cov}(X,Y)=\mathbf{E}\left[(X-\mathbf{E}[X])(Y-\mathbf{E}[Y])\right].
\end{align}</math>
}}
We have the following theorem for the variance of a sum.
{{Theorem
|Theorem|
:For any two random variables <math>X</math> and <math>Y</math>,
::<math>\begin{align}
\mathbf{Var}[X+Y]=\mathbf{Var}[X]+\mathbf{Var}[Y]+2\mathbf{Cov}(X,Y).
\end{align}</math>
:Generally, for any random variables <math>X_1,X_2,\ldots,X_n</math>,
::<math>\begin{align}
\mathbf{Var}\left[\sum_{i=1}^n X_i\right]=\sum_{i=1}^n\mathbf{Var}[X_i]+\sum_{i\neq j}\mathbf{Cov}(X_i,X_j).
\end{align}</math>
}}
{{Proof| The equation for two variables follows directly from the definitions of variance and covariance, by expanding the square and using linearity of expectation. The equation for <math>n</math> variables follows from the same expansion applied to <math>\left(\sum_{i=1}^n(X_i-\mathbf{E}[X_i])\right)^2</math>.
}}
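For concreteness, here is the short expansion behind the two-variable case (a worked derivation added for completeness, using only the definitions above):
:<math>\begin{align}
\mathbf{Var}[X+Y]
&=\mathbf{E}\left[\left((X-\mathbf{E}[X])+(Y-\mathbf{E}[Y])\right)^2\right]\\
&=\mathbf{E}\left[(X-\mathbf{E}[X])^2\right]+\mathbf{E}\left[(Y-\mathbf{E}[Y])^2\right]+2\,\mathbf{E}\left[(X-\mathbf{E}[X])(Y-\mathbf{E}[Y])\right]\\
&=\mathbf{Var}[X]+\mathbf{Var}[Y]+2\mathbf{Cov}(X,Y).
\end{align}</math>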
We will see that when the random variables are independent, the variance of their sum equals the sum of their variances. To prove this, we first establish a very useful result regarding the expectation of a product of random variables.
{{Theorem
|Theorem|
:For any two independent random variables <math>X</math> and <math>Y</math>,
::<math>\begin{align}
\mathbf{E}[X\cdot Y]=\mathbf{E}[X]\cdot\mathbf{E}[Y].
\end{align}</math>
}}
{{Proof|
:<math>
\begin{align}
\mathbf{E}[X\cdot Y]
&=
\sum_{x,y}xy\Pr[X=x\wedge Y=y]\\
&=
\sum_{x,y}xy\Pr[X=x]\Pr[Y=y]\\
&=
\sum_{x}x\Pr[X=x]\sum_{y}y\Pr[Y=y]\\
&=
\mathbf{E}[X]\cdot\mathbf{E}[Y].
\end{align}
</math>
}}
With the above theorem, we can show that the covariance of two independent variables is always zero.
{{Theorem
|Theorem|
:For any two independent random variables <math>X</math> and <math>Y</math>,
::<math>\begin{align}
\mathbf{Cov}(X,Y)=0.
\end{align}</math>
}}
{{Proof|
:<math>\begin{align}
\mathbf{Cov}(X,Y)
&=\mathbf{E}\left[(X-\mathbf{E}[X])(Y-\mathbf{E}[Y])\right]\\
&= \mathbf{E}\left[X-\mathbf{E}[X]\right]\mathbf{E}\left[Y-\mathbf{E}[Y]\right] &\qquad(\mbox{Independence})\\
&=0.
\end{align}</math>
}}
We then have the following theorem for the variance of the sum of pairwise independent random variables.
{{Theorem
|Theorem|
:For '''pairwise''' independent random variables <math>X_1,X_2,\ldots,X_n</math>,
::<math>\begin{align}
\mathbf{Var}\left[\sum_{i=1}^n X_i\right]=\sum_{i=1}^n\mathbf{Var}[X_i].
\end{align}</math>
}}
;Remark
:The theorem holds for '''pairwise''' independent random variables, a much weaker requirement than '''mutual''' independence. This makes variance-based probability tools work even in weakly random settings; we will see exactly what this means in future lectures.
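As a concrete illustration (added here as an aside), take the classical XOR construction: two independent fair bits together with their XOR are pairwise independent but not mutually independent, yet the variance of their sum is still the sum of the variances. A minimal Python check over the four equally likely outcomes:
<pre>
import itertools
from statistics import pvariance   # population variance

# X1, X2 are independent fair bits; X3 = X1 XOR X2.
# Any two of the three bits are independent, but X3 is determined by X1 and X2.
outcomes = [(x1, x2, x1 ^ x2) for x1, x2 in itertools.product([0, 1], repeat=2)]
# Each of the 4 outcomes occurs with probability 1/4.

sums = [x1 + x2 + x3 for x1, x2, x3 in outcomes]
lhs = pvariance(sums)                                             # Var[X1 + X2 + X3]
rhs = sum(pvariance([o[i] for o in outcomes]) for i in range(3))  # sum of Var[X_i]
print(lhs, rhs)   # both equal 3/4 = 3 * (1/2) * (1/2)
</pre>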
==== Variance of binomial distribution ====
Consider a Bernoulli trial with parameter <math>p</math>:
:<math>
X=\begin{cases}
1& \mbox{with probability }p\\
0& \mbox{with probability }1-p.
\end{cases}
</math>
Since <math>X</math> is 0-1 valued, <math>X^2=X</math>, so the variance is
:<math>
\mathbf{Var}[X]=\mathbf{E}[X^2]-(\mathbf{E}[X])^2=\mathbf{E}[X]-(\mathbf{E}[X])^2=p-p^2=p(1-p).
</math>
Let <math>Y</math> be a binomial random variable with parameters <math>n</math> and <math>p</math>, i.e. <math>Y=\sum_{i=1}^nY_i</math>, where the <math>Y_i</math>'s are i.i.d. Bernoulli trials with parameter <math>p</math>. The variance is
:<math>
\begin{align}
\mathbf{Var}[Y]
&=
\mathbf{Var}\left[\sum_{i=1}^nY_i\right]\\
&=
\sum_{i=1}^n\mathbf{Var}\left[Y_i\right] &\qquad (\mbox{Independence})\\
&=
\sum_{i=1}^np(1-p) &\qquad (\mbox{Bernoulli})\\
&=
p(1-p)n.
\end{align}
</math>
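A quick empirical sanity check of <math>\mathbf{Var}[Y]=np(1-p)</math> by simulation (the parameters <math>n</math>, <math>p</math> and the number of samples below are arbitrary illustrative choices):
<pre>
import random
from statistics import pvariance

n, p, samples = 40, 0.3, 200_000   # arbitrary illustrative parameters
ys = [sum(random.random() < p for _ in range(n)) for _ in range(samples)]
print(pvariance(ys))      # empirical variance, fluctuates around n*p*(1-p)
print(n * p * (1 - p))    # = 8.4
</pre>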
=== Chebyshev's inequality ===
Using both the expectation and the variance of a random variable, one can derive a stronger tail bound, known as Chebyshev's inequality.
{{Theorem
|Theorem (Chebyshev's Inequality)|
:For any random variable <math>X</math> and any <math>t>0</math>,
::<math>\begin{align}
\Pr\left[|X-\mathbf{E}[X]| \ge t\right] \le \frac{\mathbf{Var}[X]}{t^2}.
\end{align}</math>
}}
{{Proof| Observe that
:<math>\Pr[|X-\mathbf{E}[X]| \ge t] = \Pr[(X-\mathbf{E}[X])^2 \ge t^2].</math>
Since <math>(X-\mathbf{E}[X])^2</math> is a nonnegative random variable, we can apply Markov's inequality to obtain
:<math>
\Pr[(X-\mathbf{E}[X])^2 \ge t^2] \le
\frac{\mathbf{E}[(X-\mathbf{E}[X])^2]}{t^2}
=\frac{\mathbf{Var}[X]}{t^2}.
</math>
}}
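To get a feel for how much the second moment buys, here is a small numeric comparison (with arbitrarily chosen parameters) of the Markov bound, the Chebyshev bound, and the exact tail probability for a binomial random variable:
<pre>
from math import comb

n, p = 100, 0.5
mean, var = n * p, n * p * (1 - p)       # 50 and 25
a = 75                                   # threshold for the upper tail

exact = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(a, n + 1))
markov = mean / a                        # Pr[X >= a] <= E[X]/a
chebyshev = var / (a - mean) ** 2        # Pr[|X - E[X]| >= a - mean] <= Var/(a-mean)^2

print(exact)       # tiny (far smaller than either bound)
print(markov)      # 0.666...
print(chebyshev)   # 0.04
</pre>
Neither bound is close to the exact value here, but the Chebyshev bound is already much sharper than the Markov bound.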
=== Higher moments ===
The above two inequalities can be put into a general framework regarding the [http://en.wikipedia.org/wiki/Moment_(mathematics) '''moments'''] of random variables.
{{Theorem
|Definition (moments)|
:The <math>k</math>th moment of a random variable <math>X</math> is <math>\mathbf{E}[X^k]</math>.
}}
The more we know about the moments of a random variable, the more information we have about its distribution, and hence, in principle, the tighter the tail bounds we can prove. This technique is called the <math>k</math>th moment method.
The <math>k</math>th moment is <math>\mathbf{E}[X^k]</math>. More generally, the <math>k</math>th moment about <math>c</math> is <math>\mathbf{E}[(X-c)^k]</math>. The <math>k</math>th [http://en.wikipedia.org/wiki/Central_moment central moment] of <math>X</math>, denoted <math>\mu_k[X]</math>, is defined as <math>\mu_k[X]=\mathbf{E}[(X-\mathbf{E}[X])^k]</math>. In particular, the variance is just the second central moment <math>\mu_2[X]</math>.
The <math>k</math>th moment method is given by the following theorem.
{{Theorem
|Theorem (the <math>k</math>th moment method)|
:For even <math>k>0</math>, and any <math>t>0</math>,
::<math>\begin{align}
\Pr\left[|X-\mathbf{E}[X]| \ge t\right] \le \frac{\mu_k[X]}{t^k}.
\end{align}</math>
}}
{{Proof| Apply Markov's inequality to <math>(X-\mathbf{E}[X])^k</math>, which is nonnegative because <math>k</math> is even.
}}
What about odd <math>k</math>? For odd <math>k</math>, one can instead apply Markov's inequality to <math>|X-\mathbf{E}[X]|^k</math>, but expectations of absolute values are often hard to estimate.
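As a sketch of why higher moments help, the following computation (added for illustration) derives the exact 2nd and 4th central moments of a binomial distribution from its probability mass function, with arbitrarily chosen parameters, and compares the resulting tail bounds:
<pre>
from math import comb

n, p = 100, 0.5
pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
mean = sum(k * pmf[k] for k in range(n + 1))

def central_moment(j):
    return sum((k - mean) ** j * pmf[k] for k in range(n + 1))

t = 25                               # deviation from the mean
bound2 = central_moment(2) / t**2    # Chebyshev: the 2nd moment method
bound4 = central_moment(4) / t**4    # the 4th moment method
print(bound2, bound4)                # 0.04 versus roughly 0.0048: the 4th moment bound is sharper
</pre>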
== Erdős–Rényi Random Graphs ==
=== The probabilistic method ===
==== Coloring large-girth graphs ====
{{Theorem
|Definition|
:Let <math>G(V,E)</math> be an undirected graph.
:*A '''cycle''' of length <math>k</math> in <math>G</math> is a sequence of distinct vertices <math>v_1,v_2,\ldots,v_{k}</math> such that <math>v_iv_{i+1}\in E</math> for all <math>i=1,2,\ldots,k-1</math> and <math>v_kv_1\in E</math>.
:*The '''girth''' of <math>G</math>, denoted <math>g(G)</math>, is the length of the shortest cycle in <math>G</math>.
:*The '''chromatic number''' of <math>G</math>, denoted <math>\chi(G)</math>, is the minimum number of colors needed to color the vertices of <math>G</math> so that no two adjacent vertices have the same color. Formally,
::<math>\chi(G)=\min\{C\in\mathbb{N}\mid \exists f:V\rightarrow[C]\mbox{ such that }\forall uv\in E, f(u)\neq f(v)\}</math>.
:*The '''independence number''' of <math>G</math>, denoted <math>\alpha(G)</math>, is the size of the largest independent set in <math>G</math>. Formally,
::<math>\alpha(G)=\max\{|S|\mid S\subseteq V\mbox{ and }\forall u,v\in S, uv\not\in E\}</math>.
}}
{{Theorem
|Theorem (Erdős 1959)|
:For all <math>k,\ell</math> there exists a graph <math>G</math> with <math>g(G)>\ell</math> and <math>\chi(G)>k</math>.
}}
==== Expander graphs ====
Consider an undirected (multi)graph <math>G(V,E)</math>, where parallel edges between two vertices are allowed.

Some notation:
* For <math>S,T\subset V</math>, let <math>E(S,T)=\{uv\in E\mid u\in S,v\in T\}</math>.
* The '''edge boundary''' of a set <math>S\subset V</math>, denoted <math>\partial S</math>, is <math>\partial S = E(S, \bar{S})</math>.
{{Theorem
|Definition (Graph expansion)|
:The '''expansion ratio''' of an undirected graph <math>G</math> on <math>n</math> vertices is defined as
::<math>
\phi(G)=\min_{\overset{S\subset V}{|S|\le\frac{n}{2}}} \frac{|\partial S|}{|S|}.
</math>
}}
'''Expander graphs''' are <math>d</math>-regular (multi)graphs with <math>d=O(1)</math> and <math>\phi(G)=\Omega(1)</math>.

This definition states the following properties of expander graphs:
* Expander graphs are sparse: the number of edges is <math>dn/2=O(n)</math>.
* Despite this sparsity, expander graphs are well connected, which is guaranteed by the expansion ratio.
* Implicitly, an expander graph is a family of graphs <math>\{G_n\}</math>, where <math>n</math> is the number of vertices. The asymptotic orders <math>O(1)</math> and <math>\Omega(1)</math> in the definition are relative to <math>n</math>, which grows to infinity.
For a vertex set <math>S</math>, the size of the edge boundary <math>|\partial S|</math> can be seen as the "perimeter" of <math>S</math>, and <math>|S|</math> as its "volume". The expansion property can therefore be interpreted as a combinatorial version of the isoperimetric inequality.
We will show the existence of expander graphs by the probabilistic method. In order to do so, we need to generate random <math>d</math>-regular graphs.

Suppose that <math>d</math> is even. We can generate a random <math>d</math>-regular graph <math>G(V,E)</math> as follows:
* Let <math>V</math> be the vertex set. Uniformly and independently choose <math>\frac{d}{2}</math> cycles on <math>V</math> (each cycle is a uniformly random cyclic ordering of all the vertices).
* For each vertex <math>v</math> and each cycle, if the two neighbors of <math>v</math> in that cycle are <math>w</math> and <math>u</math>, add the two edges <math>wv</math> and <math>uv</math> to <math>E</math>.
The resulting <math>G(V,E)</math> is a multigraph; that is, it may have multiple edges between two vertices. We will show that <math>G(V,E)</math> is an expander graph with high probability. Formally, for some constant <math>d</math> and constant <math>\alpha</math>,
:<math>\Pr[\phi(G)\ge \alpha]=1-o(1)</math>.
By the probabilistic method, this shows that expander graphs exist. In fact, the above probability bound shows something much stronger: almost every regular graph is an expander.
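A minimal Python sketch of this construction (added for illustration; the brute-force computation of <math>\phi(G)</math> enumerates all vertex subsets of size at most <math>n/2</math>, so it is only feasible for very small <math>n</math>, and the parameters below are arbitrary):
<pre>
import random
from itertools import combinations

def random_regular_multigraph(n, d):
    """Union of d/2 independent, uniformly random cycles on {0, ..., n-1}."""
    assert d % 2 == 0
    edges = []
    for _ in range(d // 2):
        order = list(range(n))
        random.shuffle(order)                  # a uniformly random cyclic ordering of V
        for i in range(n):
            edges.append((order[i], order[(i + 1) % n]))
    return edges

def expansion_ratio(n, edges):
    """phi(G): minimum of |boundary(S)| / |S| over nonempty S with |S| <= n/2 (brute force)."""
    best = float("inf")
    for k in range(1, n // 2 + 1):
        for S in combinations(range(n), k):
            s = set(S)
            boundary = sum(1 for u, v in edges if (u in s) != (v in s))
            best = min(best, boundary / k)
    return best

random.seed(0)
n, d = 12, 4                        # tiny illustrative parameters
edges = random_regular_multigraph(n, d)
print(expansion_ratio(n, edges))    # typically a constant bounded away from 0
</pre>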
Recall that <math>\phi(G)=\min_{S:|S|\le\frac{n}{2}}\frac{|\partial S|}{|S|}</math>. Call a set <math>S\subset V</math> with <math>\frac{|\partial S|}{|S|}<\alpha</math> a "bad <math>S</math>". Then <math>\phi(G)<\alpha</math> if and only if there exists a bad <math>S</math> of size at most <math>\frac{n}{2}</math>. Therefore, by the union bound,
:<math>
\begin{align}
\Pr[\phi(G)<\alpha]
&=
\Pr\left[\min_{S:|S|\le\frac{n}{2}}\frac{|\partial S|}{|S|}<\alpha\right]\\
&\le
\sum_{k=1}^\frac{n}{2}\Pr[\,\exists \mbox{bad }S\mbox{ of size }k\,]\\
&\le
\sum_{k=1}^\frac{n}{2}\sum_{S\in{V\choose k}}\Pr[\,S\mbox{ is bad}\,].
\end{align}
</math>
Let <math>R\subset S</math> be the set of vertices in <math>S</math> which have neighbors in <math>\bar{S}</math>, and let <math>r=|R|</math>. It is obvious that <math>|\partial S|\ge r</math>, thus, for a bad <math>S</math>, <math>r<\alpha k</math>. Therefore, there are at most <math>\sum_{r=1}^{\alpha k}{k \choose r}</math> possible choices of such <math>R</math>. For any fixed choice of <math>R</math>, the probability that an edge picked by a vertex in <math>S\setminus R</math> connects to a vertex in <math>S</math> is at most <math>k/n</math>, and there are <math>d(k-r)</math> such edges. Hence, for any fixed <math>S</math> of size <math>k</math> and <math>R</math> of size <math>r</math>, the probability that all neighbors of all vertices in <math>S\setminus R</math> lie in <math>S</math> is at most <math>\left(\frac{k}{n}\right)^{d(k-r)}</math>. By the union bound, for any fixed <math>S</math> of size <math>k</math>,
:<math>
\begin{align}
\Pr[\,S\mbox{ is bad}\,]
&\le
\sum_{r=1}^{\alpha k}{k \choose r}\left(\frac{k}{n}\right)^{d(k-r)}
\le
\alpha k {k \choose \alpha k}\left(\frac{k}{n}\right)^{dk(1-\alpha)}.
\end{align}
</math>
Therefore,
:<math>
\begin{align}
\Pr[\phi(G)<\alpha]
&\le
\sum_{k=1}^\frac{n}{2}\sum_{S\in{V\choose k}}\Pr[\,S\mbox{ is bad}\,]\\
&\le
\sum_{k=1}^\frac{n}{2}{n\choose k}\alpha k {k \choose \alpha k}\left(\frac{k}{n}\right)^{dk(1-\alpha)} \\
&\le
\sum_{k=1}^\frac{n}{2}\left(\frac{en}{k}\right)^k\alpha k \left(\frac{ek}{\alpha k}\right)^{\alpha k}\left(\frac{k}{n}\right)^{dk(1-\alpha)}
&\quad (\mbox{since }{n\choose k}\le\left(\frac{en}{k}\right)^k)\\
&\le
\sum_{k=1}^\frac{n}{2}\exp(O(k))\left(\frac{k}{n}\right)^{k(d(1-\alpha)-1)}.
\end{align}
</math>
The last sum is <math>o(1)</math> when <math>d\ge\frac{2}{1-\alpha}</math>. Therefore, <math>G</math> is an expander graph with expansion ratio <math>\alpha</math> with high probability, for suitable choices of the constants <math>d</math> and <math>\alpha</math>.
=== Monotone properties ===
{{Theorem
|Definition|
:Let <math>\mathcal{G}_n=2^{V\choose 2}</math>, where <math>|V|=n</math>, be the set of all possible graphs on <math>n</math> vertices. A '''graph property''' is a boolean function <math>P:\mathcal{G}_n\rightarrow\{0,1\}</math> which is invariant under permutation of vertices, i.e. <math>P(G)=P(H)</math> whenever <math>G</math> is isomorphic to <math>H</math>.
}}
{{Theorem
|Definition|
:A graph property <math>P</math> is '''monotone''' if for any <math>G\subseteq H</math>, both on <math>n</math> vertices, <math>G</math> having property <math>P</math> implies <math>H</math> having property <math>P</math>.
}}
{{Theorem
|Theorem|
:Let <math>P</math> be a monotone graph property. Suppose <math>G_1=G(n,p_1)</math>, <math>G_2=G(n,p_2)</math>, and <math>0\le p_1\le p_2\le 1</math>. Then
::<math>\Pr[P(G_1)]\le \Pr[P(G_2)]</math>.
}}
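A small empirical illustration of this monotonicity (added as an aside; the property "contains a triangle", the values of <math>n</math>, <math>p</math>, and the number of trials are arbitrary choices):
<pre>
import random
from itertools import combinations

def gnp(n, p):
    """Sample G(n, p): include each of the C(n,2) potential edges independently with prob. p."""
    return {frozenset(e) for e in combinations(range(n), 2) if random.random() < p}

def has_triangle(n, edges):
    """'Contains a triangle' is a monotone graph property: adding edges can only preserve it."""
    return any(frozenset((a, b)) in edges and
               frozenset((b, c)) in edges and
               frozenset((a, c)) in edges
               for a, b, c in combinations(range(n), 3))

random.seed(1)
n, trials = 15, 1000                 # arbitrary illustrative parameters
for p in (0.05, 0.10, 0.20):
    hits = sum(has_triangle(n, gnp(n, p)) for _ in range(trials))
    print(p, hits / trials)          # the estimated probability is nondecreasing in p
</pre>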
=== Threshold phenomenon ===
{{Theorem
|Theorem|
:The threshold for a random graph <math>G(n,p)</math> to contain a 4-clique is <math>p=n^{-2/3}</math>.
}}
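The first-moment heuristic behind this threshold can be checked with a two-line calculation (added for illustration; this is not the proof): the expected number of 4-cliques in <math>G(n,p)</math> is <math>{n\choose 4}p^6</math>, which tends to 0 when <math>p\ll n^{-2/3}</math> (so by Markov's inequality a 4-clique is unlikely) and tends to infinity when <math>p\gg n^{-2/3}</math>.
<pre>
from math import comb

n = 10_000
for c in (0.1, 1.0, 10.0):
    p = c * n ** (-2 / 3)                 # p = c * n^{-2/3}
    expected = comb(n, 4) * p ** 6        # E[number of 4-cliques], by linearity of expectation
    print(c, expected)                    # approximately c**6 / 24: tiny for c = 0.1, huge for c = 10
</pre>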
{{Theorem
|Definition|
:The '''density''' of a graph <math>G(V,E)</math>, denoted <math>\rho(G)</math>, is defined as <math>\rho(G)=\frac{|E|}{|V|}</math>.
:A graph <math>G(V,E)</math> is '''balanced''' if <math>\rho(H)\le \rho(G)</math> for all subgraphs <math>H</math> of <math>G</math>.
}}
{{Theorem
|Theorem (Erdős–Rényi 1960)|
:Let <math>H</math> be a balanced graph with <math>k</math> vertices and <math>\ell</math> edges. The threshold for the property that a random graph <math>G(n,p)</math> contains a (not necessarily induced) subgraph isomorphic to <math>H</math> is <math>p=n^{-k/\ell}</math>.
}}
=== Concentration ===
{{Theorem
|Definition|
:The '''clique number''' of a graph <math>G(V,E)</math>, denoted <math>\omega(G)</math>, is the size of the largest clique in <math>G</math>. Formally,
::<math>\omega(G)=\max\{|S|\mid S\subseteq V\mbox{ and }\forall u,v\in S, uv\in E\}</math>.
}}
{{Theorem
|Theorem (Bollobás–Erdős 1976; Matula 1976)|
:Let <math>G=G(n,\frac{1}{2})</math>. There exists <math>k=k(n)</math> such that
::<math>\Pr[\omega(G)=k\mbox{ or }k+1]\rightarrow 1</math> as <math>n\rightarrow\infty</math>.
}}
{{Theorem
|Theorem (Bollobás 1988)|
:Let <math>G=G(n,\frac{1}{2})</math>. Almost always
::<math>\chi(G)\sim\frac{n}{2\log_2 n}</math>.
}}