Combinatorics (Fall 2010)/Random graphs
== Tail Inequalities ==
=== Markov's inequality ===
One of the most natural pieces of information about a random variable is its expectation, which is the first moment of the random variable. Markov's inequality derives a tail bound for a random variable from its expectation alone.
{{Theorem
|Theorem (Markov's Inequality)|
:Let <math>X</math> be a random variable assuming only nonnegative values. Then, for all <math>t>0</math>,
::<math>\begin{align}
\Pr[X\ge t]\le \frac{\mathbf{E}[X]}{t}.
\end{align}</math>
}}
{{Proof| Let <math>Y</math> be the indicator such that
:<math>\begin{align}
Y &=
\begin{cases}
1 & \mbox{if }X\ge t,\\
0 & \mbox{otherwise.}
\end{cases}
\end{align}</math>
It holds that <math>Y\le\frac{X}{t}</math>. Since <math>Y</math> is 0-1 valued, <math>\mathbf{E}[Y]=\Pr[Y=1]=\Pr[X\ge t]</math>. Therefore,
:<math>
\Pr[X\ge t]
=
\mathbf{E}[Y]
\le
\mathbf{E}\left[\frac{X}{t}\right]
=\frac{\mathbf{E}[X]}{t}.
</math>
}}
For any random variable <math>X</math> and any non-negative real function <math>h</math>, <math>h(X)</math> is a non-negative random variable. Applying Markov's inequality, we directly have that
:<math>
\Pr[h(X)\ge t]\le\frac{\mathbf{E}[h(X)]}{t}.
</math>
This simple application of Markov's inequality gives us a powerful tool for proving tail inequalities. By choosing a function <math>h</math> which extracts more information about the random variable, we can prove sharper tail inequalities.
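To see the bound in action, here is a small Python sketch (ours, not part of the original notes; the exponential distribution and the thresholds are arbitrary illustrative choices) that compares the empirical tail <math>\Pr[X\ge t]</math> with the Markov bound <math>\mathbf{E}[X]/t</math>:
<pre>
# Compare the empirical tail Pr[X >= t] of a nonnegative random variable with
# the Markov bound E[X]/t. The exponential distribution and the thresholds
# below are arbitrary illustrative choices.
import random

random.seed(0)
n_samples = 100000
samples = [random.expovariate(1.0) for _ in range(n_samples)]   # E[X] = 1
mean = sum(samples) / n_samples

for t in [1, 2, 4, 8]:
    empirical = sum(1 for x in samples if x >= t) / n_samples
    print(f"t={t}: empirical Pr[X>=t]={empirical:.4f}, Markov bound E[X]/t={mean / t:.4f}")
</pre>
Running it shows the bound holding but becoming quite loose as <math>t</math> grows, which motivates using more information about the random variable, such as its variance, below.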
=== Variance ===
{{Theorem
|Definition (variance)|
:The '''variance''' of a random variable <math>X</math> is defined as
::<math>\begin{align}
\mathbf{Var}[X]=\mathbf{E}\left[(X-\mathbf{E}[X])^2\right]=\mathbf{E}\left[X^2\right]-(\mathbf{E}[X])^2.
\end{align}</math>
:The '''standard deviation''' of random variable <math>X</math> is
::<math>
\delta[X]=\sqrt{\mathbf{Var}[X]}.
</math>
}}
We have seen that due to the linearity of expectation, the expectation of a sum of random variables is the sum of their expectations. It is natural to ask whether the same is true for variances. We find that the variance of a sum has an extra term, called the covariance.
{{Theorem
|Definition (covariance)|
:The '''covariance''' of two random variables <math>X</math> and <math>Y</math> is
::<math>\begin{align}
\mathbf{Cov}(X,Y)=\mathbf{E}\left[(X-\mathbf{E}[X])(Y-\mathbf{E}[Y])\right]=\mathbf{E}[XY]-\mathbf{E}[X]\mathbf{E}[Y].
\end{align}</math>
}}
We have the following theorem for the variance of a sum.
{{Theorem
|Theorem|
:For any two random variables <math>X</math> and <math>Y</math>,
::<math>\begin{align}
\mathbf{Var}[X+Y]=\mathbf{Var}[X]+\mathbf{Var}[Y]+2\mathbf{Cov}(X,Y).
\end{align}</math>
:Generally, for any random variables <math>X_1,X_2,\ldots,X_n</math>,
::<math>\begin{align}
\mathbf{Var}\left[\sum_{i=1}^n X_i\right]=\sum_{i=1}^n\mathbf{Var}[X_i]+\sum_{i\neq j}\mathbf{Cov}(X_i,X_j).
\end{align}</math>
}}
{{Proof| The equation for two variables follows directly from the definitions of variance and covariance. The equation for <math>n</math> variables can be deduced by expanding the square of the sum in the same way.
}}
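For convenience, here is the two-variable identity expanded directly from the definitions above (the original proof leaves this expansion implicit):
:<math>
\begin{align}
\mathbf{Var}[X+Y]
&=\mathbf{E}\left[\left((X+Y)-\mathbf{E}[X+Y]\right)^2\right]\\
&=\mathbf{E}\left[\left((X-\mathbf{E}[X])+(Y-\mathbf{E}[Y])\right)^2\right]\\
&=\mathbf{E}\left[(X-\mathbf{E}[X])^2\right]+\mathbf{E}\left[(Y-\mathbf{E}[Y])^2\right]+2\,\mathbf{E}\left[(X-\mathbf{E}[X])(Y-\mathbf{E}[Y])\right]\\
&=\mathbf{Var}[X]+\mathbf{Var}[Y]+2\,\mathbf{Cov}(X,Y).
\end{align}
</math>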
We will see that when random variables are independent, the variance of the sum is equal to the sum of the variances. To prove this, we first establish a very useful result regarding the expectation of a product.
{{Theorem
|Theorem|
:For any two independent random variables <math>X</math> and <math>Y</math>,
::<math>\begin{align}
\mathbf{E}[X\cdot Y]=\mathbf{E}[X]\cdot\mathbf{E}[Y].
\end{align}</math>
}}
{{Proof|
:<math>
\begin{align}
\mathbf{E}[X\cdot Y]
&=
\sum_{x,y}xy\Pr[X=x\wedge Y=y]\\
&=
\sum_{x,y}xy\Pr[X=x]\Pr[Y=y]\\
&=
\sum_{x}x\Pr[X=x]\sum_{y}y\Pr[Y=y]\\
&=
\mathbf{E}[X]\cdot\mathbf{E}[Y].
\end{align}
</math>
}}
With the above theorem, we can show that the covariance of two independent variables is always zero.
{{Theorem
|Theorem|
:For any two independent random variables <math>X</math> and <math>Y</math>,
::<math>\begin{align}
\mathbf{Cov}(X,Y)=0.
\end{align}</math>
}}
{{Proof|
:<math>\begin{align}
\mathbf{Cov}(X,Y)
&=\mathbf{E}\left[(X-\mathbf{E}[X])(Y-\mathbf{E}[Y])\right]\\
&= \mathbf{E}\left[X-\mathbf{E}[X]\right]\mathbf{E}\left[Y-\mathbf{E}[Y]\right] &\qquad(\mbox{Independence})\\
&=0.
\end{align}</math>
}}
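As a quick numerical illustration (ours; the Gaussian distributions and the dependent pair are arbitrary choices), the sketch below estimates the covariance of an independent pair and of a dependent pair:
<pre>
# Empirical covariance estimates: an independent pair (X, Y) versus a
# dependent pair (X, X + Y). Distributions are arbitrary illustrative choices.
import random

random.seed(0)
n = 100000
xs = [random.gauss(0, 1) for _ in range(n)]
ys = [random.gauss(0, 1) for _ in range(n)]

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((u - ma) * (v - mb) for u, v in zip(a, b)) / len(a)

print("Cov(X, Y), independent pair:", cov(xs, ys))                                   # close to 0
print("Cov(X, X + Y), dependent pair:", cov(xs, [x + y for x, y in zip(xs, ys)]))    # close to Var[X] = 1
</pre>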
=== Chebyshev's inequality ===
Using both the expectation and the variance of a random variable, one can derive a stronger tail bound known as Chebyshev's Inequality.
{{Theorem
|Theorem (Chebyshev's Inequality)|
:For any random variable <math>X</math> and any <math>t>0</math>,
::<math>\begin{align}
\Pr\left[|X-\mathbf{E}[X]| \ge t\right] \le \frac{\mathbf{Var}[X]}{t^2}.
\end{align}</math>
}}
{{Proof| Observe that
:<math>\Pr[|X-\mathbf{E}[X]| \ge t] = \Pr[(X-\mathbf{E}[X])^2 \ge t^2].</math>
Since <math>(X-\mathbf{E}[X])^2</math> is a nonnegative random variable, we can apply Markov's inequality to obtain
:<math>
\Pr[(X-\mathbf{E}[X])^2 \ge t^2] \le
\frac{\mathbf{E}[(X-\mathbf{E}[X])^2]}{t^2}
=\frac{\mathbf{Var}[X]}{t^2}.
</math>
}}
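As an illustration (again ours, with arbitrary parameters), the sketch below compares the exact deviation probability of a binomial random variable with the Chebyshev bound <math>\mathbf{Var}[X]/t^2</math>:
<pre>
# Compare the exact tail Pr[|X - E[X]| >= t] of a Binomial(n, 1/2) random
# variable with the Chebyshev bound Var[X]/t^2. Parameters are illustrative.
from math import comb

n, p = 1000, 0.5
mean, var = n * p, n * p * (1 - p)
pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]   # exact pmf

for t in [20, 40, 60]:
    exact = sum(pr for k, pr in enumerate(pmf) if abs(k - mean) >= t)
    print(f"t={t}: exact Pr[|X-E[X]|>=t]={exact:.2e}, Chebyshev bound={var / t**2:.2e}")
</pre>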
== Erdős–Rényi Random Graphs ==
Consider a graph <math>G(V,E)</math> which is randomly generated as follows:
* <math>|V|=n</math>;
* <math>\forall \{u,v\}\in{V\choose 2}</math>, <math>uv\in E</math> independently with probability <math>p</math>.
Such a graph is denoted '''<math>G(n,p)</math>'''. This is called the '''Erdős–Rényi model''' or '''<math>G(n,p)</math> model''' for random graphs.
Informally, the presence of every edge of <math>G(n,p)</math> is determined by an independent coin flip (with probability <math>p</math> of HEADS).
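A minimal sketch of how one might sample from the <math>G(n,p)</math> model, flipping one independent coin per potential edge (the function and variable names below are ours, not from the notes):
<pre>
# Sample an Erdos-Renyi random graph G(n, p): every potential edge {u, v}
# is included independently with probability p.
import random
from itertools import combinations

def sample_gnp(n, p, rng=random):
    """Return the edge set of a graph drawn from G(n, p)."""
    return {(u, v) for u, v in combinations(range(n), 2) if rng.random() < p}

random.seed(0)
edges = sample_gnp(10, 0.3)
print(len(edges), "edges:", sorted(edges))
</pre>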
=== Coloring large-girth graphs ===
The girth of a graph is the length of the shortest cycle of the graph.
{{Theorem|Definition|
Let <math>G(V,E)</math> be an undirected graph.
* A '''cycle''' of length <math>k</math> in <math>G</math> is a sequence of distinct vertices <math>v_1,v_2,\ldots,v_{k}</math> such that <math>v_iv_{i+1}\in E</math> for all <math>i=1,2,\ldots,k-1</math> and <math>v_kv_1\in E</math>.
* The '''girth''' of <math>G</math>, denoted <math>g(G)</math>, is the length of the shortest cycle in <math>G</math>.
}}
The chromatic number of a graph is the minimum number of colors with which the graph can be ''properly'' colored.
{{Theorem|Definition (chromatic number)|
* The '''chromatic number''' of <math>G</math>, denoted <math>\chi(G)</math>, is the minimal number of colors which we need to color the vertices of <math>G</math> so that no two adjacent vertices have the same color. Formally,
::<math>\chi(G)=\min\{C\in\mathbb{N}\mid \exists f:V\rightarrow[C]\mbox{ such that }\forall uv\in E, f(u)\neq f(v)\}</math>.
}}
In 1959, Erdős proved the following theorem: for any fixed <math>k</math> and <math>\ell</math>, there exists a finite graph with girth larger than <math>\ell</math> and chromatic number larger than <math>k</math>. This is considered a striking example of the probabilistic method. The statement of the theorem itself says nothing about probability or randomness, and the result is highly counterintuitive: if the girth is large, there is no obvious local reason why the graph could not be colored with a few colors, since we can always "locally" color a cycle with 2 or 3 colors. Erdős' result shows that there are "global" restrictions for coloring, and although such configurations are very difficult to construct explicitly, the probabilistic method shows that they commonly exist.
{{Theorem| Theorem (Erdős 1959)|
: For all <math>k,\ell</math> there exists a graph <math>G</math> with <math>g(G)>\ell</math> and <math>\chi(G)>k\,</math>.
}}
It is very hard to directly analyze the chromatic number of a graph. Instead, the chromatic number can be related to the size of the maximum independent set.
{{Theorem|Definition (independence number)|
* The '''independence number''' of <math>G</math>, denoted <math>\alpha(G)</math>, is the size of the largest independent set in <math>G</math>. Formally,
::<math>\alpha(G)=\max\{|S|\mid S\subseteq V\mbox{ and }\forall u,v\in S, uv\not\in E\}</math>.
}}
We observe the following relationship between the chromatic number and the independence number.
{{Theorem|Lemma|
:For any <math>n</math>-vertex graph <math>G</math>,
::<math>\chi(G)\ge\frac{n}{\alpha(G)}</math>.
}}
{{Proof|
*In an optimal coloring, the <math>n</math> vertices are partitioned into <math>\chi(G)</math> color classes according to the vertex colors.
*Every color class is an independent set, for otherwise there exist two adjacent vertices with the same color.
*By the pigeonhole principle, there is a color class (hence an independent set) of size at least <math>\frac{n}{\chi(G)}</math>. Therefore, <math>\alpha(G)\ge\frac{n}{\chi(G)}</math>.
The lemma follows.
}}
Therefore, to prove the theorem it suffices to exhibit a graph whose girth is larger than <math>\ell</math> and whose independence number is small compared to its number of vertices.
{{Prooftitle|Proof of Erdős theorem|
Fix <math>0<\theta<\frac{1}{\ell}</math>. Let <math>G</math> be <math>G(n,p)</math> with <math>p=n^{\theta-1}</math>.
For any length-<math>i</math> simple cycle <math>\sigma</math>, let <math>X_\sigma</math> be the indicator random variable such that
:<math>
X_\sigma=
\begin{cases}
1 & \sigma\mbox{ is a cycle in }G,\\
0 & \mbox{otherwise}.
\end{cases}
</math>
The number of cycles of length at most <math>\ell</math> in graph <math>G</math> is
:<math>X=\sum_{i=3}^\ell\sum_{\sigma:i\text{-cycle}}X_\sigma</math>.
For any particular length-<math>i</math> simple cycle <math>\sigma</math>,
:<math>\mathbf{E}[X_\sigma]=\Pr[X_\sigma=1]=\Pr[\sigma\mbox{ is a cycle in }G]=p^i=n^{\theta i-i}</math>.
For any <math>3\le i\le n</math>, the number of length-<math>i</math> simple cycles is <math>\frac{n(n-1)\cdots (n-i+1)}{2i}</math> (order the <math>i</math> distinct vertices along the cycle; each cycle is counted <math>2i</math> times). By the linearity of expectation,
:<math>\mathbf{E}[X]=\sum_{i=3}^\ell\sum_{\sigma:i\text{-cycle}}\mathbf{E}[X_\sigma]=\sum_{i=3}^\ell\frac{n(n-1)\cdots (n-i+1)}{2i}n^{\theta i-i}\le \sum_{i=3}^\ell\frac{n^{\theta i}}{2i}=o(n)</math>,
where the last step uses <math>\theta\ell<1</math> (so that <math>n^{\theta i}\le n^{\theta\ell}=o(n)</math>) and the fact that <math>\ell</math> is a constant.
Applying Markov's inequality,
:<math>
\Pr\left[X\ge \frac{n}{2}\right]\le\frac{\mathbf{E}[X]}{n/2}=o(1).
</math>
Therefore, with high probability the random graph has fewer than <math>n/2</math> short cycles.
Now we proceed to analyze the independence number. Let <math>m=\left\lceil\frac{3\ln n}{p}\right\rceil</math>. Then
:<math>
\begin{align}
\Pr[\alpha(G)\ge m]
&\le\Pr\left[\exists S\in{V\choose m}, \forall \{u,v\}\in{S\choose 2}, uv\not\in G\right]\\
&\le{n\choose m}(1-p)^{m\choose 2}\\
&<n^m\mathrm{e}^{-p{m\choose 2}}\\
&=\left(n\mathrm{e}^{-p(m-1)/2}\right)^m=o(1).
\end{align}
</math>
By the union bound, the probability that either of the two bad events occurs is
:<math>
\begin{align}
\Pr\left[X\ge\frac{n}{2}\vee \alpha(G)\ge m\right]
\le \Pr\left[X\ge\frac{n}{2}\right]+\Pr\left[\alpha(G)\ge m\right]
=o(1).
\end{align}
</math>
Therefore, for sufficiently large <math>n</math> there exists a graph <math>G</math> with fewer than <math>n/2</math> "short" cycles, i.e., cycles of length at most <math>\ell</math>, and with <math>\alpha(G)<m</math>, which implies <math>\alpha(G)\le 3n^{1-\theta}\ln n</math> since <math>\alpha(G)</math> is an integer.
Take each "short" cycle in <math>G</math> and remove a vertex from the cycle (together with all edges incident to the removed vertex). This gives a graph <math>G'</math> which has no short cycles, hence girth <math>g(G')>\ell</math>. And <math>G'</math> has at least <math>n/2</math> vertices, because fewer than <math>n/2</math> vertices are removed.
Notice that removing vertices cannot make the independence number grow, so <math>\alpha(G')\le\alpha(G)</math>. Thus
:<math>\chi(G')\ge\frac{n/2}{\alpha(G')}\ge\frac{n/2}{\alpha(G)}\ge\frac{n^\theta}{6\ln n}</math>.
The theorem is proved by taking <math>n</math> sufficiently large so that this value is greater than <math>k</math>.
}}
The proof in fact contains a very simple procedure which, for any <math>k</math> and <math>\ell</math>, ''generates'' such a graph with girth larger than <math>\ell</math> and chromatic number larger than <math>k</math>. The procedure is as follows:
* Fix some <math>0<\theta<\frac{1}{\ell}</math>. Choose <math>n</math> sufficiently large that <math>\frac{n^\theta}{6\ln n}>k</math>, and let <math>p=n^{\theta-1}</math>.
* Generate a random graph <math>G</math> as <math>G(n,p)</math>.
* For each cycle of length at most <math>\ell</math> in <math>G</math>, remove a vertex from the cycle.
The resulting graph <math>G'</math> satisfies <math>g(G')>\ell</math> and <math>\chi(G')>k</math> with high probability.
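Below is a simplified Python sketch of this procedure (ours; the function names, the brute-force short-cycle search, and the tiny parameters are only for illustration and are far too small for the theorem's guarantee to kick in). Instead of marking all short cycles in advance, the sketch keeps deleting a vertex of some remaining short cycle until none is left, which has the same effect on the girth:
<pre>
# Sketch of the procedure from the proof: sample G(n, p) with p = n^(theta-1),
# then delete one vertex from every cycle of length at most l.
import random
from itertools import combinations

def sample_gnp(n, p, rng=random):
    """Adjacency-set representation of a graph drawn from G(n, p)."""
    adj = {v: set() for v in range(n)}
    for u, v in combinations(range(n), 2):
        if rng.random() < p:
            adj[u].add(v)
            adj[v].add(u)
    return adj

def find_short_cycle(adj, max_len):
    """Return the vertices of some cycle of length <= max_len, or None.
    Brute-force bounded DFS; only feasible for small graphs and small max_len."""
    def dfs(start, v, path):
        if len(path) > max_len:
            return None
        for w in adj[v]:
            if w == start and len(path) >= 3:
                return path
            if w not in path:
                found = dfs(start, w, path + [w])
                if found:
                    return found
        return None
    for s in adj:
        cycle = dfs(s, s, [s])
        if cycle:
            return cycle
    return None

def large_girth_graph(n, l, theta):
    adj = sample_gnp(n, n ** (theta - 1))
    while True:
        cycle = find_short_cycle(adj, l)
        if cycle is None:
            return adj
        v = cycle[0]                      # delete one vertex of the short cycle
        for w in adj.pop(v):
            adj[w].discard(v)

random.seed(0)
g = large_girth_graph(n=60, l=4, theta=0.24)
print(len(g), "vertices remain; no cycle of length at most 4 is left")
</pre>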
=== Monotone properties ===
A graph property is a predicate of graphs which depends only on the structure of the graph.
{{Theorem|Definition|
:Let <math>\mathcal{G}_n=2^{V\choose 2}</math>, where <math>|V|=n</math>, be the set of all possible graphs on <math>n</math> vertices. A '''graph property''' is a boolean function <math>P:\mathcal{G}_n\rightarrow\{0,1\}</math> which is invariant under permutation of vertices, i.e. <math>P(G)=P(H)</math> whenever <math>G</math> is isomorphic to <math>H</math>.
}}
We are interested in monotone properties, i.e., those properties for which adding edges cannot change a graph from having the property to not having it.
{{Theorem|Definition|
:A graph property <math>P</math> is '''monotone''' if for any <math>G\subseteq H</math>, both on <math>n</math> vertices, <math>G</math> having property <math>P</math> implies <math>H</math> having property <math>P</math>.
}}
Viewing a property as a function mapping a set of edges to a value in <math>\{0,1\}</math>, a monotone property is just a monotonically increasing set function.
Some examples of monotone graph properties:
* Hamiltonian;
* contains a <math>k</math>-clique;
* contains a subgraph isomorphic to some <math>H</math>;
* non-planar;
* chromatic number <math>>k</math> (i.e., not <math>k</math>-colorable);
* girth <math><\ell</math>.
From the last two properties, you can see another reason that the Erdős theorem is unintuitive.
Some examples of '''non-'''monotone graph properties:
* Eulerian;
* contains an ''induced'' subgraph isomorphic to some <math>H</math>.
For all monotone graph properties, we have the following theorem.
{{Theorem|Theorem|
:Let <math>P</math> be a monotone graph property. Suppose <math>G_1=G(n,p_1)</math>, <math>G_2=G(n,p_2)</math>, and <math>0\le p_1\le p_2\le 1</math>. Then
::<math>\Pr[P(G_1)]\le \Pr[P(G_2)]</math>.
}}
Although the statement of the theorem looks very natural, it is difficult to directly evaluate the probability that a random graph has a given property. However, the theorem can be easily proved using the idea of [http://en.wikipedia.org/wiki/Coupling_(probability) coupling], a proof technique in probability theory which compares two otherwise unrelated random variables by constructing them together so that they become related.
{{Proof|
For any <math>\{u,v\}\in{[n]\choose 2}</math>, let <math>X_{\{u,v\}}</math> be independently and uniformly distributed over the continuous interval <math>[0,1]</math>. Let <math>uv\in G_1</math> if and only if <math>X_{\{u,v\}}\in[0,p_1]</math> and let <math>uv\in G_2</math> if and only if <math>X_{\{u,v\}}\in[0,p_2]</math>.
It is obvious that <math>G_1\sim G(n,p_1)\,</math> and <math>G_2\sim G(n,p_2)\,</math>. For any <math>\{u,v\}</math>, <math>uv\in G_1</math> means that <math>X_{\{u,v\}}\in[0,p_1]\subseteq [0,p_2]</math>, which implies that <math>uv\in G_2</math>. Thus, <math>G_1\subseteq G_2</math>.
Since <math>P</math> is monotone, <math>P(G_1)=1</math> implies <math>P(G_2)=1</math>. Thus,
:<math>\Pr[P(G_1)=1]\le \Pr[P(G_2)=1]</math>.
}}
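The coupling in the proof is easy to phrase as code: draw one uniform label per potential edge and threshold it at <math>p_1</math> and <math>p_2</math>. A minimal sketch (the names are ours):
<pre>
# The coupling from the proof: one uniform label X_{u,v} per potential edge,
# thresholded at p1 and p2, yields G1 ~ G(n, p1) and G2 ~ G(n, p2) with
# G1 a subgraph of G2 by construction.
import random
from itertools import combinations

def coupled_gnp(n, p1, p2, rng=random):
    labels = {e: rng.random() for e in combinations(range(n), 2)}
    g1 = {e for e, x in labels.items() if x <= p1}
    g2 = {e for e, x in labels.items() if x <= p2}
    return g1, g2

random.seed(0)
g1, g2 = coupled_gnp(20, 0.1, 0.3)
assert g1 <= g2          # G1 is always a subgraph of G2
print(len(g1), "edges in G1,", len(g2), "edges in G2")
</pre>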
=== Threshold phenomenon ===
One of the most fascinating phenomena of random graphs is that for many natural graph properties, the random graph <math>G(n,p)</math> suddenly changes from almost always not having the property to almost always having the property as <math>p</math> grows within a very small range.
A monotone graph property <math>P</math> is said to have the '''threshold''' <math>p(n)</math> if
* when <math>p\ll p(n)</math>, <math>\Pr[P(G(n,p))]\rightarrow 0</math> as <math>n\rightarrow\infty</math> (also stated as: <math>G(n,p)</math> almost always does not have <math>P</math>); and
* when <math>p\gg p(n)</math>, <math>\Pr[P(G(n,p))]\rightarrow 1</math> as <math>n\rightarrow\infty</math> (also stated as: <math>G(n,p)</math> almost always has <math>P</math>).
The classic method for proving a threshold is the so-called second moment method (Chebyshev's inequality).
{{Theorem|Theorem|
:The threshold for a random graph <math>G(n,p)</math> to contain a 4-clique is <math>p=n^{-2/3}</math>.
}}
We formulate the problem as follows.
For any <math>4</math>-subset of vertices <math>S\in{V\choose 4}</math>, let <math>X_S</math> be the indicator random variable such that
:<math>
X_S=
\begin{cases}
1 & S\mbox{ is a clique},\\
0 & \mbox{otherwise}.
\end{cases}
</math>
Let <math>X=\sum_{S\in{V\choose 4}}X_S</math> be the total number of 4-cliques in <math>G</math>.
It is sufficient to prove the following lemma.
{{Theorem|Lemma|
*If <math>p=o(n^{-2/3})</math>, then <math>\Pr[X\ge 1]\rightarrow 0</math> as <math>n\rightarrow\infty</math>.
*If <math>p=\omega(n^{-2/3})</math>, then <math>\Pr[X\ge 1]\rightarrow 1</math> as <math>n\rightarrow\infty</math>.
}}
{{Proof|
The first claim is proved by the first moment method (expectation and Markov's inequality) and the second claim is proved by the second moment method (Chebyshev's inequality).
Every 4-clique has 6 edges, thus for any <math>S\in{V\choose 4}</math>,
:<math>\mathbf{E}[X_S]=\Pr[X_S=1]=p^6</math>.
By the linearity of expectation,
:<math>\mathbf{E}[X]=\sum_{S\in{V\choose 4}}\mathbf{E}[X_S]={n\choose 4}p^6</math>.
Applying Markov's inequality,
:<math>\Pr[X\ge 1]\le \mathbf{E}[X]=O(n^4p^6)=o(1)</math>, if <math>p=o(n^{-2/3})</math>.
The first claim is proved.
To prove the second claim, it is equivalent to show that <math>\Pr[X=0]=o(1)</math> if <math>p=\omega(n^{-2/3})</math>. By Chebyshev's inequality,
:<math>\Pr[X=0]\le\Pr[|X-\mathbf{E}[X]|\ge\mathbf{E}[X]]\le\frac{\mathbf{Var}[X]}{(\mathbf{E}[X])^2}</math>,
where the variance is computed as
:<math>\mathbf{Var}[X]=\mathbf{Var}\left[\sum_{S\in{V\choose 4}}X_S\right]=\sum_{S\in{V\choose 4}}\mathbf{Var}[X_S]+\sum_{S,T\in{V\choose 4}, S\neq T}\mathbf{Cov}(X_S,X_T)</math>.
For any <math>S\in{V\choose 4}</math>,
:<math>\mathbf{Var}[X_S]=\mathbf{E}[X_S^2]-\mathbf{E}[X_S]^2\le \mathbf{E}[X_S^2]=\mathbf{E}[X_S]=p^6</math>. Thus the first term of the above formula is <math>\sum_{S\in{V\choose 4}}\mathbf{Var}[X_S]=O(n^4p^6)</math>.
We now compute the covariances. For any <math>S,T\in{V\choose 4}</math> with <math>S\neq T</math>:
* Case 1: <math>|S\cap T|\le 1</math>, so <math>S</math> and <math>T</math> do not share any edges. <math>X_S</math> and <math>X_T</math> are independent, thus <math>\mathbf{Cov}(X_S,X_T)=0</math>.
* Case 2: <math>|S\cap T|= 2</math>, so <math>S</math> and <math>T</math> share an edge. Since <math>|S\cup T|=6</math>, the number of such pairs is <math>{n\choose 6}\cdot O(1)=O(n^6)</math>.
::<math>\mathbf{Cov}(X_S,X_T)=\mathbf{E}[X_SX_T]-\mathbf{E}[X_S]\mathbf{E}[X_T]\le\mathbf{E}[X_SX_T]=\Pr[X_S=1\wedge X_T=1]=p^{11}</math>
:since there are 11 edges in the union of two 4-cliques that share a common edge. The contribution of these pairs is <math>O(n^6p^{11})</math>.
* Case 3: <math>|S\cap T|= 3</math>, so <math>S</math> and <math>T</math> share a triangle. Since <math>|S\cup T|=5</math>, the number of such pairs is <math>{n\choose 5}\cdot O(1)=O(n^5)</math>. By the same argument,
::<math>\mathbf{Cov}(X_S,X_T)\le\Pr[X_S=1\wedge X_T=1]=p^{9}</math>
:since there are 9 edges in the union of two 4-cliques that share a triangle. The contribution of these pairs is <math>O(n^5p^{9})</math>.
Putting all these together,
:<math>\mathbf{Var}[X]=O(n^4p^6+n^6p^{11}+n^5p^{9}).</math>
And
:<math>\Pr[X=0]\le\frac{\mathbf{Var}[X]}{(\mathbf{E}[X])^2}=O(n^{-4}p^{-6}+n^{-2}p^{-1}+n^{-3}p^{-3})</math>,
which is <math>o(1)</math> if <math>p=\omega(n^{-2/3})</math>. The second claim is also proved.
}}
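A small experiment (ours, with arbitrary small parameters and a hypothetical helper <code>has_4_clique</code>) that estimates <math>\Pr[G(n,p)\mbox{ contains a 4-clique}]</math> for <math>p=c\cdot n^{-2/3}</math> with several constants <math>c</math>; even for modest <math>n</math> the transition around the threshold is visible:
<pre>
# Estimate Pr[G(n, p) contains a 4-clique] for p around the threshold n^(-2/3).
# Brute-force check over all 4-subsets; parameters are kept small on purpose.
import random
from itertools import combinations

def has_4_clique(n, p, rng):
    edges = {frozenset(e) for e in combinations(range(n), 2) if rng.random() < p}
    return any(all(frozenset(e) in edges for e in combinations(s, 2))
               for s in combinations(range(n), 4))

rng = random.Random(0)
n, trials = 30, 200
for c in [0.5, 1.0, 2.0, 4.0]:
    p = c * n ** (-2 / 3)
    freq = sum(has_4_clique(n, p, rng) for _ in range(trials)) / trials
    print(f"p = {c} * n^(-2/3): empirical Pr[4-clique] = {freq:.2f}")
</pre>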
The above theorem can be generalized to any "balanced" subgraph.
{{Theorem|Definition|
* The '''density''' of a graph <math>G(V,E)</math>, denoted <math>\rho(G)\,</math>, is defined as <math>\rho(G)=\frac{|E|}{|V|}</math>.
* A graph <math>G(V,E)</math> is '''balanced''' if <math>\rho(H)\le \rho(G)</math> for all subgraphs <math>H</math> of <math>G</math>.
}}
Cliques are balanced, because <math>\frac{{k\choose 2}}{k}\le \frac{{n\choose 2}}{n}</math> for any <math>k\le n</math>. The threshold for containing a 4-clique is a direct corollary of the following general theorem.
{{Theorem|Theorem (Erdős–Rényi 1960)|
:Let <math>H</math> be a balanced graph with <math>k</math> vertices and <math>\ell</math> edges. The threshold for the property that a random graph <math>G(n,p)</math> contains a (not necessarily induced) subgraph isomorphic to <math>H</math> is <math>p=n^{-k/\ell}\,</math>.
}}
{{Prooftitle|Sketch of proof.|
For any <math>S\in{V\choose k}</math>, let <math>X_S</math> indicate whether <math>G_S</math> (the subgraph of <math>G</math> induced by <math>S</math>) contains a copy of <math>H</math>. Then
:<math>p^{\ell}\le\mathbf{E}[X_S]\le k!p^{\ell}</math>, since there are at most <math>k!</math> ways to place a copy of <math>H</math> on the <math>k</math> vertices of <math>S</math>.
Note that <math>k</math> does not depend on <math>n</math>. Thus, <math>\mathbf{E}[X_S]=\Theta(p^{\ell})</math>. Let <math>X=\sum_{S\in{V\choose k}}X_S</math> be the number of <math>k</math>-subsets whose induced subgraph contains a copy of <math>H</math>. Then
:<math>\mathbf{E}[X]=\Theta(n^kp^{\ell})</math>.
By Markov's inequality, <math>\Pr[X\ge 1]\le \mathbf{E}[X]=\Theta(n^kp^{\ell})</math>, which is <math>o(1)</math> when <math>p\ll n^{-k/\ell}</math>.
By Chebyshev's inequality, <math>\Pr[X=0]\le \frac{\mathbf{Var}[X]}{\mathbf{E}[X]^2}</math>, where
:<math>\mathbf{Var}[X]=\sum_{S\in{V\choose k}}\mathbf{Var}[X_S]+\sum_{S\neq T}\mathbf{Cov}(X_S,X_T)</math>.
The first term is <math>\sum_{S\in{V\choose k}}\mathbf{Var}[X_S]\le \sum_{S\in{V\choose k}}\mathbf{E}[X_S^2]= \sum_{S\in{V\choose k}}\mathbf{E}[X_S]=\mathbf{E}[X]=\Theta(n^kp^{\ell})</math>.
For the covariances, <math>\mathbf{Cov}(X_S,X_T)\neq 0</math> only if <math>|S\cap T|=i</math> for some <math>2\le i\le k-1</math>. Note that <math>|S\cap T|=i</math> implies that <math>|S\cup T|=2k-i</math>. And for balanced <math>H</math>, the number of edges of interest in <math>S</math> and <math>T</math> is at least <math>2\ell-i\rho(H_{S\cap T})\ge 2\ell-i\rho(H)=2\ell-i\ell/k</math>. Thus, <math>\mathbf{Cov}(X_S,X_T)\le\mathbf{E}[X_SX_T]=O(p^{2\ell-i\ell/k})</math>. And
:<math>\sum_{S\neq T}\mathbf{Cov}(X_S,X_T)=\sum_{i=2}^{k-1}O(n^{2k-i}p^{2\ell-i\ell/k})</math>.
Therefore, when <math>p\gg n^{-k/\ell}</math>,
:<math>
\Pr[X=0]\le \frac{\mathbf{Var}[X]}{\mathbf{E}[X]^2}\le \frac{\Theta(n^kp^{\ell})+\sum_{i=2}^{k-1}O(n^{2k-i}p^{2\ell-i\ell/k})}{\Theta(n^{2k}p^{2\ell})}=\Theta(n^{-k}p^{-\ell})+\sum_{i=2}^{k-1}O(n^{-i}p^{-i\ell/k})=o(1)</math>.
}}
== Small World Networks ==
Read the introduction of Kleinberg's paper (listed in the references).
== References ==
:('''Disclaimer:''' The following copyrighted materials are meant for educational use only.)
* Diestel. ''Graph Theory, 2nd Edition''. Springer-Verlag, 2000. [[media:Diestel2ed_chap11.pdf|Chapter 11]].
* Alon and Spencer. ''The Probabilistic Method, 3rd Edition''. Wiley, 2008. [[media:TPM_girth_chromatic.pdf|"The Probabilistic Lens: High Girth and High Chromatic Number"]] and [[media:TPM_chap4.pdf|Chapter 4]].
* J. Kleinberg. The small-world phenomenon: an algorithmic perspective. ''Proc. 32nd ACM Symposium on Theory of Computing'' (STOC), 2000.