Combinatorics (Fall 2010)/Random graphs


Tail Inequalities

Markov's inequality

One of the most natural pieces of information about a random variable is its expectation, which is its first moment. Markov's inequality derives a tail bound for a random variable from its expectation alone.

Theorem (Markov's Inequality)
Let [math]\displaystyle{ X }[/math] be a random variable assuming only nonnegative values. Then, for all [math]\displaystyle{ t\gt 0 }[/math],
[math]\displaystyle{ \begin{align} \Pr[X\ge t]\le \frac{\mathbf{E}[X]}{t}. \end{align} }[/math]
Proof.
Let [math]\displaystyle{ Y }[/math] be the indicator such that
[math]\displaystyle{ \begin{align} Y &= \begin{cases} 1 & \mbox{if }X\ge t,\\ 0 & \mbox{otherwise.} \end{cases} \end{align} }[/math]

It holds that [math]\displaystyle{ Y\le\frac{X}{t} }[/math]. Since [math]\displaystyle{ Y }[/math] is 0-1 valued, [math]\displaystyle{ \mathbf{E}[Y]=\Pr[Y=1]=\Pr[X\ge t] }[/math]. Therefore,

[math]\displaystyle{ \Pr[X\ge t] = \mathbf{E}[Y] \le \mathbf{E}\left[\frac{X}{t}\right] =\frac{\mathbf{E}[X]}{t}. }[/math]
[math]\displaystyle{ \square }[/math]
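As a quick numerical illustration (our own sketch, not part of the lecture notes), the following Python snippet estimates the tail probability of an exponential random variable with mean 1 and compares it with the Markov bound E[X]/t; the empirical tail never exceeds the bound.

import random

# Empirical check of Markov's inequality for a nonnegative random variable.
# Illustration only: X is exponential with mean 1 (an arbitrary choice).
random.seed(0)
samples = [random.expovariate(1.0) for _ in range(100_000)]
mean = sum(samples) / len(samples)

for t in [1, 2, 4, 8]:
    tail = sum(1 for x in samples if x >= t) / len(samples)
    print(f"t={t}: Pr[X >= t] ~ {tail:.4f}  <=  E[X]/t ~ {mean / t:.4f}")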

For any random variable [math]\displaystyle{ X }[/math] and any non-negative real function [math]\displaystyle{ h }[/math], [math]\displaystyle{ h(X) }[/math] is a non-negative random variable. Applying Markov's inequality, we directly have that

[math]\displaystyle{ \Pr[h(X)\ge t]\le\frac{\mathbf{E}[h(X)]}{t}. }[/math]

This simple application of Markov's inequality gives a powerful tool for proving tail inequalities: by choosing a function [math]\displaystyle{ h }[/math] that extracts more information about the random variable, we can prove sharper tail bounds.

Variance

Definition (variance)
The variance of a random variable [math]\displaystyle{ X }[/math] is defined as
[math]\displaystyle{ \begin{align} \mathbf{Var}[X]=\mathbf{E}\left[(X-\mathbf{E}[X])^2\right]=\mathbf{E}\left[X^2\right]-(\mathbf{E}[X])^2. \end{align} }[/math]
The standard deviation of a random variable [math]\displaystyle{ X }[/math] is
[math]\displaystyle{ \delta[X]=\sqrt{\mathbf{Var}[X]}. }[/math]
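As a small worked example (ours, for illustration): for a fair six-sided die, both forms of the definition give Var[X] = 35/12.

from fractions import Fraction

# Variance of a fair six-sided die, computed in both equivalent forms.
values = [Fraction(v) for v in range(1, 7)]
p = Fraction(1, 6)

ex = sum(p * v for v in values)                        # E[X] = 7/2
ex2 = sum(p * v * v for v in values)                   # E[X^2] = 91/6
var_moments = ex2 - ex ** 2                            # E[X^2] - (E[X])^2
var_centered = sum(p * (v - ex) ** 2 for v in values)  # E[(X - E[X])^2]

print(var_moments, var_centered)                       # both print 35/12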

We have seen that, by the linearity of expectation, the expectation of a sum of random variables is the sum of their expectations. It is natural to ask whether the same holds for variances. It turns out that the variance of a sum carries an extra term, the covariance.

Definition (covariance)
The covariance of two random variables [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math] is
[math]\displaystyle{ \begin{align} \mathbf{Cov}(X,Y)=\mathbf{E}\left[(X-\mathbf{E}[X])(Y-\mathbf{E}[Y])\right]=\mathbf{E}[XY]-\mathbf{E}[X]\mathbf{E}[Y]. \end{align} }[/math]

We have the following theorem for the variance of a sum.

Theorem
For any two random variables [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math],
[math]\displaystyle{ \begin{align} \mathbf{Var}[X+Y]=\mathbf{Var}[X]+\mathbf{Var}[Y]+2\mathbf{Cov}(X,Y). \end{align} }[/math]
Generally, for any random variables [math]\displaystyle{ X_1,X_2,\ldots,X_n }[/math],
[math]\displaystyle{ \begin{align} \mathbf{Var}\left[\sum_{i=1}^n X_i\right]=\sum_{i=1}^n\mathbf{Var}[X_i]+\sum_{i\neq j}\mathbf{Cov}(X_i,X_j). \end{align} }[/math]
Proof.
The equation for two variables follows directly from the definitions of variance and covariance. The equation for [math]\displaystyle{ n }[/math] variables follows either by induction from the two-variable case or by expanding the square directly, as shown below.
[math]\displaystyle{ \square }[/math]
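For completeness, here is the direct expansion behind the n-variable equation (a short derivation added here for clarity; it uses only the linearity of expectation):

[math]\displaystyle{ \begin{align} \mathbf{Var}\left[\sum_{i=1}^n X_i\right] &=\mathbf{E}\left[\left(\sum_{i=1}^n\left(X_i-\mathbf{E}[X_i]\right)\right)^2\right] =\sum_{i=1}^n\sum_{j=1}^n\mathbf{E}\left[\left(X_i-\mathbf{E}[X_i]\right)\left(X_j-\mathbf{E}[X_j]\right)\right]\\ &=\sum_{i=1}^n\mathbf{Var}[X_i]+\sum_{i\neq j}\mathbf{Cov}(X_i,X_j). \end{align} }[/math]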

We will see that when the random variables are independent, the variance of their sum equals the sum of their variances. To prove this, we first establish a very useful result regarding the expectation of a product of independent random variables.

Theorem
For any two independent random variables [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math],
[math]\displaystyle{ \begin{align} \mathbf{E}[X\cdot Y]=\mathbf{E}[X]\cdot\mathbf{E}[Y]. \end{align} }[/math]
Proof.
[math]\displaystyle{ \begin{align} \mathbf{E}[X\cdot Y] &= \sum_{x,y}xy\Pr[X=x\wedge Y=y]\\ &= \sum_{x,y}xy\Pr[X=x]\Pr[Y=y]\\ &= \sum_{x}x\Pr[X=x]\sum_{y}y\Pr[Y=y]\\ &= \mathbf{E}[X]\cdot\mathbf{E}[Y]. \end{align} }[/math]
[math]\displaystyle{ \square }[/math]

With the above theorem, we can show that the covariance of two independent variables is always zero.

Theorem
For any two independent random variables [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math],
[math]\displaystyle{ \begin{align} \mathbf{Cov}(X,Y)=0. \end{align} }[/math]
Proof.
[math]\displaystyle{ \begin{align} \mathbf{Cov}(X,Y) &=\mathbf{E}\left[(X-\mathbf{E}[X])(Y-\mathbf{E}[Y])\right]\\ &= \mathbf{E}\left[X-\mathbf{E}[X]\right]\mathbf{E}\left[Y-\mathbf{E}[Y]\right] &\qquad(\mbox{Independence})\\ &=0. \end{align} }[/math]
[math]\displaystyle{ \square }[/math]
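A quick simulation (ours, purely illustrative) of the last two facts: for an independent pair of uniform samples the empirical covariance is close to 0, while a fully dependent pair has covariance close to Var[X] = 1/12.

import random

# Empirical covariance of an independent pair versus a fully dependent pair.
random.seed(1)
n = 100_000
xs = [random.random() for _ in range(n)]  # X ~ Uniform[0,1]
ys = [random.random() for _ in range(n)]  # Y ~ Uniform[0,1], independent of X

def mean(a):
    return sum(a) / len(a)

def cov(a, b):
    ma, mb = mean(a), mean(b)
    return mean([(u - ma) * (w - mb) for u, w in zip(a, b)])

print(cov(xs, ys))  # close to 0: independent variables have zero covariance
print(cov(xs, xs))  # close to Var[X] = 1/12 ~ 0.083: dependence gives nonzero covariance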

Chebyshev's inequality

Using both the expectation and the variance of a random variable, one can derive a stronger tail bound, known as Chebyshev's inequality.

Theorem (Chebyshev's Inequality)
For any [math]\displaystyle{ t\gt 0 }[/math],
[math]\displaystyle{ \begin{align} \Pr\left[|X-\mathbf{E}[X]| \ge t\right] \le \frac{\mathbf{Var}[X]}{t^2}. \end{align} }[/math]
Proof.
Observe that
[math]\displaystyle{ \Pr[|X-\mathbf{E}[X]| \ge t] = \Pr[(X-\mathbf{E}[X])^2 \ge t^2]. }[/math]

Since [math]\displaystyle{ (X-\mathbf{E}[X])^2 }[/math] is a nonnegative random variable, we can apply Markov's inequality, such that

[math]\displaystyle{ \Pr[(X-\mathbf{E}[X])^2 \ge t^2] \le \frac{\mathbf{E}[(X-\mathbf{E}[X])^2]}{t^2} =\frac{\mathbf{Var}[X]}{t^2}. }[/math]
[math]\displaystyle{ \square }[/math]
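To see how much the variance information buys, the sketch below (our own; the Binomial(100, 1/2) choice is arbitrary) compares the exact upper tail Pr[X >= E[X] + t] with the Markov bound E[X]/(E[X]+t) and the Chebyshev bound Var[X]/t^2.

from math import comb

# Compare Markov and Chebyshev bounds on the upper tail of X ~ Binomial(n, 1/2).
n, p = 100, 0.5
mean, var = n * p, n * p * (1 - p)

def upper_tail(a):
    """Exact Pr[X >= a] for the binomial distribution."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(a, n + 1))

for t in [10, 20, 30]:
    markov = mean / (mean + t)   # Markov applied to Pr[X >= mean + t]
    chebyshev = var / t ** 2     # Chebyshev applied to Pr[|X - mean| >= t]
    exact = upper_tail(int(mean) + t)
    print(f"t={t}: exact={exact:.2e}, Markov bound={markov:.3f}, Chebyshev bound={chebyshev:.3f}")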

Erdős–Rényi Random Graphs

Consider a graph [math]\displaystyle{ G(V,E) }[/math] which is randomly generated as:

  • [math]\displaystyle{ |V|=n }[/math];
  • [math]\displaystyle{ \forall \{u,v\}\in{V\choose 2} }[/math], [math]\displaystyle{ uv\in E }[/math] independently with probability [math]\displaystyle{ p }[/math].

Such a graph is denoted [math]\displaystyle{ G(n,p) }[/math]. This is called the Erdős–Rényi model or the [math]\displaystyle{ G(n,p) }[/math] model for random graphs.

Informally, the presence of every edge of [math]\displaystyle{ G(n,p) }[/math] is determined by an independent coin flip (with probability [math]\displaystyle{ p }[/math] of HEADS).
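Sampling from the model is straightforward; here is a minimal Python sketch (the helper name erdos_renyi is ours):

import random
from itertools import combinations

def erdos_renyi(n, p, seed=None):
    """Sample G(n,p): each of the n-choose-2 possible edges is present independently with probability p."""
    rng = random.Random(seed)
    vertices = set(range(n))
    edges = {frozenset(e) for e in combinations(range(n), 2) if rng.random() < p}
    return vertices, edges

V, E = erdos_renyi(10, 0.3, seed=42)
print(len(V), "vertices,", len(E), "edges")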

Coloring large-girth graphs

The girth of a graph is the length of the shortest cycle of the graph.

Definition

Let [math]\displaystyle{ G(V,E) }[/math] be an undirected graph.

  • A cycle of length [math]\displaystyle{ k }[/math] in [math]\displaystyle{ G }[/math] is a sequence of distinct vertices [math]\displaystyle{ v_1,v_2,\ldots,v_{k} }[/math] such that [math]\displaystyle{ v_iv_{i+1}\in E }[/math] for all [math]\displaystyle{ i=1,2,\ldots,k-1 }[/math] and [math]\displaystyle{ v_kv_1\in E }[/math].
  • The girth of [math]\displaystyle{ G }[/math], denoted [math]\displaystyle{ g(G) }[/math], is the length of the shortest cycle in [math]\displaystyle{ G }[/math].

The chromatic number of a graph is the minimum number of colors with which the graph can be properly colored.

Definition (chromatic number)
  • The chromatic number of [math]\displaystyle{ G }[/math], denoted [math]\displaystyle{ \chi(G) }[/math], is the minimal number of colors which we need to color the vertices of [math]\displaystyle{ G }[/math] so that no two adjacent vertices have the same color. Formally,
[math]\displaystyle{ \chi(G)=\min\{C\in\mathbb{N}\mid \exists f:V\rightarrow[C]\mbox{ such that }\forall uv\in E, f(u)\neq f(v)\} }[/math].

In 1959, Erdős proved the following theorem: for any fixed [math]\displaystyle{ k }[/math] and [math]\displaystyle{ \ell }[/math], there exists a finite graph with girth greater than [math]\displaystyle{ \ell }[/math] and chromatic number greater than [math]\displaystyle{ k }[/math]. This is considered a striking example of the probabilistic method: the statement of the theorem says nothing about probability or randomness, and the result is highly counterintuitive. If the girth is large, there is no obvious local obstruction to coloring the graph with few colors; a single cycle can always be colored with 2 or 3 colors. Erdős' result shows that the obstructions to coloring are "global", and although such graphs are very difficult to construct explicitly, the probabilistic method shows that they are in fact abundant.

Theorem (Erdős 1959)
For all [math]\displaystyle{ k,\ell }[/math] there exists a graph [math]\displaystyle{ G }[/math] with [math]\displaystyle{ g(G)\gt \ell }[/math] and [math]\displaystyle{ \chi(G)\gt k\, }[/math].

It is very hard to directly analyze the chromatic number of a graph. We find that the chromatic number can be related to the size of the maximum independent set.

Definition (independence number)
  • The independence number of [math]\displaystyle{ G }[/math], denoted [math]\displaystyle{ \alpha(G) }[/math], is the size of the largest independent set in [math]\displaystyle{ G }[/math]. Formally,
[math]\displaystyle{ \alpha(G)=\max\{|S|\mid S\subseteq V\mbox{ and }\forall u,v\in S, uv\not\in E\} }[/math].

We observe the following relationship between the chromatic number and the independence number.

Lemma
For any [math]\displaystyle{ n }[/math]-vertex graph,
[math]\displaystyle{ \chi(G)\ge\frac{n}{\alpha(G)} }[/math].
Proof.
  • In an optimal coloring, the [math]\displaystyle{ n }[/math] vertices are partitioned into [math]\displaystyle{ \chi(G) }[/math] color classes according to the vertex colors.
  • Every color class is an independent set, for otherwise there would exist two adjacent vertices with the same color.
  • By the pigeonhole principle, some color class (hence some independent set) has size at least [math]\displaystyle{ \frac{n}{\chi(G)} }[/math]. Therefore, [math]\displaystyle{ \alpha(G)\ge\frac{n}{\chi(G)} }[/math].

The lemma follows.

[math]\displaystyle{ \square }[/math]
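For a tiny worked example of the lemma (added here for illustration): the 5-cycle [math]\displaystyle{ C_5 }[/math] has [math]\displaystyle{ \alpha(C_5)=2 }[/math], so

[math]\displaystyle{ \chi(C_5)\ge\frac{5}{\alpha(C_5)}=\frac{5}{2}, }[/math]

hence [math]\displaystyle{ \chi(C_5)\ge 3 }[/math], which is tight: an odd cycle can be properly colored with 3 colors but not with 2.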

Therefore, it is sufficient to prove that [math]\displaystyle{ \alpha(G)\le\frac{n}{k} }[/math] and [math]\displaystyle{ g(G)\gt \ell }[/math].

Proof of Erdős theorem

Fix [math]\displaystyle{ 0\lt \theta\lt \frac{1}{\ell} }[/math]. Let [math]\displaystyle{ G }[/math] be [math]\displaystyle{ G(n,p) }[/math] with [math]\displaystyle{ p=n^{\theta-1} }[/math].

For any length-[math]\displaystyle{ i }[/math] simple cycle [math]\displaystyle{ \sigma }[/math], let [math]\displaystyle{ X_\sigma }[/math] be the indicator random variable such that

[math]\displaystyle{ X_\sigma= \begin{cases} 1 & \sigma\mbox{ is a cycle in }G,\\ 0 & \mbox{otherwise}. \end{cases} }[/math]

The number of cycles of length at most [math]\displaystyle{ \ell }[/math] in graph [math]\displaystyle{ G }[/math] is

[math]\displaystyle{ X=\sum_{i=3}^\ell\sum_{\sigma:i\text{-cycle}}X_\sigma }[/math].

For any particular length-[math]\displaystyle{ i }[/math] simple cycle [math]\displaystyle{ \sigma }[/math],

[math]\displaystyle{ \mathbf{E}[X_\sigma]=\Pr[X_\sigma=1]=\Pr[\sigma\mbox{ is a cycle in }G]=p^i=n^{\theta i-i} }[/math].

For any [math]\displaystyle{ 3\le i\le n }[/math], the number of length-[math]\displaystyle{ i }[/math] simple cycles is [math]\displaystyle{ \frac{n(n-1)\cdots (n-i+1)}{2i} }[/math] (each cycle is counted [math]\displaystyle{ 2i }[/math] times among the ordered vertex sequences). By the linearity of expectation,

[math]\displaystyle{ \mathbf{E}[X]=\sum_{i=3}^\ell\sum_{\sigma:i\text{-cycle}}\mathbf{E}[X_\sigma]=\sum_{i=3}^\ell\frac{n(n-1)\cdots (n-i+1)}{2i}n^{\theta i-i}\le \sum_{i=3}^\ell\frac{n^{\theta i}}{2i}=o(n), }[/math]

where the last step holds because [math]\displaystyle{ \theta\ell\lt 1 }[/math] and [math]\displaystyle{ \ell }[/math] is a constant.

Applying Markov's inequality,

[math]\displaystyle{ \Pr\left[X\ge \frac{n}{2}\right]\le\frac{\mathbf{E}[X]}{n/2}=o(1). }[/math]

Therefore, with high probability the random graph has less than [math]\displaystyle{ n/2 }[/math] short cycles.

Now we proceed to analyze the independence number. Let [math]\displaystyle{ m=\left\lceil\frac{3\ln n}{p}\right\rceil }[/math], so that

[math]\displaystyle{ \begin{align} \Pr[\alpha(G)\ge m] &\le\Pr\left[\exists S\in{V\choose m}\forall \{u,v\}\in{S\choose 2}, uv\not\in G\right]\\ &\le{n\choose m}(1-p)^{m\choose 2}\\ &\lt n^m\mathrm{e}^{-p{m\choose 2}}\\ &=\left(n\mathrm{e}^{-p(m-1)/2}\right)^m=o(1) \end{align} }[/math]

By the union bound, the probability that at least one of the two bad events ([math]\displaystyle{ X\ge\frac{n}{2} }[/math] or [math]\displaystyle{ \alpha(G)\ge m }[/math]) occurs is

[math]\displaystyle{ \begin{align} \Pr\left[X\ge \frac{n}{2}\vee \alpha(G)\ge m\right] \le \Pr\left[X\ge \frac{n}{2}\right]+\Pr\left[\alpha(G)\ge m\right] =o(1). \end{align} }[/math]

Therefore, there exists a graph [math]\displaystyle{ G }[/math] with less than [math]\displaystyle{ n/2 }[/math] "short" cycles, i.e., cycles of length at most [math]\displaystyle{ \ell }[/math], and with [math]\displaystyle{ \alpha(G)\lt m\le 3n^{1-\theta}\ln n }[/math].

Take each "short" cycle in [math]\displaystyle{ G }[/math] and remove one vertex from it (together with all edges incident to the removed vertex). This gives a graph [math]\displaystyle{ G' }[/math] with no short cycles, hence girth [math]\displaystyle{ g(G')\gt \ell }[/math]. Moreover, [math]\displaystyle{ G' }[/math] has at least [math]\displaystyle{ n/2 }[/math] vertices, because fewer than [math]\displaystyle{ n/2 }[/math] vertices are removed.

Notice that removing vertices cannot make the independence number grow, so [math]\displaystyle{ \alpha(G')\le\alpha(G) }[/math]. Thus

[math]\displaystyle{ \chi(G')\ge\frac{n/2}{\alpha(G')}\ge\frac{n}{2m}\ge\frac{n^\theta}{6\ln n} }[/math].

The theorem is proved by taking [math]\displaystyle{ n }[/math] sufficiently large so that this value is greater than [math]\displaystyle{ k }[/math].

[math]\displaystyle{ \square }[/math]

The proof contains a very simple randomized procedure which, for any [math]\displaystyle{ k }[/math] and [math]\displaystyle{ \ell }[/math], generates such a graph. The procedure is as follows:

  • Fix some [math]\displaystyle{ \theta\lt \frac{1}{\ell} }[/math]. Choose sufficiently large [math]\displaystyle{ n }[/math] with [math]\displaystyle{ \frac{n^\theta}{6\ln n}\gt k }[/math], and let [math]\displaystyle{ p=n^{\theta-1} }[/math].
  • Generate a random graph [math]\displaystyle{ G }[/math] as [math]\displaystyle{ G(n,p) }[/math].
  • For each cycle of length at most [math]\displaystyle{ \ell }[/math] in [math]\displaystyle{ G }[/math], remove a vertex from the cycle.

With high probability, the resulting graph [math]\displaystyle{ G' }[/math] satisfies [math]\displaystyle{ g(G')\gt \ell }[/math] and [math]\displaystyle{ \chi(G')\gt k }[/math].
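The following Python sketch is our own unoptimized rendering of this procedure (the brute-force cycle search limits it to small n): it repeatedly finds a cycle of length at most ell and deletes one of its vertices until no short cycle remains, which has the same effect as removing one vertex per short cycle.

import random
from itertools import combinations

def sample_gnp(n, p, rng):
    """Adjacency-set representation of G(n,p)."""
    adj = {v: set() for v in range(n)}
    for u, v in combinations(range(n), 2):
        if rng.random() < p:
            adj[u].add(v)
            adj[v].add(u)
    return adj

def find_short_cycle(adj, max_len):
    """Return the vertices of some cycle of length <= max_len, or None if there is none."""
    def dfs(start, v, path):
        if len(path) >= 3 and start in adj[v]:
            return path                      # path closes back to start: a short cycle
        if len(path) == max_len:
            return None
        for w in adj[v]:
            if w > start and w not in path:  # search each cycle only from its smallest vertex
                found = dfs(start, w, path + [w])
                if found:
                    return found
        return None
    for s in adj:
        cycle = dfs(s, s, [s])
        if cycle:
            return cycle
    return None

def high_girth_high_chromatic(n, ell, theta, seed=0):
    """Erdős' construction: sample G(n, n^(theta-1)), then destroy every cycle of length <= ell."""
    rng = random.Random(seed)
    adj = sample_gnp(n, n ** (theta - 1), rng)
    while True:
        cycle = find_short_cycle(adj, ell)
        if cycle is None:
            return adj                       # the remaining graph has girth > ell
        v = cycle[0]                         # delete one vertex of the short cycle
        for w in adj.pop(v):
            adj[w].discard(v)

# Tiny illustration (parameters chosen arbitrarily small so the brute force finishes quickly).
G = high_girth_high_chromatic(n=60, ell=4, theta=0.2, seed=1)
print(len(G), "vertices remain; every surviving cycle has length >", 4)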

Monotone properties

A graph property is a predicate on graphs that depends only on the structure of the graph, not on the labels of its vertices.

Definition
Let [math]\displaystyle{ \mathcal{G}_n=2^{V\choose 2} }[/math], where [math]\displaystyle{ |V|=n }[/math], be the set of all possible graphs on [math]\displaystyle{ n }[/math] vertices. A graph property is a boolean function [math]\displaystyle{ P:\mathcal{G}_n\rightarrow\{0,1\} }[/math] which is invariant under permutation of vertices, i.e. [math]\displaystyle{ P(G)=P(H) }[/math] whenever [math]\displaystyle{ G }[/math] is isomorphic to [math]\displaystyle{ H }[/math].

We are interested in monotone properties, i.e., properties that are preserved under adding edges: adding edges can never change a graph from having the property to not having it.

Definition
A graph property [math]\displaystyle{ P }[/math] is monotone if for any [math]\displaystyle{ G\subseteq H }[/math], both on [math]\displaystyle{ n }[/math] vertices, [math]\displaystyle{ G }[/math] having property [math]\displaystyle{ P }[/math] implies [math]\displaystyle{ H }[/math] having property [math]\displaystyle{ P }[/math].

Viewing a property as a function mapping an edge set to a value in [math]\displaystyle{ \{0,1\} }[/math], a monotone property is simply a monotonically increasing set function.

Some examples of monotone graph properties:

  • Hamiltonian;
  • [math]\displaystyle{ k }[/math]-clique;
  • contains a subgraph isomorphic to some [math]\displaystyle{ H }[/math];
  • non-planar;
  • chromatic number [math]\displaystyle{ \gt k }[/math] (i.e., not [math]\displaystyle{ k }[/math]-colorable);
  • girth [math]\displaystyle{ \lt \ell }[/math].

From the last two properties, you can see another reason that the Erdős theorem is unintuitive.

Some examples of non-monotone graph properties:

  • Eulerian;
  • contains an induced subgraph isomorphic to some [math]\displaystyle{ H }[/math];

For all monotone graph properties, we have the following theorem.

Theorem
Let [math]\displaystyle{ P }[/math] be a monotone graph property. Suppose [math]\displaystyle{ G_1=G(n,p_1) }[/math], [math]\displaystyle{ G_2=G(n,p_2) }[/math], and [math]\displaystyle{ 0\le p_1\le p_2\le 1 }[/math]. Then
[math]\displaystyle{ \Pr[P(G_1)]\le \Pr[P(G_2)] }[/math].

Although the statement of the theorem looks very natural, it is usually difficult to evaluate the probability that a random graph has a given property. Nevertheless, the theorem can be proved very easily using the idea of coupling, a proof technique in probability theory which compares two otherwise unrelated random variables by constructing them on a common probability space so that they become related.

Proof.

For any [math]\displaystyle{ \{u,v\}\in{[n]\choose 2} }[/math], let [math]\displaystyle{ X_{\{u,v\}} }[/math] be independently and uniformly distributed over the continuous interval [math]\displaystyle{ [0,1] }[/math]. Let [math]\displaystyle{ uv\in G_1 }[/math] if and only if [math]\displaystyle{ X_{\{u,v\}}\in[0,p_1] }[/math] and let [math]\displaystyle{ uv\in G_2 }[/math] if and only if [math]\displaystyle{ X_{\{u,v\}}\in[0,p_2] }[/math].

It is obvious that [math]\displaystyle{ G_1\sim G(n,p_1)\, }[/math] and [math]\displaystyle{ G_2\sim G(n,p_2)\, }[/math]. For any [math]\displaystyle{ \{u,v\} }[/math], [math]\displaystyle{ uv\in G_1 }[/math] means that [math]\displaystyle{ X_{\{u,v\}}\in[0,p_1]\subseteq [0,p_2] }[/math], which implies that [math]\displaystyle{ uv\in G_2 }[/math]. Thus, [math]\displaystyle{ G_1\subseteq G_2 }[/math].

Since [math]\displaystyle{ P }[/math] is monotone and [math]\displaystyle{ G_1\subseteq G_2 }[/math], [math]\displaystyle{ P(G_1)=1 }[/math] implies [math]\displaystyle{ P(G_2)=1 }[/math]. Thus,

[math]\displaystyle{ \Pr[P(G_1)=1]\le \Pr[P(G_2)=1] }[/math].
[math]\displaystyle{ \square }[/math]
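The coupling is easy to run as a simulation. The sketch below is our own (we use "contains a triangle" as the monotone property, and all parameters are arbitrary): one uniform label per vertex pair is thresholded at p1 and p2, so G1 is a subgraph of G2 on every run.

import random
from itertools import combinations

def coupled_gnp(n, p1, p2, rng):
    """Couple G1 ~ G(n,p1) and G2 ~ G(n,p2) using one uniform label per vertex pair."""
    g1, g2 = set(), set()
    for e in combinations(range(n), 2):
        x = rng.random()
        if x <= p1:
            g1.add(e)
        if x <= p2:
            g2.add(e)
    return g1, g2

def has_triangle(n, edges):
    """A monotone graph property: does the graph contain a triangle?"""
    return any((a, b) in edges and (b, c) in edges and (a, c) in edges
               for a, b, c in combinations(range(n), 3))

n, trials, rng = 30, 200, random.Random(7)
hits1 = hits2 = 0
for _ in range(trials):
    g1, g2 = coupled_gnp(n, 0.05, 0.10, rng)
    assert g1 <= g2                  # the coupling forces G1 to be a subgraph of G2
    hits1 += has_triangle(n, g1)
    hits2 += has_triangle(n, g2)
print(hits1 / trials, "<=", hits2 / trials)  # holds run by run: G1 is a subgraph of G2 and P is monotone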

Threshold phenomenon

One of the most fascinating phenomena of random graphs is that, for many natural graph properties, the random graph [math]\displaystyle{ G(n,p) }[/math] suddenly changes from almost never having the property to almost always having it as [math]\displaystyle{ p }[/math] grows across a very small range.

A monotone graph property [math]\displaystyle{ P }[/math] is said to have the threshold [math]\displaystyle{ p(n) }[/math] if

  • when [math]\displaystyle{ p\ll p(n) }[/math], [math]\displaystyle{ \Pr[P(G(n,p))]\rightarrow 0 }[/math] as [math]\displaystyle{ n\rightarrow\infty }[/math] (also phrased: [math]\displaystyle{ G(n,p) }[/math] almost always does not have [math]\displaystyle{ P }[/math]); and
  • when [math]\displaystyle{ p\gg p(n) }[/math], [math]\displaystyle{ \Pr[P(G(n,p))]\rightarrow 1 }[/math] as [math]\displaystyle{ n\rightarrow\infty }[/math] (also phrased: [math]\displaystyle{ G(n,p) }[/math] almost always has [math]\displaystyle{ P }[/math]).

The classic method for proving the threshold is the so-called second moment method (Chebyshev's inequality).

Theorem
The threshold for a random graph [math]\displaystyle{ G(n,p) }[/math] to contain a 4-clique is [math]\displaystyle{ p=n^{-2/3} }[/math].

We formulate the problem as follows. For any [math]\displaystyle{ 4 }[/math]-subset of vertices [math]\displaystyle{ S\in{V\choose 4} }[/math], let [math]\displaystyle{ X_S }[/math] be the indicator random variable such that

[math]\displaystyle{ X_S= \begin{cases} 1 & S\mbox{ is a clique},\\ 0 & \mbox{otherwise}. \end{cases} }[/math]

Let [math]\displaystyle{ X=\sum_{S\in{V\choose 4}}X_S }[/math] be the total number of 4-cliques in [math]\displaystyle{ G }[/math].

It is sufficient to prove the following lemma.

Lemma
  • If [math]\displaystyle{ p=o(n^{-2/3}) }[/math], then [math]\displaystyle{ \Pr[X\ge 1]\rightarrow 0 }[/math] as [math]\displaystyle{ n\rightarrow\infty }[/math].
  • If [math]\displaystyle{ p=\omega(n^{-2/3}) }[/math], then [math]\displaystyle{ \Pr[X\ge 1]\rightarrow 1 }[/math] as [math]\displaystyle{ n\rightarrow\infty }[/math].
Proof.

The first claim is proved by the first moment (expectation and Markov's inequality) and the second claim is proved by the second moment method (Chebyshev's inequality).

Every 4-clique has 6 edges, thus for any [math]\displaystyle{ S\in{V\choose 4} }[/math],

[math]\displaystyle{ \mathbf{E}[X_S]=\Pr[X_S=1]=p^6 }[/math].

By the linearity of expectation,

[math]\displaystyle{ \mathbf{E}[X]=\sum_{S\in{V\choose 4}}\mathbf{E}[X_S]={n\choose 4}p^6 }[/math].

Applying Markov's inequality

[math]\displaystyle{ \Pr[X\ge 1]\le \mathbf{E}[X]=O(n^4p^6)=o(1) }[/math], if [math]\displaystyle{ p=o(n^{-2/3}) }[/math].

The first claim is proved.

To prove the second claim, it is equivalent to show that [math]\displaystyle{ \Pr[X=0]=o(1) }[/math] if [math]\displaystyle{ p=\omega(n^{-2/3}) }[/math]. By Chebyshev's inequality,

[math]\displaystyle{ \Pr[X=0]\le\Pr[|X-\mathbf{E}[X]|\ge\mathbf{E}[X]]\le\frac{\mathbf{Var}[X]}{(\mathbf{E}[X])^2} }[/math],

where the variance is computed as

[math]\displaystyle{ \mathbf{Var}[X]=\mathbf{Var}\left[\sum_{S\in{V\choose 4}}X_S\right]=\sum_{S\in{V\choose 4}}\mathbf{Var}[X_S]+\sum_{S,T\in{V\choose 4}, S\neq T}\mathbf{Cov}(X_S,X_T) }[/math].

For any [math]\displaystyle{ S\in{V\choose 4} }[/math],

[math]\displaystyle{ \mathbf{Var}[X_S]=\mathbf{E}[X_S^2]-\mathbf{E}[X_S]^2\le \mathbf{E}[X_S^2]=\mathbf{E}[X_S]=p^6 }[/math]. Thus the first term of the above formula is [math]\displaystyle{ \sum_{S\in{V\choose 4}}\mathbf{Var}[X_S]=O(n^4p^6) }[/math].

We now compute the covariances. For any [math]\displaystyle{ S,T\in{V\choose 4} }[/math] with [math]\displaystyle{ S\neq T }[/math]:

  • Case 1: [math]\displaystyle{ |S\cap T|\le 1 }[/math], so [math]\displaystyle{ S }[/math] and [math]\displaystyle{ T }[/math] do not share any edges. Then [math]\displaystyle{ X_S }[/math] and [math]\displaystyle{ X_T }[/math] are independent, thus [math]\displaystyle{ \mathbf{Cov}(X_S,X_T)=0 }[/math].
  • Case 2: [math]\displaystyle{ |S\cap T|= 2 }[/math], so [math]\displaystyle{ S }[/math] and [math]\displaystyle{ T }[/math] share exactly one edge. Since [math]\displaystyle{ |S\cup T|=6 }[/math], there are [math]\displaystyle{ O(n^6) }[/math] pairs of such [math]\displaystyle{ S }[/math] and [math]\displaystyle{ T }[/math].
[math]\displaystyle{ \mathbf{Cov}(X_S,X_T)=\mathbf{E}[X_SX_T]-\mathbf{E}[X_S]\mathbf{E}[X_T]\le\mathbf{E}[X_SX_T]=\Pr[X_S=1\wedge X_T=1]=p^{11} }[/math]
since there are 11 edges in the union of two 4-cliques that share a common edge. The contribution of these pairs is [math]\displaystyle{ O(n^6p^{11}) }[/math].
  • Case 3: [math]\displaystyle{ |S\cap T|= 3 }[/math], so [math]\displaystyle{ S }[/math] and [math]\displaystyle{ T }[/math] share a triangle. Since [math]\displaystyle{ |S\cup T|=5 }[/math], there are [math]\displaystyle{ O(n^5) }[/math] pairs of such [math]\displaystyle{ S }[/math] and [math]\displaystyle{ T }[/math]. By the same argument,
[math]\displaystyle{ \mathbf{Cov}(X_S,X_T)\le\Pr[X_S=1\wedge X_T=1]=p^{9} }[/math]
since there are 9 edges in the union of two 4-cliques that share a triangle. The contribution of these pairs is [math]\displaystyle{ O(n^5p^{9}) }[/math].

Putting all these together,

[math]\displaystyle{ \mathbf{Var}[X]=O(n^4p^6+n^6p^{11}+n^5p^{9}). }[/math]

And

[math]\displaystyle{ \Pr[X=0]\le\frac{\mathbf{Var}[X]}{(\mathbf{E}[X])^2}=O(n^{-4}p^{-6}+n^{-2}p^{-1}+n^{-3}p^{-3}) }[/math],

which is [math]\displaystyle{ o(1) }[/math] if [math]\displaystyle{ p=\omega(n^{-2/3}) }[/math]. The second claim is also proved.

[math]\displaystyle{ \square }[/math]
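A rough Monte Carlo experiment around the threshold (our own sketch; n, the trial count, and the constants c are arbitrary) shows the probability of containing a 4-clique jumping from near 0 to near 1 as p passes c * n^(-2/3):

import random
from itertools import combinations

def gnp_edges(n, p, rng):
    """Edge set of a sample of G(n,p)."""
    return {e for e in combinations(range(n), 2) if rng.random() < p}

def has_4_clique(n, edges):
    """Check whether some 4-subset of vertices has all 6 of its edges present."""
    return any(all(pair in edges for pair in combinations(S, 2))
               for S in combinations(range(n), 4))

n, trials, rng = 30, 50, random.Random(3)
for c in [0.3, 1.0, 3.0]:
    p = c * n ** (-2 / 3)   # scan below, at, and above the threshold n^(-2/3)
    hits = sum(has_4_clique(n, gnp_edges(n, p, rng)) for _ in range(trials))
    print(f"p = {c} * n^(-2/3): empirical Pr[4-clique] ~ {hits / trials:.2f}")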

The above theorem can be generalized to any "balanced" subgraphs.

Definition
  • The density of a graph [math]\displaystyle{ G(V,E) }[/math], denoted [math]\displaystyle{ \rho(G)\, }[/math], is defined as [math]\displaystyle{ \rho(G)=\frac{|E|}{|V|} }[/math].
  • A graph [math]\displaystyle{ G(V,E) }[/math] is balanced if [math]\displaystyle{ \rho(H)\le \rho(G) }[/math] for all subgraphs [math]\displaystyle{ H }[/math] of [math]\displaystyle{ G }[/math].

Cliques are balanced, because [math]\displaystyle{ \frac{{k\choose 2}}{k}\le \frac{{n\choose 2}}{n} }[/math] for any [math]\displaystyle{ k\le n }[/math]. The threshold for 4-clique is a direct corollary of the following general theorem.

Theorem (Erdős–Rényi 1960)
Let [math]\displaystyle{ H }[/math] be a balanced graph with [math]\displaystyle{ k }[/math] vertices and [math]\displaystyle{ \ell }[/math] edges. The threshold for the property that a random graph [math]\displaystyle{ G(n,p) }[/math] contains a (not necessarily induced) subgraph isomorphic to [math]\displaystyle{ H }[/math] is [math]\displaystyle{ p=n^{-k/\ell}\, }[/math].
Sketch of proof.

For any [math]\displaystyle{ S\in{V\choose k} }[/math], let [math]\displaystyle{ X_S }[/math] indicate whether [math]\displaystyle{ G_S }[/math] (the subgraph of [math]\displaystyle{ G }[/math] induced by [math]\displaystyle{ S }[/math]) contains a subgraph isomorphic to [math]\displaystyle{ H }[/math]. Then

[math]\displaystyle{ p^{\ell}\le\mathbf{E}[X_S]\le k!p^{\ell} }[/math], since there are at most [math]\displaystyle{ k! }[/math] ways to embed [math]\displaystyle{ H }[/math] into the [math]\displaystyle{ k }[/math] vertices of [math]\displaystyle{ S }[/math].

Note that [math]\displaystyle{ k }[/math] does not depend on [math]\displaystyle{ n }[/math]. Thus, [math]\displaystyle{ \mathbf{E}[X_S]=\Theta(p^{\ell}) }[/math]. Let [math]\displaystyle{ X=\sum_{S\in{V\choose k}}X_S }[/math] be the number of [math]\displaystyle{ H }[/math]-subgraphs.

[math]\displaystyle{ \mathbf{E}[X]=\Theta(n^kp^{\ell}) }[/math].

By Markov's inequality, [math]\displaystyle{ \Pr[X\ge 1]\le \mathbf{E}[X]=\Theta(n^kp^{\ell}) }[/math], which is [math]\displaystyle{ o(1) }[/math] when [math]\displaystyle{ p\ll n^{-k/\ell} }[/math].

By Chebyshev's inequality, [math]\displaystyle{ \Pr[X=0]\le \frac{\mathbf{Var}[X]}{\mathbf{E}[X]^2} }[/math] where

[math]\displaystyle{ \mathbf{Var}[X]=\sum_{S\in{V\choose k}}\mathbf{Var}[X_S]+\sum_{S\neq T}\mathbf{Cov}(X_S,X_T) }[/math].

The first term [math]\displaystyle{ \sum_{S\in{V\choose k}}\mathbf{Var}[X_S]\le \sum_{S\in{V\choose k}}\mathbf{E}[X_S^2]= \sum_{S\in{V\choose k}}\mathbf{E}[X_S]=\mathbf{E}[X]=\Theta(n^kp^{\ell}) }[/math].

For the covariances, [math]\displaystyle{ \mathbf{Cov}(X_S,X_T)\neq 0 }[/math] only if [math]\displaystyle{ |S\cap T|=i }[/math] for [math]\displaystyle{ 2\le i\le k-1 }[/math]. Note that [math]\displaystyle{ |S\cap T|=i }[/math] implies that [math]\displaystyle{ |S\cup T|=2k-i }[/math]. And for balanced [math]\displaystyle{ H }[/math], the number of edges of interest in [math]\displaystyle{ S }[/math] and [math]\displaystyle{ T }[/math] is [math]\displaystyle{ 2\ell-i\rho(H_{S\cap T})\ge 2\ell-i\rho(H)=2\ell-i\ell/k }[/math]. Thus, [math]\displaystyle{ \mathbf{Cov}(X_S,X_T)\le\mathbf{E}[X_SX_T]\le p^{2\ell-i\ell/k} }[/math]. And,

[math]\displaystyle{ \sum_{S\neq T}\mathbf{Cov}(X_S,X_T)=\sum_{i=2}^{k-1}O(n^{2k-i}p^{2\ell-i\ell/k}) }[/math]

Therefore, when [math]\displaystyle{ p\gg n^{-k/\ell} }[/math],

[math]\displaystyle{ \Pr[X=0]\le \frac{\mathbf{Var}[X]}{\mathbf{E}[X]^2}\le \frac{\Theta(n^kp^{\ell})+\sum_{i=2}^{k-1}O(n^{2k-i}p^{2\ell-i\ell/k})}{\Theta(n^{2k}p^{2\ell})}=\Theta(n^{-k}p^{-\ell})+\sum_{i=2}^{k-1}O(n^{-i}p^{-i\ell/k})=o(1) }[/math].
[math]\displaystyle{ \square }[/math]

Small World Networks

Read the introduction of Kleinberg's paper (listed in the references).

References

(Disclaimer: The following copyrighted materials are meant for educational use only.)

  • Diestel. Graph Theory, 2nd Edition. Springer-Verlag, 2000. Chapter 11.
  • Alon and Spencer. The Probabilistic Method, 3rd Edition. Wiley, 2008. "The Probabilistic Lens: High Girth and High Chromatic Number", and Chapter 4.
  • J. Kleinberg. The small-world phenomenon: An algorithmic perspective. Proc. 32nd ACM Symposium on Theory of Computing (STOC), 2000.