Randomized Algorithms (Spring 2013): Threshold and Concentration, and Combinatorics (Spring 2013): Cayley's Formula
= Erdős–Rényi Random Graphs =
Consider a graph <math>G(V,E)</math> that is generated randomly as follows:
* <math>|V|=n</math>;
* <math>\forall \{u,v\}\in{V\choose 2}</math>, <math>uv\in E</math> independently with probability <math>p</math>.


Such a graph is denoted as '''<math>G(n,p)</math>'''. This is called the '''Erdős–Rényi model''' or '''<math>G(n,p)</math> model''' for random graphs.

Informally, the presence of each edge of <math>G(n,p)</math> is determined by an independent coin flip (with probability <math>p</math> of HEADS).
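To make the model concrete, here is a minimal Python sketch (ours, not part of the original notes; the name <code>sample_gnp</code> is hypothetical) that samples a graph from <math>G(n,p)</math> by flipping one independent biased coin per vertex pair:
<pre>
import random

def sample_gnp(n, p):
    """Sample an Erdos-Renyi graph G(n,p): each of the C(n,2)
    possible edges appears independently with probability p."""
    edges = set()
    for u in range(n):
        for v in range(u + 1, n):
            if random.random() < p:   # one independent coin flip per pair
                edges.add((u, v))
    return edges

print(sample_gnp(10, 0.3))   # a sparse random graph on 10 vertices
</pre>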
 
== Monotone properties ==
A graph property is a predicate of graphs that depends only on the structure of the graph.
{{Theorem|Definition|
:Let <math>\mathcal{G}_n=2^{V\choose 2}</math>, where <math>|V|=n</math>, be the set of all possible graphs on <math>n</math> vertices. A '''graph property''' is a boolean function <math>P:\mathcal{G}_n\rightarrow\{0,1\}</math> which is invariant under permutation of vertices, i.e. <math>P(G)=P(H)</math> whenever <math>G</math> is isomorphic to <math>H</math>.
}}
 
We are interested in the monotone properties, i.e., those properties which are preserved under the addition of edges: adding edges cannot change a graph from having the property to not having it.
{{Theorem|Definition|
:A graph property <math>P</math> is '''monotone''' if for any <math>G\subseteq H</math>, both on <math>n</math> vertices, <math>G</math> having property <math>P</math> implies <math>H</math> having property <math>P</math>.
}}
By viewing the property as a function mapping a set of edges to a value in <math>\{0,1\}</math>, a monotone property is just a monotonically increasing set function.


Some examples of monotone graph properties:
* Hamiltonian;
* <math>k</math>-clique;
* contains a subgraph isomorphic to some <math>H</math>;
* non-planar;
* chromatic number <math>>k</math> (i.e., not <math>k</math>-colorable);
* girth <math><\ell</math>.
From the last two properties, you can see another reason that the Erdős theorem (that there exist graphs with arbitrarily large girth and arbitrarily large chromatic number) is unintuitive.


Some examples of '''non-'''monotone graph properties:
* Eulerian;
* contains an ''induced'' subgraph isomorphic to some <math>H</math>;
 
For all monotone graph properties, we have the following theorem.
{{Theorem|Theorem|
:Let <math>P</math> be a monotone graph property. Suppose <math>G_1=G(n,p_1)</math>, <math>G_2=G(n,p_2)</math>, and <math>0\le p_1\le p_2\le 1</math>. Then
::<math>\Pr[P(G_1)]\le \Pr[P(G_2)]</math>.
}}
Although the statement in the theorem looks very natural, it is difficult to evaluate the probability that a random graph has some property directly. However, the theorem can be very easily proved by using the idea of [http://en.wikipedia.org/wiki/Coupling_(probability) coupling], a proof technique in probability theory which compares two unrelated random variables by forcing them to be related.
{{Proof|
For any <math>\{u,v\}\in{[n]\choose 2}</math>, let <math>X_{\{u,v\}}</math> be independently and uniformly distributed over the continuous interval <math>[0,1]</math>.  Let <math>uv\in G_1</math> if and only if <math>X_{\{u,v\}}\in[0,p_1]</math> and let <math>uv\in G_2</math> if and only if <math>X_{\{u,v\}}\in[0,p_2]</math>.


It is obvious that <math>G_1\sim G(n,p_1)\,</math> and <math>G_2\sim G(n,p_2)\,</math>. For any <math>\{u,v\}</math>, <math>uv\in G_1</math> means that <math>X_{\{u,v\}}\in[0,p_1]\subseteq [0,p_2]</math>, which implies that <math>uv\in G_2</math>. Thus, <math>G_1\subseteq G_2</math>.


Since <math>P</math> is monotone, <math>P(G_1)=1</math> implies <math>P(G_2)=1</math>. Thus,
:<math>\Pr[P(G_1)=1]\le \Pr[P(G_2)=1]</math>.
}}
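The coupling in this proof is directly executable. The following sketch (ours; <code>coupled_gnp</code> is a hypothetical name) draws one uniform <math>X_{\{u,v\}}</math> per vertex pair and thresholds it at <math>p_1</math> and <math>p_2</math>, so that <math>G_1\subseteq G_2</math> holds on every run:
<pre>
import random

def coupled_gnp(n, p1, p2):
    """Couple G(n,p1) and G(n,p2) on one uniform value per vertex pair."""
    g1, g2 = set(), set()
    for u in range(n):
        for v in range(u + 1, n):
            x = random.random()        # X_{u,v}, uniform over [0,1]
            if x < p1:
                g1.add((u, v))         # edge of G1 iff X in [0,p1]
            if x < p2:
                g2.add((u, v))         # edge of G2 iff X in [0,p2]
    return g1, g2

g1, g2 = coupled_gnp(20, 0.2, 0.5)
assert g1 <= g2                        # G1 is always a subgraph of G2
</pre>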


== Threshold phenomenon ==
One of the most fascinating phenomena of random graphs is that for many natural graph properties, the random graph <math>G(n,p)</math> suddenly changes from almost always not having the property to almost always having the property as <math>p</math> grows within a very small range.
A monotone graph property <math>P</math> is said to have the '''threshold''' <math>p(n)</math> if
* when <math>p\ll p(n)</math>, <math>\Pr[P(G(n,p))]\rightarrow 0</math> as <math>n\rightarrow\infty</math> (in this case we say <math>G(n,p)</math> almost always does not have <math>P</math>); and
* when <math>p\gg p(n)</math>, <math>\Pr[P(G(n,p))]\rightarrow 1</math> as <math>n\rightarrow\infty</math> (<math>G(n,p)</math> almost always has <math>P</math>).
 
The classic method for proving the threshold is the so-called second moment method (Chebyshev's inequality).
=== Threshold for 4-clique ===
{{Theorem|Theorem|
:The threshold for a random graph <math>G(n,p)</math> to contain a 4-clique is <math>p=n^{-2/3}</math>.
}}
We formulate the problem as follows.
For any <math>4</math>-subset of vertices <math>S\in{V\choose 4}</math>, let <math>X_S</math> be the indicator random variable such that
:<math>
X_S=
\begin{cases}
1 & S\mbox{ is a clique},\\
0 &  \mbox{otherwise}.
\end{cases}
</math>
Let <math>X=\sum_{S\in{V\choose 4}}X_S</math> be the total number of 4-cliques in <math>G</math>.


It is sufficient to prove the following lemma.
{{Theorem|Lemma|
*If <math>p=o(n^{-2/3})</math>, then <math>\Pr[X\ge 1]\rightarrow 0</math> as <math>n\rightarrow\infty</math>.
*If <math>p=\omega(n^{-2/3})</math>, then <math>\Pr[X\ge 1]\rightarrow 1</math> as <math>n\rightarrow\infty</math>.
}}
{{Proof|
The first claim is proved by the first moment method (expectation and Markov's inequality) and the second claim is proved by the second moment method (Chebyshev's inequality).
 
Every 4-clique has 6 edges, thus for any <math>S\in{V\choose 4}</math>,
:<math>\mathbf{E}[X_S]=\Pr[X_S=1]=p^6</math>.
By the linearity of expectation,
:<math>\mathbf{E}[X]=\sum_{S\in{V\choose 4}}\mathbf{E}[X_S]={n\choose 4}p^6</math>.
Applying Markov's inequality
:<math>\Pr[X\ge 1]\le \mathbf{E}[X]=O(n^4p^6)=o(1)</math>, if <math>p=o(n^{-2/3})</math>.
The first claim is proved.
 
To prove the second claim, it suffices to show that <math>\Pr[X=0]=o(1)</math> if <math>p=\omega(n^{-2/3})</math>. By Chebyshev's inequality,
:<math>\Pr[X=0]\le\Pr[|X-\mathbf{E}[X]|\ge\mathbf{E}[X]]\le\frac{\mathbf{Var}[X]}{(\mathbf{E}[X])^2}</math>,
where the variance is computed as
:<math>\mathbf{Var}[X]=\mathbf{Var}\left[\sum_{S\in{V\choose 4}}X_S\right]=\sum_{S\in{V\choose 4}}\mathbf{Var}[X_S]+\sum_{S,T\in{V\choose 4}, S\neq T}\mathbf{Cov}(X_S,X_T)</math>.
For any <math>S\in{V\choose 4}</math>,
:<math>\mathbf{Var}[X_S]=\mathbf{E}[X_S^2]-\mathbf{E}[X_S]^2\le \mathbf{E}[X_S^2]=\mathbf{E}[X_S]=p^6</math>.
Thus the first term of the above formula is <math>\sum_{S\in{V\choose 4}}\mathbf{Var}[X_S]=O(n^4p^6)</math>.
 
We now compute the covariances. For any <math>S,T\in{V\choose 4}</math> that <math>S\neq T</math>:
* Case.1: <math>|S\cap T|\le 1</math>, so <math>S</math> and <math>T</math> do not share any edges. <math>X_S</math> and <math>X_T</math> are independent, thus <math>\mathbf{Cov}(X_S,X_T)=0</math>.
* Case.2: <math>|S\cap T|= 2</math>, so <math>S</math> and <math>T</math> share an edge. Since <math>|S\cup T|=6</math>, there are <math>{n\choose 6}=O(n^6)</math> pairs of such <math>S</math> and <math>T</math>.
::<math>\mathbf{Cov}(X_S,X_T)=\mathbf{E}[X_SX_T]-\mathbf{E}[X_S]\mathbf{E}[X_T]\le\mathbf{E}[X_SX_T]=\Pr[X_S=1\wedge X_T=1]=p^{11}</math>
:since there are 11 edges in the union of two 4-cliques that share a common edge. The contribution of these pairs is <math>O(n^6p^{11})</math>.
* Case.3: <math>|S\cap T|= 3</math>, so <math>S</math> and <math>T</math> share a triangle. Since <math>|S\cup T|=5</math>, there are <math>{n\choose 5}=O(n^5)</math> pairs of such <math>S</math> and <math>T</math>. By the same argument,
::<math>\mathbf{Cov}(X_S,X_T)\le\Pr[X_S=1\wedge X_T=1]=p^{9}</math>
:since there are 9 edges in the union of two 4-cliques that share a triangle. The contribution of these pairs is <math>O(n^5p^{9})</math>.
Putting all these together,
:<math>\mathbf{Var}[X]=O(n^4p^6+n^6p^{11}+n^5p^{9}).</math>
And
:<math>\Pr[X=0]\le\frac{\mathbf{Var}[X]}{(\mathbf{E}[X])^2}=O(n^{-4}p^{-6}+n^{-2}p^{-1}+n^{-3}p^{-3})</math>,
which is <math>o(1)</math> if <math>p=\omega(n^{-2/3})</math>. The second claim is also proved.
}}
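The threshold can be observed empirically. Below is a rough Monte Carlo sketch of ours (brute-force clique testing, so only feasible for small <math>n</math>): for <math>p=c\cdot n^{-2/3}</math>, the estimated probability of containing a 4-clique should move from near 0 to near 1 as the constant <math>c</math> grows.
<pre>
import random
from itertools import combinations

def has_4clique(n, p):
    """Sample G(n,p) and test for a 4-clique by brute force."""
    adj = [[False] * n for _ in range(n)]
    for u, v in combinations(range(n), 2):
        if random.random() < p:
            adj[u][v] = adj[v][u] = True
    return any(all(adj[a][b] for a, b in combinations(S, 2))
               for S in combinations(range(n), 4))

n, trials = 20, 200
for c in (0.3, 1.0, 3.0):
    p = c * n ** (-2 / 3)
    freq = sum(has_4clique(n, p) for _ in range(trials)) / trials
    print(f"c = {c}: estimated Pr[4-clique] ~ {freq:.2f}")
</pre>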


=== Threshold for balanced subgraphs ===
The above theorem can be generalized to any "balanced" subgraph.
{{Theorem|Definition|
* The '''density''' of a graph <math>G(V,E)</math>, denoted <math>\rho(G)\,</math>, is defined as <math>\rho(G)=\frac{|E|}{|V|}</math>.
* A graph <math>G(V,E)</math> is '''balanced''' if <math>\rho(H)\le \rho(G)</math> for all subgraphs <math>H</math> of <math>G</math>.
}}
Cliques are balanced, because <math>\frac{{k\choose 2}}{k}\le \frac{{n\choose 2}}{n}</math> for any <math>k\le n</math>. The threshold for 4-clique is a direct corollary of the following general theorem.


{{Theorem|Theorem (Erdős–Rényi 1960)|
:Let <math>H</math> be a balanced graph with <math>k</math> vertices and <math>\ell</math> edges. The threshold for the property that a random graph <math>G(n,p)</math> contains a (not necessarily induced) subgraph isomorphic to <math>H</math> is <math>p=n^{-k/\ell}\,</math>.
}}
{{Prooftitle|Sketch of proof.|
For any <math>S\in{V\choose k}</math>, let <math>X_S</math> indicate whether <math>G_S</math> (the subgraph of <math>G</math> induced by <math>S</math>) contains a subgraph isomorphic to <math>H</math>. Then
:<math>p^{\ell}\le\mathbf{E}[X_S]\le k!p^{\ell}</math>, since there are at most <math>k!</math> ways to match the substructure.
Note that <math>k</math> does not depend on <math>n</math>. Thus, <math>\mathbf{E}[X_S]=\Theta(p^{\ell})</math>. Let <math>X=\sum_{S\in{V\choose k}}X_S</math> be the number of <math>H</math>-subgraphs.
:<math>\mathbf{E}[X]=\Theta(n^kp^{\ell})</math>.
 
By Markov's inequality, <math>\Pr[X\ge 1]\le \mathbf{E}[X]=\Theta(n^kp^{\ell})</math> which is <math>o(1)</math> when <math>p\ll n^{-\ell/k}</math>.
 
By Chebyshev's inequality, <math>\Pr[X=0]\le \frac{\mathbf{Var}[X]}{\mathbf{E}[X]^2}</math> where
:<math>\mathbf{Var}[X]=\sum_{S\in{V\choose k}}\mathbf{Var}[X_S]+\sum_{S\neq T}\mathbf{Cov}(X_S,X_T)</math>.
The first term <math>\sum_{S\in{V\choose k}}\mathbf{Var}[X_S]\le \sum_{S\in{V\choose k}}\mathbf{E}[X_S^2]= \sum_{S\in{V\choose k}}\mathbf{E}[X_S]=\mathbf{E}[X]=\Theta(n^kp^{\ell})</math>.
 
For the covariances, <math>\mathbf{Cov}(X_S,X_T)\neq 0</math> only if <math>|S\cap T|=i</math> for <math>2\le i\le k-1</math>. Note that <math>|S\cap T|=i</math> implies that <math>|S\cup T|=2k-i</math>. And for balanced <math>H</math>, the number of edges of interest in <math>S</math> and <math>T</math> is <math>2\ell-i\rho(H_{S\cap T})\ge 2\ell-i\rho(H)=2\ell-i\ell/k</math>. Thus, <math>\mathbf{Cov}(X_S,X_T)\le\mathbf{E}[X_SX_T]\le p^{2\ell-i\ell/k}</math>. And,
 
:<math>\sum_{S\neq T}\mathbf{Cov}(X_S,X_T)=\sum_{i=2}^{k-1}O(n^{2k-i}p^{2\ell-i\ell/k})</math>
Therefore, when <math>p\gg n^{-\ell/k}</math>,
:<math>
\Pr[X=0]\le \frac{\mathbf{Var}[X]}{\mathbf{E}[X]^2}\le \frac{\Theta(n^kp^{\ell})+\sum_{i=2}^{k-1}O(n^{2k-i}p^{2\ell-i\ell/k})}{\Theta(n^{2k}p^{2\ell})}=\Theta(n^{-k}p^{-\ell})+\sum_{i=2}^{k-1}O(n^{-i}p^{-i\ell/k})=o(1)</math>.
}}
 
 
= Chernoff Bound =
 
Suppose that we have a fair coin. If we toss it once, then the outcome is completely unpredictable. But if we toss it, say, 1000 times, then the number of HEADS is very likely to be around 500. This striking phenomenon, illustrated in the figure on the right, is called '''concentration'''. The Chernoff bound captures the concentration of independent trials.
 
[[File:Coinflip.png|border|450px|right]]
 
The Chernoff bound is also a tail bound for the sum of independent random variables, and it can give ''exponentially'' sharp bounds.

Before proving the Chernoff bound, we should first talk about moment generating functions.
 
= Moment generating functions =
The more we know about the moments of a random variable <math>X</math>, the more information we would have about <math>X</math>. There is a so-called '''moment generating function''', which "packs" all the information about the moments of <math>X</math> into one function.


{{Theorem
|Definition|
:The moment generating function of a random variable <math>X</math> is defined as <math>\mathbf{E}\left[\mathrm{e}^{\lambda X}\right]</math>, where <math>\lambda</math> is the parameter of the function.
}}


By Taylor's expansion and the linearity of expectation,
:<math>\begin{align}
\mathbf{E}\left[\mathrm{e}^{\lambda X}\right]
&=
\mathbf{E}\left[\sum_{k=0}^\infty\frac{\lambda^k}{k!}X^k\right]\\
&=\sum_{k=0}^\infty\frac{\lambda^k}{k!}\mathbf{E}\left[X^k\right]
\end{align}</math>

The moment generating function <math>\mathbf{E}\left[\mathrm{e}^{\lambda X}\right]</math> is a function of <math>\lambda</math>, and by the above expansion its derivatives at <math>\lambda=0</math> give all the moments of <math>X</math>.
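As a quick sanity check (ours, plain standard library), the empirical moment generating function of a Bernoulli(<math>p</math>) variable can be compared against the closed form <math>1+p(e^\lambda-1)</math> that will be used in the Chernoff proof below:
<pre>
import math
import random

p, lam, samples = 0.3, 0.7, 200_000
empirical = sum(math.exp(lam * (random.random() < p))
                for _ in range(samples)) / samples
exact = 1 + p * (math.exp(lam) - 1)   # E[e^{lam*X}] for X ~ Bernoulli(p)
print(empirical, exact)               # the two values should be close
</pre>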
= The Chernoff bound =
The Chernoff bounds are exponentially sharp tail inequalities for the sum of independent trials.
The bounds are obtained by applying Markov's inequality to the moment generating function of the sum of independent trials, with some  appropriate choice of the parameter <math>\lambda</math>.
{{Theorem
|Chernoff bound (the upper tail)|
:Let  <math>X=\sum_{i=1}^n X_i</math>, where <math>X_1, X_2, \ldots, X_n</math> are independent Poisson trials. Let <math>\mu=\mathbf{E}[X]</math>.
:Then for any <math>\delta>0</math>,
::<math>\Pr[X\ge (1+\delta)\mu]\le\left(\frac{e^{\delta}}{(1+\delta)^{(1+\delta)}}\right)^{\mu}.</math>
}}
{{Proof| For any <math>\lambda>0</math>, <math>X\ge (1+\delta)\mu</math> is equivalent to <math>e^{\lambda X}\ge e^{\lambda (1+\delta)\mu}</math>, thus
:<math>\begin{align}
\Pr[X\ge (1+\delta)\mu]
&=
\Pr\left[e^{\lambda X}\ge e^{\lambda (1+\delta)\mu}\right]\\
&\le
\frac{\mathbf{E}\left[e^{\lambda X}\right]}{e^{\lambda (1+\delta)\mu}},
\end{align}</math>
where the last step follows by Markov's inequality.


Computing the moment generating function <math>\mathbf{E}[e^{\lambda X}]</math>:
:<math>\begin{align}
\mathbf{E}\left[e^{\lambda X}\right]
&=
\mathbf{E}\left[e^{\lambda \sum_{i=1}^n X_i}\right]\\
&=
\mathbf{E}\left[\prod_{i=1}^n e^{\lambda X_i}\right]\\
&=
\prod_{i=1}^n \mathbf{E}\left[e^{\lambda X_i}\right].
& (\mbox{for independent random variables})
\end{align}</math>


Let <math>p_i=\Pr[X_i=1]</math> for <math>i=1,2,\ldots,n</math>. Then,
:<math>\mu=\mathbf{E}[X]=\mathbf{E}\left[\sum_{i=1}^n X_i\right]=\sum_{i=1}^n\mathbf{E}[X_i]=\sum_{i=1}^n p_i</math>.


We bound the moment generating function for each individual <math>X_i</math> as follows.
:<math>\begin{align}
\mathbf{E}\left[e^{\lambda X_i}\right]
&=
p_i\cdot e^{\lambda\cdot 1}+(1-p_i)\cdot e^{\lambda\cdot 0}\\
&=
1+p_i(e^\lambda -1)\\
&\le
e^{p_i(e^\lambda-1)},
\end{align}</math>
where in the last step we apply the inequality <math>e^y\ge 1+y</math> (from the Taylor expansion of <math>e^y</math>) with <math>y=p_i(e^\lambda-1)\ge 0</math>. (By doing this, we can transform the product into a sum of the <math>p_i</math>, which is <math>\mu</math>.)


Therefore,
:<math>\begin{align}
\mathbf{E}\left[e^{\lambda X}\right]
&=
\prod_{i=1}^n \mathbf{E}\left[e^{\lambda X_i}\right]\\
&\le
\prod_{i=1}^n e^{p_i(e^\lambda-1)}\\
&=
\exp\left(\sum_{i=1}^n p_i(e^{\lambda}-1)\right)\\
&=
e^{(e^\lambda-1)\mu}.
\end{align}</math>
Thus, we have shown that for any <math>\lambda>0</math>,
:<math>\begin{align}
\Pr[X\ge (1+\delta)\mu]  
&\le
\frac{\mathbf{E}\left[e^{\lambda X}\right]}{e^{\lambda (1+\delta)\mu}}\\
&\le
\frac{e^{(e^\lambda-1)\mu}}{e^{\lambda (1+\delta)\mu}}\\
&=
\left(\frac{e^{(e^\lambda-1)}}{e^{\lambda (1+\delta)}}\right)^\mu
\end{align}</math>.
For any <math>\delta>0</math>, we can let <math>\lambda=\ln(1+\delta)>0</math> to get
:<math>\Pr[X\ge (1+\delta)\mu]\le\left(\frac{e^{\delta}}{(1+\delta)^{(1+\delta)}}\right)^{\mu}.</math>
}}


The idea of the proof is actually quite clear: we apply Markov's inequality to <math>e^{\lambda X}</math>, and for the rest we just estimate the moment generating function <math>\mathbf{E}[e^{\lambda X}]</math>. To make the bound as tight as possible, we minimize <math>\frac{e^{(e^\lambda-1)}}{e^{\lambda (1+\delta)}}</math> by setting <math>\lambda=\ln(1+\delta)</math>, which can be justified by taking the derivative of <math>\frac{e^{(e^\lambda-1)}}{e^{\lambda (1+\delta)}}</math>.
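To see how the bound behaves numerically, here is a small experiment of ours comparing the empirical upper tail of a sum of fair coin flips against the bound just proved (the bound is valid but, being worst-case, not tight):
<pre>
import math
import random

n, p, delta, trials = 1000, 0.5, 0.1, 5000
mu = n * p
hits = sum(sum(random.random() < p for _ in range(n)) >= (1 + delta) * mu
           for _ in range(trials))
chernoff = (math.exp(delta) / (1 + delta) ** (1 + delta)) ** mu
print(f"empirical tail ~ {hits / trials:.4f}, Chernoff bound = {chernoff:.4f}")
</pre>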


----


We then proceed to the lower tail, the probability that the random variable deviates below the mean value:


{{Theorem
|Chernoff bound (the lower tail)|
:Let  <math>X=\sum_{i=1}^n X_i</math>, where <math>X_1, X_2, \ldots, X_n</math> are independent Poisson trials. Let <math>\mu=\mathbf{E}[X]</math>.
:Then for any <math>0<\delta<1</math>,
::<math>\Pr[X\le (1-\delta)\mu]\le\left(\frac{e^{-\delta}}{(1-\delta)^{(1-\delta)}}\right)^{\mu}.</math>
}}
{{Proof| For any <math>\lambda<0</math>, by the same analysis as in the upper tail version,
:<math>\begin{align}
\Pr[X\le (1-\delta)\mu]
&=
\Pr\left[e^{\lambda X}\ge e^{\lambda (1-\delta)\mu}\right]\\
&\le
\frac{\mathbf{E}\left[e^{\lambda X}\right]}{e^{\lambda (1-\delta)\mu}}\\
&\le
\left(\frac{e^{(e^\lambda-1)}}{e^{\lambda (1-\delta)}}\right)^\mu.
\end{align}</math>  
For any <math>0<\delta<1</math>, we can let <math>\lambda=\ln(1-\delta)<0</math> to get
:<math>\Pr[X\le (1-\delta)\mu]\le\left(\frac{e^{-\delta}}{(1-\delta)^{(1-\delta)}}\right)^{\mu}.</math>
}}


----


Some useful special forms of the bounds can be derived directly from the above general forms. From them we can see why the bounds are said to be exponentially sharp.


{{Theorem
|Useful forms of the Chernoff bound|
:Let <math>X=\sum_{i=1}^n X_i</math>, where <math>X_1, X_2, \ldots, X_n</math> are independent Poisson trials. Let <math>\mu=\mathbf{E}[X]</math>. Then
:1. for <math>0<\delta\le 1</math>,
::<math>\Pr[X\ge (1+\delta)\mu]<\exp\left(-\frac{\mu\delta^2}{3}\right);</math>
::<math>\Pr[X\le (1-\delta)\mu]<\exp\left(-\frac{\mu\delta^2}{2}\right);</math>
:2. for <math>t\ge 2e\mu</math>,
::<math>\Pr[X\ge t]\le 2^{-t}.</math>
}}
{{Proof| To obtain the bounds in (1), we need to show that for <math>0<\delta< 1</math>, <math>\frac{e^{\delta}}{(1+\delta)^{(1+\delta)}}\le e^{-\delta^2/3}</math> and <math>\frac{e^{-\delta}}{(1-\delta)^{(1-\delta)}}\le e^{-\delta^2/2}</math>. We can verify both inequalities by standard analysis techniques.


To obtain the bound in (2), let <math>t=(1+\delta)\mu</math>. Then <math>\delta=t/\mu-1\ge 2e-1</math>. Hence,
:<math>\begin{align}
\Pr[X\ge(1+\delta)\mu]
&\le
\left(\frac{e^\delta}{(1+\delta)^{(1+\delta)}}\right)^\mu\\
&\le
\left(\frac{e}{1+\delta}\right)^{(1+\delta)\mu}\\
&\le
\left(\frac{e}{2e}\right)^t\\
&\le
2^{-t}
\end{align}</math>
}}


= Balls into bins, revisited =
Throwing <math>m</math> balls uniformly and independently into <math>n</math> bins, what is the maximum load of all bins with high probability? In the last class, we gave an analysis of this problem by using a counting argument.
 
Now we give a more "advanced" analysis by using Chernoff bounds.
 
 
For any <math>i\in[n]</math> and <math>j\in[m]</math>, let <math>X_{ij}</math> be the indicator variable for the event that ball <math>j</math> is thrown to bin <math>i</math>. Obviously,
:<math>\mathbf{E}[X_{ij}]=\Pr[\mbox{ball }j\mbox{ is thrown to bin }i]=\frac{1}{n}</math>.
Let <math>Y_i=\sum_{j\in[m]}X_{ij}</math> be the load of bin <math>i</math>.
 
 
Then the expected load of bin <math>i</math> is
 
<math>(*)\qquad  \mu=\mathbf{E}[Y_i]=\mathbf{E}\left[\sum_{j\in[m]}X_{ij}\right]=\sum_{j\in[m]}\mathbf{E}[X_{ij}]=m/n.  </math>
 
For the case <math>m=n</math>, it holds that <math>\mu=1</math>.
 
Note that <math>Y_i</math> is a sum of <math>m</math> mutually independent indicator variables. Applying the Chernoff bound, for any particular bin <math>i\in[n]</math>,
:<math>
\Pr[Y_i>(1+\delta)\mu] \le \left(\frac{e^{\delta}}{(1+\delta)^{1+\delta}}\right)^\mu.
</math>
 
== When <math>m=n</math> ==
 
When <math>m=n</math>, <math>\mu=1</math>. Write <math>c=1+\delta</math>. The above bound can be written as
:<math>
\Pr[Y_i>c] \le \frac{e^{c-1}}{c^c}.
</math>
 
Let <math>c=\frac{e\ln n}{\ln\ln n}</math>. We evaluate <math>\frac{e^{c-1}}{c^c}</math> by taking the logarithm of its reciprocal (assuming <math>n</math> is sufficiently large):
:<math>
\begin{align}
\ln\left(\frac{c^c}{e^{c-1}}\right)
&=
c\ln c-c+1\\
&=
c(\ln c-1)+1\\
&=
\frac{e\ln n}{\ln\ln n}\left(\ln\ln n-\ln\ln\ln n\right)+1\\
&\ge
\frac{e\ln n}{\ln\ln n}\cdot\frac{2}{e}\ln\ln n+1\\
&\ge
2\ln n.
\end{align}
</math>
Thus,
:<math>
\Pr\left[Y_i>\frac{e\ln n}{\ln\ln n}\right] \le \frac{1}{n^2}.
</math>
 
Applying the union bound, the probability that there exists a bin with load <math>>\frac{e\ln n}{\ln\ln n}</math> is
:<math>n\cdot \Pr\left[Y_1>\frac{e\ln n}{\ln\ln n}\right] \le \frac{1}{n}</math>.
Therefore, for <math>m=n</math>, with high probability, the maximum load is <math>O\left(\frac{e\ln n}{\ln\ln n}\right)</math>.
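A simulation sketch of ours for the <math>m=n</math> case; the observed maximum load tracks <math>\frac{e\ln n}{\ln\ln n}</math> reasonably well:
<pre>
import math
import random
from collections import Counter

def max_load(m, n):
    """Throw m balls into n bins uniformly at random; return the max load."""
    return max(Counter(random.randrange(n) for _ in range(m)).values())

n = 10_000
bound = math.e * math.log(n) / math.log(math.log(n))
print("max load:", max_load(n, n), " e*ln(n)/lnln(n) ~", round(bound, 1))
</pre>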
 
== For larger <math>m</math> ==
When <math>m\ge n\ln n</math>, according to <math>(*)</math>, <math>\mu=\frac{m}{n}\ge \ln n</math>.
 
We can apply the easier form (2) of the Chernoff bound:
:<math>
\Pr[Y_i\ge 2e\mu]\le 2^{-2e\mu}\le 2^{-2e\ln n}<\frac{1}{n^2}.
</math>
By the union bound, the probability that there exists a bin with load <math>\ge 2e\frac{m}{n}</math> is,
:<math>n\cdot \Pr\left[Y_1>2e\frac{m}{n}\right] = n\cdot \Pr\left[Y_1>2e\mu\right]\le \frac{1}{n}</math>.
Therefore, for <math>m\ge n\ln n</math>, with high probability, the maximum load is <math>O\left(\frac{m}{n}\right)</math>.


= Cayley's Formula =
We now present a theorem on the number of labeled trees on a fixed number of vertices. It is due to [http://en.wikipedia.org/wiki/Arthur_Cayley Cayley] in 1889. The theorem is often referred to by the name [http://en.wikipedia.org/wiki/Cayley's_formula Cayley's formula].

{{Theorem|Cayley's formula for trees|
:There are <math>n^{n-2}</math> different trees on <math>n</math> distinct vertices.
}}

The theorem has several proofs. Classical methods include a bijection which encodes a tree by a [http://en.wikipedia.org/wiki/Pr%C3%BCfer_sequence Prüfer code], a proof through [http://en.wikipedia.org/wiki/Kirchhoff's_matrix_tree_theorem Kirchhoff's matrix-tree theorem], and a proof by double counting.

== Prüfer code ==

The Prüfer code encodes a labeled tree as a sequence of labels. This gives a bijection between trees and tuples.

In a tree, the vertices of degree 1 are called leaves. It is easy to see that:
* each tree has at least two leaves; and
* after removing a leaf (along with the edge adjacent to it) from a tree, the resulting graph is still a tree.

The following algorithm transforms a tree <math>T</math> of <math>n</math> vertices <math>1,2,\ldots,n</math> to a tuple <math>(v_1,v_2,\ldots,v_{n-2})\in\{1,2,\ldots,n\}^{n-2}</math>.

{{Theorem|Prüfer code (encoder)|
:'''Input''': A tree <math>T</math> of <math>n</math> distinct vertices, labeled by <math>1,2,\ldots,n</math>.
:
:let <math>T_1=T</math>;
:for <math>i=1</math> to <math>n-1</math>, do
::let <math>u_i</math> be the leaf in <math>T_i</math> with the smallest label, and <math>v_i</math> be its neighbor;
::let <math>T_{i+1}</math> be the new tree obtained from deleting the leaf <math>u_i</math> from <math>T_i</math>;
:end
:return <math>(v_1,v_2,\ldots,v_{n-2})</math>;
}}
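The encoder is easy to implement with a heap holding the current leaves. A sketch of ours (the name <code>prufer_encode</code> is hypothetical), following the pseudocode above:
<pre>
import heapq

def prufer_encode(n, edges):
    """Prufer code of a tree on vertices 1..n: repeatedly delete the
    smallest-labeled leaf and record its neighbor."""
    adj = {v: set() for v in range(1, n + 1)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    leaves = [v for v in adj if len(adj[v]) == 1]
    heapq.heapify(leaves)
    code = []
    for _ in range(n - 2):            # v_{n-1} = n is not stored
        u = heapq.heappop(leaves)     # u_i, the smallest-labeled leaf
        v = adj[u].pop()              # v_i, its unique neighbor
        adj[v].discard(u)
        code.append(v)
        if len(adj[v]) == 1:          # v may have just become a leaf
            heapq.heappush(leaves, v)
    return code

# the star with center 5 encodes to (5,5,5)
print(prufer_encode(5, {(1, 5), (2, 5), (3, 5), (4, 5)}))
</pre>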

It is trivial to observe the following lemma:

{{Theorem|Lemma 1|
:For each <math>1\le i\le n-1</math>, <math>T_i</math> is a tree of <math>n-i+1</math> vertices. In particular, the vertices of <math>T_i</math> are <math>u_i,u_{i+1},\ldots,u_{n-1},v_{n-1}</math>, and the edges of <math>T_i</math> are precisely <math>\{u_j,v_j\}</math>, <math>i\le j\le n-1</math>.
}}

And there is a reason that we do not need to store <math>v_{n-1}</math> in the Prüfer code.

{{Theorem|Lemma 2|
:It always holds that <math>v_{n-1}=n</math>.
}}
{{Proof|
Every tree of at least two vertices has at least two leaves, so the leaf with the smallest label is never <math>n</math>. Thus each <math>u_i</math>, <math>1\le i\le n-1</math>, is not <math>n</math>, i.e. vertex <math>n</math> is never deleted. In particular, <math>n</math> is one of the two vertices of the final tree <math>T_{n-1}</math>, and since <math>u_{n-1}\neq n</math>, it must be that <math>v_{n-1}=n</math>.
}}

Lemmas 1 and 2 together imply that, given a Prüfer code <math>(v_1,v_2,\ldots,v_{n-2})</math>, the only remaining task to reconstruct the tree <math>T</math> is to figure out the <math>u_i</math>, <math>1\le i\le n-1</math>. The following lemma states how to obtain <math>u_i</math>, <math>1\le i\le n-1</math>, from a Prüfer code <math>(v_1,v_2,\ldots,v_{n-2})</math>.

{{Theorem|Lemma 3|
:For <math>i=1,2,\ldots,n-1</math>, <math>u_i</math> is the smallest element of <math>\{1,2,\ldots,n\}</math> not in <math>\{u_1,\ldots,u_{i-1}\}\cup\{v_i,\ldots,v_{n-1}\}</math>.
}}
{{Proof|
Note that <math>u_1,u_2,\ldots,u_{n-1},v_{n-1}</math> is a sequence of distinct vertices, because <math>u_1,u_2,\ldots,u_{n-1}</math> are deleted one by one from the tree, and <math>v_{n-1}=n</math> is never deleted. Thus, each vertex <math>v</math> appears among <math>u_1,u_2,\ldots,u_{n-1},v_{n-1}</math> exactly once. Also, each vertex <math>v</math> appears <math>\deg(v)</math> times among the edges <math>\{u_i,v_i\}</math>, <math>1\le i\le n-1</math>, where <math>\deg(v)</math> denotes the degree of vertex <math>v</math> in the original tree <math>T</math>. Therefore, each vertex <math>v</math> appears among <math>v_1,v_2,\ldots,v_{n-2}</math> exactly <math>\deg(v)-1</math> times.

Similarly, each vertex <math>v</math> of <math>T_i</math> appears among <math>v_i,v_{i+1},\ldots,v_{n-2}</math> exactly <math>\deg_i(v)-1</math> times, where <math>\deg_i(v)</math> is the degree of vertex <math>v</math> in the tree <math>T_i</math>. In particular, the leaves of <math>T_i</math> are not among <math>\{v_i,v_{i+1},\ldots,v_{n-2}\}</math>. Recall that the vertices of <math>T_i</math> are <math>u_i,u_{i+1},\ldots,u_{n-1},v_{n-1}</math>. Then the leaves of <math>T_i</math>, except possibly the vertex <math>n=v_{n-1}</math>, are exactly the elements of <math>\{1,2,\ldots,n\}</math> not in <math>\{u_1,\ldots,u_{i-1}\}\cup\{v_i,\ldots,v_{n-1}\}</math>. By definition of the Prüfer code, <math>u_i</math> is the leaf in <math>T_i</math> of smallest label, which is never <math>n</math>, hence the smallest element of <math>\{1,2,\ldots,n\}</math> not in <math>\{u_1,\ldots,u_{i-1}\}\cup\{v_i,\ldots,v_{n-1}\}</math>.
}}

Applying Lemma 3, we have the following decoder for the Prüfer code:

{{Theorem|Prüfer code (decoder)|
:'''Input''': A tuple <math>(v_1,v_2,\ldots,v_{n-2})\in\{1,2,\ldots,n\}^{n-2}</math>.
:
:let <math>T</math> be the empty graph on <math>n</math> vertices, and <math>v_{n-1}=n</math>;
:for <math>i=1</math> to <math>n-1</math>, do
::let <math>u_i</math> be the smallest label not in <math>\{u_1,\ldots,u_{i-1}\}\cup\{v_i,\ldots,v_{n-1}\}</math>;
::add an edge <math>\{u_i,v_i\}</math> to <math>T</math>;
:end
:return <math>T</math>;
}}
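A direct translation into Python (ours; quadratic time, which is fine for a sketch), together with a round trip through the hypothetical <code>prufer_encode</code> above:
<pre>
def prufer_decode(code):
    """Rebuild the tree from a Prufer code (v_1,...,v_{n-2}) on labels 1..n:
    u_i is the smallest label not used before and not pending in v_i..v_{n-1}."""
    n = len(code) + 2
    seq = list(code) + [n]                   # append v_{n-1} = n
    pending = {}                             # multiplicities of v_i..v_{n-1}
    for v in seq:
        pending[v] = pending.get(v, 0) + 1
    used, edges = set(), []
    for v in seq:
        u = min(x for x in range(1, n + 1)
                if x not in used and pending.get(x, 0) == 0)
        edges.append((u, v))
        used.add(u)
        pending[v] -= 1
        if pending[v] == 0:
            del pending[v]
    return edges

code = [3, 1, 4, 1, 5]
tree = prufer_decode(code)
assert prufer_encode(len(code) + 2, set(tree)) == code   # round trip
</pre>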

In other words, the encoding of trees to tuples by the Prüfer code is reversible, thus the mapping is injective (1-1). To see that it is also surjective, we need to show that for every possible <math>(v_1,v_2,\ldots,v_{n-2})\in\{1,2,\ldots,n\}^{n-2}</math>, the above decoder recovers a tree from it.

It is easy to see that the decoder always returns a graph of <math>n-1</math> edges on the <math>n</math> vertices. The only thing remaining to verify is that the returned graph has no cycle in it, which can be easily proved by a timeline argument (left as an exercise).

Therefore, the Prüfer code establishes a bijection between the set of trees on <math>n</math> distinct vertices and the tuples from <math>\{1,2,\ldots,n\}^{n-2}</math>. This proves Cayley's formula.
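For small <math>n</math> the bijection can be checked exhaustively: decoding all <math>n^{n-2}</math> tuples yields that many pairwise distinct trees. A brute-force check of ours, reusing <code>prufer_decode</code> from the sketch above:
<pre>
from itertools import product

n = 5
trees = {frozenset(frozenset(e) for e in prufer_decode(code))
         for code in product(range(1, n + 1), repeat=n - 2)}
print(len(trees), n ** (n - 2))   # both print 125
</pre>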

== Double counting ==

We now present a proof of Cayley's formula by double counting, which is regarded by [http://en.wikipedia.org/wiki/Proofs_from_THE_BOOK ''Proofs from THE BOOK''] as "the most beautiful of them all".

{{Prooftitle|Proof of Cayley's formula by double counting|
(Due to Pitman 1999)

Let <math>T_n</math> be the number of different trees defined on <math>n</math> distinct vertices.

A '''rooted tree''' is a tree with a special vertex. That is, one of the <math>n</math> vertices is marked as the "root" of the tree. A rooted tree defines a natural direction on all edges: an edge <math>uv</math> of the tree is directed from <math>u</math> to <math>v</math> if <math>u</math> comes before <math>v</math> along the unique path from the root.

We count the number of different ''sequences'' of directed edges that can be added to an empty graph on <math>n</math> vertices to form a ''rooted'' tree. Such a sequence can be counted in two ways:
# Starting with an unrooted tree, choose one of its vertices as the root, and fix a total order of the edges to specify the order in which they are added.
# Starting from an empty graph, add the edges one by one in steps.

In the first method, we pick one of the <math>T_n</math> unrooted trees, choose one of the <math>n</math> vertices as the root, and pick one of the <math>(n-1)!</math> total orders of the <math>n-1</math> edges. This gives us <math>T_n\cdot n\cdot(n-1)!=T_n\cdot n!</math> ways.

In the second method, we consider the number of choices in each step, and multiply the numbers of choices over all steps. This is done as follows.

Given a sequence of ''adding'' <math>n-1</math> edges to an empty graph to form a rooted tree, we reverse this sequence and get a sequence of ''removing'' edges one by one from the final rooted tree until no edge is left. We observe that:
* At first, we remove an edge from the rooted tree. Suppose that the root of the tree is <math>r</math>, and the removed directed edge is <math>(u,v)</math>. After removing <math>(u,v)</math>, the original rooted tree is disconnected into two rooted trees, one rooted at <math>r</math> and the other rooted at <math>v</math>.
* After removing <math>k-1</math> edges, there are <math>k</math> rooted trees. In the <math>k</math>th step, a directed edge <math>(u,v)</math> in the current forest is removed and the tree containing it is disconnected into two trees, one rooted at the old root of that tree, and the other rooted at <math>v</math>.

We now again reverse the above procedure, and consider the sequence of adding directed edges to an empty graph to form a rooted tree.

* At first, we have <math>n</math> rooted trees, each with 0 edges (<math>n</math> isolated vertices).
* After adding <math>n-k</math> edges, there are <math>k</math> rooted trees. Denote the directed edge added next by <math>(u,v)</math>. As observed above, <math>u</math> can be any one of the <math>n</math> vertices, but <math>v</math> must be the root of one of the <math>k</math> trees, except the tree which contains <math>u</math>. There are <math>n(k-1)</math> choices of such <math>(u,v)</math>.

Multiplying the numbers of choices in all steps, the number of sequences of adding directed edges to an empty graph to form a rooted tree is given by
:<math>\prod_{k=2}^nn(k-1)=n^{n-2}n!</math>.

By the principle of double counting, counting the same thing by two different methods yields the same result:

:<math>T_n\cdot n!=n^{n-2}\cdot n!</math>,
which gives that <math>T_n=n^{n-2}</math>.

}}
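The closing identity is easy to verify numerically; a small check of ours for the first few <math>n</math>:
<pre>
from math import factorial

def count_sequences(n):
    """Multiply the n*(k-1) choices over the steps k = 2, ..., n."""
    result = 1
    for k in range(2, n + 1):
        result *= n * (k - 1)
    return result

for n in range(2, 8):
    assert count_sequences(n) == n ** (n - 2) * factorial(n)
print("prod_{k=2}^n n(k-1) = n^{n-2} n!  verified for n = 2..7")
</pre>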

== Kirchhoff's Matrix-Tree Theorem ==

{To be added}