Combinatorics (Fall 2011) / The probabilistic method


The Probabilistic Method

The probabilistic method provides another way of proving the existence of objects: instead of explicitly constructing an object, we define a probability space of objects in which the probability is positive that a randomly selected object has the required property.

The basic principle of the probabilistic method is very simple, and can be stated in intuitive ways:

  • If an object chosen randomly from a universe satisfies a property with positive probability, then there must be an object in the universe that satisfies that property.
For example, for a ball (the object) randomly chosen from a box (the universe) of balls, if the probability that the chosen ball is blue (the property) is >0, then there must be a blue ball in the box.
  • Any random variable assumes at least one value that is no smaller than its expectation, and at least one value that is no greater than the expectation.
For example, if we know the average height of the students in the class is [math]\displaystyle{ \ell }[/math], then we know there is a student whose height is at least [math]\displaystyle{ \ell }[/math], and there is a student whose height is at most [math]\displaystyle{ \ell }[/math]; a quick numerical check of this principle follows below.
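
Here is a minimal Python sketch of this check (the sample data and all names are made up, purely for illustration): in any sample, the maximum is never below the mean and the minimum is never above it.

 import random
 # Heights of 30 hypothetical students (in cm); the data is made up.
 heights = [random.uniform(150, 190) for _ in range(30)]
 mean = sum(heights) / len(heights)
 # Some value is no smaller than the mean, and some value is no greater.
 assert max(heights) >= mean
 assert min(heights) <= mean
 print(f"mean={mean:.1f}, max={max(heights):.1f}, min={min(heights):.1f}")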

Although the idea of the probabilistic method is simple, it provides us with a powerful tool for existence proofs.

Ramsey number

Recall the Ramsey theorem which states that in a meeting of at least six people, there are either three people knowing each other or three people not knowing each other. In graph theoretical terms, this means that no matter how we color the edges of [math]\displaystyle{ K_6 }[/math] (the complete graph on six vertices), there must be a monochromatic [math]\displaystyle{ K_3 }[/math] (a triangle whose edges have the same color).

Generally, the Ramsey number [math]\displaystyle{ R(k,\ell) }[/math] is the smallest integer [math]\displaystyle{ n }[/math] such that in any two-coloring of the edges of a complete graph on [math]\displaystyle{ n }[/math] vertices [math]\displaystyle{ K_n }[/math] by red and blue, either there is a red [math]\displaystyle{ K_k }[/math] or there is a blue [math]\displaystyle{ K_\ell }[/math].

Ramsey showed in 1929 that [math]\displaystyle{ R(k,\ell) }[/math] is finite for any [math]\displaystyle{ k }[/math] and [math]\displaystyle{ \ell }[/math]. It is extremely hard to compute the exact value of [math]\displaystyle{ R(k,\ell) }[/math]. Here we give a lower bound of [math]\displaystyle{ R(k,k) }[/math] by the probabilistic method.

Theorem (Erdős 1947)
If [math]\displaystyle{ {n\choose k}\cdot 2^{1-{k\choose 2}}\lt 1 }[/math] then it is possible to color the edges of [math]\displaystyle{ K_n }[/math] with two colors so that there is no monochromatic [math]\displaystyle{ K_k }[/math] subgraph.
Proof.
Consider a random two-coloring of edges of [math]\displaystyle{ K_n }[/math] obtained as follows:
  • For each edge of [math]\displaystyle{ K_n }[/math], independently flip a fair coin to decide the color of the edge.

For any fixed set [math]\displaystyle{ S }[/math] of [math]\displaystyle{ k }[/math] vertices, let [math]\displaystyle{ \mathcal{E}_S }[/math] be the event that the [math]\displaystyle{ K_k }[/math] subgraph induced by [math]\displaystyle{ S }[/math] is monochromatic. There are [math]\displaystyle{ {k\choose 2} }[/math] edges in [math]\displaystyle{ K_k }[/math], therefore

[math]\displaystyle{ \Pr[\mathcal{E}_S]=2\cdot 2^{-{k\choose 2}}=2^{1-{k\choose 2}}. }[/math]

Since there are [math]\displaystyle{ {n\choose k} }[/math] possible choices of [math]\displaystyle{ S }[/math], by the union bound

[math]\displaystyle{ \Pr[\exists S, \mathcal{E}_S]\le {n\choose k}\cdot\Pr[\mathcal{E}_S]={n\choose k}\cdot 2^{1-{k\choose 2}}. }[/math]

By the assumption, [math]\displaystyle{ {n\choose k}\cdot 2^{1-{k\choose 2}}\lt 1 }[/math], thus there exists a two-coloring under which none of the events [math]\displaystyle{ \mathcal{E}_S }[/math] occurs, which means there is no monochromatic [math]\displaystyle{ K_k }[/math] subgraph.

[math]\displaystyle{ \square }[/math]

For [math]\displaystyle{ k\ge 3 }[/math], take [math]\displaystyle{ n=\lfloor2^{k/2}\rfloor }[/math]; then

[math]\displaystyle{ \begin{align} {n\choose k}\cdot 2^{1-{k\choose 2}} &\lt \frac{n^k}{k!}\cdot\frac{2^{1+\frac{k}{2}}}{2^{k^2/2}}\\ &\le \frac{2^{k^2/2}}{k!}\cdot\frac{2^{1+\frac{k}{2}}}{2^{k^2/2}}\\ &= \frac{2^{1+\frac{k}{2}}}{k!}\\ &\lt 1. \end{align} }[/math]

By the above theorem, there exists a two-coloring of [math]\displaystyle{ K_n }[/math] such that there is no monochromatic [math]\displaystyle{ K_k }[/math]. Therefore, the Ramsey number [math]\displaystyle{ R(k,k)\gt \lfloor2^{k/2}\rfloor }[/math] for all [math]\displaystyle{ k\ge 3 }[/math].
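
The proof is itself a randomized experiment that is easy to run. The following Python sketch (the function names are ours, not part of the original proof) colors the edges of [math]\displaystyle{ K_n }[/math] by fair coins and tests for a monochromatic [math]\displaystyle{ K_k }[/math]; for [math]\displaystyle{ n=\lfloor2^{k/2}\rfloor }[/math] the theorem guarantees success with positive probability, so a few trials typically suffice.

 import itertools
 import random

 def random_coloring(n):
     # Independently flip a fair coin for each edge of K_n.
     return {frozenset(e): random.randint(0, 1)
             for e in itertools.combinations(range(n), 2)}

 def has_mono_clique(coloring, n, k):
     # The K_k induced by S is monochromatic iff all C(k,2) edges agree.
     for S in itertools.combinations(range(n), k):
         if len({coloring[frozenset(e)]
                 for e in itertools.combinations(S, 2)}) == 1:
             return True
     return False

 k = 5
 n = int(2 ** (k / 2))  # n = floor(2^(k/2))
 for trial in range(100):
     if not has_mono_clique(random_coloring(n), n, k):
         print(f"trial {trial}: 2-colored K_{n} with no monochromatic K_{k}")
         break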

Tournament

A tournament (竞赛图) on a set [math]\displaystyle{ V }[/math] of [math]\displaystyle{ n }[/math] players is an orientation of the edges of the complete graph on the set of vertices [math]\displaystyle{ V }[/math]. Thus for every two distinct vertices [math]\displaystyle{ u,v }[/math] in [math]\displaystyle{ V }[/math], either [math]\displaystyle{ (u,v)\in E }[/math] or [math]\displaystyle{ (v,u)\in E }[/math], but not both.

We can think of the set [math]\displaystyle{ V }[/math] as a set of [math]\displaystyle{ n }[/math] players in which each pair participates in a single match, where [math]\displaystyle{ (u,v) }[/math] is in the tournament iff player [math]\displaystyle{ u }[/math] beats player [math]\displaystyle{ v }[/math].

Definition
We say that a tournament is [math]\displaystyle{ k }[/math]-paradoxical if for every set of [math]\displaystyle{ k }[/math] players there is a player who beats them all.

Is it true that for every finite [math]\displaystyle{ k }[/math], there is a [math]\displaystyle{ k }[/math]-paradoxical tournament (on more than [math]\displaystyle{ k }[/math] vertices, of course)? This problem was first raised by Schütte, and as shown by Erdős, it can be solved almost trivially by the probabilistic method.

Theorem (Erdős 1963)
If [math]\displaystyle{ {n\choose k}\left(1-2^{-k}\right)^{n-k}\lt 1 }[/math] then there is a tournament on [math]\displaystyle{ n }[/math] vertices that is [math]\displaystyle{ k }[/math]-paradoxical.
Proof.

Consider a uniformly random tournament [math]\displaystyle{ T }[/math] on the set [math]\displaystyle{ V=[n] }[/math]. For every fixed subset [math]\displaystyle{ S\in{V\choose k} }[/math] of [math]\displaystyle{ k }[/math] vertices, let [math]\displaystyle{ A_S }[/math] be the event defined as follows

[math]\displaystyle{ A_S:\, }[/math] there is no vertex in [math]\displaystyle{ V\setminus S }[/math] that beats all vertices in [math]\displaystyle{ S }[/math].

In a uniform random tournament, the orientations of edges are independent. For any [math]\displaystyle{ u\in V\setminus S }[/math],

[math]\displaystyle{ \Pr[u\mbox{ beats all }v\in S]=2^{-k} }[/math].

Therefore, [math]\displaystyle{ \Pr[u\mbox{ does not beat all }v\in S]=1-2^{-k} }[/math] and

[math]\displaystyle{ \Pr[A_S]=\prod_{u\in V\setminus S}\Pr[u\mbox{ does not beat all }v\in S]=(1-2^{-k})^{n-k} }[/math].

It follows that

[math]\displaystyle{ \Pr\left[\bigvee_{S\in{V\choose k}}A_S\right]\le \sum_{S\in{V\choose k}}\Pr[A_S]={n\choose k}(1-2^{-k})^{n-k}\lt 1. }[/math]

Therefore,

[math]\displaystyle{ \Pr[\,T\mbox{ is }k\mbox{-paradoxical }]=\Pr\left[\bigwedge_{S\in{V\choose k}}\overline{A_S}\right]=1-\Pr\left[\bigvee_{S\in{V\choose k}}A_S\right]\gt 0. }[/math]

Therefore, a [math]\displaystyle{ k }[/math]-paradoxical tournament on [math]\displaystyle{ n }[/math] vertices exists.

[math]\displaystyle{ \square }[/math]
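
Since a uniformly random tournament is [math]\displaystyle{ k }[/math]-paradoxical with positive probability whenever the condition of the theorem holds, one can find such a tournament by repeated sampling. A minimal Python sketch (the names and parameters are ours; [math]\displaystyle{ k=2 }[/math] with [math]\displaystyle{ n=21 }[/math] satisfies [math]\displaystyle{ {21\choose 2}(3/4)^{19}\approx 0.89\lt 1 }[/math]):

 import itertools
 import random

 def random_tournament(n):
     # beats[(u, v)] is True iff player u beats player v; fair coin per pair.
     beats = {}
     for u, v in itertools.combinations(range(n), 2):
         win = random.random() < 0.5
         beats[(u, v)], beats[(v, u)] = win, not win
     return beats

 def is_k_paradoxical(beats, n, k):
     # Every k-set S needs some player outside S who beats everyone in S.
     return all(
         any(all(beats[(u, v)] for v in S) for u in set(range(n)) - set(S))
         for S in itertools.combinations(range(n), k))

 k, n = 2, 21  # C(21,2) * (3/4)^19 < 1, so each sample works w.p. > 0.1
 T = random_tournament(n)
 while not is_k_paradoxical(T, n, k):
     T = random_tournament(n)
 print(f"found a {k}-paradoxical tournament on {n} vertices")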

Linearity of expectation

Let [math]\displaystyle{ X }[/math] be a discrete random variable. The expectation of [math]\displaystyle{ X }[/math] is defined as follows.

Definition (Expectation)
The expectation of a discrete random variable [math]\displaystyle{ X }[/math], denoted by [math]\displaystyle{ \mathbf{E}[X] }[/math], is given by
[math]\displaystyle{ \begin{align} \mathbf{E}[X] &= \sum_{x}x\Pr[X=x], \end{align} }[/math]
where the summation is over all values [math]\displaystyle{ x }[/math] in the range of [math]\displaystyle{ X }[/math].

A fundamental fact regarding the expectation is its linearity.

Theorem (Linearity of Expectations)
For any discrete random variables [math]\displaystyle{ X_1, X_2, \ldots, X_n }[/math], and any real constants [math]\displaystyle{ a_1, a_2, \ldots, a_n }[/math],
[math]\displaystyle{ \begin{align} \mathbf{E}\left[\sum_{i=1}^n a_iX_i\right] &= \sum_{i=1}^n a_i\cdot\mathbf{E}[X_i]. \end{align} }[/math]
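
Note that linearity of expectation requires no independence among the [math]\displaystyle{ X_i }[/math], which is what makes it so useful. As a quick illustration (a toy example of ours), the number of fixed points of a uniform random permutation is a sum of [math]\displaystyle{ n }[/math] dependent indicators, each with expectation [math]\displaystyle{ 1/n }[/math], so its expectation is exactly [math]\displaystyle{ 1 }[/math]:

 import random

 def num_fixed_points(n):
     # X = sum of the dependent indicators X_i = [pi(i) == i]; E[X_i] = 1/n.
     pi = list(range(n))
     random.shuffle(pi)
     return sum(1 for i in range(n) if pi[i] == i)

 n, trials = 10, 100000
 avg = sum(num_fixed_points(n) for _ in range(trials)) / trials
 print(f"empirical mean = {avg:.3f}; linearity of expectation gives exactly 1")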

Hamiltonian paths

The following result of Szele in 1943 is often considered the first use of the probabilistic method.

Theorem (Szele 1943)
There is a tournament on [math]\displaystyle{ n }[/math] players with at least [math]\displaystyle{ n!2^{-(n-1)} }[/math] Hamiltonian paths.
Proof.

Consider the uniform random tournament [math]\displaystyle{ T }[/math] on [math]\displaystyle{ [n] }[/math]. For any permutation [math]\displaystyle{ \pi }[/math] of [math]\displaystyle{ [n] }[/math], let [math]\displaystyle{ X_{\pi} }[/math] be the indicator random variable defined as

[math]\displaystyle{ X_{\pi}=\begin{cases} 1 & \forall i\in[n-1], (\pi_i,\pi_{i+1})\in T,\\ 0 & \mbox{otherwise}. \end{cases} }[/math]

In other words, [math]\displaystyle{ X_{\pi} }[/math] indicates whether [math]\displaystyle{ \pi_1\rightarrow\pi_2\rightarrow\cdots\rightarrow\pi_{n} }[/math] gives a Hamiltonian path. It holds that

[math]\displaystyle{ \mathbf{E}[X_\pi]=1\cdot\Pr[X_\pi=1]+0\cdot\Pr[X_\pi=0]=\Pr[\forall i\in[n-1], (\pi_i,\pi_{i+1})\in T]=2^{-(n-1)}. }[/math]

Let [math]\displaystyle{ X=\sum_{\pi:\text{permutation of }[n]}X_\pi\, }[/math]. Clearly [math]\displaystyle{ X }[/math] is the number of Hamiltonian paths in the tournament [math]\displaystyle{ T }[/math]. Due to the linearity of expectation,

[math]\displaystyle{ \mathbf{E}[X]=\mathbf{E}\left[\sum_{\pi:\text{permutation of }[n]}X_\pi\right]=\sum_{\pi:\text{permutation of }[n]}\mathbf{E}[X_\pi]=n!2^{-(n-1)}. }[/math]

This is the average number of Hamiltonian paths in a tournament, where the average is taken over all tournaments. Thus some tournament has at least [math]\displaystyle{ n!2^{-(n-1)} }[/math] Hamiltonian paths.

[math]\displaystyle{ \square }[/math]
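
Szele's averaging argument is easy to check empirically. The sketch below (the names are ours; brute force, so only for small [math]\displaystyle{ n }[/math]) averages the number of Hamiltonian paths over random tournaments and compares it with [math]\displaystyle{ n!2^{-(n-1)} }[/math]; some tournament must attain at least this average:

 import itertools
 import math
 import random

 def random_tournament(n):
     # Orient each edge of K_n by an independent fair coin.
     beats = {}
     for u, v in itertools.combinations(range(n), 2):
         win = random.random() < 0.5
         beats[(u, v)], beats[(v, u)] = win, not win
     return beats

 def count_hamiltonian_paths(beats, n):
     # Brute force: pi is a Hamiltonian path iff each player beats the next.
     return sum(1 for pi in itertools.permutations(range(n))
                if all(beats[(pi[i], pi[i + 1])] for i in range(n - 1)))

 n = 6  # keep n small: the count enumerates all n! permutations
 counts = [count_hamiltonian_paths(random_tournament(n), n) for _ in range(200)]
 print(f"average over 200 samples: {sum(counts) / len(counts):.1f}")
 print(f"n! * 2^-(n-1):            {math.factorial(n) / 2 ** (n - 1):.1f}")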

Independent sets

An independent set of a graph is a set of vertices with no edges between them. The following theorem gives a lower bound on the size of the largest independent set.

Theorem
Let [math]\displaystyle{ G(V,E) }[/math] be a graph on [math]\displaystyle{ n }[/math] vertices with [math]\displaystyle{ m }[/math] edges. Then [math]\displaystyle{ G }[/math] has an independent set with at least [math]\displaystyle{ \frac{n^2}{4m} }[/math] vertices.
Proof.
Let [math]\displaystyle{ S }[/math] be a random subset of vertices constructed as follows: each vertex [math]\displaystyle{ v\in V }[/math] is included in [math]\displaystyle{ S }[/math] independently with probability [math]\displaystyle{ p }[/math], where [math]\displaystyle{ p }[/math] is to be determined.

Let [math]\displaystyle{ X=|S| }[/math]. It is obvious that [math]\displaystyle{ \mathbf{E}[X]=np }[/math].

For each edge [math]\displaystyle{ e=uv\in E }[/math], let [math]\displaystyle{ Y_{e} }[/math] be the random variable which indicates whether both endpoints of [math]\displaystyle{ e }[/math] are in [math]\displaystyle{ S }[/math]:

[math]\displaystyle{ \mathbf{E}[Y_{uv}]=\Pr[u\in S\wedge v\in S]=p^2. }[/math]

Let [math]\displaystyle{ Y }[/math] be the number of edges in the subgraph of [math]\displaystyle{ G }[/math] induced by [math]\displaystyle{ S }[/math]. It holds that [math]\displaystyle{ Y=\sum_{e\in E}Y_e }[/math]. By linearity of expectation,

[math]\displaystyle{ \mathbf{E}[Y]=\sum_{e\in E}\mathbf{E}[Y_e]=mp^2 }[/math].

Note that although [math]\displaystyle{ S }[/math] is not necessarily an independent set, it can be modified into one: for each edge [math]\displaystyle{ e }[/math] of the induced subgraph [math]\displaystyle{ G(S) }[/math], we delete one of the endpoints of [math]\displaystyle{ e }[/math] from [math]\displaystyle{ S }[/math]. Let [math]\displaystyle{ S^* }[/math] be the resulting set. It is obvious that [math]\displaystyle{ S^* }[/math] is an independent set since there is no edge left in the induced subgraph [math]\displaystyle{ G(S^*) }[/math].

Since there are [math]\displaystyle{ Y }[/math] edges in [math]\displaystyle{ G(S) }[/math], at most [math]\displaystyle{ Y }[/math] vertices of [math]\displaystyle{ S }[/math] are deleted to obtain [math]\displaystyle{ S^* }[/math]. Therefore, [math]\displaystyle{ |S^*|\ge X-Y }[/math]. By linearity of expectation,

[math]\displaystyle{ \mathbf{E}[|S^*|]\ge\mathbf{E}[X-Y]=\mathbf{E}[X]-\mathbf{E}[Y]=np-mp^2. }[/math]

The expectation is maximized when [math]\displaystyle{ p=\frac{n}{2m} }[/math] (note that [math]\displaystyle{ p\le 1 }[/math] requires [math]\displaystyle{ m\ge n/2 }[/math]), thus

[math]\displaystyle{ \mathbf{E}[|S^*|]\ge n\cdot\frac{n}{2m}-m\left(\frac{n}{2m}\right)^2=\frac{n^2}{4m}. }[/math]

There exists an independent set which contains at least [math]\displaystyle{ \frac{n^2}{4m} }[/math] vertices.

[math]\displaystyle{ \square }[/math]
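
The proof is constructive: sample each vertex with the optimal probability [math]\displaystyle{ p }[/math], then delete one endpoint of every surviving edge. A minimal Python sketch (the names and the test graph are ours; it assumes [math]\displaystyle{ m\ge n/2 }[/math] so that [math]\displaystyle{ p\le 1 }[/math]):

 import itertools
 import random

 def random_independent_set(n, edges):
     m = len(edges)
     p = n / (2 * m)  # the maximizing choice; assumes m >= n/2 so p <= 1
     S = {v for v in range(n) if random.random() < p}
     # Repair step: every edge still inside S loses one endpoint.
     for u, v in edges:
         if u in S and v in S:
             S.discard(u)
     return S  # now independent, with E[|S|] >= n^2/(4m)

 n = 50  # a made-up test graph: G(50, 0.1)
 edges = [e for e in itertools.combinations(range(n), 2)
          if random.random() < 0.1]
 sizes = [len(random_independent_set(n, edges)) for _ in range(1000)]
 print(f"average size {sum(sizes) / len(sizes):.1f}, "
       f"bound n^2/(4m) = {n * n / (4 * len(edges)):.1f}")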

Coloring large-girth graphs

The girth of a graph is the length of the shortest cycle of the graph.

Definition

Let [math]\displaystyle{ G(V,E) }[/math] be an undirected graph.

  • A cycle of length [math]\displaystyle{ k }[/math] in [math]\displaystyle{ G }[/math] is a sequence of distinct vertices [math]\displaystyle{ v_1,v_2,\ldots,v_{k} }[/math] such that [math]\displaystyle{ v_iv_{i+1}\in E }[/math] for all [math]\displaystyle{ i=1,2,\ldots,k-1 }[/math] and [math]\displaystyle{ v_kv_1\in E }[/math].
  • The girth of [math]\displaystyle{ G }[/math], denoted [math]\displaystyle{ g(G) }[/math], is the length of the shortest cycle in [math]\displaystyle{ G }[/math].

The chromatic number of a graph is the minimum number of colors with which the graph can be properly colored.

Definition (chromatic number)
  • The chromatic number of [math]\displaystyle{ G }[/math], denoted [math]\displaystyle{ \chi(G) }[/math], is the minimal number of colors which we need to color the vertices of [math]\displaystyle{ G }[/math] so that no two adjacent vertices have the same color. Formally,
[math]\displaystyle{ \chi(G)=\min\{C\in\mathbb{N}\mid \exists f:V\rightarrow[C]\mbox{ such that }\forall uv\in E, f(u)\neq f(v)\} }[/math].

In 1959, Erdős proved the following theorem: for any fixed [math]\displaystyle{ k }[/math] and [math]\displaystyle{ \ell }[/math], there exists a finite graph with girth greater than [math]\displaystyle{ \ell }[/math] and chromatic number greater than [math]\displaystyle{ k }[/math]. This is considered a striking example of the probabilistic method. The statement of the theorem itself says nothing about probability or randomness, and the result is highly counterintuitive: if the girth is large, there is no obvious local obstruction to coloring the graph with few colors, since we can always "locally" color a cycle with 2 or 3 colors. Erdős' result shows that there are "global" restrictions on coloring, and although such configurations are very difficult to construct explicitly, the probabilistic method tells us that they exist in abundance.

Theorem (Erdős 1959)
For all [math]\displaystyle{ k,\ell }[/math] there exists a graph [math]\displaystyle{ G }[/math] with [math]\displaystyle{ g(G)\gt \ell }[/math] and [math]\displaystyle{ \chi(G)\gt k\, }[/math].

It is very hard to directly analyze the chromatic number of a graph. We find that the chromatic number can be related to the size of the maximum independent set.

Definition (independence number)
  • The independence number of [math]\displaystyle{ G }[/math], denoted [math]\displaystyle{ \alpha(G) }[/math], is the size of the largest independent set in [math]\displaystyle{ G }[/math]. Formally,
[math]\displaystyle{ \alpha(G)=\max\{|S|\mid S\subseteq V\mbox{ and }\forall u,v\in S, uv\not\in E\} }[/math].

We observe the following relationship between the chromatic number and the independence number.

Lemma
For any [math]\displaystyle{ n }[/math]-vertex graph [math]\displaystyle{ G }[/math],
[math]\displaystyle{ \chi(G)\ge\frac{n}{\alpha(G)} }[/math].
Proof.
  • In the optimal coloring, [math]\displaystyle{ n }[/math] vertices are partitioned into [math]\displaystyle{ \chi(G) }[/math] color classes according to the vertex color.
  • Every color class is an independent set; otherwise there exist two adjacent vertices with the same color.
  • By the pigeonhole principle, there is a color class (hence an independent set) of size at least [math]\displaystyle{ \frac{n}{\chi(G)} }[/math]. Therefore, [math]\displaystyle{ \alpha(G)\ge\frac{n}{\chi(G)} }[/math].

The lemma follows.

[math]\displaystyle{ \square }[/math]

Therefore, it is sufficient to construct a graph [math]\displaystyle{ G }[/math] with [math]\displaystyle{ \alpha(G)\lt \frac{n}{k} }[/math] and [math]\displaystyle{ g(G)\gt \ell }[/math].

Proof of Erdős theorem

Fix [math]\displaystyle{ 0\lt \theta\lt \frac{1}{\ell} }[/math]. Let [math]\displaystyle{ G }[/math] be [math]\displaystyle{ G(n,p) }[/math] with [math]\displaystyle{ p=n^{\theta-1} }[/math].

For any length-[math]\displaystyle{ i }[/math] simple cycle [math]\displaystyle{ \sigma }[/math], let [math]\displaystyle{ X_\sigma }[/math] be the indicator random variable such that

[math]\displaystyle{ X_\sigma= \begin{cases} 1 & \sigma\mbox{ is a cycle in }G,\\ 0 & \mbox{otherwise}. \end{cases} }[/math]

The number of cycles of length at most [math]\displaystyle{ \ell }[/math] in graph [math]\displaystyle{ G }[/math] is

[math]\displaystyle{ X=\sum_{i=3}^\ell\sum_{\sigma:i\text{-cycle}}X_\sigma }[/math].

For any particular length-[math]\displaystyle{ i }[/math] simple cycle [math]\displaystyle{ \sigma }[/math],

[math]\displaystyle{ \mathbf{E}[X_\sigma]=\Pr[X_\sigma=1]=\Pr[\sigma\mbox{ is a cycle in }G]=p^i=n^{\theta i-i} }[/math].

For any [math]\displaystyle{ 3\le i\le n }[/math], the number of length-[math]\displaystyle{ i }[/math] simple cycles is [math]\displaystyle{ \frac{n(n-1)\cdots (n-i+1)}{2i} }[/math], since each cycle is counted [math]\displaystyle{ 2i }[/math] times among the ordered vertex sequences ([math]\displaystyle{ i }[/math] starting points and 2 directions). By the linearity of expectation,

[math]\displaystyle{ \mathbf{E}[X]=\sum_{i=3}^\ell\sum_{\sigma:i\text{-cycle}}\mathbf{E}[X_\sigma]=\sum_{i=3}^\ell\frac{n(n-1)\cdots (n-i+1)}{2i}n^{\theta i-i}\le \sum_{i=3}^\ell\frac{n^{\theta i}}{2i}=o(n) }[/math].

The last equality holds since [math]\displaystyle{ \theta\ell\lt 1 }[/math]: each of the constantly many terms is at most [math]\displaystyle{ n^{\theta\ell}=o(n) }[/math].

Applying Markov's inequality,

[math]\displaystyle{ \Pr\left[X\ge \frac{n}{2}\right]\le\frac{\mathbf{E}[X]}{n/2}=o(1). }[/math]

Therefore, with high probability the random graph has fewer than [math]\displaystyle{ n/2 }[/math] short cycles.

Now we proceed to analyze the independence number. Let [math]\displaystyle{ m=\left\lceil\frac{3\ln n}{p}\right\rceil }[/math], so that

[math]\displaystyle{ \begin{align} \Pr[\alpha(G)\ge m] &\le\Pr\left[\exists S\in{V\choose m}\mbox{ such that }\forall \{u,v\}\in{S\choose 2}, uv\not\in G\right]\\ &\le{n\choose m}(1-p)^{m\choose 2}\\ &\lt n^m\mathrm{e}^{-p{m\choose 2}}\\ &=\left(n\mathrm{e}^{-p(m-1)/2}\right)^m=o(1), \end{align} }[/math]

where the last step uses [math]\displaystyle{ p(m-1)\ge 3\ln n-p }[/math], so that [math]\displaystyle{ n\mathrm{e}^{-p(m-1)/2}\le n^{-1/2}\mathrm{e}^{p/2}=o(1) }[/math].

The probability that either of the above "bad" events occurs is

[math]\displaystyle{ \begin{align} \Pr\left[X\ge \frac{n}{2}\vee \alpha(G)\ge m\right] \le \Pr\left[X\ge \frac{n}{2}\right]+\Pr\left[\alpha(G)\ge m\right] =o(1). \end{align} }[/math]

Thus, with probability [math]\displaystyle{ 1-o(1) }[/math], the random graph satisfies both [math]\displaystyle{ X\lt \frac{n}{2} }[/math] and [math]\displaystyle{ \alpha(G)\lt m }[/math].

Therefore, there exists a graph [math]\displaystyle{ G }[/math] with fewer than [math]\displaystyle{ n/2 }[/math] "short" cycles, i.e., cycles of length at most [math]\displaystyle{ \ell }[/math], and with [math]\displaystyle{ \alpha(G)\lt m\le 3n^{1-\theta}\ln n }[/math].

Take each "short" cycle in [math]\displaystyle{ G }[/math] and remove a vertex from the cycle (and also remove all adjacent edges to the removed vertex). This gives a graph [math]\displaystyle{ G' }[/math] which has no short cycles, hence the girth [math]\displaystyle{ g(G')\ge\ell }[/math]. And [math]\displaystyle{ G' }[/math] has at least [math]\displaystyle{ n/2 }[/math] vertices, because at most [math]\displaystyle{ n/2 }[/math] vertices are removed.

Notice that removing vertices cannot make the independence number grow: it holds that [math]\displaystyle{ \alpha(G')\le\alpha(G) }[/math]. Thus

[math]\displaystyle{ \chi(G')\ge\frac{n/2}{\alpha(G')}\ge\frac{n}{2m}\ge\frac{n^\theta}{6\ln n} }[/math].

The theorem is proved by taking [math]\displaystyle{ n }[/math] sufficiently large so that this value is greater than [math]\displaystyle{ k }[/math].

[math]\displaystyle{ \square }[/math]

The proof contains a very simple procedure which, for any [math]\displaystyle{ k }[/math] and [math]\displaystyle{ \ell }[/math], generates such a graph [math]\displaystyle{ G' }[/math] with [math]\displaystyle{ g(G')\gt \ell }[/math] and [math]\displaystyle{ \chi(G')\gt k }[/math]. The procedure is as follows:

  • Fix some [math]\displaystyle{ \theta\lt \frac{1}{\ell} }[/math]. Choose sufficiently large [math]\displaystyle{ n }[/math] with [math]\displaystyle{ \frac{n^\theta}{6\ln n}\gt k }[/math], and let [math]\displaystyle{ p=n^{\theta-1} }[/math].
  • Generate a random graph [math]\displaystyle{ G }[/math] as [math]\displaystyle{ G(n,p) }[/math].
  • For each cycle of length at most [math]\displaystyle{ \ell }[/math] in [math]\displaystyle{ G }[/math], remove a vertex from the cycle.

The resulting graph [math]\displaystyle{ G' }[/math] satisfies [math]\displaystyle{ g(G')\gt \ell }[/math] and [math]\displaystyle{ \chi(G')\gt k }[/math] with high probability.
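
The construction steps are straightforward to implement; only verifying [math]\displaystyle{ \chi(G')\gt k }[/math] is computationally hard, so the Python sketch below (the names and the demo size are ours) performs just the generation and the removal of short cycles:

 import itertools
 import random

 def find_short_cycle(adj, max_len):
     # Depth-limited DFS for any simple cycle of length at most max_len.
     def dfs(path):
         for w in adj[path[-1]]:
             if w == path[0] and len(path) >= 3:
                 return path
             if w not in path and len(path) < max_len:
                 found = dfs(path + [w])
                 if found:
                     return found
         return None
     for v in adj:
         cycle = dfs([v])
         if cycle:
             return cycle
     return None

 def remove_short_cycles(n, ell, theta):
     # Step 1: generate G(n, p) with p = n^(theta - 1).
     p = n ** (theta - 1)
     adj = {v: set() for v in range(n)}
     for u, v in itertools.combinations(range(n), 2):
         if random.random() < p:
             adj[u].add(v)
             adj[v].add(u)
     # Step 2: while a cycle of length <= ell survives, delete a vertex of it.
     while (cycle := find_short_cycle(adj, ell)) is not None:
         v = cycle[0]
         for w in adj.pop(v):
             adj[w].discard(v)
     return adj  # every remaining cycle is longer than ell

 ell = 4
 G = remove_short_cycles(200, ell, 0.9 / ell)  # demo size; the proof needs huge n
 print(f"{len(G)} of 200 vertices survive; girth now exceeds {ell}")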