Combinatorics (Fall 2010)/Existence, the probabilistic method

Counting arguments

Circuit complexity

This is a fundamental problem in Computer Science.

A boolean function is a function of the form [math]\displaystyle{ f:\{0,1\}^n\rightarrow \{0,1\} }[/math].

A boolean circuit is a mathematical model of computation. Formally, a boolean circuit is a directed acyclic graph. Nodes with indegree zero are input nodes, labeled [math]\displaystyle{ x_1, x_2, \ldots , x_n }[/math]. A circuit has a unique node with outdegree zero, called the output node. Every other node is a gate. There are three types of gates: AND, OR (both with indegree two), and NOT (with indegree one).

Computations in Turing machines can be simulated by circuits, and any boolean function in P can be computed by a circuit with polynomially many gates. Thus, if we can find a function in NP that cannot be computed by any circuit with polynomially many gates, then NP[math]\displaystyle{ \neq }[/math]P.

The following theorem due to Shannon says that functions with exponentially large circuit complexity do exist.

Theorem (Shannon 1949)
There is a boolean function [math]\displaystyle{ f:\{0,1\}^n\rightarrow \{0,1\} }[/math] with circuit complexity greater than [math]\displaystyle{ \frac{2^n}{3n} }[/math].
Proof.

We first count the number of boolean functions. Since a boolean function is determined by its value on each of the [math]\displaystyle{ 2^n }[/math] possible inputs, there are [math]\displaystyle{ 2^{2^n} }[/math] boolean functions [math]\displaystyle{ f:\{0,1\}^n\rightarrow \{0,1\} }[/math].

Then we count the number of boolean circuits with a fixed number of gates. Fix an integer [math]\displaystyle{ t }[/math]; we count the number of circuits with [math]\displaystyle{ t }[/math] gates. By De Morgan's laws, we can assume that all NOTs are pushed back to the inputs. Each gate has one of two types (AND or OR) and has two inputs. Each input to a gate is either a constant 0 or 1, an input variable [math]\displaystyle{ x_i }[/math], an inverted input variable [math]\displaystyle{ \neg x_i }[/math], or the output of another gate; thus, there are at most [math]\displaystyle{ 2+2n+t-1 }[/math] possible gate inputs. It follows that the number of circuits with [math]\displaystyle{ t }[/math] gates is at most [math]\displaystyle{ 2^t(t+2n+1)^{2t} }[/math].

If [math]\displaystyle{ t=\frac{2^n}{3n} }[/math], then

[math]\displaystyle{ \frac{2^t(t+2n+1)^{2t}}{2^{2^n}}=o(1)\lt 1, }[/math] thus, [math]\displaystyle{ 2^t(t+2n+1)^{2t} \lt 2^{2^n}. }[/math]

Each boolean circuit computes one boolean function. Therefore, there must exist a boolean function [math]\displaystyle{ f }[/math] which cannot be computed by any circuit with [math]\displaystyle{ \frac{2^n}{3n} }[/math] gates.

[math]\displaystyle{ \square }[/math]

Note that by Shannon's theorem, not only does there exist a boolean function with exponentially large circuit complexity, but in fact almost all boolean functions have exponentially large circuit complexity.
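
The counting at the heart of the argument is easy to check numerically. The following minimal Python sketch (not part of the original notes) compares the upper bound [math]\displaystyle{ 2^t(t+2n+1)^{2t} }[/math] on the number of circuits with [math]\displaystyle{ t=\lfloor 2^n/(3n)\rfloor }[/math] gates against the number [math]\displaystyle{ 2^{2^n} }[/math] of boolean functions, using exact integer arithmetic:

```python
# Exact-arithmetic check of the counting step in Shannon's argument:
# with t = 2^n/(3n) gates, the number of circuits is far smaller than
# the number of boolean functions on n inputs.
for n in range(8, 15):
    t = 2 ** n // (3 * n)                                  # floor of 2^n/(3n)
    circuits = 2 ** t * (t + 2 * n + 1) ** (2 * t)         # upper bound on circuits
    functions = 2 ** (2 ** n)                              # number of boolean functions
    print(f"n={n:2d}  t={t:4d}  circuits < functions: {circuits < functions}")
```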

Double counting

The double counting principle states the following obvious fact: if the elements of a set are counted in two different ways, the answers are the same.

Handshaking lemma

The following lemma is a standard demonstration of double counting.

Handshaking Lemma
At a party, the number of guests who shake hands an odd number of times is even.

We model this scenario as an undirected graph [math]\displaystyle{ G(V,E) }[/math] with [math]\displaystyle{ |V|=n }[/math] standing for the [math]\displaystyle{ n }[/math] guests. There is an edge [math]\displaystyle{ uv\in E }[/math] if [math]\displaystyle{ u }[/math] and [math]\displaystyle{ v }[/math] shake hands. Let [math]\displaystyle{ d(v) }[/math] be the degree of vertex [math]\displaystyle{ v }[/math], which represents the number of times that [math]\displaystyle{ v }[/math] shakes hands. The handshaking lemma states that in any undirected graph, the number of vertices whose degrees are odd is even. It is sufficient to show that the sum of the odd degrees is even.

The handshaking lemma is a direct consequence of the following lemma, which was proved by Euler in his 1736 paper on the Seven Bridges of Königsberg, the paper that began the study of graph theory.

Lemma (Euler 1736)
[math]\displaystyle{ \sum_{v\in V}d(v)=2|E| }[/math]
Proof.

We count the number of directed edges. A directed edge is an ordered pair [math]\displaystyle{ (u,v) }[/math] such that [math]\displaystyle{ \{u,v\}\in E }[/math]. There are two ways to count the directed edges.

First, we can enumerate by edges. Pick every edge [math]\displaystyle{ uv\in E }[/math] and apply two directions [math]\displaystyle{ (u,v) }[/math] and [math]\displaystyle{ (v,u) }[/math] to the edge. This gives us [math]\displaystyle{ 2|E| }[/math] directed edges.

On the other hand, we can enumerate by vertices. Pick every vertex [math]\displaystyle{ v\in V }[/math] and for each of its [math]\displaystyle{ d(v) }[/math] neighbors, say [math]\displaystyle{ u }[/math], generate a directed edge [math]\displaystyle{ (v,u) }[/math]. This gives us [math]\displaystyle{ \sum_{v\in V}d(v) }[/math] directed edges.

The two counts are equal because they enumerate the same set of directed edges in two different ways. The lemma follows.

[math]\displaystyle{ \square }[/math]

The handshaking lemma is implied directly by the above lemma: the sum of all degrees equals [math]\displaystyle{ 2|E| }[/math] and is therefore even, and since the sum of the even degrees is also even, the sum of the odd degrees must be even; a sum of odd numbers can only be even if it has an even number of summands, so the number of odd-degree vertices is even.
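
As a quick sanity check, the identity and the handshaking lemma can be verified on a randomly generated graph. The following is a minimal Python sketch (not from the notes), with arbitrary illustrative parameters:

```python
import random

# Verify sum_v d(v) = 2|E| (Euler) and the handshaking lemma on a random graph.
random.seed(2010)
n = 60
edges = [(u, v) for u in range(n) for v in range(u + 1, n) if random.random() < 0.1]
degree = [0] * n
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

assert sum(degree) == 2 * len(edges)              # Euler's identity
odd = sum(1 for d in degree if d % 2 == 1)
assert odd % 2 == 0                               # handshaking lemma
print(f"|E| = {len(edges)}, degree sum = {sum(degree)}, odd-degree vertices = {odd}")
```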

Cayley's formula

We now present a theorem on the number of labeled trees on a fixed number of vertices. It is due to Cayley in 1889, and is often referred to as Cayley's formula.

Cayley's formula for trees
There are [math]\displaystyle{ n^{n-2} }[/math] different trees on [math]\displaystyle{ n }[/math] distinct vertices.

The theorem has several proofs. Classical methods include the bijection which encodes a tree by a Prüfer sequence, and Kirchhoff's matrix-tree theorem. Here we present a proof by double counting, which Proofs from THE BOOK considers "the most beautiful of them all".

Proof.
(Due to Pitman 1999)
[math]\displaystyle{ \square }[/math]
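
Independently of the proof, the formula can be verified by brute force for small [math]\displaystyle{ n }[/math]. The sketch below (Python, not part of the notes) enumerates all [math]\displaystyle{ (n-1) }[/math]-edge subgraphs of [math]\displaystyle{ K_n }[/math] and counts the connected ones, which are exactly the labeled trees:

```python
from itertools import combinations

def count_labeled_trees(n):
    """Brute-force count of labeled trees on {0, ..., n-1}: a graph with
    n vertices and n-1 edges is a tree iff it is connected."""
    all_edges = list(combinations(range(n), 2))
    count = 0
    for edge_set in combinations(all_edges, n - 1):
        adj = {v: [] for v in range(n)}
        for u, v in edge_set:
            adj[u].append(v)
            adj[v].append(u)
        seen, stack = {0}, [0]                    # DFS from vertex 0
        while stack:
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        count += (len(seen) == n)                 # connected => tree
    return count

for n in range(2, 7):
    print(n, count_labeled_trees(n), n ** (n - 2))   # the two counts agree
```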

The Pigeonhole Principle

The pigeonhole principle states the following "obvious" fact:

[math]\displaystyle{ n+1 }[/math] pigeons cannot sit in [math]\displaystyle{ n }[/math] holes so that every pigeon is alone in its hole.

More generally, the pigeonhole principle can be stated as follows.

Generalized pigeonhole principle
If a set consisting of more than [math]\displaystyle{ mn }[/math] objects is partitioned into [math]\displaystyle{ n }[/math] classes, then some class receives more than [math]\displaystyle{ m }[/math] objects.

This is one of the oldest non-constructive principles: it states only the existence of a pigeonhole with more than [math]\displaystyle{ m }[/math] pigeons and says nothing about how to find such a pigeonhole.

Monotonic subsequences

Let [math]\displaystyle{ (a_1,a_2,\ldots,a_n) }[/math] be a sequence of [math]\displaystyle{ n }[/math] distinct real numbers. A subsequence is a sequence of distinct terms of [math]\displaystyle{ (a_1,a_2,\ldots,a_n) }[/math] appearing in the same order in which they appear in [math]\displaystyle{ (a_1,a_2,\ldots,a_n) }[/math]. Formally, a subsequence of [math]\displaystyle{ (a_1,a_2,\ldots,a_n) }[/math] is a sequence [math]\displaystyle{ (a_{i_1},a_{i_2},\ldots,a_{i_k}) }[/math] with [math]\displaystyle{ i_1\lt i_2\lt \cdots\lt i_k }[/math].

A sequence [math]\displaystyle{ (a_1,a_2,\ldots,a_n) }[/math] is increasing if [math]\displaystyle{ a_1\lt a_2\lt \cdots\lt a_n }[/math], and decreasing if [math]\displaystyle{ a_1\gt a_2\gt \cdots\gt a_n }[/math].

We are interested in the longest increasing and decreasing subsequences of a sequence [math]\displaystyle{ (a_1,a_2,\ldots,a_n) }[/math]. It is intuitive that the lengths of the longest increasing subsequence and the longest decreasing subsequence cannot both be small. A famous result of Erdős and Szekeres formally justifies this intuition. The result appeared in an influential paper that pioneered extremal combinatorics.

Theorem (Erdős-Szekeres 1935)
A sequence of more than [math]\displaystyle{ mn }[/math] different real numbers must contain either an increasing subsequence of length [math]\displaystyle{ m+1 }[/math], or a decreasing subsequence of length [math]\displaystyle{ n+1 }[/math].
Proof.
(due to Seidenberg 1959)
[math]\displaystyle{ \square }[/math]
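
An equivalent reading of the theorem: every sequence of [math]\displaystyle{ N }[/math] distinct reals satisfies [math]\displaystyle{ \mathrm{LIS}\cdot\mathrm{LDS}\ge N }[/math], where [math]\displaystyle{ \mathrm{LIS} }[/math] and [math]\displaystyle{ \mathrm{LDS} }[/math] are the lengths of the longest increasing and decreasing subsequences (take [math]\displaystyle{ m=\mathrm{LIS} }[/math] and [math]\displaystyle{ n=\mathrm{LDS} }[/math] in the theorem). The following Python sketch (not part of the notes) checks this consequence on random sequences with a simple [math]\displaystyle{ O(N^2) }[/math] dynamic program:

```python
import random

def lis_length(seq):
    """Length of the longest strictly increasing subsequence (O(N^2) DP)."""
    best = [1] * len(seq)
    for i in range(len(seq)):
        for j in range(i):
            if seq[j] < seq[i] and best[j] + 1 > best[i]:
                best[i] = best[j] + 1
    return max(best)

random.seed(1935)
for _ in range(500):
    seq = random.sample(range(10 ** 6), 60)       # 60 distinct numbers
    lis = lis_length(seq)
    lds = lis_length([-x for x in seq])           # LDS = LIS of the negated sequence
    assert lis * lds >= len(seq)
print("LIS * LDS >= N held on all random samples")
```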

Dirichlet's approximation

Theorem (Dirichlet 1879)
Let [math]\displaystyle{ x }[/math] be a real number. For any natural number [math]\displaystyle{ n }[/math], there is a rational number [math]\displaystyle{ \frac{p}{q} }[/math] such that [math]\displaystyle{ 1\le q\le n }[/math] and
[math]\displaystyle{ \left|x-\frac{p}{q}\right|\lt \frac{1}{nq} }[/math].
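
The theorem is an application of the pigeonhole principle: among the fractional parts of [math]\displaystyle{ 0\cdot x, 1\cdot x, \ldots, n\cdot x }[/math], two must fall into the same of the [math]\displaystyle{ n }[/math] intervals [math]\displaystyle{ [k/n,(k+1)/n) }[/math], and their difference yields the desired [math]\displaystyle{ p }[/math] and [math]\displaystyle{ q }[/math]. The following Python sketch (an illustration only, subject to floating-point error) carries out exactly this search:

```python
import math

def dirichlet_approx(x, n):
    """Pigeonhole search: among the fractional parts of q*x for q = 0..n,
    two land in the same interval [k/n, (k+1)/n); their difference gives
    p, q with 1 <= q <= n and |x - p/q| < 1/(n*q)."""
    bucket_of = {}
    for q in range(n + 1):
        frac = q * x - math.floor(q * x)
        k = min(int(frac * n), n - 1)             # which of the n intervals
        if k in bucket_of:
            q0 = bucket_of[k]
            qq = q - q0                           # 1 <= qq <= n
            pp = math.floor(q * x) - math.floor(q0 * x)
            return pp, qq
        bucket_of[k] = q
    raise AssertionError("the pigeonhole principle guarantees a collision")

p, q = dirichlet_approx(math.pi, 1000)
print(p, q, abs(math.pi - p / q), 1 / (1000 * q))   # the error is below 1/(n*q)
```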

The Probabilistic Method

The probabilistic method provides another way of proving the existence of objects: instead of explicitly constructing an object, we define a probability space of objects in which the probability is positive that a randomly selected object has the required property.

The basic principle of the probabilistic method is very simple, and can be stated in intuitive ways:

  • If an object chosen randomly from a universe satisfies a property with positive probability, then there must be an object in the universe that satisfies that property.
For example, for a ball (the object) randomly chosen from a box (the universe) of balls, if the probability that the chosen ball is blue (the property) is positive, then there must be a blue ball in the box.
  • Any random variable assumes at least one value that is no smaller than its expectation, and at least one value that is no greater than the expectation.
For example, if we know the average height of the students in the class is [math]\displaystyle{ \ell }[/math], then we know there is a student whose height is at least [math]\displaystyle{ \ell }[/math], and there is a student whose height is at most [math]\displaystyle{ \ell }[/math].

Although the idea of the probabilistic method is simple, it provides us with a powerful tool for existence proofs.

Ramsey number

Recall the Ramsey theorem which states that in a meeting of at least six people, there are either three people knowing each other or three people not knowing each other. In graph theoretical terms, this means that no matter how we color the edges of [math]\displaystyle{ K_6 }[/math] (the complete graph on six vertices), there must be a monochromatic [math]\displaystyle{ K_3 }[/math] (a triangle whose edges have the same color).

Generally, the Ramsey number [math]\displaystyle{ R(k,\ell) }[/math] is the smallest integer [math]\displaystyle{ n }[/math] such that in any two-coloring of the edges of a complete graph on [math]\displaystyle{ n }[/math] vertices [math]\displaystyle{ K_n }[/math] by red and blue, either there is a red [math]\displaystyle{ K_k }[/math] or there is a blue [math]\displaystyle{ K_\ell }[/math].

Ramsey showed in 1929 that [math]\displaystyle{ R(k,\ell) }[/math] is finite for any [math]\displaystyle{ k }[/math] and [math]\displaystyle{ \ell }[/math]. It is extremely hard to compute the exact value of [math]\displaystyle{ R(k,\ell) }[/math]. Here we give a lower bound of [math]\displaystyle{ R(k,k) }[/math] by the probabilistic method.

Theorem (Erdős 1947)
If [math]\displaystyle{ {n\choose k}\cdot 2^{1-{k\choose 2}}\lt 1 }[/math] then it is possible to color the edges of [math]\displaystyle{ K_n }[/math] with two colors so that there is no monochromatic [math]\displaystyle{ K_k }[/math] subgraph.
Proof.
Consider a random two-coloring of edges of [math]\displaystyle{ K_n }[/math] obtained as follows:
  • For each edge of [math]\displaystyle{ K_n }[/math], independently flip a fair coin to decide the color of the edge.

For any fixed set [math]\displaystyle{ S }[/math] of [math]\displaystyle{ k }[/math] vertices, let [math]\displaystyle{ \mathcal{E}_S }[/math] be the event that the [math]\displaystyle{ K_k }[/math] subgraph induced by [math]\displaystyle{ S }[/math] is monochromatic. There are [math]\displaystyle{ {k\choose 2} }[/math] many edges in [math]\displaystyle{ K_k }[/math], therefore

[math]\displaystyle{ \Pr[\mathcal{E}_S]=2\cdot 2^{-{k\choose 2}}=2^{1-{k\choose 2}}. }[/math]

Since there are [math]\displaystyle{ {n\choose k} }[/math] possible choices of [math]\displaystyle{ S }[/math], by the union bound

[math]\displaystyle{ \Pr[\exists S, \mathcal{E}_S]\le {n\choose k}\cdot\Pr[\mathcal{E}_S]={n\choose k}\cdot 2^{1-{k\choose 2}}. }[/math]

By the assumption, [math]\displaystyle{ {n\choose k}\cdot 2^{1-{k\choose 2}}\lt 1 }[/math], so with positive probability none of the events [math]\displaystyle{ \mathcal{E}_S }[/math] occurs. Hence there exists a two-coloring under which no [math]\displaystyle{ \mathcal{E}_S }[/math] occurs, which means there is no monochromatic [math]\displaystyle{ K_k }[/math] subgraph.

[math]\displaystyle{ \square }[/math]
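
When the bound of the theorem holds, a uniformly random coloring avoids monochromatic [math]\displaystyle{ K_k }[/math] subgraphs with positive probability, so repeated random sampling finds such a coloring quickly. The following Python sketch (with the illustrative parameters [math]\displaystyle{ n=6 }[/math], [math]\displaystyle{ k=4 }[/math], for which [math]\displaystyle{ {6\choose 4}\cdot 2^{1-6}=15/32\lt 1 }[/math]) does exactly that:

```python
import random
from itertools import combinations

def has_monochromatic_clique(n, k, color):
    """color maps each edge (u, v) with u < v to 0 (red) or 1 (blue)."""
    return any(
        len({color[e] for e in combinations(S, 2)}) == 1
        for S in combinations(range(n), k)
    )

random.seed(1947)
n, k = 6, 4                    # C(6,4) * 2^(1 - C(4,2)) = 15/32 < 1
while True:
    coloring = {e: random.randint(0, 1) for e in combinations(range(n), 2)}
    if not has_monochromatic_clique(n, k, coloring):
        break
print("found a 2-coloring of K_6 with no monochromatic K_4")
```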

For [math]\displaystyle{ k\ge 3 }[/math], take [math]\displaystyle{ n=\lfloor2^{k/2}\rfloor }[/math]; then

[math]\displaystyle{ \begin{align} {n\choose k}\cdot 2^{1-{k\choose 2}} &\lt \frac{n^k}{k!}\cdot\frac{2^{1+\frac{k}{2}}}{2^{k^2/2}}\\ &\le \frac{2^{k^2/2}}{k!}\cdot\frac{2^{1+\frac{k}{2}}}{2^{k^2/2}}\\ &= \frac{2^{1+\frac{k}{2}}}{k!}\\ &\lt 1. \end{align} }[/math]

By the above theorem, there exists a two-coloring of [math]\displaystyle{ K_n }[/math] with no monochromatic [math]\displaystyle{ K_k }[/math]. Therefore, the Ramsey number [math]\displaystyle{ R(k,k)\gt \lfloor2^{k/2}\rfloor }[/math] for all [math]\displaystyle{ k\ge 3 }[/math].
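
The chain of inequalities above can also be confirmed numerically. The following Python sketch (a check of the exact quantity, not part of the proof) evaluates [math]\displaystyle{ {n\choose k}\cdot 2^{1-{k\choose 2}} }[/math] at [math]\displaystyle{ n=\lfloor 2^{k/2}\rfloor }[/math] for small [math]\displaystyle{ k }[/math]:

```python
from math import comb, floor

# Check that C(n, k) * 2^(1 - C(k, 2)) < 1 for n = floor(2^(k/2)),
# which gives the lower bound R(k, k) > floor(2^(k/2)).
for k in range(3, 16):
    n = floor(2 ** (k / 2))
    bound = comb(n, k) * 2.0 ** (1 - comb(k, 2))
    assert bound < 1
    print(f"k={k:2d}  n={n:4d}  C(n,k)*2^(1-C(k,2)) = {bound:.3e}")
```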

Tournament

A tournament on a set [math]\displaystyle{ V }[/math] of [math]\displaystyle{ n }[/math] players is an orientation of the edges of the complete graph on the set of vertices [math]\displaystyle{ V }[/math]. Thus for every two distinct vertices [math]\displaystyle{ u,v }[/math] in [math]\displaystyle{ V }[/math], either [math]\displaystyle{ (u,v)\in E }[/math] or [math]\displaystyle{ (v,u)\in E }[/math], but not both.

We can think of the set [math]\displaystyle{ V }[/math] as a set of [math]\displaystyle{ n }[/math] players in which each pair participates in a single match, where [math]\displaystyle{ (u,v) }[/math] is in the tournament iff player [math]\displaystyle{ u }[/math] beats player [math]\displaystyle{ v }[/math].

Definition
We say that a tournament is [math]\displaystyle{ k }[/math]-paradoxical if for every set of [math]\displaystyle{ k }[/math] players there is a player who beats them all.

Is it true that for every finite [math]\displaystyle{ k }[/math], there is a [math]\displaystyle{ k }[/math]-paradoxical tournament (on more than [math]\displaystyle{ k }[/math] vertices, of course)? This problem was first raised by Schütte, and, as shown by Erdős, it can be settled almost trivially by the probabilistic method.

Theorem (Erdős 1963)
If [math]\displaystyle{ {n\choose k}\left(1-2^{-k}\right)^{n-k}\lt 1 }[/math] then there is a tournament on [math]\displaystyle{ n }[/math] vertices that is [math]\displaystyle{ k }[/math]-paradoxical.
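
The theorem is stated here without proof, but the bound itself is easy to explore. The following Python sketch (exploratory only) finds, for each small [math]\displaystyle{ k }[/math], the smallest [math]\displaystyle{ n }[/math] at which [math]\displaystyle{ {n\choose k}\left(1-2^{-k}\right)^{n-k} }[/math] drops below 1, i.e., the smallest [math]\displaystyle{ n }[/math] for which the theorem guarantees a [math]\displaystyle{ k }[/math]-paradoxical tournament:

```python
from math import comb

def smallest_guaranteed_n(k):
    """Smallest n with C(n, k) * (1 - 2^(-k))^(n - k) < 1; for such n the
    theorem guarantees a k-paradoxical tournament on n vertices."""
    n = k + 1
    while comb(n, k) * (1 - 2.0 ** (-k)) ** (n - k) >= 1:
        n += 1
    return n

for k in range(1, 8):
    print(f"k={k}: a {k}-paradoxical tournament exists on {smallest_guaranteed_n(k)} vertices")
```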

Linearity of expectation

Let [math]\displaystyle{ X }[/math] be a discrete random variable. The expectation of [math]\displaystyle{ X }[/math] is defined as follows.

Definition (Expectation)
The expectation of a discrete random variable [math]\displaystyle{ X }[/math], denoted by [math]\displaystyle{ \mathbf{E}[X] }[/math], is given by
[math]\displaystyle{ \begin{align} \mathbf{E}[X] &= \sum_{x}x\Pr[X=x], \end{align} }[/math]
where the summation is over all values [math]\displaystyle{ x }[/math] in the range of [math]\displaystyle{ X }[/math].

A fundamental fact regarding the expectation is its linearity.

Theorem (Linearity of Expectations)
For any discrete random variables [math]\displaystyle{ X_1, X_2, \ldots, X_n }[/math], and any real constants [math]\displaystyle{ a_1, a_2, \ldots, a_n }[/math],
[math]\displaystyle{ \begin{align} \mathbf{E}\left[\sum_{i=1}^n a_iX_i\right] &= \sum_{i=1}^n a_i\cdot\mathbf{E}[X_i]. \end{align} }[/math]
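
Note that linearity holds with no independence assumption on the [math]\displaystyle{ X_i }[/math]. The following minimal Python sketch (a Monte Carlo illustration with made-up variables, not part of the notes) checks the identity for two perfectly dependent variables:

```python
import random

# Linearity of expectation needs no independence: here X2 = 7 - X1 is completely
# determined by X1, yet E[2*X1 + 3*X2] = 2*E[X1] + 3*E[X2] = 17.5 still holds.
random.seed(0)
trials = 200000
total = 0
for _ in range(trials):
    x1 = random.randint(1, 6)          # a fair die, E[X1] = 3.5
    x2 = 7 - x1                        # fully dependent on X1, E[X2] = 3.5
    total += 2 * x1 + 3 * x2
print(total / trials, 2 * 3.5 + 3 * 3.5)   # empirical average vs exact 17.5
```
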
Hamiltonian paths

The following result of Szele in 1943 is often considered the first use of the probabilistic method.

Theorem (Szele 1943)
There is a tournament on [math]\displaystyle{ n }[/math] players with at least [math]\displaystyle{ n!2^{-(n-1)} }[/math] Hamiltonian paths.
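
The standard argument behind this theorem is an expectation computation: in a uniformly random tournament, each of the [math]\displaystyle{ n! }[/math] orderings of the players forms a directed Hamiltonian path with probability [math]\displaystyle{ 2^{-(n-1)} }[/math], so the expected number of Hamiltonian paths is [math]\displaystyle{ n!2^{-(n-1)} }[/math], and some tournament attains at least this expectation. The following brute-force Python sketch (for a small illustrative [math]\displaystyle{ n }[/math], not Szele's proof) samples random tournaments and reports the best count found, which typically exceeds this average:

```python
import random
from itertools import permutations
from math import factorial

def hamiltonian_path_count(n, beats):
    """beats[u][v] is True iff u beats v; count orderings forming directed paths."""
    return sum(
        all(beats[p[i]][p[i + 1]] for i in range(n - 1))
        for p in permutations(range(n))
    )

random.seed(1943)
n = 6
best = 0
for _ in range(300):
    beats = [[False] * n for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            u_wins = random.random() < 0.5       # orient each edge by a fair coin
            beats[u][v] = u_wins
            beats[v][u] = not u_wins
    best = max(best, hamiltonian_path_count(n, beats))
print("best count found:", best, "  guaranteed average:", factorial(n) / 2 ** (n - 1))
```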

Independent sets

An independent set of a graph is a set of vertices with no edges between them. The following theorem gives a lower bound on the size of the largest independent set.

Theorem
Let [math]\displaystyle{ G(V,E) }[/math] be a graph on [math]\displaystyle{ n }[/math] vertices with [math]\displaystyle{ m\ge n/2 }[/math] edges. Then [math]\displaystyle{ G }[/math] has an independent set with at least [math]\displaystyle{ \frac{n^2}{4m} }[/math] vertices.
Proof.
Let [math]\displaystyle{ S }[/math] be a set of vertices constructed as follows:
For each vertex [math]\displaystyle{ v\in V }[/math]:
  • [math]\displaystyle{ v }[/math] is included in [math]\displaystyle{ S }[/math] independently with probability [math]\displaystyle{ p }[/math],

where [math]\displaystyle{ p }[/math] is a parameter to be determined.

Let [math]\displaystyle{ X=|S| }[/math]. It is obvious that [math]\displaystyle{ \mathbf{E}[X]=np }[/math].

For each edge [math]\displaystyle{ e=uv\in E }[/math], let [math]\displaystyle{ Y_{e} }[/math] be the random variable which indicates whether both endpoints of [math]\displaystyle{ e }[/math], namely [math]\displaystyle{ u }[/math] and [math]\displaystyle{ v }[/math], are in [math]\displaystyle{ S }[/math].

[math]\displaystyle{ \mathbf{E}[Y_{uv}]=\Pr[u\in S\wedge v\in S]=p^2. }[/math]

Let [math]\displaystyle{ Y }[/math] be the number of edges in the subgraph of [math]\displaystyle{ G }[/math] induced by [math]\displaystyle{ S }[/math]. It holds that [math]\displaystyle{ Y=\sum_{e\in E}Y_e }[/math]. By linearity of expectation,

[math]\displaystyle{ \mathbf{E}[Y]=\sum_{e\in E}\mathbf{E}[Y_e]=mp^2 }[/math].

Note that although [math]\displaystyle{ S }[/math] is not necessarily an independent set, it can be modified into one: for each edge [math]\displaystyle{ e }[/math] of the induced subgraph [math]\displaystyle{ G(S) }[/math], we delete one of the endpoints of [math]\displaystyle{ e }[/math] from [math]\displaystyle{ S }[/math]. Let [math]\displaystyle{ S^* }[/math] be the resulting set. It is obvious that [math]\displaystyle{ S^* }[/math] is an independent set, since there is no edge left in the induced subgraph [math]\displaystyle{ G(S^*) }[/math].

Since there are [math]\displaystyle{ Y }[/math] edges in [math]\displaystyle{ G(S) }[/math], at most [math]\displaystyle{ Y }[/math] vertices are deleted from [math]\displaystyle{ S }[/math] to obtain [math]\displaystyle{ S^* }[/math]. Therefore, [math]\displaystyle{ |S^*|\ge X-Y }[/math]. By linearity of expectation,

[math]\displaystyle{ \mathbf{E}[|S^*|]\ge\mathbf{E}[X-Y]=\mathbf{E}[X]-\mathbf{E}[Y]=np-mp^2. }[/math]

The expectation is maximized when [math]\displaystyle{ p=\frac{n}{2m} }[/math] (a valid probability since [math]\displaystyle{ m\ge n/2 }[/math]), thus

[math]\displaystyle{ \mathbf{E}[|S^*|]\ge n\cdot\frac{n}{2m}-m\left(\frac{n}{2m}\right)^2=\frac{n^2}{4m}. }[/math]

There exists an independent set which contains at least [math]\displaystyle{ \frac{n^2}{4m} }[/math] vertices.

[math]\displaystyle{ \square }[/math]
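
The proof translates directly into a randomized procedure. The following Python sketch (with arbitrary illustrative parameters) samples each vertex with probability [math]\displaystyle{ p=\frac{n}{2m} }[/math], deletes one endpoint of every edge that survives inside the sample, and compares the best independent set found over several runs with the [math]\displaystyle{ \frac{n^2}{4m} }[/math] bound:

```python
import random

def sample_and_delete(n, edges, p):
    """One run of the proof's procedure: keep each vertex with probability p,
    then delete one endpoint of every edge whose endpoints both survive."""
    S = {v for v in range(n) if random.random() < p}
    for u, v in edges:
        if u in S and v in S:
            S.discard(v)               # remove one endpoint; S ends up independent
    return S

random.seed(0)
n = 100
edges = [(u, v) for u in range(n) for v in range(u + 1, n) if random.random() < 0.06]
m = len(edges)
p = n / (2 * m)                        # the choice from the proof (here m >= n/2)
best = max(len(sample_and_delete(n, edges, p)) for _ in range(300))
print(f"m = {m},  bound n^2/(4m) = {n * n / (4 * m):.2f},  best found = {best}")
```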