Combinatorics (Fall 2010)/Existence, the probabilistic method
== Counting arguments ==

;Circuit complexity
This is a fundamental problem in Computer Science.

A '''boolean function''' is a function in the form <math>f:\{0,1\}^n\rightarrow \{0,1\}</math>.


A [http://en.wikipedia.org/wiki/Boolean_circuit Boolean circuit] is a mathematical model of computation.
Formally, a boolean circuit is a directed acyclic graph. Nodes with indegree zero are input nodes, labeled <math>x_1, x_2, \ldots , x_n</math>. A circuit has a unique node with outdegree zero, called the output node. Every other node is a gate. There are three types of gates: AND, OR (both with indegree two), and NOT (with indegree one).


Computations in Turing machines can be simulated by circuits, and any boolean function in P can be computed by a circuit with polynomially many gates. Thus, if we can find a function in NP that cannot be computed by any circuit with polynomially many gates, then NP<math>\neq</math>P.

The '''circuit complexity''' of a boolean function <math>f</math> is the minimum number of gates in a circuit that computes <math>f</math>. The following theorem, due to Shannon, says that functions with exponentially large circuit complexity do exist.

{{Theorem|Theorem (Shannon 1949)|
:There is a boolean function <math>f:\{0,1\}^n\rightarrow \{0,1\}</math> with circuit complexity greater than <math>\frac{2^n}{3n}</math>.
}}
{{Proof|
We first count the number of boolean functions <math>f:\{0,1\}^n\rightarrow \{0,1\}</math>: there are <math>2^{2^n}</math> of them.


Then we count the number of boolean circuits with a fixed number of gates. Fix an integer <math>t</math>; we count the number of circuits with <math>t</math> gates. By [http://en.wikipedia.org/wiki/De_Morgan's_laws De Morgan's laws], we can assume that all NOTs are pushed back to the inputs. Each gate has one of the two types (AND or OR), and has two inputs. Each of the inputs to a gate is either a constant 0 or 1, an input variable <math>x_i</math>, an inverted input variable <math>\neg x_i</math>, or the output of another gate; thus, there are at most <math>2+2n+t-1</math> possible gate inputs. It follows that the number of circuits with <math>t</math> gates is at most <math>2^t(t+2n+1)^{2t}</math>.


If <math>t=2^n/3n</math>, then
:<math>
\frac{2^t(t+2n+1)^{2t}}{2^{2^n}}=o(1)<1,
</math>
thus <math>2^t(t+2n+1)^{2t} < 2^{2^n}</math>.


Each boolean circuit computes one boolean function. Therefore, there must exist a boolean function <math>f</math> which cannot be computed by any circuit with <math>2^n/3n</math> gates.
}}


Note that by Shannon's theorem, not only does there exist a boolean function with exponentially large circuit complexity, but in fact ''almost all'' boolean functions have exponentially large circuit complexity.
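
For concreteness, the counting gap can be checked numerically. Below is a minimal Python sketch (ours; exact integer arithmetic) comparing the upper bound <math>2^t(t+2n+1)^{2t}</math> on the number of circuits with <math>t=\lfloor 2^n/3n\rfloor</math> gates against the number <math>2^{2^n}</math> of boolean functions:
<pre>
# Compare the bound 2^t * (t+2n+1)^(2t) on the number of circuits with
# t gates against 2^(2^n), the number of boolean functions on n inputs.
def circuits_upper_bound(n, t):
    return 2**t * (t + 2*n + 1)**(2*t)

def num_boolean_functions(n):
    return 2**(2**n)

for n in range(4, 10):
    t = 2**n // (3*n)
    assert circuits_upper_bound(n, t) < num_boolean_functions(n)
</pre>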


=== Double counting ===
The double counting principle states the following obvious fact: if the elements of a set are counted in two different ways, the answers are the same.
;Handshaking lemma
The following lemma is a standard demonstration of double counting.
{{Theorem|Handshaking Lemma|
:At a party, the number of guests who shake hands an odd number of times is even.
}}
We model this scenario as an undirected graph <math>G(V,E)</math> with <math>|V|=n</math> standing for the <math>n</math> guests. There is an edge <math>uv\in E</math> if <math>u</math> and <math>v</math> shake hands. Let <math>d(v)</math> be the degree of vertex <math>v</math>, which represents the number of times that <math>v</math> shakes hands. The handshaking lemma states that in any undirected graph, the number of vertices whose degrees are odd is even. It is sufficient to show that the sum of the odd degrees is even.


The handshaking lemma is a direct consequence of the following lemma, which was proved by Euler in his 1736 paper on the [http://en.wikipedia.org/wiki/Seven_Bridges_of_K%C3%B6nigsberg Seven Bridges of Königsberg], the paper that began the study of graph theory.
 
{{Theorem|Lemma (Euler 1736)|
:<math>\sum_{v\in V}d(v)=2|E|</math>
}}
{{Proof|
We count the number of '''directed''' edges. A directed edge is an ordered pair <math>(u,v)</math> such that <math>\{u,v\}\in E</math>. There are two ways to count the directed edges.
 
First, we can enumerate by edges. Pick every edge <math>uv\in E</math> and apply two directions <math>(u,v)</math> and <math>(v,u)</math> to the edge. This gives us <math>2|E|</math> directed edges.
 
On the other hand, we can enumerate by vertices. Pick every vertex <math>v\in V</math> and for each of its <math>d(v)</math> neighbors, say <math>u</math>, generate a directed edge <math>(v,u)</math>. This gives us <math>\sum_{v\in V}d(v)</math> directed edges.
 
It is obvious that the two terms are equal, since we just count the same thing twice with different methods. The lemma follows.
}}
 
The handshaking lemma is implied directly by the above lemma: the total degree <math>2|E|</math> is even and the sum of the even degrees is even, so the sum of the odd degrees is also even, which forces the number of odd-degree vertices to be even.
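
As a quick sanity check, here is a minimal Python sketch (ours) verifying the lemma, and the handshaking lemma, on a random graph:
<pre>
# Verify sum of degrees = 2|E| (Euler's lemma) on a random graph.
import itertools, random

n = 10
edges = [e for e in itertools.combinations(range(n), 2) if random.random() < 0.5]
degree = [0] * n
for u, v in edges:
    degree[u] += 1
    degree[v] += 1
assert sum(degree) == 2 * len(edges)
# Handshaking lemma: the number of odd-degree vertices is even.
assert sum(1 for d in degree if d % 2 == 1) % 2 == 0
</pre>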
 
;Cayley's formula
We now present a theorem on the number of labeled trees on a fixed number of vertices. It is due to [http://en.wikipedia.org/wiki/Arthur_Cayley Cayley] in 1889. The theorem is often referred to by the name [http://en.wikipedia.org/wiki/Cayley's_formula Cayley's formula].
 
{{Theorem|Cayley's formula for trees|
: There are <math>n^{n-2}</math> different trees on <math>n</math> distinct vertices.
}}
 
The theorem has several proofs. Classical methods include a bijection which encodes a tree by a [http://en.wikipedia.org/wiki/Pr%C3%BCfer_sequence Prüfer sequence], and [http://en.wikipedia.org/wiki/Kirchhoff's_matrix_tree_theorem Kirchhoff's matrix tree theorem]. Here we present a proof by double counting, which [http://en.wikipedia.org/wiki/Proofs_from_THE_BOOK Proofs from THE BOOK] considers "the most beautiful of them all".
{{Proof|(Due to Pitman 1999)
 
Let <math>T_n</math> be the number of different trees defined on <math>n</math> distinct vertices.
 
A '''rooted tree''' is a tree with a special vertex. That is, one of the <math>n</math> vertices is marked as the "root" of the tree. A rooted tree defines a natural direction of all edges, such that an edge <math>uv</math> of the tree is directed from <math>u</math> to <math>v</math> if <math>u</math> is before <math>v</math> along the unique path from the root.
 
We count the number of different ''sequences'' of directed edges that can be added to an empty graph on <math>n</math> vertices to form a ''rooted'' tree. We note that such a sequence can be formed in two ways:
# Starting with an unrooted tree, choose one of its vertices as root, and fix a total order of the edges to specify the order in which the edges are added.
# Starting from an empty graph, add the edges one by one in steps.
 
In the first method, we pick one of the <math>T_n</math> unrooted trees, choose one of the <math>n</math> vertices as the root, and pick one of the <math>(n-1)!</math> total orders of the <math>n-1</math> edges. This gives us <math>T_nn(n-1)!=T_nn!</math> ways.
 
In the second method, we consider the number of choices in one step, and multiply the numbers of choices in all steps. This is done as follows.
 
Given a sequence of ''adding'' <math>n-1</math> edges to an empty graph to form a rooted tree, we reverse this sequence and get a sequence of ''removing'' edges one by one from the final rooted tree until no edges are left. We observe that:
* At first, we remove an edge from the rooted tree. Suppose that the root of the tree is <math>r</math>, and the removed directed edge is <math>(u,v)</math>.  After removing <math>(u,v)</math>, the original rooted tree is disconnected into two rooted trees, one rooted at <math>r</math> and the other rooted at <math>v</math>.
* After removing <math>k-1</math> edges, there are <math>k</math> rooted trees. In the <math>k</math>th step, a directed edge <math>(u,v)</math> in the current forest is removed and the tree containing <math>(u,v)</math> is disconnected into two trees, one rooted at the old root of that tree, and the other rooted at <math>v</math>.
 
We now again reverse the above procedure, and consider the sequence of adding directed edges to an empty graph to form a rooted tree.
* At first, we have <math>n</math> rooted trees, each with 0 edges (<math>n</math> isolated vertices).
* After adding <math>n-k</math> edges, there are <math>k</math> rooted trees. Denote the directed edge added next by <math>(u,v)</math>. As observed above, <math>u</math> can be any one of the <math>n</math> vertices, but <math>v</math> must be the root of one of the <math>k</math> trees, except the tree which contains <math>u</math>. There are <math>n(k-1)</math> choices of such <math>(u,v)</math>.
Multiplying the numbers of choices in all steps, the number of sequences of adding directed edges to an empty graph to form a rooted tree is given by
:<math>\prod_{k=2}^nn(k-1)=n^{n-2}n!</math>.
 
By the principle of double counting, counting the same thing by two different methods yields the same result:
:<math>T_nn!=n^{n-2}n!</math>,
which gives that <math>T_n=n^{n-2}</math>.
}}
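
For small <math>n</math>, Cayley's formula can be verified by brute force, using the fact that a graph with <math>n</math> vertices and <math>n-1</math> edges is a tree iff it is connected. A minimal Python sketch (ours):
<pre>
# Brute-force check of T_n = n^(n-2): enumerate all sets of n-1 edges
# and count those that form a tree (n-1 edges + connected <=> tree).
import itertools

def count_labeled_trees(n):
    all_edges = list(itertools.combinations(range(n), 2))
    count = 0
    for edge_set in itertools.combinations(all_edges, n - 1):
        parent = list(range(n))          # union-find for connectivity
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        components = n
        for u, v in edge_set:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                components -= 1
        if components == 1:
            count += 1
    return count

for n in range(2, 7):
    assert count_labeled_trees(n) == n**(n - 2)
</pre>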
 
== The Pigeonhole Principle ==
The '''pigeonhole principle''' states the following "obvious" fact:
:''<math>n+1</math> pigeons cannot sit in <math>n</math> holes so that every pigeon is alone in its hole.''
More generally, the pigeonhole principle can be stated as follows.
{{Theorem|Generalized pigeonhole principle|
:If a set consisting of more than <math>mn</math> objects is partitioned into <math>n</math> classes, then some class receives more than <math>m</math> objects.
}}
 
This is one of the oldest '''non-constructive''' principles: it states only the ''existence'' of a pigeonhole with more than <math>m</math> pigeons and says nothing about how to ''find'' such a pigeonhole.
 
=== Monotonic subsequences ===
Let <math>(a_1,a_2,\ldots,a_n)</math> be a sequence of <math>n</math> distinct real numbers. A '''subsequence''' is a sequence of distinct terms of <math>(a_1,a_2,\ldots,a_n)</math> appearing in the same order in which they appear in <math>(a_1,a_2,\ldots,a_n)</math>. Formally, a subsequence of <math>(a_1,a_2,\ldots,a_n)</math> is a sequence <math>(a_{i_1},a_{i_2},\ldots,a_{i_k})</math>, with <math>i_1<i_2<\cdots<i_k</math>.
 
A sequence <math>(a_1,a_2,\ldots,a_n)</math> is '''increasing''' if <math>a_1<a_2<\cdots<a_n</math>, and '''decreasing''' if <math>a_1>a_2>\cdots>a_n</math>.
 
We are interested in the ''longest'' increasing and decreasing subsequences of a sequence <math>(a_1,a_2,\ldots,a_n)</math>. It is intuitive that the length of the longest increasing subsequence and the length of the longest decreasing subsequence cannot both be small. A famous result of Erdős and Szekeres formally justifies this intuition: it is one of the first results in extremal combinatorics, published in their influential 1935 paper.
 
{{Theorem|Theorem (Erdős-Szekeres 1935)|
:A sequence of more than <math>mn</math> different real numbers must contain either an increasing subsequence of length <math>m+1</math>, or a decreasing subsequence of length <math>n+1</math>.
}}
{{Proof|(due to Seidenberg 1959)
Let <math>(a_1,a_2,\ldots,a_{N})</math> be the original sequence of <math>N>mn</math> distinct real numbers. Associate with each <math>a_i</math> a pair <math>(x_i,y_i)</math>, defined as follows:
*<math>x_i</math>: the length of the longest ''increasing'' subsequence ''ending'' at <math>a_i</math>;
*<math>y_i</math>: the length of the longest ''decreasing'' subsequence ''starting'' at <math>a_i</math>.
A key observation is that <math>(x_i,y_i)\neq (x_j,y_j)</math> whenever <math>i\neq j</math>. To see this, suppose that <math>i<j</math>:
: '''Case 1:''' If <math>a_i<a_j</math>, then the longest increasing subsequence ending at <math>a_i</math> can be extended by appending <math>a_j</math>, so <math>x_i<x_j</math>.
: '''Case 2:'''  If <math>a_i>a_j</math>, then the longest decreasing subsequence starting at <math>a_j</math> can be preceded by <math>a_i</math>, so <math>y_i>y_j</math>.
Now we put <math>N</math> "pigeons" <math>a_1,a_2,\ldots,a_N</math> into "pigeonholes" <math>\{1,2,\ldots,N\}\times\{1,2,\ldots,N\}</math>, such that <math>a_i</math> is put into hole <math>(x_i,y_i)</math>, with at most one pigeon per hole (since distinct <math>a_i</math> have distinct <math>(x_i,y_i)</math>).
 
The number of pigeons is <math>N>mn</math>. By the pigeonhole principle, there must be a pigeon outside the region <math>\{1,2,\ldots,m\}\times\{1,2,\ldots,n\}</math>, which implies that there exists an <math>a_i</math> with either <math>x_i>m</math> or <math>y_i>n</math>. By the definition of <math>(x_i,y_i)</math>, there must be either an increasing subsequence of length <math>m+1</math>, or a decreasing subsequence of length <math>n+1</math>.
}}
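
The pairs <math>(x_i,y_i)</math> from Seidenberg's proof are easy to compute by dynamic programming. A minimal Python sketch (ours), checked on a sequence of <math>5>mn</math> distinct numbers with <math>m=n=2</math>:
<pre>
# x_i: length of the longest increasing subsequence ending at a_i.
# y_i: length of the longest decreasing subsequence starting at a_i.
def es_pairs(a):
    n = len(a)
    x, y = [1] * n, [1] * n
    for i in range(n):
        for j in range(i):
            if a[j] < a[i]:
                x[i] = max(x[i], x[j] + 1)
    for i in reversed(range(n)):
        for j in range(i + 1, n):
            if a[j] < a[i]:
                y[i] = max(y[i], y[j] + 1)
    return list(zip(x, y))

pairs = es_pairs([3, 5, 1, 4, 2])             # 5 > mn for m = n = 2
assert len(set(pairs)) == len(pairs)          # all pairs are distinct
assert any(x > 2 or y > 2 for x, y in pairs)  # monotone subsequence of length 3
</pre>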
 
=== Dirichlet's approximation ===
Let <math>x</math> be an irrational number. We now want to approximate <math>x</math> by a rational number (a fraction).
 
Since every real interval <math>[a,b]</math> with <math>a<b</math> contains infinitely many rational numbers, there must exist rational numbers arbitrarily close to <math>x</math>. The trick is to let the denominator of the fraction be sufficiently large.
 
Suppose however that we restrict the rationals we may select to have denominators bounded by <math>n</math>. How closely can we approximate <math>x</math> now?
 
The following important theorem is due to Dirichlet and his ''Schubfachprinzip'' ("drawer principle"). The theorem is fundamental in number theory and real analysis, but the proof is combinatorial.
 
{{Theorem|Theorem (Dirichlet 1879)|
:Let <math>x</math> be an irrational number. For any natural number <math>n</math>, there is a rational number <math>\frac{p}{q}</math> such that <math>1\le q\le n</math> and
::<math>\left|x-\frac{p}{q}\right|<\frac{1}{nq}</math>.
}}
{{Proof|
Let <math>\{x\}=x-\lfloor x\rfloor</math> denote the '''fractional part''' of the real number <math>x</math>. It is obvious that <math>\{x\}\in[0,1)</math> for any real number <math>x</math>.
 
Consider the <math>n+1</math> numbers <math>\{kx\}</math>, <math>k=1,2,\ldots,n+1</math>. These <math>n+1</math> numbers (pigeons) belong to the following <math>n</math> intervals (pigeonholes):
:<math>\left(0,\frac{1}{n}\right),\left(\frac{1}{n},\frac{2}{n}\right),\ldots,\left(\frac{n-1}{n},1\right)</math>.
Since <math>x</math> is irrational, <math>\{kx\}</math> cannot coincide with any endpoint of the above intervals.
 
By the pigeonhole principle, there exist <math>1\le a<b\le n+1</math>, such that <math>\{ax\},\{bx\}</math> are in the same interval, thus
:<math>|\{bx\}-\{ax\}|<\frac{1}{n}</math>.
Therefore,
:<math>|(b-a)x-\left(\lfloor bx\rfloor-\lfloor ax\rfloor\right)|<\frac{1}{n}</math>.
Let <math>q=b-a</math> and <math>p=\lfloor bx\rfloor-\lfloor ax\rfloor</math>. We have <math>|qx-p|<\frac{1}{n}</math> and <math>1\le q\le n</math>. Dividing both sides by <math>q</math>, the theorem is proved.
}}
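
The proof is effective: computing <math>\{kx\}</math> for <math>k=1,\ldots,n+1</math> and waiting for two values to fall into the same interval yields <math>p</math> and <math>q</math> directly. A minimal Python sketch (ours; floating-point arithmetic stands in for the exact irrational):
<pre>
import math

def dirichlet(x, n):
    # Put {kx}, k = 1..n+1, into n buckets of width 1/n; a collision
    # gives q = b - a and p = floor(b*x) - floor(a*x).
    buckets = {}
    for k in range(1, n + 2):
        b = int((k * x - math.floor(k * x)) * n)   # bucket index of {kx}
        if b in buckets:
            a = buckets[b]
            return math.floor(k * x) - math.floor(a * x), k - a
        buckets[b] = k

p, q = dirichlet(math.sqrt(2), 100)
assert 1 <= q <= 100 and abs(math.sqrt(2) - p / q) < 1 / (100 * q)
</pre>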
 
== The Probabilistic Method ==
Suppose we want to prove the existence of mathematical objects with certain properties. One way to do so is to explicitly construct such an object; proofs of this kind can be interpreted as ''deterministic algorithms'' which find an object with the desired properties. The probabilistic method provides another way of proving the existence of objects: instead of explicitly constructing an object, we define a probability space of objects in which the probability is positive that a randomly selected object has the required property.
 
The basic principle of the probabilistic method is very simple, and can be stated in intuitive ways:
*If an object chosen randomly from a universe satisfies a property with positive probability, then there must be an object in the universe that satisfies that property.
:For example, for a ball (the object) randomly chosen from a box (the universe) of balls, if the probability that the chosen ball is blue (the property) is >0, then there must be a blue ball in the box.
*Any random variable assumes at least one value that is no smaller than its expectation, and at least one value that is no greater than the expectation.
:For example, if we know the average height of the students in the class is <math>\ell</math>, then we know there is a student whose height is at least <math>\ell</math>, and there is a student whose height is at most <math>\ell</math>.
 
Although the idea of the probabilistic method is simple, it provides us with a powerful tool for existence proofs. In some cases, the proof itself is a ''randomized algorithm'', and if we are lucky, the algorithm could be very efficient.
 
===Ramsey number===


Recall the Ramsey theorem, which states that in a meeting of at least six people, there are either three people knowing each other or three people not knowing each other. In graph-theoretic terms, this means that no matter how we color the edges of <math>K_6</math> (the complete graph on six vertices), there must be a '''monochromatic''' <math>K_3</math> (a triangle whose edges have the same color).

A probabilistic argument shows that if <math>{n\choose k}\cdot 2^{1-{k\choose 2}}<1</math>, then there exists a two-coloring of <math>K_n</math> such that there is no monochromatic <math>K_k</math>. Therefore, the Ramsey number <math>R(k,k)>\lfloor2^{k/2}\rfloor</math> for all <math>k\ge 3</math>.


Note that for sufficiently large <math>k</math>, if <math>n= \lfloor 2^{k/2}\rfloor</math>, then the probability that there exists a monochromatic <math>K_k</math> is bounded by
:<math>
{n\choose k}\cdot 2^{1-{k\choose 2}}
<
\frac{2^{1+\frac{k}{2}}}{k!}
\ll 1,
</math>
which means that a random two-coloring of <math>K_n</math> is very likely not to contain a monochromatic <math>K_{2\log n}</math>. This gives us a very simple randomized algorithm for finding a two-coloring of <math>K_n</math> without a monochromatic <math>K_{2\log n}</math>.
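
A minimal Python sketch of this randomized algorithm (parameters ours; the monochromatic-clique check is brute force, hence exponential in <math>k</math>). With <math>n=16</math> and <math>k=8</math>, the failure probability of one trial is below <math>10^{-4}</math>:
<pre>
import itertools, random

def has_mono_clique(color, n, k):
    return any(len({color[e] for e in itertools.combinations(s, 2)}) == 1
               for s in itertools.combinations(range(n), k))

n, k = 16, 8                     # n = 2^(k/2)
while True:
    color = {e: random.randint(0, 1)
             for e in itertools.combinations(range(n), 2)}
    if not has_mono_clique(color, n, k):
        break                    # success is very likely on the first trial
</pre>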
 


===Tournament===
A '''[http://en.wikipedia.org/wiki/Tournament_(graph_theory) tournament]''' on a set <math>V</math> of <math>n</math> players is an '''orientation''' of the edges of the complete graph on the set of vertices <math>V</math>. Thus for every two distinct vertices <math>u,v</math> in <math>V</math>, either <math>(u,v)\in E</math> or <math>(v,u)\in E</math>, but not both.

We can think of the set <math>V</math> as a set of <math>n</math> players in which each pair participates in a single match, where <math>(u,v)</math> is in the tournament iff player <math>u</math> beats player <math>v</math>.

{{Theorem|Definition|
:We say that a tournament is '''<math>k</math>-paradoxical''' if for every set of <math>k</math> players there is a player who beats them all.
}}

Is it true that for every finite <math>k</math> there is a <math>k</math>-paradoxical tournament (on more than <math>k</math> vertices, of course)? This problem was first raised by Schütte, and as shown by Erdős, can be solved almost trivially by the probabilistic method.

{{Theorem|Theorem (Erdős 1963)|
:If <math>{n\choose k}\left(1-2^{-k}\right)^{n-k}<1</math> then there is a tournament on <math>n</math> vertices that is <math>k</math>-paradoxical.
}}
{{Proof|
Consider a uniformly random tournament <math>T</math> on the set <math>V=[n]</math>. For every fixed subset <math>S\in{V\choose k}</math> of <math>k</math> vertices, let <math>A_S</math> be the event defined as follows:
:<math>A_S:\,</math> there is no vertex in <math>V\setminus S</math> that beats all vertices in <math>S</math>.

In a uniformly random tournament, the orientations of the edges are independent. For any <math>u\in V\setminus S</math>,
:<math>\Pr[u\mbox{ beats all }v\in S]=2^{-k}</math>.
Therefore, <math>\Pr[u\mbox{ does not beat all }v\in S]=1-2^{-k}</math> and
:<math>\Pr[A_S]=\prod_{u\in V\setminus S}\Pr[u\mbox{ does not beat all }v\in S]=(1-2^{-k})^{n-k}</math>.

It follows that
:<math>\Pr\left[\bigvee_{S\in{V\choose k}}A_S\right]\le \sum_{S\in{V\choose k}}\Pr[A_S]={n\choose k}(1-2^{-k})^{n-k}<1.</math>
Therefore,
:<math>\Pr[\,T\mbox{ is }k\mbox{-paradoxical }]=\Pr\left[\bigwedge_{S\in{V\choose k}}\overline{A_S}\right]=1-\Pr\left[\bigvee_{S\in{V\choose k}}A_S\right]>0.</math>
There is a <math>k</math>-paradoxical tournament.
}}
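
The proof again suggests a randomized algorithm: sample random tournaments until a <math>k</math>-paradoxical one shows up. A minimal Python sketch (ours) for <math>k=2</math> and <math>n=21</math>, where <math>{21\choose 2}(3/4)^{19}<1</math>:
<pre>
import itertools, random

def random_tournament(n):
    beats = {v: set() for v in range(n)}
    for u, v in itertools.combinations(range(n), 2):
        if random.random() < 0.5:
            beats[u].add(v)
        else:
            beats[v].add(u)
    return beats

def is_paradoxical(beats, n, k):
    # every k-set of players is beaten by some player outside the set
    return all(any(set(s) <= beats[u] for u in range(n) if u not in s)
               for s in itertools.combinations(range(n), k))

n, k = 21, 2
while True:
    t = random_tournament(n)
    if is_paradoxical(t, n, k):
        break        # each trial succeeds with probability > 0.11
</pre>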
;Blocking number
Let <math>S</math> be a set. Let <math>2^{S}=\{A\mid A\subseteq S\}</math> be the power set of <math>S</math>, and let <math>{S\choose k}=\{A\mid A\subseteq S\mbox{ and }|A|=k\}</math> be the family of all '''<math>k</math>-subsets''' of <math>S</math>.

We call <math>\mathcal{F}</math> a '''set family''' (or a '''set system''') with '''ground set''' <math>S</math> if <math>\mathcal{F}\subseteq 2^{S}</math>. The members of <math>\mathcal{F}</math> are subsets of <math>S</math>.

Given a set family <math>\mathcal{F}</math> with ground set <math>S</math>, a set <math>T\subseteq S</math> is a '''blocking set''' of <math>\mathcal{F}</math> if all <math>A\in\mathcal{F}</math> have <math>A\cap T\neq \emptyset</math>, i.e. <math>T</math> intersects (blocks) all member sets of <math>\mathcal{F}</math>.

{{Theorem
|Theorem|
:Given a set family <math>\mathcal{F}\subseteq{S\choose k}</math>, where <math>m=|\mathcal{F}|</math> and <math>n=|S|</math>, <math>\mathcal{F}</math> has a blocking set of size <math>\left\lceil\frac{n\ln m}{k}\right\rceil</math>.
}}
{{Proof| Let <math>\tau=\left\lceil\frac{n\ln m}{k}\right\rceil</math>. Let <math>T</math> be a set chosen uniformly at random from <math>{S\choose \tau}</math>. We show that <math>T</math> is a blocking set of <math>\mathcal{F}</math> with positive probability.

Fix any <math>A\in\mathcal{F}</math>. Recall that <math>\mathcal{F}\subseteq{S\choose k}</math>, thus <math>|A|=k</math>. Then
:<math>
\begin{align}
\Pr[A\cap T=\emptyset]
&=
\frac{\left|{S\setminus A\choose \tau}\right|}{\left|{S\choose \tau}\right|}\\
&=
\frac{{n-k\choose \tau}}{{n\choose\tau}}\\
&=
\frac{(n-k)\cdot(n-k-1)\cdots(n-k-\tau+1)}{n\cdot(n-1)\cdots(n-\tau+1)}\\
&<
\left(1-\frac{k}{n}\right)^{\tau}\\
&\le
\exp\left(-\frac{k\tau}{n}\right)\\
&\le
\frac{1}{m}.
\end{align}
</math>
By the union bound, the probability that there exists an <math>A\in\mathcal{F}</math> that misses <math>T</math> is
:<math>
\Pr[\exists A\in\mathcal{F}, A\cap T=\emptyset]\le m\cdot\Pr[A\cap T=\emptyset]<m\cdot\frac{1}{m}=1.
</math>
Thus, the probability that <math>T</math> is a blocking set is
:<math>
\Pr[\forall A\in\mathcal{F}, A\cap T\neq\emptyset]>0.
</math>
There exists a blocking set of size <math>\tau=\left\lceil\frac{n\ln m}{k}\right\rceil</math>.
}}
The theorem also hints at a randomized algorithm. In order to make the algorithm efficient, we relax the size of <math>T</math> to <math>\tau=\frac{2n\ln m}{k}</math>. Uniformly choose <math>\tau</math> elements from <math>S</math> to form the set <math>T</math>; by the above analysis, the probability that <math>T</math> is NOT a blocking set is at most
:<math>
m\exp\left(-\frac{k\tau}{n}\right)=m\exp(-2\ln m)=\frac{1}{m}.
</math>
Thus, a blocking set is found with high probability.
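
A minimal Python sketch of this randomized algorithm (the example family is ours, chosen so that <math>\tau<n</math>):
<pre>
import math, random

def random_blocking_set(F, S, k):
    m, n = len(F), len(S)
    tau = min(n, math.ceil(2 * n * math.log(m) / k))
    while True:
        T = set(random.sample(sorted(S), tau))
        if all(T & A for A in F):      # T blocks every member of F
            return T                   # failure probability <= 1/m per trial

S = set(range(100))
F = [set(random.sample(sorted(S), 10)) for _ in range(20)]
T = random_blocking_set(F, S, 10)
</pre>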


=== Linearity of expectation ===
Let <math>X</math> be a discrete '''random variable'''. The expectation of <math>X</math> is defined as follows.

{{Theorem
|Definition (Expectation)|
:The '''expectation''' of a discrete random variable <math>X</math>, denoted by <math>\mathbf{E}[X]</math>, is given by
::<math>\begin{align}
\mathbf{E}[X] &= \sum_{x}x\Pr[X=x],
\end{align}</math>
:where the summation is over all values <math>x</math> in the range of <math>X</math>.
}}

A fundamental fact regarding the expectation is its '''linearity'''.

{{Theorem
|Theorem (Linearity of Expectations)|
:For any discrete random variables <math>X_1, X_2, \ldots, X_n</math>, and any real constants <math>a_1, a_2, \ldots, a_n</math>,
::<math>\begin{align}
\mathbf{E}\left[\sum_{i=1}^n a_iX_i\right] &= \sum_{i=1}^n a_i\cdot\mathbf{E}[X_i].
\end{align}</math>
}}

;Maximum cut
Given an undirected graph <math>G(V,E)</math>, a set <math>C</math> of edges of <math>G</math> is called a '''cut''' if <math>G</math> is disconnected after removing the edges in <math>C</math>. We can represent a cut by <math>c(S,T)</math> where <math>(S,T)</math> is a bipartition of the vertex set <math>V</math>, and <math>c(S,T)=\{uv\in E\mid u\in S,v\in T\}</math> is the set of edges crossing between <math>S</math> and <math>T</math>.

We have seen how to compute the min-cut: either by a deterministic max-flow algorithm, or by Karger's randomized algorithm. On the other hand, max-cut is hard to compute, because it is '''NP-complete'''. Actually, the weighted version of max-cut is among [http://en.wikipedia.org/wiki/Karp's_21_NP-complete_problems Karp's 21 NP-complete problems].

We now show by the probabilistic method that a max-cut always has at least half the edges.

{{Theorem
|Theorem|
:Given an undirected graph <math>G</math> with <math>n</math> vertices and <math>m</math> edges, there is a cut of size at least <math>\frac{m}{2}</math>.
}}
{{Proof| Enumerate the vertices in an arbitrary order. Partition the vertex set <math>V</math> into two disjoint sets <math>S</math> and <math>T</math> as follows.
:For each vertex <math>v\in V</math>,
:* independently choose one of <math>S</math> and <math>T</math> with equal probability, and let <math>v</math> join the chosen set.

For each vertex <math>v\in V</math>, let <math>X_v\in\{S,T\}</math> be the random variable which represents the set that <math>v</math> joins. For each edge <math>uv\in E</math>, let <math>Y_{uv}</math> be the 0-1 random variable which indicates whether <math>uv</math> crosses between <math>S</math> and <math>T</math>. Clearly,
:<math>
\Pr[Y_{uv}=1]=\Pr[X_u\neq X_v]=\frac{1}{2}.
</math>

The size of <math>c(S,T)</math> is given by <math>Y=\sum_{uv\in E}Y_{uv}</math>. By the linearity of expectation,
:<math>
\mathbf{E}[Y]=\sum_{uv\in E}\mathbf{E}[Y_{uv}]=\sum_{uv\in E}\Pr[Y_{uv}=1]=\frac{m}{2}.
</math>
Therefore, there exists a bipartition <math>(S,T)</math> of <math>V</math> such that <math>|c(S,T)|\ge\frac{m}{2}</math>, i.e. there exists a cut of <math>G</math> which contains at least <math>\frac{m}{2}</math> edges.
}}
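
The proof again yields a randomized algorithm: sample random bipartitions until the cut has at least <math>\frac{m}{2}</math> edges; since <math>\mathbf{E}[Y]=\frac{m}{2}</math>, every trial succeeds with positive probability. A minimal Python sketch (ours):
<pre>
import random

def random_large_cut(n, edges):
    # Each vertex joins side 0 or 1 with probability 1/2;
    # the expected number of crossing edges is m/2.
    m = len(edges)
    while True:
        side = [random.randint(0, 1) for _ in range(n)]
        cut = [(u, v) for u, v in edges if side[u] != side[v]]
        if len(cut) >= m / 2:
            return side, cut

# a 5-cycle: m/2 = 2.5, so the returned cut has at least 3 (in fact 4) edges
side, cut = random_large_cut(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])
</pre>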


 
;Maximum satisfiability
Suppose that we have a number of boolean variables <math>x_1,x_2,\ldots\in\{\mathrm{true},\mathrm{false}\}</math>. A '''literal''' is either a variable <math>x_i</math> itself or its negation <math>\neg x_i</math>. A logic expression is in '''conjunctive normal form (CNF)''' if it is written as the conjunction (AND) of a set of '''clauses''', where each clause is a disjunction (OR) of literals. For example:
:<math>
(x_1\vee \neg x_2 \vee \neg x_3)\wedge (\neg x_1\vee \neg x_3)\wedge (x_1\vee x_2\vee x_4)\wedge (x_4\vee \neg x_3)\wedge (x_4\vee \neg x_1).
</math>

The satisfiability (SAT) problem asks whether a CNF is satisfiable, i.e. whether there exists an assignment of variables to the values true and false so that all clauses are true. Maximum satisfiability (MAXSAT) is the optimization version of SAT, which asks for an assignment maximizing the number of satisfied clauses.

SAT is the first problem known to be '''NP-complete''' (the Cook-Levin theorem). MAXSAT is also '''NP-complete'''. We now see that there always exists a reasonably good truth assignment: one which satisfies at least half the clauses.

{{Theorem
|Theorem|
:For any set of <math>m</math> clauses, there is a truth assignment that satisfies at least <math>\frac{m}{2}</math> clauses.
}}
{{Proof| For each variable, independently assign a random value in <math>\{\mathrm{true},\mathrm{false}\}</math> with equal probability. For the <math>i</math>th clause, let <math>X_i</math> be the random variable which indicates whether the <math>i</math>th clause is satisfied. Suppose that there are <math>k</math> literals in the clause; the clause is unsatisfied only if every one of its literals evaluates to false, so the probability that the clause is satisfied is
:<math>\Pr[X_i=1]\ge 1-2^{-k}\ge\frac{1}{2}</math>.

Let <math>X=\sum_{i=1}^m X_i</math> be the number of satisfied clauses. By the linearity of expectation,
:<math>
\mathbf{E}[X]=\sum_{i=1}^{m}\mathbf{E}[X_i]\ge \frac{m}{2}.
</math>
Therefore, there exists an assignment such that at least <math>\frac{m}{2}</math> clauses are satisfied.
}}
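
The same recipe gives a randomized algorithm for finding such an assignment. A minimal Python sketch (ours; encoding a literal as <math>+i</math> for <math>x_i</math> and <math>-i</math> for <math>\neg x_i</math>), run on the CNF example above:
<pre>
import random

def random_half_assignment(num_vars, clauses):
    # Assign each variable uniformly at random; in expectation at least
    # half the clauses are satisfied, so retry until that many hold.
    while True:
        assign = [random.choice([False, True]) for _ in range(num_vars + 1)]
        sat = [c for c in clauses
               if any(assign[l] if l > 0 else not assign[-l] for l in c)]
        if 2 * len(sat) >= len(clauses):
            return assign, sat

clauses = [(1, -2, -3), (-1, -3), (1, 2, 4), (4, -3), (4, -1)]
assign, sat = random_half_assignment(4, clauses)
assert len(sat) >= 3               # at least ceil(5/2) clauses satisfied
</pre>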
;Hamiltonian paths
The following result of Szele in 1943 is often considered the first use of the probabilistic method.

{{Theorem|Theorem (Szele 1943)|
:There is a tournament on <math>n</math> players with at least <math>n!2^{-(n-1)}</math> Hamiltonian paths.
}}
{{Proof|
Consider a uniformly random tournament <math>T</math> on <math>[n]</math>. For any permutation <math>\pi</math> of <math>[n]</math>, let <math>X_{\pi}</math> be the indicator random variable defined as
:<math>X_{\pi}=\begin{cases}
1 & \forall i\in\{1,2,\ldots,n-1\},\, (\pi_i,\pi_{i+1})\in T,\\
0 & \mbox{otherwise}.
\end{cases}</math>
In other words, <math>X_{\pi}</math> indicates whether <math>\pi_1\rightarrow\pi_2\rightarrow\cdots\rightarrow\pi_{n}</math> gives a Hamiltonian path. It holds that
:<math>\mathrm{E}[X_\pi]=1\cdot\Pr[X_\pi=1]+0\cdot\Pr[X_\pi=0]=\Pr[\forall i,\, (\pi_i,\pi_{i+1})\in T]=2^{-(n-1)}.</math>

Let <math>X=\sum_{\pi:\text{permutation of }[n]}X_\pi\,</math>. Clearly <math>X</math> is the number of Hamiltonian paths in the tournament <math>T</math>.
Due to the linearity of expectation,
:<math>\mathrm{E}[X]=\mathrm{E}\left[\sum_{\pi:\text{permutation of }[n]}X_\pi\right]=\sum_{\pi:\text{permutation of }[n]}\mathrm{E}[X_\pi]=n!2^{-(n-1)}.</math>
This is the average number of Hamiltonian paths in a tournament, where the average is taken over all tournaments.
Thus some tournament has at least <math>n!2^{-(n-1)}</math> Hamiltonian paths.
}}
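
For <math>n=4</math>, the average can be verified exhaustively: over all <math>2^6=64</math> tournaments the average number of Hamiltonian paths is <math>4!\cdot 2^{-3}=3</math>. A minimal Python sketch (ours):
<pre>
# Average the number of Hamiltonian paths over all tournaments on 4 vertices.
import itertools, math

n = 4
pairs = list(itertools.combinations(range(n), 2))
total = 0
for bits in itertools.product([0, 1], repeat=len(pairs)):
    beats = set()
    for (u, v), b in zip(pairs, bits):
        beats.add((u, v) if b else (v, u))
    total += sum(1 for p in itertools.permutations(range(n))
                 if all((p[i], p[i + 1]) in beats for i in range(n - 1)))
assert total / 2 ** len(pairs) == math.factorial(n) / 2 ** (n - 1)
</pre>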


== Alterations ==
===Independent sets===
An independent set of a graph is a set of vertices with no edges between them. The following theorem gives a lower bound on the size of the largest independent set.
{{Theorem
|Theorem|
:A graph <math>G(V,E)</math> with <math>n</math> vertices and <math>m\ge n/2</math> edges has an independent set of size at least <math>\frac{n^2}{4m}</math>.
}}
{{Proof| Let <math>d=\frac{2m}{n}\ge 1</math> be the average degree. Construct a random subset <math>S\subseteq V</math> by including each vertex in <math>S</math> independently with probability <math>\frac{1}{d}</math>. By the linearity of expectation, the expected number of vertices in <math>S</math> is <math>\frac{n}{d}</math>, and the expected number of edges with both endpoints in <math>S</math> is <math>m\cdot\frac{1}{d^2}=\frac{n}{2d}</math>. For each such edge, remove one of its endpoints from <math>S</math>; the resulting set <math>S^*</math> is an independent set, and
:<math>\mathbf{E}[|S^*|]\ge \frac{n}{d}-\frac{n}{2d}=\frac{n}{2d}=\frac{n^2}{4m}</math>.
Therefore, there exists an independent set of size at least <math>\frac{n^2}{4m}</math>.
}}


The proof actually proposes a randomized algorithm for constructing a large independent set:

{{Theorem
|Algorithm|
Given a graph on <math>n</math> vertices with <math>m</math> edges, let <math>d=\frac{2m}{n}</math> be the average degree.
#For each vertex <math>v\in V</math>, <math>v</math> is included in <math>S</math> independently with probability <math>\frac{1}{d}</math>.
#For each remaining edge in the induced subgraph <math>G(S)</math>, remove one of the endpoints from <math>S</math>.
}}

Let <math>S^*</math> be the resulting set. We have shown that <math>S^*</math> is an independent set and <math>\mathbf{E}[|S^*|]\ge\frac{n^2}{4m}</math>.
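
A minimal Python sketch of the algorithm (ours; assuming average degree <math>d\ge 1</math>):
<pre>
import random

def alteration_independent_set(n, edges):
    d = 2 * len(edges) / n             # average degree; assume d >= 1
    S = {v for v in range(n) if random.random() < 1 / d}
    for u, v in edges:                 # alteration step
        if u in S and v in S:
            S.discard(v)               # remove one endpoint of the edge
    return S                           # independent; E[|S|] >= n^2/(4m)

S = alteration_independent_set(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)])
</pre>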


===Intersecting families===
A family <math>\mathcal{F}\subseteq 2^S</math> is an '''intersecting family''' if for any <math>A,B\in\mathcal{F}</math> it holds that <math>A\cap B\neq\emptyset</math>.
 
Suppose that <math>n\ge 2k</math>. For <math>\mathcal{F}\subseteq{S\choose k}</math>, where <math>|S|=n</math>, we can let all <math>A\in \mathcal{F}</math> contain one common element <math>a\in S</math>, with <math>A\setminus\{a\}</math> ranging over all <math>{n-1\choose k-1}</math> possible combinations of <math>k-1</math> elements of <math>S\setminus\{a\}</math>. This gives us an intersecting family of size <math>|\mathcal{F}|={n-1\choose k-1}</math>.
 
The following theorem says that this is the largest possible cardinality an intersecting <math>\mathcal{F}</math> can achieve. The theorem was first proved by Erdős, Ko, and Rado in 1938, but published 23 years later. It is a fundamental result in the area of extremal set theory, which studies the maximum (or minimum) possible cardinality of a set system satisfying certain structural assumptions. In this example, the structural assumption is that the family is intersecting.
 
Here we present a probabilistic proof by Katona.
 
{{Theorem
|Theorem (Erdős-Ko-Rado 1961)|
:Let <math>\mathcal{F}\subseteq{S\choose k}</math>, where <math>|S|=n</math> and <math>n\ge 2k</math>. If <math>\mathcal{F}</math> is an intersecting family then <math>|\mathcal{F}|\le{n-1\choose k-1}</math>.
}}
{{Proof| (due to Katona 1972).
 
Without loss of generality, let <math>S=[n]</math>.
For <math>i\in[n]</math>, let <math>A_i=\{(i+j)\bmod n\mid j\in[k]\}</math>. Then we make the following claim.
 
:'''Claim 1:''' <math>\mathcal{F}</math> can contain at most <math>k</math> of the sets <math>A_i</math>.
To see this, note that <math>A_i</math> and <math>A_j</math> intersect only if the cyclic distance between <math>i</math> and <math>j</math> is less than <math>k</math>. Fix any <math>A_i\in\mathcal{F}</math>; every other <math>A_j\in\mathcal{F}</math> must then be one of the <math>2k-2</math> sets with <math>0<|i-j|<k</math> (cyclically). These <math>2k-2</math> sets can be grouped into the <math>k-1</math> pairs <math>(A_{i-s},A_{i+k-s})</math> for <math>1\le s\le k-1</math>, and the two sets in each pair are disjoint (here we use <math>n\ge 2k</math>), so <math>\mathcal{F}</math> contains at most one set from each pair. Hence <math>\mathcal{F}</math> contains at most <math>1+(k-1)=k</math> of the <math>A_i</math>.
 
Now we prove the Erdős-Ko-Rado theorem. Let a permutation <math>\sigma</math> of <math>[n]</math> and an integer <math>i\in[n]</math> be chosen uniformly and independently at random. Let
:<math>R=\{\sigma((i+j)\bmod n)\mid j\in[k]\}, \quad\mbox{ or equivalently }R=\sigma(A_i)</math>.
By Claim 1, for any fixed permutation <math>\sigma</math>, the family <math>\mathcal{F}</math> can contain at most <math>k</math> of the sets <math>\sigma(A_i)</math>, thus conditioning on any particular <math>\sigma</math>, <math>\Pr[R\in\mathcal{F}\mid \sigma]\le\frac{k}{n}</math>. Hence
:<math>
\Pr[R\in\mathcal{F}]\le\frac{k}{n}.
</math>
On the other hand, by our construction, <math>R</math> is uniformly chosen from <math>{S\choose k}</math>, thus
:<math>
\Pr[R\in\mathcal{F}]=\frac{|\mathcal{F}|}{{n\choose k}}.
</math>
Therefore,
:<math>
|\mathcal{F}|\le\frac{k}{n}{n\choose k}={n-1\choose k-1}.
</math>
}}
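
For tiny parameters the theorem can be checked exhaustively: for <math>n=4</math> and <math>k=2</math>, the largest intersecting family of <math>2</math>-subsets of <math>[4]</math> has size <math>{3\choose 1}=3</math>. A minimal Python sketch (ours):
<pre>
# Exhaustive check of Erdos-Ko-Rado for n = 4, k = 2.
import itertools

sets = [frozenset(A) for A in itertools.combinations(range(4), 2)]
best = 0
for r in range(len(sets) + 1):
    for fam in itertools.combinations(sets, r):
        if all(A & B for A, B in itertools.combinations(fam, 2)):
            best = max(best, r)
assert best == 3
</pre>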
 
== The Lovász Local Lemma ==
 
Consider a set of "bad" events <math>A_1,A_2,\ldots,A_n</math>. Suppose that <math>\Pr[A_i]\le p</math> for all <math>1\le i\le n</math>. We want to show that there is a situation in which none of the bad events occurs. By the probabilistic method, it suffices to prove that
:<math>
\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]>0.
</math>
;Case 1<nowiki>: mutually independent events.</nowiki>
If all the bad events <math>A_1,A_2,\ldots,A_n</math> are mutually independent, then
:<math>
\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]\ge(1-p)^n>0,
</math>
for any <math>p<1</math>.
 
;Case 2<nowiki>: arbitrarily dependent events.</nowiki>
On the other hand, if we put no assumption on the dependencies between the events, then by the union bound (which holds unconditionally),
:<math>
\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]=1-\Pr\left[\bigvee_{i=1}^n A_i\right]\ge 1-np,
</math>
which is not an interesting bound for <math>p\ge\frac{1}{n}</math>. If we make no further assumption on the dependencies between the events, this bound is tight.
 
;Example
:Consider that a ball is uniformly thrown into one of the <math>(n+1)</math> bins. Let the "bad" events <math>A_1,A_2,\ldots,A_n</math> be defined as that <math>A_i</math> represents that the ball falls into the <math>i</math>th bin. The only good event is that the ball falls into the <math>(n+1)</math>th bin. Clearly, <math>\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]=1-n\cdot\frac{1}{n+1}</math>. Thus the above union bound is achieved.
 
This example shows that dependencies between the events can cause trouble.
 
 
We would like to know what happens between the two extreme cases: mutually independent events, and arbitrarily dependent events. The Lovász local lemma provides such a tool.
 
=== The local lemma ===
The local lemma is a powerful tool for showing the possibility of rare events under limited dependencies. The structure of dependencies between a set of events is described by a '''dependency graph'''.
 
{{Theorem
|Definition|
:Let <math>A_1,A_2,\ldots,A_n</math> be a set of events. A graph <math>D=(V,E)</math> on the set of vertices <math>V=\{1,2,\ldots,n\}</math> is called a '''dependency graph''' for the events <math>A_1,\ldots,A_n</math> if for each <math>i</math>, <math>1\le i\le n</math>, the event <math>A_i</math> is mutually independent of all the events <math>\{A_j\mid (i,j)\not\in E, j\neq i\}</math>.
}}
 
;Example
:Let <math>X_1,X_2,\ldots,X_m</math> be a set of ''mutually independent'' random variables. Each event <math>A_i</math> is a predicate defined on a number of variables among <math>X_1,X_2,\ldots,X_m</math>. Let <math>v(A_i)</math> be the unique smallest set of variables which determine <math>A_i</math>. The dependency graph <math>D=(V,E)</math> is defined by
:::<math>(i,j)\in E</math> iff <math>v(A_i)\cap v(A_j)\neq \emptyset</math>.
 
This construction gives a general framework for probability spaces with limited dependencies and is central to the constructive proof of the Lovász local lemma. In this example, each event is a predicate of variables, and two events are dependent if they depend on some common variables.
 
 
The following lemma, known as the Lovász local lemma, first proved by Erdős and Lovász in 1975, is an extremely powerful tool, as it supplies a way of dealing with rare events.
 
{{Theorem
|Theorem (The local lemma: general case)|
:Let <math>A_1,A_2,\ldots,A_n</math> be a set of events. Suppose that <math>D=(V,E)</math> is a dependency graph for the events and suppose there are real numbers <math>x_1,x_2,\ldots, x_n</math> such that <math>0\le x_i<1</math> and for all <math>1\le i\le n</math>,
::<math>\Pr[A_i]\le x_i\prod_{(i,j)\in E}(1-x_j)</math>.
:Then
::<math>\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]\ge\prod_{i=1}^n(1-x_i)</math>.
}}
 
The following is a special case, the symmetric version of the Lovász local lemma.
{{Theorem
|Theorem (The local lemma: symmetric case)|
:Let <math>A_1,A_2,\ldots,A_n</math> be a set of events, and assume that the following hold:
:#for all <math>1\le i\le n</math>, <math>\Pr[A_i]\le p</math>;
:#the maximum degree of the dependency graph for the events <math>A_1,A_2,\ldots,A_n</math> is <math>d</math>, and
:::<math>ep(d+1)\le 1</math>.
:Then
::<math>\Pr\left[\bigwedge_{i=1}^n\overline{A_i}\right]>0</math>.
}}

== References ==
:('''Disclaimer:''' The following copyrighted materials are meant for educational uses only.)

* Aigner and Ziegler. ''Proofs from THE BOOK, 4th Edition.'' Springer-Verlag. [[media:PFTB_chap25.pdf|Chapter 25]] and [[media:PFTB_chap30.pdf|Chapter 30]].
* Alon and Spencer. ''The Probabilistic Method, 3rd Edition.'' Wiley, 2008. [[media:TPM_Chap1.pdf|Chapter 1]], [[media:TPM_Chap2.pdf|Chapter 2]], and [[media:TPM_Chap3.pdf|Chapter 3]].

Latest revision as of 04:53, 7 October 2010

Counting arguments

Circuit complexity

This is a fundamental problem in in Computer Science.

A boolean function is a function in the form [math]\displaystyle{ f:\{0,1\}^n\rightarrow \{0,1\} }[/math].

Boolean circuit is a mathematical model of computation. Formally, a boolean circuit is a directed acyclic graph. Nodes with indegree zero are input nodes, labeled [math]\displaystyle{ x_1, x_2, \ldots , x_n }[/math]. A circuit has a unique node with outdegree zero, called the output node. Every other node is a gate. There are three types of gates: AND, OR (both with indegree two), and NOT (with indegree one).

Computations in Turing machines can be simulated by circuits, and any boolean function in P can be computed by a circuit with polynomially many gates. Thus, if we can find a function in NP that cannot be computed by any circuit with polynomially many gates, then NP[math]\displaystyle{ \neq }[/math]P.

The following theorem due to Shannon says that functions with exponentially large circuit complexity do exist.

Theorem (Shannon 1949)
There is a boolean function [math]\displaystyle{ f:\{0,1\}^n\rightarrow \{0,1\} }[/math] with circuit complexity greater than [math]\displaystyle{ \frac{2^n}{3n} }[/math].
Proof.

We first count the number of boolean functions [math]\displaystyle{ f:\{0,1\}^n\rightarrow \{0,1\} }[/math]. There are [math]\displaystyle{ 2^{2^n} }[/math] boolean functions [math]\displaystyle{ f:\{0,1\}^n\rightarrow \{0,1\} }[/math].

Then we count the number of boolean circuit with fixed number of gates. Fix an integer [math]\displaystyle{ t }[/math], we count the number of circuits with [math]\displaystyle{ t }[/math] gates. By the De Morgan's laws, we can assume that all NOTs are pushed back to the inputs. Each gate has one of the two types (AND or OR), and has two inputs. Each of the inputs to a gate is either a constant 0 or 1, an input variable [math]\displaystyle{ x_i }[/math], an inverted input variable [math]\displaystyle{ \neg x_i }[/math], or the output of another gate; thus, there are at most [math]\displaystyle{ 2+2n+t-1 }[/math] possible gate inputs. It follows that the number of circuits with [math]\displaystyle{ t }[/math] gates is at most [math]\displaystyle{ 2^t(t+2n+1)^{2t} }[/math].

If [math]\displaystyle{ t=2^n/3n }[/math], then

[math]\displaystyle{ \frac{2^t(t+2n+1)^{2t}}{2^{2^n}}=o(1)\lt 1, }[/math] thus, [math]\displaystyle{ 2^t(t+2n+1)^{2t} \lt 2^{2^n}. }[/math]

Each boolean circuit computes one boolean function. Therefore, there must exist a boolean function [math]\displaystyle{ f }[/math] which cannot be computed by any circuits with [math]\displaystyle{ 2^n/3n }[/math] gates.

[math]\displaystyle{ \square }[/math]

Note that by Shannon's theorem, not only there exists a boolean function with exponentially large circuit complexity, but almost all boolean functions have exponentially large circuit complexity.

Double counting

The double counting principle states the following obvious fact: if the elements of a set are counted in two different ways, the answers are the same.

Handshaking lemma

The following lemma is a standard demonstration of double counting.

Handshaking Lemma
At a party, the number of guests who shake hands an odd number of times is even.

We model this scenario as an undirected graph [math]\displaystyle{ G(V,E) }[/math] with [math]\displaystyle{ |V|=n }[/math] standing for the [math]\displaystyle{ n }[/math] guests. There is an edge [math]\displaystyle{ uv\in E }[/math] if [math]\displaystyle{ u }[/math] and [math]\displaystyle{ v }[/math] shake hands. Let [math]\displaystyle{ d(v) }[/math] be the degree of vertex [math]\displaystyle{ v }[/math], which represents the number of times that [math]\displaystyle{ v }[/math] shakes hand. The handshaking lemma states that in any undirected graph, the number of vertices whose degrees are odd is even. It is sufficient to show that the sum of odd degrees is even.

The handshaking lemma is a direct consequence of the following lemma, which is proved by Euler in his 1736 paper on Seven Bridges of Königsberg that began the study of graph theory.

Lemma (Euler 1736)
[math]\displaystyle{ \sum_{v\in V}d(v)=2|E| }[/math]
Proof.

We count the number of directed edges. A directed edge is an ordered pair [math]\displaystyle{ (u,v) }[/math] such that [math]\displaystyle{ \{u,v\}\in E }[/math]. There are two ways to count the directed edges.

First, we can enumerate by edges. Pick every edge [math]\displaystyle{ uv\in E }[/math] and apply two directions [math]\displaystyle{ (u,v) }[/math] and [math]\displaystyle{ (v,u) }[/math] to the edge. This gives us [math]\displaystyle{ 2|E| }[/math] directed edges.

On the other hand, we can enumerate by vertices. Pick every vertex [math]\displaystyle{ v\in V }[/math] and for each of its [math]\displaystyle{ d(v) }[/math] neighbors, say [math]\displaystyle{ u }[/math], generate a directed edge [math]\displaystyle{ (v,u) }[/math]. This gives us [math]\displaystyle{ \sum_{v\in V}d(v) }[/math] directed edges.

It is obvious that the two terms are equal, since we just count the same thing twice with different methods. The lemma follows.

[math]\displaystyle{ \square }[/math]

The handshaking lemma is implied directly by the above lemma, since the sum of even degrees is even.

Cayley's formula

We now present a theorem of the number of labeled trees on a fixed number of vertices. It is due to Cayley in 1889. The theorem is often referred by the name Cayley's formula.

Caylay's formula for trees
There are [math]\displaystyle{ n^{n-2} }[/math] different trees on [math]\displaystyle{ n }[/math] distinct vertices.

The theorem has several proofs. Classical methods include the bijection which encodes a tree by a Prüfer sequence, and through the Kirchhoff's matrix tree theorem. Here we present a proof by double counting, which is considered by the Proofs from THE BOOK "the most beautiful of them all".

Proof.
(Due to Pitman 1999)

Let [math]\displaystyle{ T_n }[/math] be the number of different trees defined on [math]\displaystyle{ n }[/math] distinct vertices.

A rooted tree is a tree with a special vertex. That is, one of the [math]\displaystyle{ n }[/math] vertices is marked as the "root" of the tree. A rooted tree defines a natural direction of all edges, such that an edge [math]\displaystyle{ uv }[/math] of the tree is directed from [math]\displaystyle{ u }[/math] to [math]\displaystyle{ v }[/math] if [math]\displaystyle{ u }[/math] is before [math]\displaystyle{ v }[/math] along the unique path from the root.

We count the number of different sequences of directed edges that can be added to an empty graph on [math]\displaystyle{ n }[/math] vertices to form from it a rooted tree. We note that such a sequence can be formed in two ways:

  1. Starting with an unrooted tree, choose one of its vertices as root, and fix an total order of edges to specify the order in which the edges are added.
  2. Starting from an empty graph, add the edges one by one in steps.

In the first method, we pick one of the [math]\displaystyle{ T_n }[/math] unrooted trees, choose one of the [math]\displaystyle{ n }[/math] vertices as the root, and pick one of the [math]\displaystyle{ (n-1)! }[/math] total orders of the [math]\displaystyle{ n-1 }[/math] edges. This gives us [math]\displaystyle{ T_nn(n-1)!=T_nn! }[/math] ways.

In the second method, we consider the number of choices in one step, and multiply the numbers of choices in all steps. This is done as follows.

Given a sequence of adding [math]\displaystyle{ n-1 }[/math] edges to an empty graph to form a rooted tree, we reverse this sequence and get a sequence of removing edges one by one from the final rooted tree until no edge left. We observe that:

  • At first, we remove an edge from the rooted tree. Suppose that the root of the tree is [math]\displaystyle{ r }[/math], and the removed directed edge is [math]\displaystyle{ (u,v) }[/math]. After removing [math]\displaystyle{ (u,v) }[/math], the original rooted tree is disconnected into two rooted trees, one rooted at [math]\displaystyle{ r }[/math] and the other rooted at [math]\displaystyle{ v }[/math].
  • After removing [math]\displaystyle{ k-1 }[/math] edges, there are [math]\displaystyle{ k }[/math] rooted trees. In the [math]\displaystyle{ k }[/math]th step, a directed edge [math]\displaystyle{ (u,v) }[/math] in the current forest is removed and the tree containing [math]\displaystyle{ (u,v) }[/math] is disconnected into two trees, one rooted at the old root of that tree, and the other rooted at [math]\displaystyle{ v }[/math].

We now again reverse the above procedure, and consider the sequence of adding directed edges to an empty graph to form a rooted tree.

  • At first, we have [math]\displaystyle{ n }[/math] rooted trees, each of 0 edge ([math]\displaystyle{ n }[/math] isolated vertices).
  • After adding [math]\displaystyle{ n-k }[/math] edges, there are [math]\displaystyle{ k }[/math] rooted trees. Denoting the directed edge added next as [math]\displaystyle{ (u,v) }[/math]. As observed above, [math]\displaystyle{ u }[/math] can be any one of the [math]\displaystyle{ n }[/math] vertices; but [math]\displaystyle{ v }[/math] must be the root of one of the [math]\displaystyle{ k }[/math] trees, except the tree which contains [math]\displaystyle{ u }[/math]. There are [math]\displaystyle{ n(k-1) }[/math] choices of such [math]\displaystyle{ (u,v) }[/math].

Multiplying the numbers of choices in all steps, the number of sequences of adding directed edges to an empty graph to form a rooted tree is given by

[math]\displaystyle{ \prod_{k=2}^nn(k-1)=n^{n-2}n! }[/math].

By the principle of double counting, counting the same thing by different methods yield the same result.

[math]\displaystyle{ T_nn!=n^{n-2}n! }[/math],

which gives that [math]\displaystyle{ T_n=n^{n-2} }[/math].

[math]\displaystyle{ \square }[/math]

The Pigeonhole Principle

The pigeonhole principle states the following "obvious" fact:

[math]\displaystyle{ n+1 }[/math] pigeons cannot sit in [math]\displaystyle{ n }[/math] holes so that every pigeon is alone in its hole.

More generally, the pigeonhole principle states as the following.

Generalized pigeonhole principle
If a set consisting of more than [math]\displaystyle{ mn }[/math] objects is partitioned into [math]\displaystyle{ n }[/math] classes, then some class receives more than [math]\displaystyle{ m }[/math] objects.

This is one of the oldest non-constructive principles: it states only the existence of a pigeonhole with more than [math]\displaystyle{ m }[/math] pigeons and says nothing about how to find such a pigeonhole.

Monotonic subsequences

Let [math]\displaystyle{ (a_1,a_2,\ldots,a_n) }[/math] be a sequence of [math]\displaystyle{ n }[/math] distinct real numbers. A subsequence is a sequence of distinct terms of [math]\displaystyle{ (a_1,a_2,\ldots,a_n) }[/math] appearing in the same order in which they appear in [math]\displaystyle{ (a_1,a_2,\ldots,a_n) }[/math]. Formally, a subsequence of [math]\displaystyle{ (a_1,a_2,\ldots,a_n) }[/math] is an [math]\displaystyle{ (a_{i_1},a_{i_2},\ldots,a_{i_k}) }[/math], with [math]\displaystyle{ i_1\lt i_2\lt \cdots\lt i_k }[/math].

A sequence [math]\displaystyle{ (a_1,a_2,\ldots,a_n) }[/math] is increasing if [math]\displaystyle{ a_1\lt a_2\lt \cdots\lt a_n }[/math], and decreasing if [math]\displaystyle{ a_1\gt a_2\gt \cdots\gt a_n }[/math].

We are interested in the longest increasing and decreasing subsequences of an [math]\displaystyle{ a_1\lt a_2\lt \cdots\lt a_n }[/math]. It is intuitive that the length of both the longest increasing subsequence and the longest decreasing subsequence cannot be small simultaneously. A famous result of Erdős and Szekeres formally justifies this intuition. This is one of the first results in extremal combinatorics, published in the influential 1935 paper of Erdős and Szekeres.

Theorem (Erdős-Szekeres 1935)
A sequence of more than [math]\displaystyle{ mn }[/math] different real numbers must contain either an increasing subsequence of length [math]\displaystyle{ m+1 }[/math], or a decreasing subsequence of length [math]\displaystyle{ n+1 }[/math].
Proof.
(due to Seidenberg 1959)

Let [math]\displaystyle{ (a_1,a_2,\ldots,a_{N}) }[/math] be the original sequence of [math]\displaystyle{ N\gt mn }[/math] distinct real numbers. Associate each [math]\displaystyle{ a_i }[/math] a pair [math]\displaystyle{ (x_i,y_i) }[/math], defined as:

  • [math]\displaystyle{ x_i }[/math]: the length of the longest increasing subsequence ending at [math]\displaystyle{ a_i }[/math];
  • [math]\displaystyle{ y_i }[/math]: the length of the longest decreasing subsequence starting at [math]\displaystyle{ a_i }[/math].

A key observation is that [math]\displaystyle{ (x_i,y_i)\neq (x_j,y_j) }[/math] whenever [math]\displaystyle{ i\neq j }[/math]. This is proved as follows:

Case 1: If [math]\displaystyle{ a_i\lt a_j }[/math], then the longest increasing subsequence ending at [math]\displaystyle{ a_i }[/math] can be extended by adding on [math]\displaystyle{ a_j }[/math], so [math]\displaystyle{ x_i\lt x_j }[/math].
Case 2: If [math]\displaystyle{ a_i\gt a_j }[/math], then the longest decreasing subsequence starting at [math]\displaystyle{ a_j }[/math] can be preceded by [math]\displaystyle{ a_i }[/math], so [math]\displaystyle{ y_i\gt y_j }[/math].

Now we put [math]\displaystyle{ N }[/math] "pigeons" [math]\displaystyle{ a_1,a_2,\ldots,a_N }[/math] into "pigeonholes" [math]\displaystyle{ \{1,2,\ldots,N\}\times\{1,2,\ldots,N\} }[/math], such that [math]\displaystyle{ a_i }[/math] is put into hole [math]\displaystyle{ (x_i,y_i) }[/math], with at most one pigeon per each hole (since different [math]\displaystyle{ a_i }[/math] has different [math]\displaystyle{ (x_i,y_i) }[/math]).

The number of pigeons is [math]\displaystyle{ N\gt mn }[/math]. Due to pigeonhole principle, there must be a pigeon which is outside the region [math]\displaystyle{ \{1,2,\ldots,m\}\times\{1,2,\ldots,n\} }[/math], which implies that there exists an [math]\displaystyle{ a_i }[/math] with either [math]\displaystyle{ x_i\gt m }[/math] or [math]\displaystyle{ y_i\gt n }[/math]. Due to our definition of [math]\displaystyle{ (x_i,y_i) }[/math], there must be either an increasing subsequence of length [math]\displaystyle{ m+1 }[/math], or a decreasing subsequence of length [math]\displaystyle{ n+1 }[/math].

[math]\displaystyle{ \square }[/math]

Dirichlet's approximation

Let [math]\displaystyle{ x }[/math] be an irrational number. We now want to approximate [math]\displaystyle{ x }[/math] be a rational number (a fraction).

Since every real interval [math]\displaystyle{ [a,b] }[/math] with [math]\displaystyle{ a\lt b }[/math] contains infinitely many rational numbers, there must exist rational numbers arbitrarily close to [math]\displaystyle{ x }[/math]. The trick is to let the denominator of the fraction be sufficiently large.

Suppose, however, that we restrict the rationals we may select to those with denominators bounded by [math]\displaystyle{ n }[/math]. How closely can we approximate [math]\displaystyle{ x }[/math] now?

The following important theorem is due to Dirichlet and his Schubfachprinzip ("drawer principle"). The theorem is fundamental in number theory and real analysis, but the proof is combinatorial.

Theorem (Dirichlet 1842)
Let [math]\displaystyle{ x }[/math] be an irrational number. For any natural number [math]\displaystyle{ n }[/math], there is a rational number [math]\displaystyle{ \frac{p}{q} }[/math] such that [math]\displaystyle{ 1\le q\le n }[/math] and
[math]\displaystyle{ \left|x-\frac{p}{q}\right|\lt \frac{1}{nq} }[/math].
Proof.

Let [math]\displaystyle{ \{x\}=x-\lfloor x\rfloor }[/math] denote the fractional part of the real number [math]\displaystyle{ x }[/math]. It is obvious that [math]\displaystyle{ \{x\}\in[0,1) }[/math] for any real number [math]\displaystyle{ x }[/math].

Consider the [math]\displaystyle{ n+1 }[/math] numbers [math]\displaystyle{ \{kx\} }[/math], [math]\displaystyle{ k=1,2,\ldots,n+1 }[/math]. These [math]\displaystyle{ n+1 }[/math] numbers (pigeons) belong to the following [math]\displaystyle{ n }[/math] intervals (pigeonholes):

[math]\displaystyle{ \left(0,\frac{1}{n}\right),\left(\frac{1}{n},\frac{2}{n}\right),\ldots,\left(\frac{n-1}{n},1\right) }[/math].

Since [math]\displaystyle{ x }[/math] is irrational, [math]\displaystyle{ \{kx\} }[/math] cannot coincide with any endpoint of the above intervals.

By the pigeonhole principle, there exist [math]\displaystyle{ 1\le a\lt b\le n+1 }[/math], such that [math]\displaystyle{ \{ax\},\{bx\} }[/math] are in the same interval, thus

[math]\displaystyle{ |\{bx\}-\{ax\}|\lt \frac{1}{n} }[/math].

Therefore,

[math]\displaystyle{ |(b-a)x-\left(\lfloor bx\rfloor-\lfloor ax\rfloor\right)|\lt \frac{1}{n} }[/math].

Let [math]\displaystyle{ q=b-a }[/math] and [math]\displaystyle{ p=\lfloor bx\rfloor-\lfloor ax\rfloor }[/math]. We have [math]\displaystyle{ |qx-p|\lt \frac{1}{n} }[/math] and [math]\displaystyle{ 1\le q\le n }[/math]. Dividing both sides by [math]\displaystyle{ q }[/math], the theorem is proved.

[math]\displaystyle{ \square }[/math]
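
The proof is effective: the pigeonhole collision can be found by a direct search. Here is a minimal Python sketch of that procedure (the function name is ours; floating-point arithmetic stands in for exact real arithmetic, which is fine for illustration).

```python
# A minimal sketch of the pigeonhole search from the proof.
import math

def dirichlet_approx(x, n):
    buckets = {}                      # bucket index -> the k that landed there
    for k in range(1, n + 2):         # n+1 pigeons {kx}, k = 1, ..., n+1
        frac = k * x - math.floor(k * x)
        j = int(frac * n)             # which of the n intervals of width 1/n
        if j in buckets:              # two pigeons share a hole
            a, b = buckets[j], k
            q = b - a
            p = math.floor(b * x) - math.floor(a * x)
            return p, q
        buckets[j] = k
    # unreachable: n+1 pigeons in n holes must collide

p, q = dirichlet_approx(math.sqrt(2), 10)
assert 1 <= q <= 10 and abs(math.sqrt(2) - p / q) < 1 / (10 * q)
```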

The Probabilistic Method

The probabilistic method provides another way of proving the existence of objects: instead of explicitly constructing an object, we define a probability space of objects in which the probability is positive that a randomly selected object has the required property.

The basic principle of the probabilistic method is very simple, and can be stated in intuitive ways:

  • If an object chosen randomly from a universe satisfies a property with positive probability, then there must be an object in the universe that satisfies that property.
For example, for a ball (the object) randomly chosen from a box (the universe) of balls, if the probability that the chosen ball is blue (the property) is >0, then there must be a blue ball in the box.
  • Any random variable assumes at least one value that is no smaller than its expectation, and at least one value that is no greater than the expectation.
For example, if we know the average height of the students in the class is [math]\displaystyle{ \ell }[/math], then we know there is a student whose height is at least [math]\displaystyle{ \ell }[/math], and there is a student whose height is at most [math]\displaystyle{ \ell }[/math].

Although the idea of the probabilistic method is simple, it provides us with a powerful tool for proving existence.

Ramsey number

Recall the Ramsey theorem which states that in a meeting of at least six people, there are either three people knowing each other or three people not knowing each other. In graph theoretical terms, this means that no matter how we color the edges of [math]\displaystyle{ K_6 }[/math] (the complete graph on six vertices), there must be a monochromatic [math]\displaystyle{ K_3 }[/math] (a triangle whose edges have the same color).

Generally, the Ramsey number [math]\displaystyle{ R(k,\ell) }[/math] is the smallest integer [math]\displaystyle{ n }[/math] such that in any two-coloring of the edges of a complete graph on [math]\displaystyle{ n }[/math] vertices [math]\displaystyle{ K_n }[/math] by red and blue, either there is a red [math]\displaystyle{ K_k }[/math] or there is a blue [math]\displaystyle{ K_\ell }[/math].

Ramsey showed in 1930 that [math]\displaystyle{ R(k,\ell) }[/math] is finite for any [math]\displaystyle{ k }[/math] and [math]\displaystyle{ \ell }[/math]. It is extremely hard to compute the exact value of [math]\displaystyle{ R(k,\ell) }[/math]. Here we give a lower bound on [math]\displaystyle{ R(k,k) }[/math] by the probabilistic method.

Theorem (Erdős 1947)
If [math]\displaystyle{ {n\choose k}\cdot 2^{1-{k\choose 2}}\lt 1 }[/math] then it is possible to color the edges of [math]\displaystyle{ K_n }[/math] with two colors so that there is no monochromatic [math]\displaystyle{ K_k }[/math] subgraph.
Proof.
Consider a random two-coloring of edges of [math]\displaystyle{ K_n }[/math] obtained as follows:
  • For each edge of [math]\displaystyle{ K_n }[/math], independently flip a fair coin to decide the color of the edge.

For any fixed set [math]\displaystyle{ S }[/math] of [math]\displaystyle{ k }[/math] vertices, let [math]\displaystyle{ \mathcal{E}_S }[/math] be the event that the [math]\displaystyle{ K_k }[/math] subgraph induced by [math]\displaystyle{ S }[/math] is monochromatic. There are [math]\displaystyle{ {k\choose 2} }[/math] edges in [math]\displaystyle{ K_k }[/math], therefore

[math]\displaystyle{ \Pr[\mathcal{E}_S]=2\cdot 2^{-{k\choose 2}}=2^{1-{k\choose 2}}. }[/math]

Since there are [math]\displaystyle{ {n\choose k} }[/math] possible choices of [math]\displaystyle{ S }[/math], by the union bound

[math]\displaystyle{ \Pr[\exists S, \mathcal{E}_S]\le {n\choose k}\cdot\Pr[\mathcal{E}_S]={n\choose k}\cdot 2^{1-{k\choose 2}}. }[/math]

By the assumption, [math]\displaystyle{ {n\choose k}\cdot 2^{1-{k\choose 2}}\lt 1 }[/math], thus with positive probability none of the events [math]\displaystyle{ \mathcal{E}_S }[/math] occurs, which means there exists a two-coloring with no monochromatic [math]\displaystyle{ K_k }[/math] subgraph.

[math]\displaystyle{ \square }[/math]

For [math]\displaystyle{ k\ge 3 }[/math], take [math]\displaystyle{ n=\lfloor2^{k/2}\rfloor }[/math]; then

[math]\displaystyle{ \begin{align} {n\choose k}\cdot 2^{1-{k\choose 2}} &\lt \frac{n^k}{k!}\cdot\frac{2^{1+\frac{k}{2}}}{2^{k^2/2}}\\ &\le \frac{2^{k^2/2}}{k!}\cdot\frac{2^{1+\frac{k}{2}}}{2^{k^2/2}}\\ &= \frac{2^{1+\frac{k}{2}}}{k!}\\ &\lt 1. \end{align} }[/math]

By the above theorem, there exists a two-coloring of [math]\displaystyle{ K_n }[/math] with no monochromatic [math]\displaystyle{ K_k }[/math]. Therefore, the Ramsey number satisfies [math]\displaystyle{ R(k,k)\gt \lfloor2^{k/2}\rfloor }[/math] for all [math]\displaystyle{ k\ge 3 }[/math].
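
For small [math]\displaystyle{ k }[/math] this lower-bound argument can be tried out directly: color the edges of [math]\displaystyle{ K_n }[/math] by independent fair coins and search for a monochromatic [math]\displaystyle{ K_k }[/math]. In the following Python sketch (the function name is ours) we use [math]\displaystyle{ k=6 }[/math] and [math]\displaystyle{ n=8=\lfloor 2^{k/2}\rfloor }[/math]; the union bound gives failure probability [math]\displaystyle{ {8\choose 6}2^{-14}\approx 0.0017 }[/math], so a random coloring almost always works.

```python
# A small experiment matching the proof: random two-coloring of K_8, then an
# exhaustive search for a monochromatic K_6 over all 6-subsets of vertices.
import random
from itertools import combinations

def has_monochromatic_clique(color, vertices, k):
    for S in combinations(vertices, k):
        edge_colors = {color[frozenset(e)] for e in combinations(S, 2)}
        if len(edge_colors) == 1:          # all 15 edges of this K_6 share a color
            return True
    return False

k, n = 6, 8
V = range(n)
color = {frozenset(e): random.randint(0, 1) for e in combinations(V, 2)}
print("monochromatic K_6 found:", has_monochromatic_clique(color, V, k))
```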

Tournament

A tournament on a set [math]\displaystyle{ V }[/math] of [math]\displaystyle{ n }[/math] players is an orientation of the edges of the complete graph on the set of vertices [math]\displaystyle{ V }[/math]. Thus for every two distinct vertices [math]\displaystyle{ u,v }[/math] in [math]\displaystyle{ V }[/math], either [math]\displaystyle{ (u,v)\in E }[/math] or [math]\displaystyle{ (v,u)\in E }[/math], but not both.

We can think of the set [math]\displaystyle{ V }[/math] as a set of [math]\displaystyle{ n }[/math] players in which each pair participates in a single match, where [math]\displaystyle{ (u,v) }[/math] is in the tournament iff player [math]\displaystyle{ u }[/math] beats player [math]\displaystyle{ v }[/math].

Definition
We say that a tournament is [math]\displaystyle{ k }[/math]-paradoxical if for every set of [math]\displaystyle{ k }[/math] players there is a player who beats them all.

Is it true that for every finite [math]\displaystyle{ k }[/math] there is a [math]\displaystyle{ k }[/math]-paradoxical tournament (on more than [math]\displaystyle{ k }[/math] vertices, of course)? This problem was first raised by Schütte, and, as shown by Erdős, it can be solved almost trivially by the probabilistic method.

Theorem (Erdős 1963)
If [math]\displaystyle{ {n\choose k}\left(1-2^{-k}\right)^{n-k}\lt 1 }[/math] then there is a tournament on [math]\displaystyle{ n }[/math] vertices that is [math]\displaystyle{ k }[/math]-paradoxical.
Proof.

Consider a uniformly random tournament [math]\displaystyle{ T }[/math] on the set [math]\displaystyle{ V=[n] }[/math]. For every fixed subset [math]\displaystyle{ S\in{V\choose k} }[/math] of [math]\displaystyle{ k }[/math] vertices, let [math]\displaystyle{ A_S }[/math] be the event defined as follows

[math]\displaystyle{ A_S:\, }[/math] there is no vertex in [math]\displaystyle{ V\setminus S }[/math] that beats all vertices in [math]\displaystyle{ S }[/math].

In a uniform random tournament, the orientations of edges are independent. For any [math]\displaystyle{ u\in V\setminus S }[/math],

[math]\displaystyle{ \Pr[u\mbox{ beats all }v\in S]=2^{-k} }[/math].

Therefore, [math]\displaystyle{ \Pr[u\mbox{ does not beat all }v\in S]=1-2^{-k} }[/math] and

[math]\displaystyle{ \Pr[A_S]=\prod_{u\in V\setminus S}\Pr[u\mbox{ does not beat all }v\in S]=(1-2^{-k})^{n-k} }[/math].

It follows that

[math]\displaystyle{ \Pr\left[\bigvee_{S\in{V\choose k}}A_S\right]\le \sum_{S\in{V\choose k}}\Pr[A_S]={n\choose k}(1-2^{-k})^{n-k}\lt 1. }[/math]

Therefore,

[math]\displaystyle{ \Pr[\,T\mbox{ is }k\mbox{-paradoxical }]=\Pr\left[\bigwedge_{S\in{V\choose k}}\overline{A_S}\right]=1-\Pr\left[\bigvee_{S\in{V\choose k}}A_S\right]\gt 0. }[/math]

Therefore, there exists a [math]\displaystyle{ k }[/math]-paradoxical tournament on [math]\displaystyle{ n }[/math] vertices.

[math]\displaystyle{ \square }[/math]
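
The proof also suggests a simple randomized search: sample tournaments uniformly at random until one is [math]\displaystyle{ k }[/math]-paradoxical. A minimal Python sketch follows; the parameters are our choice for illustration: [math]\displaystyle{ k=2 }[/math] and [math]\displaystyle{ n=21 }[/math] satisfy [math]\displaystyle{ {21\choose 2}(3/4)^{19}\approx 0.89\lt 1 }[/math], so each trial succeeds with probability at least about [math]\displaystyle{ 0.11 }[/math].

```python
# Randomized search for a k-paradoxical tournament (function names are ours).
import random
from itertools import combinations

def random_tournament(n):
    beats = {v: set() for v in range(n)}       # beats[u] = players that u beats
    for u, v in combinations(range(n), 2):     # orient each edge by a fair coin
        if random.random() < 0.5:
            beats[u].add(v)
        else:
            beats[v].add(u)
    return beats

def is_k_paradoxical(beats, n, k):
    players = set(range(n))
    return all(any(S <= beats[u] for u in players - S)     # someone beats all of S
               for S in map(set, combinations(range(n), k)))

n, k = 21, 2
while True:
    T = random_tournament(n)
    if is_k_paradoxical(T, n, k):
        print("found a 2-paradoxical tournament on 21 vertices")
        break
```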

Linearity of expectation

Let [math]\displaystyle{ X }[/math] be a discrete random variable. The expectation of [math]\displaystyle{ X }[/math] is defined as follows.

Definition (Expectation)
The expectation of a discrete random variable [math]\displaystyle{ X }[/math], denoted by [math]\displaystyle{ \mathbf{E}[X] }[/math], is given by
[math]\displaystyle{ \begin{align} \mathbf{E}[X] &= \sum_{x}x\Pr[X=x], \end{align} }[/math]
where the summation is over all values [math]\displaystyle{ x }[/math] in the range of [math]\displaystyle{ X }[/math].

A fundamental fact regarding the expectation is its linearity.

Theorem (Linearity of Expectation)
For any discrete random variables [math]\displaystyle{ X_1, X_2, \ldots, X_n }[/math], and any real constants [math]\displaystyle{ a_1, a_2, \ldots, a_n }[/math],
[math]\displaystyle{ \begin{align} \mathbf{E}\left[\sum_{i=1}^n a_iX_i\right] &= \sum_{i=1}^n a_i\cdot\mathbf{E}[X_i]. \end{align} }[/math]
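
Note that linearity of expectation requires no independence among the [math]\displaystyle{ X_i }[/math], which is what makes it so widely applicable. A quick numerical sanity check in Python, on a toy example of our own:

```python
# Linearity holds even for strongly dependent variables. With X a fair die and
# Y = 7 - X (a function of X), E[2X + 3Y] = 2 E[X] + 3 E[Y] = 2(3.5) + 3(3.5) = 17.5.
import random

samples = [random.randint(1, 6) for _ in range(100_000)]
avg = sum(2 * x + 3 * (7 - x) for x in samples) / len(samples)
print(avg)   # close to 17.5
```
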
Hamiltonian paths

The following result of Szele in 1943 is often considered the first use of the probabilistic method.

Theorem (Szele 1943)
There is a tournament on [math]\displaystyle{ n }[/math] players with at least [math]\displaystyle{ n!2^{-(n-1)} }[/math] Hamiltonian paths.
Proof.

Consider the uniform random tournament [math]\displaystyle{ T }[/math] on [math]\displaystyle{ [n] }[/math]. For any permutation [math]\displaystyle{ \pi }[/math] of [math]\displaystyle{ [n] }[/math], let [math]\displaystyle{ X_{\pi} }[/math] be the indicator random variable defined as

[math]\displaystyle{ X_{\pi}=\begin{cases} 1 & \forall i\in[n-1], (\pi_i,\pi_{i+1})\in T,\\ 0 & \mbox{otherwise}. \end{cases} }[/math]

In other words, [math]\displaystyle{ X_{\pi} }[/math] indicates whether [math]\displaystyle{ \pi_1\rightarrow\pi_2\rightarrow\cdots\rightarrow\pi_{n} }[/math] gives a Hamiltonian path. It holds that

[math]\displaystyle{ \mathrm{E}[X_\pi]=1\cdot\Pr[X_\pi=1]+0\cdot\Pr[X_\pi=0]=\Pr[\forall i\in[n-1], (\pi_i,\pi_{i+1})\in T]=2^{-(n-1)}. }[/math]

Let [math]\displaystyle{ X=\sum_{\pi:\text{permutation of }[n]}X_\pi\, }[/math]. Clearly [math]\displaystyle{ X }[/math] is the number of Hamiltonian paths in the tournament [math]\displaystyle{ T }[/math]. Due to the linearity of expectation,

[math]\displaystyle{ \mathrm{E}[X]=\mathrm{E}\left[\sum_{\pi:\text{permutation of }[n]}X_\pi\right]=\sum_{\pi:\text{permutation of }[n]}\mathrm{E}[X_\pi]=n!2^{-(n-1)}. }[/math]

This is the average number of Hamiltonian paths in a tournament, where the average is taken over all tournaments. Thus some tournament has at least [math]\displaystyle{ n!2^{-(n-1)} }[/math] Hamiltonian paths.

[math]\displaystyle{ \square }[/math]
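
For small [math]\displaystyle{ n }[/math], the expectation computed in the proof can be checked by brute force: the average number of Hamiltonian paths over many random tournaments should be close to [math]\displaystyle{ n!2^{-(n-1)} }[/math], which is 22.5 for [math]\displaystyle{ n=6 }[/math]. A Python sketch, with function names of our choosing:

```python
# Empirical check of Szele's expectation for n = 6.
import math
import random
from itertools import combinations, permutations

def random_tournament(n):
    beats = {v: set() for v in range(n)}
    for u, v in combinations(range(n), 2):
        if random.random() < 0.5:
            beats[u].add(v)
        else:
            beats[v].add(u)
    return beats

def hamiltonian_paths(beats, n):
    # X = sum over all permutations pi of the indicator X_pi from the proof
    return sum(all(p[i + 1] in beats[p[i]] for i in range(n - 1))
               for p in permutations(range(n)))

n, trials = 6, 1000
avg = sum(hamiltonian_paths(random_tournament(n), n) for _ in range(trials)) / trials
print(avg, math.factorial(n) / 2 ** (n - 1))   # both near 22.5
```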

Independent sets

An independent set of a graph is a set of vertices with no edges between them. The following theorem gives a lower bound on the size of the largest independent set.

Theorem
Let [math]\displaystyle{ G(V,E) }[/math] be a graph on [math]\displaystyle{ n }[/math] vertices with [math]\displaystyle{ m }[/math] edges. Then [math]\displaystyle{ G }[/math] has an independent set with at least [math]\displaystyle{ \frac{n^2}{4m} }[/math] vertices.
Proof.
Let [math]\displaystyle{ S }[/math] be a random set of vertices constructed as follows:
For each vertex [math]\displaystyle{ v\in V }[/math]:
  • [math]\displaystyle{ v }[/math] is included in [math]\displaystyle{ S }[/math] independently with probability [math]\displaystyle{ p }[/math],
where [math]\displaystyle{ p }[/math] is a parameter to be determined later.

Let [math]\displaystyle{ X=|S| }[/math]. It is obvious that [math]\displaystyle{ \mathbf{E}[X]=np }[/math].

For each edge [math]\displaystyle{ e=uv\in E }[/math], let [math]\displaystyle{ Y_{e} }[/math] be the random variable which indicates whether both endpoints of [math]\displaystyle{ e }[/math] are in [math]\displaystyle{ S }[/math]. Since vertices are included independently,

[math]\displaystyle{ \mathbf{E}[Y_{e}]=\Pr[u\in S\wedge v\in S]=p^2. }[/math]

Let [math]\displaystyle{ Y }[/math] be the number of edges in the subgraph of [math]\displaystyle{ G }[/math] induced by [math]\displaystyle{ S }[/math]. It holds that [math]\displaystyle{ Y=\sum_{e\in E}Y_e }[/math]. By linearity of expectation,

[math]\displaystyle{ \mathbf{E}[Y]=\sum_{e\in E}\mathbf{E}[Y_e]=mp^2 }[/math].

Note that although [math]\displaystyle{ S }[/math] is not necessarily an independent set, it can be modified into one: for each edge [math]\displaystyle{ e }[/math] of the induced subgraph [math]\displaystyle{ G(S) }[/math], we delete one of the endpoints of [math]\displaystyle{ e }[/math] from [math]\displaystyle{ S }[/math]. Let [math]\displaystyle{ S^* }[/math] be the resulting set. Then [math]\displaystyle{ S^* }[/math] is an independent set, since no edge is left in the induced subgraph [math]\displaystyle{ G(S^*) }[/math].

Since there are [math]\displaystyle{ Y }[/math] edges in [math]\displaystyle{ G(S) }[/math], at most [math]\displaystyle{ Y }[/math] vertices are deleted from [math]\displaystyle{ S }[/math] to obtain [math]\displaystyle{ S^* }[/math]. Therefore, [math]\displaystyle{ |S^*|\ge X-Y }[/math]. By linearity of expectation,

[math]\displaystyle{ \mathbf{E}[|S^*|]\ge\mathbf{E}[X-Y]=\mathbf{E}[X]-\mathbf{E}[Y]=np-mp^2. }[/math]

The expectation is maximized when [math]\displaystyle{ p=\frac{n}{2m} }[/math] (note that [math]\displaystyle{ m\ge n/2 }[/math] is needed here so that [math]\displaystyle{ p\le 1 }[/math]), thus

[math]\displaystyle{ \mathbf{E}[|S^*|]\ge n\cdot\frac{n}{2m}-m\left(\frac{n}{2m}\right)^2=\frac{n^2}{4m}. }[/math]

There exists an independent set which contains at least [math]\displaystyle{ \frac{n^2}{4m} }[/math] vertices.

[math]\displaystyle{ \square }[/math]
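
The proof is an instance of the "sample and modify" technique, and it translates directly into a randomized procedure. A minimal Python sketch (names ours), illustrated on the cycle [math]\displaystyle{ C_{10} }[/math], where [math]\displaystyle{ n=m=10 }[/math] and the bound gives [math]\displaystyle{ n^2/4m=2.5 }[/math]:

```python
# Sample-and-modify: keep each vertex with probability p = n/(2m), then delete
# one endpoint of every edge that survives. The result is always independent,
# and its expected size is at least n^2/(4m).
import random

def random_independent_set(n, edges):
    p = n / (2 * len(edges))                 # assumes m >= n/2, so p <= 1
    S = {v for v in range(n) if random.random() < p}   # sample
    for u, v in edges:                                 # modify
        if u in S and v in S:
            S.discard(u)       # delete one endpoint of each surviving edge
    return S

n = 10
edges = [(i, (i + 1) % n) for i in range(n)]
S = random_independent_set(n, edges)
assert all(not (u in S and v in S) for u, v in edges)  # S is now independent
print(S)
```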
