Randomized Algorithms (Spring 2014)/Martingales and Combinatorics (Spring 2014)/Existence problems

= Conditional Expectations =
The '''conditional expectation''' of a random variable <math>Y</math> with respect to an event <math>\mathcal{E}</math> is defined by
:<math>
\mathbf{E}[Y\mid \mathcal{E}]=\sum_{y}y\Pr[Y=y\mid\mathcal{E}].
</math>
In particular, if the event <math>\mathcal{E}</math> is <math>X=a</math>, the conditional expectation
:<math>
\mathbf{E}[Y\mid X=a]
</math>
defines a function
:<math>
f(a)=\mathbf{E}[Y\mid X=a].
</math>
Thus, <math>\mathbf{E}[Y\mid X]</math> can be regarded as a random variable <math>f(X)</math>.
 
;Example
:Suppose that we uniformly sample a human from all human beings. Let <math>Y</math> be his/her height, and let <math>X</math> be the country where he/she is from. For any country <math>a</math>, <math>\mathbf{E}[Y\mid X=a]</math> gives the average height of that country. And <math>\mathbf{E}[Y\mid X]</math> is the random variable which can be defined in either of the following two equivalent ways:
:* We choose a human uniformly at random from all human beings, and <math>\mathbf{E}[Y\mid X]</math> is the average height of the country where he/she comes from.
:* We choose a country at random with a probability ''proportional to its population'', and <math>\mathbf{E}[Y\mid X]</math> is the average height of the chosen country.
 
The following proposition states some fundamental facts about conditional expectation.
 
{{Theorem
|Proposition (fundamental facts about conditional expectation)|
:Let <math>X,Y</math> and <math>Z</math> be arbitrary random variables. Let <math>f</math> and <math>g</math> be arbitrary functions. Then
:# <math>\mathbf{E}[X]=\mathbf{E}[\mathbf{E}[X\mid Y]]</math>.
:# <math>\mathbf{E}[X\mid Z]=\mathbf{E}[\mathbf{E}[X\mid Y,Z]\mid Z]</math>.
:# <math>\mathbf{E}[\mathbf{E}[f(X)g(X,Y)\mid X]]=\mathbf{E}[f(X)\cdot \mathbf{E}[g(X,Y)\mid X]]</math>.
}}
The proposition can be formally verified by computing these expectations. Although these equations look formal, their intuitive interpretations are very clear.
 
The first equation:
:<math>\mathbf{E}[X]=\mathbf{E}[\mathbf{E}[X\mid Y]]</math>
says that there are two ways to compute an average. Suppose again that <math>X</math> is the height of a uniform random human and <math>Y</math> is the country where he/she is from. There are two ways to compute the average human height: one is to directly average over the heights of all humans; the other is to first compute the average height for each country, and then average over these heights weighted by the populations of the countries.
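The identity can also be checked numerically. The following Python sketch uses a made-up toy population (the data is purely illustrative) to compute the average height both ways:
<pre>
from collections import defaultdict

# Hypothetical population: (country, height) pairs; a person is sampled uniformly.
people = [("A", 170), ("A", 180), ("B", 160), ("B", 165), ("B", 175)]

# Way 1: directly average over all people.
direct = sum(h for _, h in people) / len(people)

# Way 2: average per country, then average weighted by population share.
groups = defaultdict(list)
for country, h in people:
    groups[country].append(h)
weighted = sum(len(hs) / len(people) * sum(hs) / len(hs) for hs in groups.values())

assert abs(direct - weighted) < 1e-9  # E[X] = E[E[X|Y]] on this toy data
</pre>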


The second equation:
:<math>\mathbf{E}[X\mid Z]=\mathbf{E}[\mathbf{E}[X\mid Y,Z]\mid Z]</math>
is the same as the first one, restricted to a particular subspace. As in the previous example, in addition to the height <math>X</math> and the country <math>Y</math>, let <math>Z</math> be the gender of the individual. Thus, <math>\mathbf{E}[X\mid Z]</math> is the average height of a human being of a given sex. Again, this can be computed either directly or on a country-by-country basis.


The third equation:
:<math>\mathbf{E}[\mathbf{E}[f(X)g(X,Y)\mid X]]=\mathbf{E}[f(X)\cdot \mathbf{E}[g(X,Y)\mid X]]</math>.
looks obscure at first glance, especially considering that <math>X</math> and <math>Y</math> are not necessarily independent. Nevertheless, the equation follows from the simple fact that conditioning on any <math>X=a</math>, the function value <math>f(X)=f(a)</math> becomes a constant, and thus can be safely taken outside the expectation due to the linearity of expectation. For any value <math>X=a</math>,
:<math>
\mathbf{E}[f(X)g(X,Y)\mid X=a]=\mathbf{E}[f(a)g(X,Y)\mid X=a]=f(a)\cdot \mathbf{E}[g(X,Y)\mid X=a].
</math>


The proposition holds in more general cases when <math>X, Y</math> and <math>Z</math> are each a sequence of random variables.


= Martingales =
"Martingale" originally refers to a betting strategy in which the gambler doubles his bet after every loss. Assuming unlimited wealth, this strategy is guaranteed to eventually have a positive net profit. For example, starting from an initial stake 1, after <math>n</math> losses, if the <math>(n+1)</math>th bet wins, then it gives a net profit of
:<math>
2^n-\sum_{i=1}^{n}2^{i-1}=1,
</math>
which is a positive number.
 
However, the assumption of unlimited wealth is unrealistic. For limited wealth, with geometrically increasing bet, it is very likely to end up bankrupt. You should never try this strategy in real life. And remember: <font color="red">gambling is bad!</font>
 
Suppose that the gambler is allowed to use any strategy. His stake on the next betting is decided based on the results of all the bettings so far. This gives us a highly dependent sequence of random variables <math>X_0,X_1,\ldots</math>, where <math>X_0</math> is his initial capital, and <math>X_i</math> represents his capital after the <math>i</math>th betting. Up to different betting strategies, <math>X_i</math> can be arbitrarily dependent on <math>X_0,\ldots,X_{i-1}</math>. However, as long as the game is fair, namely, winning and losing with equal chances, then conditioning on the past variables <math>X_0,\ldots,X_{i-1}</math>, we expect no change in the value of the present variable <math>X_{i}</math> on average. A sequence of random variables satisfying this property is called a '''martingale''' sequence.


{{Theorem
|Definition (martingale)|
:A sequence of random variables <math>X_0,X_1,\ldots</math> is a '''martingale''' if for all <math>i> 0</math>,
:: <math>\begin{align}
\mathbf{E}[X_{i}\mid X_0,\ldots,X_{i-1}]=X_{i-1}.
\end{align}</math>
}}


==Examples ==
;coin flips
:A fair coin is flipped for a number of times. Let <math>Z_j\in\{-1,1\}</math> denote the outcome of the <math>j</math>th flip. Let
::<math>X_0=0\quad \mbox{ and } \quad X_i=\sum_{j\le i}Z_j</math>.
:The random variables <math>X_0,X_1,\ldots</math> define a martingale.
;Proof
:We first observe that <math>\mathbf{E}[X_i\mid X_0,\ldots,X_{i-1}] = \mathbf{E}[X_i\mid X_{i-1}]</math>, which intuitively says that the sum after the next flip depends only on the current sum. This property is also called the '''Markov property''' in stochastic processes.
::<math>
\begin{align}
\mathbf{E}[X_i\mid X_0,\ldots,X_{i-1}]
&= \mathbf{E}[X_i\mid X_{i-1}]\\
&= \mathbf{E}[X_{i-1}+Z_{i}\mid X_{i-1}]\\
&= \mathbf{E}[X_{i-1}\mid X_{i-1}]+\mathbf{E}[Z_{i}\mid X_{i-1}]\\
&= X_{i-1}+\mathbf{E}[Z_{i}\mid X_{i-1}]\\
&= X_{i-1}+\mathbf{E}[Z_{i}] &\quad (\mbox{independence of coin flips})\\
&= X_{i-1}
\end{align}
</math>
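This can also be observed empirically. Below is a minimal Python sketch (parameters chosen arbitrarily) that conditions on a value of <math>X_{i-1}</math> and estimates <math>\mathbf{E}[X_i\mid X_{i-1}]</math>:
<pre>
import random

# Estimate E[X_10 | X_9 = 3] for the +/-1 coin-flip walk by rejection
# sampling; the martingale property predicts the answer 3.
random.seed(0)
samples = []
for _ in range(100000):
    x9 = sum(random.choice((-1, 1)) for _ in range(9))
    if x9 == 3:
        samples.append(x9 + random.choice((-1, 1)))
print(sum(samples) / len(samples))  # close to 3
</pre>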


;Polya's urn scheme
: Consider an urn (just a container) that initially contains <math>b</math> black balls and <math>w</math> white balls. At each step, we uniformly select a ball from the urn, and replace the ball with <math>c</math> balls of the same color. Let <math>X_0=b/(b+w)</math>, and <math>X_i</math> be the fraction of black balls in the urn after the <math>i</math>th step. The sequence <math>X_0,X_1,\ldots</math> is a martingale.
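The martingale property of the urn can be checked by simulation. The following Python sketch (with arbitrarily chosen <math>b=2</math>, <math>w=3</math>, <math>c=2</math>) estimates <math>\mathbf{E}[X_{50}]</math>, which should stay at <math>X_0=b/(b+w)=0.4</math>:
<pre>
import random

def polya_fraction(b=2, w=3, c=2, steps=50):
    # Draw a uniform ball; replacing it with c balls of the same color
    # adds a net of c-1 balls of that color.
    black, total = b, b + w
    for _ in range(steps):
        if random.random() < black / total:
            black += c - 1
        total += c - 1
    return black / total

random.seed(1)
runs = 20000
print(sum(polya_fraction() for _ in range(runs)) / runs)  # close to 0.4
</pre>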


;edge exposure in a random graph
:Consider a '''random graph''' <math>G</math> generated as follows. Let <math>[n]</math> be the set of vertices, and let <math>[m]={[n]\choose 2}</math> be the set of all possible edges. For convenience, we enumerate these potential edges by <math>e_1,\ldots, e_m</math>. For each potential edge <math>e_j</math>, we independently flip a fair coin to decide whether the edge <math>e_j</math> appears in <math>G</math>. Let <math>I_j</math> be the random variable that indicates whether <math>e_j\in G</math>. We are interested in some graph-theoretical parameter, say [http://mathworld.wolfram.com/ChromaticNumber.html chromatic number], of the random graph <math>G</math>. Let <math>\chi(G)</math> be the chromatic number of <math>G</math>. Let <math>X_0=\mathbf{E}[\chi(G)]</math>, and for each <math>i\ge 1</math>, let <math>X_i=\mathbf{E}[\chi(G)\mid I_1,\ldots,I_{i}]</math>, namely, the expected chromatic number of the random graph after fixing the first <math>i</math> edges. This process is called the edge exposure of a random graph, as we "expose" the edges one by one in a random graph.
::[[File:Edge-exposure.png|360px]]
:As shown by the above figure, the sequence <math>X_0,X_1,\ldots,X_m</math> is a martingale. In particular, <math>X_0=\mathbf{E}[\chi(G)]</math>, and <math>X_m=\chi(G)</math>. The martingale <math>X_0,X_1,\ldots,X_m</math> moves from no information to full information (of the random graph <math>G</math>) in small steps.
 
It is nontrivial to formally verify that the edge exposure sequence for a random graph is a martingale. However, we will later see that this construction can be put into a more general context.
 
== Generalizations ==
The martingale can be generalized to be with respect to another sequence of random variables.
{{Theorem
|Definition (martingale, general version)|
:A sequence of random variables <math>Y_0,Y_1,\ldots</math> is a martingale with respect to the sequence <math>X_0,X_1,\ldots</math> if, for all <math>i\ge 0</math>, the following conditions hold:
:* <math>Y_i</math> is a function of <math>X_0,X_1,\ldots,X_i</math>;
:* <math>\begin{align}
\mathbf{E}[Y_{i+1}\mid X_0,\ldots,X_{i}]=Y_{i}.
\end{align}</math>
}}
Therefore, a sequence <math>X_0,X_1,\ldots</math> is a martingale if it is a martingale with respect to itself.


The purpose of this generalization is that we are usually more interested in a function of a sequence of random variables, rather than the sequence itself.


=Azuma's Inequality=
 
We introduce a martingale tail inequality, called Azuma's inequality.
 
{{Theorem
|Azuma's Inequality|
:Let <math>X_0,X_1,\ldots</math> be a martingale such that, for all <math>k\ge 1</math>,
::<math>
|X_{k}-X_{k-1}|\le c_k,
</math>
:Then
::<math>\begin{align}
\Pr\left[|X_n-X_0|\ge t\right]\le 2\exp\left(-\frac{t^2}{2\sum_{k=1}^nc_k^2}\right).
\end{align}</math>
}}
Before formally proving this theorem, some comments are in order. First, unlike the Chernoff bounds, there is no assumption of independence. This shows the power of martingale inequalities.


Second, the condition that
:<math>
|X_{k}-X_{k-1}|\le c_k
</math>
is central to the proof. This condition is sometimes called the '''bounded difference condition'''. If we think of the martingale <math>X_0,X_1,\ldots</math> as a process evolving through time, where <math>X_i</math> gives some measurement at time <math>i</math>, the bounded difference condition states that the process does not make big jumps. Azuma's inequality says that if so, then it is unlikely that the process wanders far from its starting point.


A special case is when the differences are bounded by a constant. The following corollary is directly implied by Azuma's inequality.


{{Theorem
|Corollary|
:Let <math>X_0,X_1,\ldots</math> be a martingale such that, for all <math>k\ge 1</math>,
::<math>
|X_{k}-X_{k-1}|\le c,
</math>
:Then
::<math>\begin{align}
\Pr\left[|X_n-X_0|\ge ct\sqrt{n}\right]\le 2 e^{-t^2/2}.
\end{align}</math>
}}


This corollary states that for any martingale sequence whose differences are bounded by a constant, the probability that it deviates <math>\omega(\sqrt{n})</math> far away from the starting point after <math>n</math> steps is bounded by <math>o(1)</math>.
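As a sanity check, the corollary can be compared with a simulation of the coin-flip martingale above, whose differences are bounded by <math>c=1</math>. A minimal Python sketch with arbitrarily chosen <math>n</math> and <math>t</math>:
<pre>
import math
import random

# Empirical tail of |X_n - X_0| for a +/-1 walk versus the Azuma bound
# 2*exp(-t^2/2) at deviation c*t*sqrt(n) with c = 1.
random.seed(2)
n, t, trials = 100, 2.0, 20000
thresh = t * math.sqrt(n)
hits = sum(abs(sum(random.choice((-1, 1)) for _ in range(n))) >= thresh
           for _ in range(trials))
print(hits / trials, "<=", 2 * math.exp(-t * t / 2))
</pre>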


=== Generalization ===


Azuma's inequality can be generalized to a martingale with respect to another sequence.
{{Theorem
|Azuma's Inequality (general version)|
:Let <math>Y_0,Y_1,\ldots</math> be a martingale with respect to the sequence <math>X_0,X_1,\ldots</math> such that, for all <math>k\ge 1</math>,
::<math>
|Y_{k}-Y_{k-1}|\le c_k,
</math>
:Then
::<math>\begin{align}
\Pr\left[|Y_n-Y_0|\ge t\right]\le 2\exp\left(-\frac{t^2}{2\sum_{k=1}^nc_k^2}\right).
\end{align}</math>
}}


=== The Proof of Azuma's Inequality ===
We will only give the formal proof of the non-generalized version. The proof of the general version is almost identical, the only difference being that we work on the random sequence <math>Y_i</math> conditioning on the sequence <math>X_i</math>.
 
The proof of Azuma's Inequality uses several ideas which are used in the proof of the Chernoff bounds. We first observe that the total deviation of the martingale sequence can be represented as the sum of differences over the individual steps. Thus, as with the Chernoff bounds, we are looking for a bound on the deviation of a sum of random variables. The strategy of the proof is almost the same as that of the Chernoff bounds: we first apply Markov's inequality to the moment generating function, then we bound the moment generating function, and at last we optimize the parameter of the moment generating function. However, unlike the Chernoff bounds, the martingale differences are no longer independent. So we replace the use of independence in the Chernoff bound by the martingale property. The proof is detailed as follows.
 
In order to bound the probability of <math>|X_n-X_0|\ge t</math>, we first bound the upper tail <math>\Pr[X_n-X_0\ge t]</math>. The bound of the lower tail can be symmetrically proved with the <math>X_i</math> replaced by <math>-X_i</math>.
 
==== Represent the deviation as the sum of differences ====
We define the '''martingale difference sequence''': for <math>i\ge 1</math>, let
:<math>
Y_i=X_i-X_{i-1}.
</math>
It holds that
:<math>
\begin{align}
\mathbf{E}[Y_i\mid X_0,\ldots,X_{i-1}]
&=\mathbf{E}[X_i-X_{i-1}\mid X_0,\ldots,X_{i-1}]\\
&=\mathbf{E}[X_i\mid X_0,\ldots,X_{i-1}]-\mathbf{E}[X_{i-1}\mid X_0,\ldots,X_{i-1}]\\
&=X_{i-1}-X_{i-1}\\
&=0.
\end{align}
</math>
The second to the last equation is due to the fact that <math>X_0,X_1,\ldots</math> is a martingale and the definition of conditional expectation.
 
Let <math>Z_n</math> be the accumulated differences
:<math>
Z_n=\sum_{i=1}^n Y_i.
</math>
 
The deviation <math>(X_n-X_0)</math> can be computed by the accumulated differences:
:<math>
\begin{align}
X_n-X_0
&=(X_1-X_{0})+(X_2-X_1)+\cdots+(X_n-X_{n-1})\\
&=\sum_{i=1}^n Y_i\\
&=Z_n.
\end{align}
</math>
 
We then only need to upper bound the probability of the event <math>Z_n\ge t</math>.
 
==== Apply Markov's inequality to the moment generating function ====
The event <math>Z_n\ge t</math> is equivalent to <math>e^{\lambda Z_n}\ge e^{\lambda t}</math> for any <math>\lambda>0</math>. Applying Markov's inequality, we have
:<math>
\begin{align}
\Pr\left[Z_n\ge t\right]
&=\Pr\left[e^{\lambda Z_n}\ge e^{\lambda t}\right]\\
&\le \frac{\mathbf{E}\left[e^{\lambda Z_n}\right]}{e^{\lambda t}}.
\end{align}
</math>
This is exactly the same as what we did to prove the Chernoff bound. Next, we need to bound the moment generating function <math>\mathbf{E}\left[e^{\lambda Z_n}\right]</math>.


==== Bound the moment generating functions ====
The moment generating function
:<math>
\begin{align}
\mathbf{E}\left[e^{\lambda Z_n}\right]
&=\mathbf{E}\left[\mathbf{E}\left[e^{\lambda Z_n}\mid X_0,\ldots,X_{n-1}\right]\right]\\
&=\mathbf{E}\left[\mathbf{E}\left[e^{\lambda (Z_{n-1}+Y_n)}\mid X_0,\ldots,X_{n-1}\right]\right]\\
&=\mathbf{E}\left[\mathbf{E}\left[e^{\lambda Z_{n-1}}\cdot e^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}\right]\right]\\
&=\mathbf{E}\left[e^{\lambda Z_{n-1}}\cdot\mathbf{E}\left[e^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}\right]\right]
\end{align}
</math>
The first and the last equations are due to the fundamental facts about conditional expectation which we proved in the first section.


We then upper bound <math>\mathbf{E}\left[e^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}\right]</math> by a constant. To do so, we need the following technical lemma, which is proved using the convexity of <math>e^{\lambda Y_n}</math>.


{{Theorem
|Lemma|
:Let <math>X</math> be a random variable such that <math>\mathbf{E}[X]=0</math> and <math>|X|\le c</math>. Then for <math>\lambda>0</math>,
::<math>
\mathbf{E}[e^{\lambda X}]\le e^{\lambda^2c^2/2}.
</math>
}}
{{Proof| Observe that for <math>\lambda>0</math>, the function <math>e^{\lambda X}</math> of the variable <math>X</math> is convex in the interval <math>[-c,c]</math>. We draw a line between the two endpoints <math>(-c, e^{-\lambda c})</math> and <math>(c, e^{\lambda c})</math>. The curve of <math>e^{\lambda X}</math> lies entirely below this line. Thus,
:<math>
\begin{align}
e^{\lambda X}
&\le \frac{c-X}{2c}e^{-\lambda c}+\frac{c+X}{2c}e^{\lambda c}\\
&=\frac{e^{\lambda c}+e^{-\lambda c}}{2}+\frac{X}{2c}(e^{\lambda c}-e^{-\lambda c}).
\end{align}
</math>


Since <math>\mathbf{E}[X]=0</math>, we have
:<math>
\begin{align}
\mathbf{E}[e^{\lambda X}]
&\le \mathbf{E}[\frac{e^{\lambda c}+e^{-\lambda c}}{2}+\frac{X}{2c}(e^{\lambda c}-e^{-\lambda c})]\\
&=\frac{e^{\lambda c}+e^{-\lambda c}}{2}+\frac{e^{\lambda c}-e^{-\lambda c}}{2c}\mathbf{E}[X]\\
&=\frac{e^{\lambda c}+e^{-\lambda c}}{2}.
\end{align}
</math>


By expanding both sides as Taylor's series, it can be verified that <math>\frac{e^{\lambda c}+e^{-\lambda c}}{2}\le e^{\lambda^2c^2/2}</math>.
}}
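The lemma can also be checked numerically. A small Python sketch (arbitrary <math>\lambda</math> and <math>c</math>; the two-point distribution on <math>\{-c,c\}</math> is the extreme case of the convexity argument):
<pre>
import math
import random

# Compare E[e^{lambda*X}] for X uniform on {-c, +c} (so E[X] = 0, |X| <= c)
# with the bound e^{lambda^2 c^2 / 2}.
random.seed(3)
lam, c = 0.7, 2.0
mgf = sum(math.exp(lam * random.choice((-c, c))) for _ in range(100000)) / 100000
print(mgf, "<=", math.exp(lam ** 2 * c ** 2 / 2))
</pre>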


Apply the above lemma to the random variable
:<math>
(Y_n \mid X_0,\ldots,X_{n-1})
</math>


We have already shown that its expectation
<math>
\mathbf{E}[(Y_n \mid X_0,\ldots,X_{n-1})]=0,
</math>
and by the bounded difference condition of Azuma's inequality, we have
<math>
|Y_n|=|(X_n-X_{n-1})|\le c_n.
</math>
Thus, due to the above lemma, it holds that
:<math>
\mathbf{E}[e^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}]\le e^{\lambda^2c_n^2/2}.
</math>
 
Back to our analysis of the expectation <math>\mathbf{E}\left[e^{\lambda Z_n}\right]</math>, we have
:<math>
\begin{align}
\mathbf{E}\left[e^{\lambda Z_n}\right]
&=\mathbf{E}\left[e^{\lambda Z_{n-1}}\cdot\mathbf{E}\left[e^{\lambda Y_n}\mid X_0,\ldots,X_{n-1}\right]\right]\\
&\le \mathbf{E}\left[e^{\lambda Z_{n-1}}\cdot e^{\lambda^2c_n^2/2}\right]\\
&= e^{\lambda^2c_n^2/2}\cdot\mathbf{E}\left[e^{\lambda Z_{n-1}}\right] .
\end{align}
</math>


Applying the same analysis to <math>\mathbf{E}\left[e^{\lambda Z_{n-1}}\right]</math>, we can solve the above recursion by
:<math>
\begin{align}
\mathbf{E}\left[e^{\lambda Z_n}\right]
&\le \prod_{k=1}^n e^{\lambda^2c_k^2/2}\\
&= \exp\left(\lambda^2\sum_{k=1}^n c_k^2/2\right).
\end{align}
</math>
 
Going back to Markov's inequality,
:<math>
\begin{align}
\Pr\left[Z_n\ge t\right]
&\le \frac{\mathbf{E}\left[e^{\lambda Z_n}\right]}{e^{\lambda t}}\\
&\le \exp\left(\lambda^2\sum_{k=1}^n c_k^2/2-\lambda t\right).
\end{align}
</math>
 
We then only need to choose a proper <math>\lambda>0</math>.
 
==== Optimization ====
By choosing <math>\lambda=\frac{t}{\sum_{k=1}^n c_k^2}</math>, we have that
:<math>
\exp\left(\lambda^2\sum_{k=1}^n c_k^2/2-\lambda t\right)=\exp\left(-\frac{t^2}{2\sum_{k=1}^n c_k^2}\right).
</math>
Thus, the probability
:<math>
\begin{align}
\Pr\left[X_n-X_0\ge t\right]
&=\Pr\left[Z_n\ge t\right]\\
&\le \exp\left(\lambda^2\sum_{k=1}^n c_k^2/2-\lambda t\right)\\
&= \exp\left(-\frac{t^2}{2\sum_{k=1}^n c_k^2}\right).
\end{align}
</math>
The upper tail of Azuma's inequality is proved. By replacing <math>X_i</math> by <math>-X_i</math>, the lower tail can be treated just as the upper tail. Applying the union bound, Azuma's inequality is proved.
 
=The Doob martingales =
The following definition describes a very general approach for constructing an important type of martingales.
 
{{Theorem
|Definition (The Doob sequence)|
: The Doob sequence of a function <math>f</math> with respect to a sequence of random variables <math>X_1,\ldots,X_n</math> is defined by
::<math>
Y_i=\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_{i}], \quad 0\le i\le n.
</math>
:In particular, <math>Y_0=\mathbf{E}[f(X_1,\ldots,X_n)]</math> and <math>Y_n=f(X_1,\ldots,X_n)</math>.
}}


The Doob sequence of a function defines a martingale. That is
::<math>
\mathbf{E}[Y_i\mid X_1,\ldots,X_{i-1}]=Y_{i-1},
</math>
for any <math>1\le i\le n</math>.
 
To prove this claim, we recall the definition that <math>Y_i=\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_{i}]</math>, thus,
:<math>
\begin{align}
\mathbf{E}[Y_i\mid X_1,\ldots,X_{i-1}]
&=\mathbf{E}[\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_{i}]\mid X_1,\ldots,X_{i-1}]\\
&=\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_{i-1}]\\
&=Y_{i-1},
\end{align}
</math>
where the second equation is due to the fundamental fact about conditional expectation introduced in the first section.
 
The Doob martingale describes a very natural procedure to determine a function value of a sequence of random variables. Suppose that we want to predict the value of a function <math>f(X_1,\ldots,X_n)</math> of random variables <math>X_1,\ldots,X_n</math>. The Doob sequence <math>Y_0,Y_1,\ldots,Y_n</math> represents a sequence of refined estimates of the value of <math>f(X_1,\ldots,X_n)</math>, gradually using more information on the values of the random variables <math>X_1,\ldots,X_n</math>. The first element <math>Y_0</math> is just the expectation of <math>f(X_1,\ldots,X_n)</math>. Element <math>Y_i</math> is the expected value of <math>f(X_1,\ldots,X_n)</math> when the values of <math>X_1,\ldots,X_{i}</math> are known, and <math>Y_n=f(X_1,\ldots,X_n)</math> when <math>f(X_1,\ldots,X_n)</math> is fully determined by <math>X_1,\ldots,X_n</math>.
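For intuition, a Doob sequence can be computed exactly on a tiny example by averaging <math>f</math> over all completions of the revealed prefix. A Python sketch with four fair bits and an arbitrarily chosen <math>f</math>:
<pre>
from itertools import product

def f(xs):
    # An arbitrary function of the bits: the number of equal adjacent pairs.
    return sum(xs[i] == xs[i + 1] for i in range(len(xs) - 1))

def doob(prefix, n=4):
    # Y_i = E[f | X_1..X_i]: average f over all completions of the prefix.
    tails = list(product((0, 1), repeat=n - len(prefix)))
    return sum(f(prefix + t) for t in tails) / len(tails)

# The estimates refine from Y_0 = E[f] to Y_4 = f along any realization.
xs = (1, 0, 0, 1)
print([doob(xs[:i]) for i in range(5)])  # [1.5, 1.5, 1.0, 1.5, 1.0]
</pre>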
 
The following two Doob martingales arise in evaluating the parameters of random graphs.
 
;edge exposure martingale
:Let <math>G</math> be a random graph on <math>n</math> vertices. Let <math>f</math> be a real-valued function of graphs, such as, chromatic number, number of triangles, the size of the largest clique or independent set, etc. Denote that <math>m={n\choose 2}</math>. Fix an arbitrary numbering of potential edges between the <math>n</math> vertices, and denote the edges as <math>e_1,\ldots,e_m</math>. Let
::<math>
X_i=\begin{cases}
1& \mbox{if }e_i\in G,\\
0& \mbox{otherwise}.
\end{cases}
</math>
:Let <math>Y_0=\mathbf{E}[f(G)]</math> and for <math>i=1,\ldots,m</math>, let <math>Y_i=\mathbf{E}[f(G)\mid X_1,\ldots,X_i]</math>.
:The sequence <math>Y_0,Y_1,\ldots,Y_m</math> gives a Doob martingale that is commonly called the '''edge exposure martingale'''.
 
;vertex exposure martingale
: Instead of revealing edges one at a time, we could reveal the set of edges connected to a given vertex, one vertex at a time. Suppose that the vertex set is <math>[n]</math>. Let <math>X_i</math> be the subgraph of <math>G</math> induced by the vertex set <math>[i]</math>, i.e. the first <math>i</math> vertices.
:Let <math>Y_0=\mathbf{E}[f(G)]</math> and for <math>i=1,\ldots,n</math>, let <math>Y_i=\mathbf{E}[f(G)\mid X_1,\ldots,X_i]</math>.
:The sequence <math>Y_0,Y_1,\ldots,Y_n</math> gives a Doob martingale that is commonly called the '''vertex exposure martingale'''.
 
===Chromatic number===
The random graph <math>G(n,p)</math> is the graph on <math>n</math> vertices <math>[n]</math>, obtained by selecting each pair of vertices to be an edge, randomly and independently, with probability <math>p</math>. We denote <math>G\sim G(n,p)</math> if <math>G</math> is generated in this way.


{{Theorem
|Theorem [Shamir and Spencer (1987)]|
:Let <math>G\sim G(n,p)</math>. Let <math>\chi(G)</math> be the chromatic number of <math>G</math>. Then
::<math>\begin{align}
\Pr\left[|\chi(G)-\mathbf{E}[\chi(G)]|\ge t\sqrt{n}\right]\le 2e^{-t^2/2}.
\end{align}</math>
}}
{{Proof| Consider the vertex exposure martingale
:<math>
Y_i=\mathbf{E}[\chi(G)\mid X_1,\ldots,X_i]
</math>
where each <math>X_k</math> exposes the induced subgraph of <math>G</math> on vertex set <math>[k]</math>. A single vertex can always be given a new color so that the graph is properly colored, thus the bounded difference condition
:<math>
|Y_i-Y_{i-1}|\le 1
</math>
is satisfied. Now apply Azuma's inequality to the martingale <math>Y_0,Y_1,\ldots,Y_n</math> with respect to <math>X_1,\ldots,X_n</math>.
}}


For <math>t=\omega(1)</math>, the theorem states that the chromatic number of a random graph is tightly concentrated around its mean. The proof gives no clue as to where the mean is. This actually shows how powerful the martingale inequalities are: we can prove that a distribution is concentrated around its expectation without actually knowing the expectation.


=== Hoeffding's Inequality===
The following theorem states the so-called Hoeffding's inequality. It is a generalized version of the Chernoff bounds. Recall that the Chernoff bounds hold for the sum of independent ''trials''. When the random variables are not trials, Hoeffding's inequality is useful, since it holds for the sum of any independent random variables whose ranges are bounded.
{{Theorem
|Hoeffding's inequality|
: Let <math>X=\sum_{i=1}^nX_i</math>, where <math>X_1,\ldots,X_n</math> are independent random variables with <math>a_i\le X_i\le b_i</math> for each <math>1\le i\le n</math>. Let <math>\mu=\mathbf{E}[X]</math>. Then
::<math>
\Pr[|X-\mu|\ge t]\le 2\exp\left(-\frac{t^2}{2\sum_{i=1}^n(b_i-a_i)^2}\right).
</math>
}}
{{Proof| Define the Doob martingale sequence <math>Y_i=\mathbf{E}\left[\sum_{j=1}^n X_j\,\Big|\, X_1,\ldots,X_{i}\right]</math>. Obviously <math>Y_0=\mu</math> and <math>Y_n=X</math>.


:<math>
\begin{align}
|Y_i-Y_{i-1}|
&=
\left|\mathbf{E}\left[\sum_{j=1}^n X_j\,\Big|\, X_1,\ldots,X_{i}\right]-\mathbf{E}\left[\sum_{j=1}^n X_j\,\Big|\, X_1,\ldots,X_{i-1}\right]\right|\\
&=\left|\sum_{j=1}^i X_j+\sum_{j=i+1}^n\mathbf{E}[X_j]-\sum_{j=1}^{i-1} X_j-\sum_{j=i}^n\mathbf{E}[X_j]\right|\\
&=\left|X_i-\mathbf{E}[X_{i}]\right|\\
&\le b_i-a_i
\end{align}
</math>
Applying Azuma's inequality to the martingale <math>Y_0,\ldots,Y_n</math> with respect to <math>X_1,\ldots, X_n</math>, Hoeffding's inequality is proved.
}}
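A quick numerical comparison (a sketch with arbitrary parameters, taking each <math>X_i</math> uniform on <math>[0,1]</math>, so <math>a_i=0</math> and <math>b_i=1</math>):
<pre>
import math
import random

# Empirical tail of |X - mu| for a sum of n independent Uniform[0,1]
# variables versus the Hoeffding bound 2*exp(-t^2/(2n)).
random.seed(4)
n, t, trials = 100, 15.0, 20000
mu = n / 2
hits = sum(abs(sum(random.random() for _ in range(n)) - mu) >= t
           for _ in range(trials))
print(hits / trials, "<=", 2 * math.exp(-t * t / (2 * n)))
</pre>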




=The Bounded Difference Method=
Combining Azuma's inequality with the construction of Doob martingales, we have the powerful ''Bounded Difference Method'' for concentration of measures.


== For arbitrary random variables ==
Given a sequence of random variables <math>X_1,\ldots,X_n</math> and a function <math>f</math>, the Doob sequence constructs a martingale from them. Combining this construction with Azuma's inequality, we get a very powerful theorem called "the method of averaged bounded differences" which bounds the concentration of an arbitrary function of arbitrary random variables (not necessarily a martingale).


{{Theorem
|Theorem (Method of averaged bounded differences)|
:Let <math>\boldsymbol{X}=(X_1,\ldots, X_n)</math> be arbitrary random variables and let <math>f</math> be a function of <math>X_1,\ldots, X_n</math> satisfying that, for all <math>1\le i\le n</math>,
::<math>
|\mathbf{E}[f(\boldsymbol{X})\mid X_1,\ldots,X_i]-\mathbf{E}[f(\boldsymbol{X})\mid X_1,\ldots,X_{i-1}]|\le c_i,
</math>
:Then
::<math>\begin{align}
\Pr\left[|f(\boldsymbol{X})-\mathbf{E}[f(\boldsymbol{X})]|\ge t\right]\le 2\exp\left(-\frac{t^2}{2\sum_{i=1}^nc_i^2}\right).
\end{align}</math>
}}
{{Proof| Define the Doob martingale sequence <math>Y_0,Y_1,\ldots,Y_n</math> by setting <math>Y_0=\mathbf{E}[f(X_1,\ldots,X_n)]</math> and, for <math>1\le i\le n</math>, <math>Y_i=\mathbf{E}[f(X_1,\ldots,X_n)\mid X_1,\ldots,X_i]</math>. Then the above theorem is a restatement of Azuma's inequality holding for <math>Y_0,Y_1,\ldots,Y_n</math>.
}}
 
== For independent random variables ==
The condition of bounded averaged differences is usually hard to check. This severely limits the usefulness of the method. To overcome this, we introduce a property which is much easier to check, called the Lipschitz condition.


{{Theorem
|Definition (Lipschitz condition)|
:A function <math>f(x_1,\ldots,x_n)</math> satisfies the Lipschitz condition, if for any <math>x_1,\ldots,x_n</math> and any <math>y_i</math>,
::<math>\begin{align}
|f(x_1,\ldots,x_{i-1},x_i,x_{i+1},\ldots,x_n)-f(x_1,\ldots,x_{i-1},y_i,x_{i+1},\ldots,x_n)|\le 1.
\end{align}</math>
}}
In other words, the function satisfies the Lipschitz condition if an arbitrary change in the value of any one argument does not change the value of the function by more than 1.


The difference of 1 can be replaced by arbitrary constants, which gives a generalized version of the Lipschitz condition.
{{Theorem
|Definition (Lipschitz condition, general version)|
:A function <math>f(x_1,\ldots,x_n)</math> satisfies the Lipschitz condition with constants <math>c_i</math>, <math>1\le i\le n</math>, if for any <math>x_1,\ldots,x_n</math> and any <math>y_i</math>,
::<math>\begin{align}
|f(x_1,\ldots,x_{i-1},x_i,x_{i+1},\ldots,x_n)-f(x_1,\ldots,x_{i-1},y_i,x_{i+1},\ldots,x_n)|\le c_i.
\end{align}</math>
}}


The following "method of bounded differences" can be developed for functions satisfying the Lipschitz condition. Unfortunately, in order to imply the condition of averaged bounded differences from the Lipschitz condition, we have to restrict the method to independent random variables.
{{Theorem
|Corollary (Method of bounded differences)|
:Let <math>\boldsymbol{X}=(X_1,\ldots, X_n)</math> be <math>n</math> '''independent''' random variables and let <math>f</math> be a function satisfying the Lipschitz condition with constants <math>c_i</math>, <math>1\le i\le n</math>. Then
::<math>\begin{align}
\Pr\left[|f(\boldsymbol{X})-\mathbf{E}[f(\boldsymbol{X})]|\ge t\right]\le 2\exp\left(-\frac{t^2}{2\sum_{i=1}^nc_i^2}\right).
\end{align}</math>
}}


{{Proof| For convenience, we denote that <math>\boldsymbol{X}_{[i,j]}=(X_i,X_{i+1},\ldots, X_j)</math> for any <math>1\le i\le j\le n</math>.


We first show that the Lipschitz condition with constants <math>c_i</math>, <math>1\le i\le n</math>, implies another condition called the averaged Lipschitz condition (ALC): for any <math>a_i,b_i</math>, <math>1\le i\le n</math>,
:<math>
\left|\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a_i\right]-\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=b_i\right]\right|\le c_i.
</math>
And this condition implies the averaged bounded difference condition: for all <math>1\le i\le n</math>,
::<math>
\left|\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}\right]-\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]}\right]\right|\le c_i.
</math>
Then by applying the method of averaged bounded differences, the corollary can be proved.


For any <math>a</math>, by the law of total expectation,
:<math>
\begin{align}
&\quad\, \mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a\right]\\
&=\sum_{a_{i+1},\ldots,a_n}\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a, \boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right]\cdot\Pr\left[\boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\mid \boldsymbol{X}_{[1,i-1]},X_i=a\right]\\
&=\sum_{a_{i+1},\ldots,a_n}\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a, \boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right]\cdot\Pr\left[\boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right] \qquad (\mbox{independence})\\
&= \sum_{a_{i+1},\ldots,a_n} f(\boldsymbol{X}_{[1,i-1]},a,\boldsymbol{a}_{[i+1,n]})\cdot\Pr\left[\boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right].
\end{align}
</math>
 
Let <math>a=a_i</math> and <math>a=b_i</math> respectively, and take the difference. Then
:<math>
\begin{align}
&\quad\, \left|\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a_i\right]-\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=b_i\right]\right|\\
&=\left|\sum_{a_{i+1},\ldots,a_n}\left(f(\boldsymbol{X}_{[1,i-1]},a_i,\boldsymbol{a}_{[i+1,n]})-f(\boldsymbol{X}_{[1,i-1]},b_i,\boldsymbol{a}_{[i+1,n]})\right)\Pr\left[\boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right]\right|\\
&\le \sum_{a_{i+1},\ldots,a_n}\left|f(\boldsymbol{X}_{[1,i-1]},a_i,\boldsymbol{a}_{[i+1,n]})-f(\boldsymbol{X}_{[1,i-1]},b_i,\boldsymbol{a}_{[i+1,n]})\right|\Pr\left[\boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right]\\
&\le \sum_{a_{i+1},\ldots,a_n}c_i\Pr\left[\boldsymbol{X}_{[i+1,n]}=\boldsymbol{a}_{[i+1,n]}\right] \qquad (\mbox{Lipschitz condition})\\
&=c_i.
\end{align}
</math>
 
Thus, the Lipschitz condition is transformed to the ALC. We then deduce the averaged bounded difference condition from ALC.
 
By the law of total expectation,
:<math>
\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]}\right]=\sum_{a}\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a\right]\cdot\Pr[X_i=a\mid \boldsymbol{X}_{[1,i-1]}].
</math>


We can trivially write <math>\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}\right]</math> as
:<math>
\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}\right]=\sum_{a}\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}\right]\cdot\Pr\left[X_i=a\mid \boldsymbol{X}_{[1,i-1]}\right].
</math>


Hence, the difference is
:<math>
\begin{align}
&\quad \left|\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}\right]-\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]}\right]\right|\\
&=\left|\sum_{a}\left(\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}\right]-\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a\right]\right)\cdot\Pr\left[X_i=a\mid \boldsymbol{X}_{[1,i-1]}\right]\right| \\
&\le \sum_{a}\left|\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i]}\right]-\mathbf{E}\left[f(\boldsymbol{X})\mid \boldsymbol{X}_{[1,i-1]},X_i=a\right]\right|\cdot\Pr\left[X_i=a\mid \boldsymbol{X}_{[1,i-1]}\right] \\
&\le \sum_a c_i\Pr\left[X_i=a\mid \boldsymbol{X}_{[1,i-1]}\right] \qquad (\mbox{due to ALC})\\
&=c_i.
\end{align}
</math>
 
The averaged bounded difference condition is implied. Applying the method of averaged bounded differences, the corollary follows.
}}
== Applications ==
=== Occupancy problem ===
Throwing <math>m</math> balls uniformly and independently at random to <math>n</math> bins, we ask for the occupancies of bins by the balls. In particular, we are interested in the number of empty bins.
This problem can be described equivalently as follows. Let <math>f:[m]\rightarrow[n]</math> be a uniform random function from <math>[m]</math> to <math>[n]</math>. We ask for the number of <math>i\in[n]</math> such that <math>f^{-1}(i)</math> is empty.
For any <math>i\in[n]</math>, let <math>X_i</math> indicate the emptiness of bin <math>i</math>. Let <math>X=\sum_{i=1}^nX_i</math> be the number of empty bins.
:<math>
\mathbf{E}[X_i]=\Pr[\mbox{bin }i\mbox{ is empty}]=\left(1-\frac{1}{n}\right)^m.
</math>
By the linearity of expectation,
:<math>
\mathbf{E}[X]=\sum_{i=1}^n\mathbf{E}[X_i]=n\left(1-\frac{1}{n}\right)^m.
</math>
We want to know how <math>X</math> deviates from this expectation. The complication here is that the <math>X_i</math> are not independent. So we alternatively look at a sequence of independent random variables <math>Y_1,\ldots, Y_m</math>, where <math>Y_j\in[n]</math> represents the bin into which the <math>j</math>th ball falls. Clearly <math>X</math> is a function of <math>Y_1,\ldots, Y_m</math>.
We then observe that changing the value of any <math>Y_i</math> can change the value of <math>X</math> by at most 1, because one ball can affect the emptiness of at most one bin.
Thus as a function of independent random variables <math>Y_1,\ldots, Y_m</math>, <math>X</math> satisfies the Lipschitz condition. Applying the method of bounded differences, it holds that
:<math>
\Pr\left[\left|X-n\left(1-\frac{1}{n}\right)^m\right|\ge t\sqrt{m}\right]=\Pr[|X-\mathbf{E}[X]|\ge t\sqrt{m}]\le 2e^{-t^2/2}
</math>
Thus, for sufficiently large <math>n</math> and <math>m</math>, the number of empty bins is tightly concentrated around <math>n\left(1-\frac{1}{n}\right)^m\approx \frac{n}{e^{m/n}}</math>.
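A simulation sketch (with arbitrarily chosen <math>n</math> and <math>m</math>) comparing the empirical number of empty bins with the formula:
<pre>
import random

# Throw m balls into n bins uniformly and independently; count empty bins.
random.seed(5)
n, m, trials = 50, 100, 20000
total = 0
for _ in range(trials):
    occupied = set(random.randrange(n) for _ in range(m))
    total += n - len(occupied)
print(total / trials, "vs", n * (1 - 1 / n) ** m)
</pre>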
=== Pattern Matching ===
Let <math>\boldsymbol{X}=(X_1,\ldots,X_n)</math> be a sequence of characters chosen independently and uniformly at random from an alphabet <math>\Sigma</math>, where <math>m=|\Sigma|</math>. Let <math>\pi\in\Sigma^k</math> be an arbitrarily fixed string of <math>k</math> characters from <math>\Sigma</math>, called a ''pattern''. Let <math>Y</math> be the number of occurrences of the pattern <math>\pi</math> as a substring of the random string <math>X</math>.
By the linearity of expectation, it is obvious that
:<math>
\mathbf{E}[Y]=(n-k+1)\left(\frac{1}{m}\right)^k.
</math>
We now look at the concentration of <math>Y</math>. The complication again lies in the dependencies between the matches. Yet we will see that <math>Y</math> is tightly concentrated around its expectation if <math>k</math> is relatively small compared to <math>n</math>.
For a fixed pattern <math>\pi</math>, the random variable <math>Y</math> is a function of the independent random variables <math>(X_1,\ldots,X_n)</math>. Any character <math>X_i</math> participates in no more than <math>k</math> matches, thus changing the value of any <math>X_i</math> can affect the value of <math>Y</math> by at most <math>k</math>. So <math>Y</math> satisfies the Lipschitz condition with constant <math>k</math>. Applying the method of bounded differences,
:<math>
\Pr\left[\left|Y-\frac{n-k+1}{m^k}\right|\ge tk\sqrt{n}\right]=\Pr\left[\left|Y-\mathbf{E}[Y]\right|\ge  tk\sqrt{n}\right]\le 2e^{-t^2/2}
</math>
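A simulation sketch (alphabet, pattern, and <math>n</math> chosen arbitrarily) comparing the empirical count of matches with <math>\mathbf{E}[Y]</math>:
<pre>
import random

# Count occurrences of a fixed pattern in a uniform random string.
random.seed(6)
alphabet, pattern = "ab", "aba"            # m = 2, k = 3
n, trials = 1000, 2000
k, m = len(pattern), len(alphabet)
total = 0
for _ in range(trials):
    s = "".join(random.choice(alphabet) for _ in range(n))
    total += sum(s[i:i + k] == pattern for i in range(n - k + 1))
print(total / trials, "vs", (n - k + 1) / m ** k)
</pre>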
=== Combining unit vectors ===
Let <math>u_1,\ldots,u_n</math> be <math>n</math> unit vectors from some normed space. That is, <math>\|u_i\|=1</math> for any <math>1\le i\le n</math>, where <math>\|\cdot\|</math> denote the vector norm (e.g. <math>\ell_1,\ell_2,\ell_\infty</math>) of the space.
Let <math>\epsilon_1,\ldots,\epsilon_n\in\{-1,+1\}</math> be independently chosen and <math>\Pr[\epsilon_i=-1]=\Pr[\epsilon_i=1]=1/2</math>.
Let
:<math>v=\epsilon_1u_1+\cdots+\epsilon_nu_n,
</math>
and
:<math>
X=\|v\|.
</math>
This kind of construction is very useful in combinatorial proofs of metric problems. We will show that by this construction, the random variable <math>X</math> is well concentrated around its mean.
<math>X</math> is a function of independent random variables <math>\epsilon_1,\ldots,\epsilon_n</math>.
By the triangle inequality for norms, it is easy to verify that changing the sign of a unit vector <math>u_i</math> can only change the value of <math>X</math> by at most 2, thus <math>X</math> satisfies the Lipschitz condition with constant 2. The concentration result follows by applying the method of bounded differences:
:<math>
\Pr[|X-\mathbf{E}[X]|\ge 2t\sqrt{n}]\le 2e^{-t^2/2}.
</math>
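A simulation sketch in the plane with the <math>\ell_2</math> norm: fix <math>n</math> random unit vectors once, then combine them with fresh random signs (the sample mean below stands in for <math>\mathbf{E}[X]</math>):
<pre>
import math
import random

random.seed(7)
n, trials = 100, 20000
units = [(math.cos(th), math.sin(th))
         for th in (random.uniform(0, 2 * math.pi) for _ in range(n))]
xs = []
for _ in range(trials):
    sx = sy = 0.0
    for ux, uy in units:
        eps = random.choice((-1, 1))
        sx += eps * ux
        sy += eps * uy
    xs.append(math.hypot(sx, sy))
mean = sum(xs) / trials
t = 2.0
tail = sum(abs(x - mean) >= 2 * t * math.sqrt(n) for x in xs) / trials
print(tail, "<=", 2 * math.exp(-t * t / 2))
</pre>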

= Existence problems =
== Existence by Counting ==
=== Shannon's circuit lower bound ===

This is a fundamental problem in Computer Science.

A '''boolean function''' is a function in the form <math>f:\{0,1\}^n\rightarrow \{0,1\}</math>.

[http://en.wikipedia.org/wiki/Boolean_circuit Boolean circuit] is a mathematical model of computation. Formally, a boolean circuit is a directed acyclic graph. Nodes with indegree zero are input nodes, labeled <math>x_1, x_2, \ldots , x_n</math>. A circuit has a unique node with outdegree zero, called the output node. Every other node is a gate. There are three types of gates: AND, OR (both with indegree two), and NOT (with indegree one).

Computations in Turing machines can be simulated by circuits, and any boolean function in '''P''' can be computed by a circuit with polynomially many gates. Thus, if we can find a function in '''NP''' that cannot be computed by any circuit with polynomially many gates, then '''NP'''<math>\neq</math>'''P'''.

The following theorem due to Shannon says that functions with exponentially large circuit complexity do exist.

{{Theorem
|Theorem (Shannon 1949)|
:There is a boolean function <math>f:\{0,1\}^n\rightarrow \{0,1\}</math> with circuit complexity greater than <math>\frac{2^n}{3n}</math>.
}}
{{Proof|
We first count the number of boolean functions <math>f:\{0,1\}^n\rightarrow \{0,1\}</math>. There are <math>2^{2^n}</math> such boolean functions.

Then we count the number of boolean circuits with a fixed number of gates. Fix an integer <math>t</math>; we count the number of circuits with <math>t</math> gates. By [http://en.wikipedia.org/wiki/De_Morgan's_laws De Morgan's laws], we can assume that all NOTs are pushed back to the inputs. Each gate has one of the two types (AND or OR), and has two inputs. Each of the inputs to a gate is either a constant 0 or 1, an input variable <math>x_i</math>, an inverted input variable <math>\neg x_i</math>, or the output of another gate; thus, there are at most <math>2+2n+t-1</math> possible gate inputs. It follows that the number of circuits with <math>t</math> gates is at most <math>2^t(t+2n+1)^{2t}</math>.

If <math>t=2^n/3n</math>, then
:<math>\frac{2^t(t+2n+1)^{2t}}{2^{2^n}}=o(1)<1,</math>     thus, <math>2^t(t+2n+1)^{2t} < 2^{2^n}.</math>

Each boolean circuit computes one boolean function. Therefore, there must exist a boolean function <math>f</math> which cannot be computed by any circuit with <math>2^n/3n</math> gates.
}}

Note that by Shannon's theorem, not only does there exist a boolean function with exponentially large circuit complexity, but ''almost all'' boolean functions have exponentially large circuit complexity.
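The counting step is easy to check numerically for small <math>n</math>. The following Python sketch compares <math>\log_2</math> of the circuit-count bound with <math>\log_2 2^{2^n}=2^n</math>:
<pre>
import math

# For t = 2^n/(3n) gates, log2 of 2^t (t+2n+1)^{2t} should fall below 2^n.
for n in (8, 12, 16, 20):
    t = 2 ** n // (3 * n)
    log2_circuits = t + 2 * t * math.log2(t + 2 * n + 1)
    print(n, round(log2_circuits), "<", 2 ** n, log2_circuits < 2 ** n)
</pre>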

=== Double counting ===

The double counting principle states the following obvious fact: if the elements of a set are counted in two different ways, the answers are the same.

==== Handshaking lemma ====

The following lemma is a standard demonstration of double counting.

{{Theorem|Handshaking Lemma|
:At a party, the number of guests who shake hands an odd number of times is even.
}}

We model this scenario as an undirected graph <math>G(V,E)</math> with <math>|V|=n</math> standing for the <math>n</math> guests. There is an edge <math>uv\in E</math> if <math>u</math> and <math>v</math> shake hands. Let <math>d(v)</math> be the degree of vertex <math>v</math>, which represents the number of times that <math>v</math> shakes hands. The handshaking lemma states that in any undirected graph, the number of vertices whose degrees are odd is even. It is sufficient to show that the sum of odd degrees is even.

The handshaking lemma is a direct consequence of the following lemma, which was proved by Euler in his 1736 paper on the Seven Bridges of Königsberg, the paper that began the study of graph theory.

Lemma (Euler 1736)
[math]\displaystyle{ \sum_{v\in V}d(v)=2|E| }[/math]
Proof.

We count the number of directed edges. A directed edge is an ordered pair [math]\displaystyle{ (u,v) }[/math] such that [math]\displaystyle{ \{u,v\}\in E }[/math]. There are two ways to count the directed edges.

First, we can enumerate by edges. Take every edge [math]\displaystyle{ uv\in E }[/math] and orient it in both directions, [math]\displaystyle{ (u,v) }[/math] and [math]\displaystyle{ (v,u) }[/math]. This gives us [math]\displaystyle{ 2|E| }[/math] directed edges.

On the other hand, we can enumerate by vertices. Pick every vertex [math]\displaystyle{ v\in V }[/math] and for each of its [math]\displaystyle{ d(v) }[/math] neighbors, say [math]\displaystyle{ u }[/math], generate a directed edge [math]\displaystyle{ (v,u) }[/math]. This gives us [math]\displaystyle{ \sum_{v\in V}d(v) }[/math] directed edges.

The two counts must be equal, since they count the same set of directed edges in two different ways. The lemma follows.

[math]\displaystyle{ \square }[/math]

The handshaking lemma follows directly: the total degree [math]\displaystyle{ 2|E| }[/math] is even and the sum of the even degrees is even, so the sum of the odd degrees is even as well, which forces the number of odd-degree vertices to be even.
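Euler's identity (and with it the handshaking lemma) is easy to confirm empirically. Here is a small Python sanity check on a random graph (a sketch, not part of the notes):

  import random

  n = 50
  edges = {frozenset(random.sample(range(n), 2)) for _ in range(100)}
  degree = [0] * n
  for e in edges:
      for v in e:
          degree[v] += 1
  assert sum(degree) == 2 * len(edges)                    # Euler's lemma
  assert sum(1 for d in degree if d % 2 == 1) % 2 == 0    # handshaking lemma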

Sperner's lemma

A triangulation of a triangle [math]\displaystyle{ abc }[/math] is a decomposition of [math]\displaystyle{ abc }[/math] into smaller triangles (called cells) such that any two distinct cells are either disjoint, share a single vertex, or share an entire edge.

A proper coloring of a triangulation of triangle [math]\displaystyle{ abc }[/math] is a coloring of all vertices in the triangulation with three colors: red, blue, and green, such that the following constraints are satisfied:

  • The three vertices [math]\displaystyle{ a,b }[/math], and [math]\displaystyle{ c }[/math] of the big triangle receive all three colors.
  • The vertices on each of the three sides [math]\displaystyle{ ab }[/math], [math]\displaystyle{ bc }[/math], and [math]\displaystyle{ ac }[/math] receive only the two colors of that side's endpoints.

[Figure: an example of a properly colored triangulation.]

In 1928, the young Emanuel Sperner gave a combinatorial proof of the famous Brouwer fixed point theorem by proving the following result, now called Sperner's lemma, with an extremely elegant argument.

Sperner's Lemma (1928)
For any properly colored triangulation, there exists a cell receiving all three colors.
Proof.

The proof is done by appropriately constructing a dual graph of the triangulation.

The dual graph is defined as follows:

  • Each cell in the triangulation corresponds to a distinct vertex in the dual graph.
  • The outer space corresponds to a distinct vertex in the dual graph.
  • An edge is added between two vertices in the dual graph if the corresponding cells share a [math]\displaystyle{ {\color{Red}\mbox{red}}\mbox{--}{\color{Blue}\mbox{blue}} }[/math] edge.

[Figure: the dual graph of a properly colored triangulation.]

For vertices in the dual graph:

  • If a cell receives all three colors, the corresponding vertex in the dual graph has degree 1;
  • if a cell receives only [math]\displaystyle{ {\color{Red}\mbox{red}} }[/math] and [math]\displaystyle{ {\color{Blue}\mbox{blue}} }[/math], the corresponding vertex has degree 2;
  • in all other cases (the cell misses [math]\displaystyle{ {\color{Red}\mbox{red}} }[/math] or misses [math]\displaystyle{ {\color{Blue}\mbox{blue}} }[/math], including monochromatic cells), the corresponding vertex has degree 0.

Moreover, the unique vertex corresponding to the outer space must have odd degree: a [math]\displaystyle{ {\color{Red}\mbox{red}}\mbox{--}{\color{Blue}\mbox{blue}} }[/math] edge on the boundary of the big triangle can occur only on the side whose endpoints are colored [math]\displaystyle{ {\color{Red}\mbox{red}} }[/math] and [math]\displaystyle{ {\color{Blue}\mbox{blue}} }[/math], and walking along this side from its [math]\displaystyle{ {\color{Red}\mbox{red}} }[/math] endpoint to its [math]\displaystyle{ {\color{Blue}\mbox{blue}} }[/math] endpoint, the color must switch an odd number of times.

By the handshaking lemma, the number of odd-degree vertices in the dual graph is even. Since the vertex for the outer space has odd degree, the number of odd-degree vertices among the cells, i.e. the number of cells receiving all three colors, must be odd, and in particular cannot be zero.

[math]\displaystyle{ \square }[/math]
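The parity statement can also be checked by brute force. The following Python sketch (an illustration, not part of the original notes) randomly and properly colors the standard triangulation of a triangle of side [math]\displaystyle{ k }[/math], with colors 0, 1, 2 playing the roles of red, blue, and green, and verifies that the number of trichromatic cells is odd.

  import random

  def random_proper_coloring(k):
      # Vertices are lattice points (i, j) with i + j <= k; corners are
      # a = (0,0), b = (k,0), c = (0,k), colored 0, 1, 2 respectively.
      color = {}
      for i in range(k + 1):
          for j in range(k + 1 - i):
              if (i, j) == (0, 0):   c = 0
              elif (i, j) == (k, 0): c = 1
              elif (i, j) == (0, k): c = 2
              elif j == 0:           c = random.choice([0, 1])  # side ab
              elif i == 0:           c = random.choice([0, 2])  # side ac
              elif i + j == k:       c = random.choice([1, 2])  # side bc
              else:                  c = random.choice([0, 1, 2])
              color[(i, j)] = c
      return color

  def trichromatic_cells(k, color):
      count = 0
      for i in range(k):
          for j in range(k - i):
              # upward cell
              count += {color[(i, j)], color[(i+1, j)], color[(i, j+1)]} == {0, 1, 2}
              if i + j < k - 1:   # downward cell
                  count += {color[(i+1, j)], color[(i, j+1)], color[(i+1, j+1)]} == {0, 1, 2}
      return count

  for _ in range(100):
      k = random.randint(2, 8)
      assert trichromatic_cells(k, random_proper_coloring(k)) % 2 == 1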

The Pigeonhole Principle

The pigeonhole principle states the following "obvious" fact:

[math]\displaystyle{ n+1 }[/math] pigeons cannot sit in [math]\displaystyle{ n }[/math] holes so that every pigeon is alone in its hole.

This is one of the oldest non-constructive principles: it states only the existence of a pigeonhole with more than one pigeon, and says nothing about how to find such a pigeonhole.

The general form of pigeonhole principle, also known as the averaging principle, is stated as follows.

Generalized pigeonhole principle
If a set consisting of more than [math]\displaystyle{ mn }[/math] objects is partitioned into [math]\displaystyle{ n }[/math] classes, then some class receives more than [math]\displaystyle{ m }[/math] objects.
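As a quick executable illustration (a sketch; the values of [math]\displaystyle{ m }[/math] and [math]\displaystyle{ n }[/math] are arbitrary), distributing [math]\displaystyle{ mn+1 }[/math] objects into [math]\displaystyle{ n }[/math] classes always produces a class with more than [math]\displaystyle{ m }[/math] objects:

  import random
  from collections import Counter

  m, n = 4, 7
  # Assign m*n + 1 objects to n classes arbitrarily (here: at random).
  classes = Counter(random.randrange(n) for _ in range(m * n + 1))
  assert max(classes.values()) > m    # some class exceeds m objects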

Inevitable divisors

The following is one of Erdős's favorite questions for initiating students into mathematics. The proof uses the pigeonhole principle.

Theorem
For any subset [math]\displaystyle{ S\subseteq\{1,2,\ldots,2n\} }[/math] of size [math]\displaystyle{ |S|\gt n\, }[/math], there are two numbers [math]\displaystyle{ a,b\in S }[/math] such that [math]\displaystyle{ a|b\, }[/math].
Proof.

For every odd number [math]\displaystyle{ m\in\{1,2,\ldots,2n\} }[/math], let

[math]\displaystyle{ C_m=\{2^km\mid k\ge 0, 2^km\le 2n\} }[/math].

It is easy to see that for any two numbers [math]\displaystyle{ b\lt a }[/math] from the same [math]\displaystyle{ C_m }[/math], it holds that [math]\displaystyle{ b|a }[/math], since both are of the form [math]\displaystyle{ 2^km }[/math] with the same odd [math]\displaystyle{ m }[/math].

Every number [math]\displaystyle{ a\in S }[/math] can be uniquely represented as [math]\displaystyle{ a=2^km }[/math] for some odd number [math]\displaystyle{ m }[/math], and thus belongs to exactly one [math]\displaystyle{ C_m }[/math] for an odd [math]\displaystyle{ m\in\{1,2,\ldots, 2n\} }[/math]. There are [math]\displaystyle{ n }[/math] odd numbers in [math]\displaystyle{ \{1,2,\ldots,2n\} }[/math], and thus [math]\displaystyle{ n }[/math] different sets [math]\displaystyle{ C_m }[/math]. Since [math]\displaystyle{ |S|\gt n }[/math], there must exist two distinct numbers [math]\displaystyle{ a,b\in S }[/math], say [math]\displaystyle{ b\lt a }[/math], belonging to the same [math]\displaystyle{ C_m }[/math], which implies that [math]\displaystyle{ b|a }[/math].

[math]\displaystyle{ \square }[/math]
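For small [math]\displaystyle{ n }[/math] the theorem can also be verified exhaustively; a minimal Python sketch (not part of the notes):

  from itertools import combinations

  def has_dividing_pair(S):
      # Sorting ensures a < b in each pair, so we test whether a divides b.
      return any(b % a == 0 for a, b in combinations(sorted(S), 2))

  for n in range(1, 8):
      assert all(has_dividing_pair(S)
                 for S in combinations(range(1, 2 * n + 1), n + 1))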

Monotonic subsequences

Let [math]\displaystyle{ (a_1,a_2,\ldots,a_n) }[/math] be a sequence of [math]\displaystyle{ n }[/math] distinct real numbers. A subsequence is a sequence of terms of [math]\displaystyle{ (a_1,a_2,\ldots,a_n) }[/math] appearing in the same order as in [math]\displaystyle{ (a_1,a_2,\ldots,a_n) }[/math]. Formally, a subsequence of [math]\displaystyle{ (a_1,a_2,\ldots,a_n) }[/math] is a sequence [math]\displaystyle{ (a_{i_1},a_{i_2},\ldots,a_{i_k}) }[/math] with [math]\displaystyle{ i_1\lt i_2\lt \cdots\lt i_k }[/math].

A sequence [math]\displaystyle{ (a_1,a_2,\ldots,a_n) }[/math] is increasing if [math]\displaystyle{ a_1\lt a_2\lt \cdots\lt a_n }[/math], and decreasing if [math]\displaystyle{ a_1\gt a_2\gt \cdots\gt a_n }[/math].

We are interested in the longest increasing and decreasing subsequences of a sequence [math]\displaystyle{ (a_1,a_2,\ldots,a_n) }[/math]. Intuitively, the lengths of the longest increasing subsequence and the longest decreasing subsequence cannot both be small. A famous result of Erdős and Szekeres formally justifies this intuition. It is one of the first results in extremal combinatorics, published in their influential 1935 paper.

Theorem (Erdős-Szekeres 1935)
A sequence of more than [math]\displaystyle{ mn }[/math] different real numbers must contain either an increasing subsequence of length [math]\displaystyle{ m+1 }[/math], or a decreasing subsequence of length [math]\displaystyle{ n+1 }[/math].
Proof.
(due to Seidenberg 1959)

Let [math]\displaystyle{ (a_1,a_2,\ldots,a_{N}) }[/math] be the original sequence of [math]\displaystyle{ N\gt mn }[/math] distinct real numbers. Associate with each [math]\displaystyle{ a_i }[/math] a pair [math]\displaystyle{ (x_i,y_i) }[/math], defined as:

  • [math]\displaystyle{ x_i }[/math]: the length of the longest increasing subsequence ending at [math]\displaystyle{ a_i }[/math];
  • [math]\displaystyle{ y_i }[/math]: the length of the longest decreasing subsequence starting at [math]\displaystyle{ a_i }[/math].

A key observation is that [math]\displaystyle{ (x_i,y_i)\neq (x_j,y_j) }[/math] whenever [math]\displaystyle{ i\neq j }[/math]. To see this, suppose [math]\displaystyle{ i\lt j }[/math] and consider two cases:

Case 1: If [math]\displaystyle{ a_i\lt a_j }[/math], then the longest increasing subsequence ending at [math]\displaystyle{ a_i }[/math] can be extended by adding on [math]\displaystyle{ a_j }[/math], so [math]\displaystyle{ x_i\lt x_j }[/math].
Case 2: If [math]\displaystyle{ a_i\gt a_j }[/math], then the longest decreasing subsequence starting at [math]\displaystyle{ a_j }[/math] can be preceded by [math]\displaystyle{ a_i }[/math], so [math]\displaystyle{ y_i\gt y_j }[/math].

Now we put the [math]\displaystyle{ N }[/math] "pigeons" [math]\displaystyle{ a_1,a_2,\ldots,a_N }[/math] into the "pigeonholes" [math]\displaystyle{ \{1,2,\ldots,N\}\times\{1,2,\ldots,N\} }[/math], such that [math]\displaystyle{ a_i }[/math] is put into hole [math]\displaystyle{ (x_i,y_i) }[/math], with at most one pigeon per hole (since distinct [math]\displaystyle{ a_i }[/math] have distinct pairs [math]\displaystyle{ (x_i,y_i) }[/math]).

The number of pigeons is [math]\displaystyle{ N\gt mn }[/math], while the region [math]\displaystyle{ \{1,2,\ldots,m\}\times\{1,2,\ldots,n\} }[/math] contains only [math]\displaystyle{ mn }[/math] holes, each holding at most one pigeon. By the pigeonhole principle, some pigeon must lie outside this region; that is, there exists an [math]\displaystyle{ a_i }[/math] with either [math]\displaystyle{ x_i\gt m }[/math] or [math]\displaystyle{ y_i\gt n }[/math]. By the definition of [math]\displaystyle{ (x_i,y_i) }[/math], there must be either an increasing subsequence of length [math]\displaystyle{ m+1 }[/math], or a decreasing subsequence of length [math]\displaystyle{ n+1 }[/math].

[math]\displaystyle{ \square }[/math]
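Seidenberg's proof is effectively an algorithm. The following Python sketch (the function name and parameters are ad hoc) computes the pairs [math]\displaystyle{ (x_i,y_i) }[/math] by dynamic programming, checks that they are pairwise distinct, and locates an element outside the [math]\displaystyle{ m\times n }[/math] grid.

  import random

  def es_witness(a, m, n):
      N = len(a)
      x = [1] * N   # x[i]: longest increasing subsequence ending at a[i]
      y = [1] * N   # y[i]: longest decreasing subsequence starting at a[i]
      for i in range(N):
          for j in range(i):
              if a[j] < a[i]:
                  x[i] = max(x[i], x[j] + 1)
      for i in reversed(range(N)):
          for j in range(i + 1, N):
              if a[j] < a[i]:
                  y[i] = max(y[i], y[j] + 1)
      assert len({(x[i], y[i]) for i in range(N)}) == N   # pairs are distinct
      return next(i for i in range(N) if x[i] > m or y[i] > n)

  m, n = 5, 4
  a = random.sample(range(1000), m * n + 1)   # m*n + 1 distinct numbers
  i = es_witness(a, m, n)
  print("witness:", a[i], "with (x_i, y_i) outside the m-by-n grid")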

Dirichlet's approximation

Let [math]\displaystyle{ x }[/math] be an irrational number. We now want to approximate [math]\displaystyle{ x }[/math] by a rational number (a fraction).

Since every real interval [math]\displaystyle{ [a,b] }[/math] with [math]\displaystyle{ a\lt b }[/math] contains infinitely many rational numbers, there exist rational numbers arbitrarily close to [math]\displaystyle{ x }[/math]. The price is that the denominators of these fractions may have to be arbitrarily large.

Suppose, however, that we restrict the rationals we may select to have denominators bounded by [math]\displaystyle{ n }[/math]. How closely can we approximate [math]\displaystyle{ x }[/math] now?

The following important theorem is due to Dirichlet and his Schubfachprinzip ("drawer principle"). The theorem is fundamental in number theory and real analysis, but the proof is combinatorial.

Theorem (Dirichlet 1842)
Let [math]\displaystyle{ x }[/math] be an irrational number. For any natural number [math]\displaystyle{ n }[/math], there is a rational number [math]\displaystyle{ \frac{p}{q} }[/math] such that [math]\displaystyle{ 1\le q\le n }[/math] and
[math]\displaystyle{ \left|x-\frac{p}{q}\right|\lt \frac{1}{nq} }[/math].
Proof.

Let [math]\displaystyle{ \{x\}=x-\lfloor x\rfloor }[/math] denote the fractional part of the real number [math]\displaystyle{ x }[/math]. It is obvious that [math]\displaystyle{ \{x\}\in[0,1) }[/math] for any real number [math]\displaystyle{ x }[/math].

Consider the [math]\displaystyle{ n+1 }[/math] numbers [math]\displaystyle{ \{kx\} }[/math], [math]\displaystyle{ k=1,2,\ldots,n+1 }[/math]. These [math]\displaystyle{ n+1 }[/math] numbers (pigeons) belong to the following [math]\displaystyle{ n }[/math] intervals (pigeonholes):

[math]\displaystyle{ \left(0,\frac{1}{n}\right),\left(\frac{1}{n},\frac{2}{n}\right),\ldots,\left(\frac{n-1}{n},1\right) }[/math].

Since [math]\displaystyle{ x }[/math] is irrational, [math]\displaystyle{ \{kx\} }[/math] cannot coincide with any endpoint of the above intervals.

By the pigeonhole principle, there exist [math]\displaystyle{ 1\le a\lt b\le n+1 }[/math], such that [math]\displaystyle{ \{ax\},\{bx\} }[/math] are in the same interval, thus

[math]\displaystyle{ |\{bx\}-\{ax\}|\lt \frac{1}{n} }[/math].

Therefore,

[math]\displaystyle{ |(b-a)x-\left(\lfloor bx\rfloor-\lfloor ax\rfloor\right)|\lt \frac{1}{n} }[/math].

Let [math]\displaystyle{ q=b-a }[/math] and [math]\displaystyle{ p=\lfloor bx\rfloor-\lfloor ax\rfloor }[/math]. We have [math]\displaystyle{ |qx-p|\lt \frac{1}{n} }[/math] and [math]\displaystyle{ 1\le q\le n }[/math]. Dividing both sides by [math]\displaystyle{ q }[/math] gives [math]\displaystyle{ \left|x-\frac{p}{q}\right|\lt \frac{1}{nq} }[/math], which proves the theorem.

[math]\displaystyle{ \square }[/math]
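The pigeonhole argument is constructive and translates directly into a procedure. Below is a Python sketch (illustrative; floating-point arithmetic stands in for exact real arithmetic) that finds such a fraction for [math]\displaystyle{ x=\sqrt{2} }[/math]:

  from math import floor, sqrt

  def dirichlet(x, n):
      # Drop the pigeons {kx}, k = 1..n+1, into the n intervals of width 1/n.
      hole = {}
      for k in range(1, n + 2):
          h = int((k * x - floor(k * x)) * n)   # index of the interval
          if h in hole:                         # two pigeons share a hole
              a, b = hole[h], k
              return floor(b * x) - floor(a * x), b - a   # p, q
          hole[h] = k

  x, n = sqrt(2), 100
  p, q = dirichlet(x, n)
  assert 1 <= q <= n and abs(x - p / q) < 1 / (n * q)
  print(p, "/", q, "approximates sqrt(2) with error", abs(x - p / q))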