# Combinatorics (Fall 2011)/Matching theory

## Systems of Distinct Representatives (SDR)

A system of distinct representatives (SDR) (also called a transversal) for a sequence of (not necessarily distinct) sets ${\displaystyle S_{1},S_{2},\ldots ,S_{m}}$ is a sequence of distinct elements ${\displaystyle x_{1},x_{2},\ldots ,x_{m}}$ such that ${\displaystyle x_{i}\in S_{i}}$ for all ${\displaystyle i=1,2,\ldots ,m}$.

### Hall's marriage theorem

If the sets ${\displaystyle S_{1},S_{2},\ldots ,S_{m}}$ have a system of distinct representatives ${\displaystyle x_{1}\in S_{1},x_{2}\in S_{2},\ldots ,x_{m}\in S_{m}}$, then it is obvious that ${\displaystyle \left|S_{1}\cup S_{2}\cup \cdots \cup S_{m}\right|\geq |\{x_{1},x_{2},\ldots ,x_{m}\}|=m}$. Moreover, for any subset ${\displaystyle I\subseteq \{1,2,\ldots ,m\}}$,

${\displaystyle \left|\bigcup _{i\in I}S_{i}\right|\geq |\{x_{i}\mid i\in I\}|=|I|}$

because ${\displaystyle x_{1},x_{2},\ldots ,x_{m}}$ are distinct.

Surprisingly, this obvious necessary condition for the existence of an SDR is also sufficient. This is stated by Hall's theorem, also called the marriage theorem.

 Hall's Theorem The sets ${\displaystyle S_{1},S_{2},\ldots ,S_{m}}$ have a system of distinct representatives (SDR) if and only if ${\displaystyle \left|\bigcup _{i\in I}S_{i}\right|\geq |I|}$ for all ${\displaystyle I\subseteq \{1,2,\ldots ,m\}}$.

The condition that ${\displaystyle \left|\bigcup _{i\in I}S_{i}\right|\geq |I|}$ for all ${\displaystyle I\subseteq \{1,2,\ldots ,m\}}$ is called Hall's condition.
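
For small instances, both Hall's condition and the existence of an SDR can be checked directly. The following Python sketch is an illustration, not part of the original notes (function names are mine): it tests Hall's condition by brute force and searches for an SDR by backtracking; by Hall's theorem the two answers must always agree.

```python
from itertools import combinations

def hall_condition(sets):
    """Check Hall's condition: |union of S_i, i in I| >= |I| for every
    nonempty index subset I.  Brute force, so only for small m."""
    for k in range(1, len(sets) + 1):
        for I in combinations(range(len(sets)), k):
            if len(set().union(*(sets[i] for i in I))) < k:
                return False
    return True

def find_sdr(sets, chosen=()):
    """Return a tuple (x_1, ..., x_m) of distinct representatives with
    x_i in sets[i], or None if no SDR exists.  Simple backtracking."""
    i = len(chosen)
    if i == len(sets):
        return chosen
    for x in sets[i]:
        if x not in chosen:
            result = find_sdr(sets, chosen + (x,))
            if result is not None:
                return result
    return None
```

For instance, `find_sdr([{1,2},{2,3},{1,3}])` returns a valid SDR, while `[{1},{1},{1,2}]` violates Hall's condition for ${\displaystyle I=\{1,2\}}$ and has none.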

Proof.
 We only need to prove the sufficiency of Hall's condition for the existence of an SDR. We do this by induction on ${\displaystyle m}$. When ${\displaystyle m=1}$, the theorem trivially holds. Now assume the theorem holds for any number of sets smaller than ${\displaystyle m}$.

A subcollection of sets ${\displaystyle \{S_{i}\mid i\in I\}}$ with ${\displaystyle |I|<m}$ is called a critical family if ${\displaystyle \left|\bigcup _{i\in I}S_{i}\right|=|I|}$.

Case 1: There is no critical family, i.e. for each nonempty ${\displaystyle I\subset \{1,2,\ldots ,m\}}$, ${\displaystyle \left|\bigcup _{i\in I}S_{i}\right|>|I|}$.

Take ${\displaystyle S_{m}}$ and choose an arbitrary ${\displaystyle x\in S_{m}}$ as its representative. Remove ${\displaystyle x}$ from all other sets by letting ${\displaystyle S'_{i}=S_{i}\setminus \{x\}}$ for ${\displaystyle 1\leq i\leq m-1}$. Then for all nonempty ${\displaystyle I\subseteq \{1,2,\ldots ,m-1\}}$,

${\displaystyle \left|\bigcup _{i\in I}S_{i}'\right|\geq \left|\bigcup _{i\in I}S_{i}\right|-1\geq |I|}$.

By the induction hypothesis, ${\displaystyle S'_{1},\ldots ,S'_{m-1}}$ have an SDR, say ${\displaystyle x_{1}\in S'_{1},\ldots ,x_{m-1}\in S'_{m-1}}$. None of them equals ${\displaystyle x}$ because ${\displaystyle x}$ has been removed. Thus ${\displaystyle x_{1},\ldots ,x_{m-1}}$ together with ${\displaystyle x}$ form an SDR for ${\displaystyle S_{1},S_{2},\ldots ,S_{m}}$.

Case 2: There is a critical family, i.e. there exists ${\displaystyle I\subset \{1,2,\ldots ,m\}}$ with ${\displaystyle |I|<m}$ such that ${\displaystyle \left|\bigcup _{i\in I}S_{i}\right|=|I|}$.

Renaming the sets if necessary, suppose ${\displaystyle S_{m-k+1},\ldots ,S_{m}}$ are such a collection of ${\displaystyle k<m}$ sets. Hall's condition certainly holds for these sets, and since ${\displaystyle k<m}$, by the induction hypothesis there is an SDR for them, say ${\displaystyle x_{m-k+1}\in S_{m-k+1},\ldots ,x_{m}\in S_{m}}$.

Remove these ${\displaystyle k}$ elements from the remaining sets by letting ${\displaystyle S'_{i}=S_{i}\setminus \{x_{m-k+1},\ldots ,x_{m}\}}$ for ${\displaystyle 1\leq i\leq m-k}$. For any ${\displaystyle I\subseteq \{1,2,\ldots ,m-k\}}$, write ${\displaystyle S=\bigcup _{i\in I}S_{i}\cup \bigcup _{i=m-k+1}^{m}S_{i}}$. By Hall's condition, ${\displaystyle |S|\geq |I|+k}$, and since the critical family covers exactly ${\displaystyle k}$ elements,

${\displaystyle \left|\bigcup _{i\in I}S_{i}'\right|\geq |S|-\left|\bigcup _{i=m-k+1}^{m}S_{i}\right|\geq |I|+k-k=|I|}$.

By the induction hypothesis, there is an SDR for ${\displaystyle S'_{1},\ldots ,S'_{m-k}}$, say ${\displaystyle x_{1}\in S'_{1},\ldots ,x_{m-k}\in S'_{m-k}}$. Combining it with the SDR ${\displaystyle x_{m-k+1},\ldots ,x_{m}}$ for ${\displaystyle S_{m-k+1},\ldots ,S_{m}}$, we obtain an SDR for ${\displaystyle S_{1},\ldots ,S_{m}}$.
${\displaystyle \square }$

Hall's theorem is usually stated as a theorem about the existence of a matching in a bipartite graph.

In a graph ${\displaystyle G(V,E)}$, a matching ${\displaystyle M\subseteq E}$ is an independent set of edges, that is, for any ${\displaystyle e_{1},e_{2}\in M}$ with ${\displaystyle e_{1}\neq e_{2}}$, ${\displaystyle e_{1}}$ and ${\displaystyle e_{2}}$ are not incident to the same vertex.

In a bipartite graph ${\displaystyle G(U,V,E)}$, we say ${\displaystyle M}$ is a matching of ${\displaystyle U}$ (or a matching of ${\displaystyle V}$), if every vertex in ${\displaystyle U}$ (or ${\displaystyle V}$) is adjacent to some edge in ${\displaystyle M}$, i.e., all vertices in ${\displaystyle U}$ (or ${\displaystyle V}$) are matched.

In a graph ${\displaystyle G(V,E)}$, for any vertex ${\displaystyle v\in V}$, let ${\displaystyle N(v)}$ denote the set of neighbors of ${\displaystyle v}$ in ${\displaystyle G}$; and for any vertex set ${\displaystyle S\subseteq V}$, we overload the notation as ${\displaystyle N(S)=\bigcup _{v\in S}N(v)}$, i.e. the set of vertices adjacent to at least one vertex in ${\displaystyle S}$.

 Hall's Theorem (graph theory form) A bipartite graph ${\displaystyle G(U,V,E)}$ has a matching of ${\displaystyle U}$ if and only if ${\displaystyle \left|N(S)\right|\geq |S|}$ for all ${\displaystyle S\subseteq U}$.

Consider the collection of sets ${\displaystyle N(u),u\in U}$. A matching of ${\displaystyle U}$ is an SDR for these sets. Then clearly the theorem is equivalent to Hall's theorem.
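
A maximum matching in a bipartite graph can be computed with the classic augmenting-path algorithm (often attributed to Kuhn). The sketch below is my own illustration of that standard approach, not part of the notes: `adj` maps each left vertex to its neighbors, and the returned matching covers all of ${\displaystyle U}$ exactly when Hall's condition holds.

```python
def max_bipartite_matching(adj, n_left):
    """adj[u]: iterable of right-side neighbors of left vertex u.
    Returns a dict mapping each matched right vertex to its left partner.
    Repeatedly tries to enlarge the matching along alternating paths."""
    match_right = {}

    def try_augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                # v is free, or its current partner can be rematched elsewhere
                if v not in match_right or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    for u in range(n_left):
        try_augment(u, set())
    return match_right
```

For example, on ${\displaystyle U=V=\{0,1,2\}}$ with `adj = {0: [0, 1], 1: [1, 2], 2: [0, 2]}` the algorithm matches all three left vertices, while on `{0: [0], 1: [0]}` (which violates Hall's condition) only one can be matched.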

### Doubly stochastic matrices

Although it may seem specialized, Hall's theorem is among the most useful tools in combinatorics and is used to prove many seemingly unrelated results. One example is the Birkhoff-von Neumann theorem.

An ${\displaystyle n\times n}$ nonnegative matrix ${\displaystyle A=(a_{ij})}$ is called a doubly stochastic matrix if ${\displaystyle \sum _{j}a_{ij}=1\,}$ for every row ${\displaystyle i}$ and ${\displaystyle \sum _{i}a_{ij}=1\,}$ for every column ${\displaystyle j}$.

Doubly stochastic matrices are significant in the study of Markov chains.

A convex combination is a special type of linear combination: ${\displaystyle \lambda _{1}x_{1}+\lambda _{2}x_{2}+\cdots +\lambda _{n}x_{n}}$ is said to be a convex combination of ${\displaystyle x_{1},x_{2},\ldots ,x_{n}}$ if ${\displaystyle \lambda _{i}\geq 0}$ for all ${\displaystyle i=1,2,\ldots ,n}$ and ${\displaystyle \sum _{i=1}^{n}\lambda _{i}=1}$.

A permutation matrix is a 0-1 matrix such that every row and every column has exactly one 1-entry.

A fundamental result regarding doubly stochastic matrices is the Birkhoff-von Neumann theorem, which states the remarkable fact that every doubly stochastic matrix can be expressed as a convex combination of finitely many permutation matrices.

 Theorem (Birkhoff 1949; von Neumann 1953) Every doubly stochastic matrix is a convex combination of permutation matrices.

Surprisingly, this linear algebra theorem is proved using Hall's theorem, a combinatorial tool.

Proof.
 We prove a more general result: every ${\displaystyle n\times n}$ nonnegative matrix ${\displaystyle A=(a_{ij})}$ with ${\displaystyle \forall i,\sum _{j=1}^{n}a_{ij}=\gamma }$ and ${\displaystyle \forall j,\sum _{i=1}^{n}a_{ij}=\gamma }$ for some ${\displaystyle \gamma >0}$ can be expressed as a linear combination ${\displaystyle A=\sum _{i=1}^{s}\lambda _{i}P_{i}}$ of permutation matrices ${\displaystyle P_{1},\ldots ,P_{s}}$, where ${\displaystyle \lambda _{1},\ldots ,\lambda _{s}}$ are nonnegative reals with ${\displaystyle \sum _{i=1}^{s}\lambda _{i}=\gamma }$. The Birkhoff-von Neumann theorem is the special case ${\displaystyle \gamma =1}$.

We prove this by induction on the number of non-zero entries in ${\displaystyle A}$, denoted ${\displaystyle m}$. Since every row and column sum equals ${\displaystyle \gamma >0}$, there are at least ${\displaystyle n}$ such entries. If there are exactly ${\displaystyle n}$ non-zero entries, the only possibility is ${\displaystyle A=\gamma P}$ for some permutation matrix ${\displaystyle P}$.

Now suppose ${\displaystyle A}$ has ${\displaystyle m>n}$ non-zero entries and the statement holds for matrices with fewer non-zero entries. Define ${\displaystyle S_{i}=\{j\mid a_{ij}>0\},\quad i=1,2,\ldots ,n,}$ and observe that ${\displaystyle S_{1},S_{2},\ldots ,S_{n}}$ satisfy Hall's condition. Otherwise, if there existed an ${\displaystyle I\subseteq \{1,2,\ldots ,n\}}$ with ${\displaystyle \left|\bigcup _{i\in I}S_{i}\right|<|I|}$, then all the non-zero entries of the rows in ${\displaystyle I}$ would occupy fewer than ${\displaystyle |I|}$ columns; hence the sum of these entries would satisfy ${\displaystyle \sum _{i\in I}\sum _{j\in S_{i}}a_{ij}\leq \sum _{j\in \bigcup _{i\in I}S_{i}}\sum _{i=1}^{n}a_{ij}<|I|\gamma }$, contradicting ${\displaystyle \sum _{i\in I}\sum _{j\in S_{i}}a_{ij}=\sum _{i\in I}\sum _{j=1}^{n}a_{ij}=|I|\gamma }$.

By Hall's theorem, there is an SDR ${\displaystyle j_{1}\in S_{1},\ldots ,j_{n}\in S_{n}}$. Take the permutation matrix ${\displaystyle P=(p_{ij})}$ with ${\displaystyle p_{ij}=1}$ if and only if ${\displaystyle j=j_{i}}$. Let ${\displaystyle \lambda =\min\{a_{ij_{i}}\mid i=1,2,\ldots ,n\}}$ and consider the matrix ${\displaystyle A'=A-\lambda P}$. Clearly ${\displaystyle A'}$ is nonnegative and has fewer non-zero entries than ${\displaystyle A}$. Moreover, ${\displaystyle A'}$ satisfies ${\displaystyle \forall i,\sum _{j=1}^{n}a'_{ij}=\gamma -\lambda }$ and ${\displaystyle \forall j,\sum _{i=1}^{n}a'_{ij}=\gamma -\lambda }$. (If ${\displaystyle \lambda =\gamma }$, then ${\displaystyle A'=0}$ and ${\displaystyle A=\gamma P}$, and we are done.) Applying the induction hypothesis, ${\displaystyle A'=\sum _{i=1}^{s}\lambda _{i}P_{i}}$ for permutation matrices ${\displaystyle P_{1},\ldots ,P_{s}}$ and nonnegative ${\displaystyle \lambda _{1},\ldots ,\lambda _{s}}$ with ${\displaystyle \sum _{i=1}^{s}\lambda _{i}=\gamma -\lambda }$. Therefore ${\displaystyle A=\lambda P+A'=\lambda P+\sum _{i=1}^{s}\lambda _{i}P_{i}}$, where ${\displaystyle \lambda +\sum _{i=1}^{s}\lambda _{i}=\lambda +(\gamma -\lambda )=\gamma }$.
${\displaystyle \square }$

### Min-max theorems

In combinatorics (and also in other branches of mathematics), there is a family of theorems which relate the minimum of one thing to the maximum of something else. The following are some examples.

• König-Egerváry theorem (König 1931; Egerváry 1931): in a bipartite graph, the maximum number of edges in a matching equals the minimum number of vertices in a vertex cover.
• Menger's theorem (Menger 1927): the minimum number of vertices separating two given vertices in a graph equals the maximum number of vertex-disjoint paths between the two vertices.
• Dilworth's theorem (Dilworth 1950): the minimum number of chains which cover a partially ordered set equals the maximum number of elements in an antichain.

We will present the König-Egerváry theorem for bipartite graphs.

We already know that a matching is just an independent edge set.

A vertex cover in a graph ${\displaystyle G(V,E)}$ is a vertex set ${\displaystyle C\subseteq V}$ such that every edge ${\displaystyle e\in E}$ is incident to some ${\displaystyle u\in C}$, that is, all edges in the graph are "covered" by vertices in the vertex cover ${\displaystyle C}$.

The König-Egerváry theorem (also called König's theorem) states the equality of the sizes of a maximum matching and a minimum vertex cover in bipartite graphs.

 König-Egerváry Theorem (graph theory form) In any bipartite graph, the size of a maximum matching equals the size of a minimum vertex cover.

The König-Egerváry theorem can be reformulated in its matrix form. A bipartite graph ${\displaystyle G(U,V,E)}$ can be represented as a ${\displaystyle |U|\times |V|}$ matrix ${\displaystyle A}$ with 0-1 entries. For any ${\displaystyle u\in U}$ and ${\displaystyle v\in V}$, ${\displaystyle A(u,v)=1}$ if and only if ${\displaystyle uv\in E}$. (Note that this is different from the adjacency matrix of a graph.)

Then, a matching in ${\displaystyle G(U,V,E)}$ corresponds to a set of independent 1's in ${\displaystyle A}$: a set of 1's that do not share rows or columns. A vertex cover corresponds to a set of rows and columns covering all 1's in ${\displaystyle A}$: a set of rows and columns such that every 1-entry in ${\displaystyle A}$ belongs to at least one of them.

It is easy to see the König-Egerváry theorem for bipartite graphs can be equivalently described as follows:

 König-Egerváry Theorem (matrix form) Let ${\displaystyle A}$ be an ${\displaystyle m\times n}$ 0-1 matrix. The maximum number of independent 1's is equal to the minimum number of rows and columns required to cover all the 1's in ${\displaystyle A}$.
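
For small 0-1 matrices, both quantities in the matrix form can be computed by exhaustive search, so the theorem can be checked directly. A brute-force Python sketch (illustrative only; exponential time, function names my own):

```python
from itertools import combinations

def max_independent_ones(A):
    """Maximum number of 1's in A with no two sharing a row or a column
    (brute force over subsets of 1-positions)."""
    ones = [(i, j) for i, row in enumerate(A)
            for j, x in enumerate(row) if x == 1]
    for k in range(len(ones), 0, -1):
        for S in combinations(ones, k):
            if (len({i for i, _ in S}) == k and
                    len({j for _, j in S}) == k):
                return k
    return 0

def min_line_cover(A):
    """Minimum number of rows plus columns covering all 1's in A
    (brute force over all row/column selections)."""
    m, n = len(A), len(A[0])
    ones = [(i, j) for i, row in enumerate(A)
            for j, x in enumerate(row) if x == 1]
    for k in range(m + n + 1):
        for r in range(k + 1):
            for rows in combinations(range(m), r):
                for cols in combinations(range(n), k - r):
                    if all(i in rows or j in cols for i, j in ones):
                        return k
    return m + n
```

On the matrix `[[1,1,0],[1,0,0],[0,0,1]]`, for instance, both functions return 3, as the theorem predicts.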

We give a proof using Hall's theorem.

Proof.
 Let ${\displaystyle r}$ denote the maximum number of independent 1's in ${\displaystyle A}$ and ${\displaystyle s}$ the minimum number of rows and columns needed to cover all 1's in ${\displaystyle A}$. Clearly ${\displaystyle r\leq s}$, since covering any set of ${\displaystyle r}$ independent 1's requires ${\displaystyle r}$ distinct rows and columns.

We now prove ${\displaystyle r\geq s}$. Assume that some ${\displaystyle a}$ rows and ${\displaystyle b}$ columns cover all the 1's in ${\displaystyle A}$ with ${\displaystyle s=a+b}$, i.e. the cover is minimum. Permuting rows or columns changes neither ${\displaystyle r}$ nor ${\displaystyle s}$ (reordering the vertices on either side of a bipartite graph changes nothing about the sizes of matchings and vertex covers), so we may assume that the first ${\displaystyle a}$ rows and the first ${\displaystyle b}$ columns cover all the 1's. Write ${\displaystyle A}$ in the form

${\displaystyle A={\begin{bmatrix}B_{a\times b}&C_{a\times (n-b)}\\D_{(m-a)\times b}&E_{(m-a)\times (n-b)}\end{bmatrix}}}$,

where the submatrix ${\displaystyle E}$ has only zero entries. We will show that there are ${\displaystyle a}$ independent 1's in ${\displaystyle C}$ and ${\displaystyle b}$ independent 1's in ${\displaystyle D}$; together ${\displaystyle A}$ has ${\displaystyle a+b=s}$ independent 1's, which implies ${\displaystyle r\geq s}$, as desired.

Write ${\displaystyle C=(c_{ij})}$ and define ${\displaystyle S_{i}=\{j\mid c_{ij}=1\}}$ for ${\displaystyle i=1,2,\ldots ,a}$; obviously ${\displaystyle S_{i}\subseteq \{1,2,\ldots ,n-b\}}$. We claim that the sequence ${\displaystyle S_{1},S_{2},\ldots ,S_{a}}$ has a system of distinct representatives, i.e. we can choose a 1 from each row of ${\displaystyle C}$, no two in the same column. Otherwise, Hall's theorem tells us that there exists some ${\displaystyle I\subseteq \{1,2,\ldots ,a\}}$ with ${\displaystyle \left|\bigcup _{i\in I}S_{i}\right|<|I|}$, that is, the 1's in the rows of ${\displaystyle C}$ indexed by ${\displaystyle I}$ can be covered by fewer than ${\displaystyle |I|}$ columns. Then the 1's in ${\displaystyle C}$ can be covered by the remaining ${\displaystyle a-|I|}$ rows together with fewer than ${\displaystyle |I|}$ columns, so the 1's in ${\displaystyle A}$ can be covered by fewer than ${\displaystyle a+b}$ rows and columns, contradicting the assumption that the minimum cover of all 1's in ${\displaystyle A}$ has size ${\displaystyle a+b}$.

Therefore ${\displaystyle C}$ has ${\displaystyle a}$ independent 1's, and by the same argument ${\displaystyle D}$ has ${\displaystyle b}$ independent 1's. Since ${\displaystyle C}$ and ${\displaystyle D}$ share no rows or columns, the number of independent 1's in ${\displaystyle A}$ is ${\displaystyle r\geq a+b=s}$.
${\displaystyle \square }$

## Chains and antichains

Recall that a partially ordered set (or poset) consists of a set ${\displaystyle P}$ and a binary relation ${\displaystyle \leq }$ defined on ${\displaystyle P}$, satisfying

• reflexivity: ${\displaystyle x\leq x}$;
• antisymmetry: ${\displaystyle x\leq y}$ and ${\displaystyle y\leq x}$ only if ${\displaystyle x=y}$;
• transitivity: if ${\displaystyle x\leq y}$ and ${\displaystyle y\leq z}$, then ${\displaystyle x\leq z}$.

Two elements ${\displaystyle x,y\in P}$ are said to be comparable if ${\displaystyle x\leq y}$ or ${\displaystyle y\leq x}$; and ${\displaystyle x,y\in P}$ are incomparable if otherwise.

A poset ${\displaystyle P}$ is a totally ordered set, or a chain, if all pairs of elements in ${\displaystyle P}$ are comparable. A poset ${\displaystyle P}$ is an antichain if all pairs of elements in ${\displaystyle P}$ are incomparable.

### Dilworth's theorem

Given a poset ${\displaystyle P}$, we can partition it into chains. What is the minimum number of chains into which ${\displaystyle P}$ can be partitioned? Dilworth's theorem tells us that it equals the size of the largest antichain.

 Dilworth's Theorem Suppose that the largest antichain in the poset ${\displaystyle P}$ has size ${\displaystyle r}$. Then ${\displaystyle P}$ can be partitioned into ${\displaystyle r}$ disjoint chains.
Proof.
 Suppose ${\displaystyle P}$ has an antichain ${\displaystyle A}$, and ${\displaystyle P}$ can be partitioned into disjoint chains ${\displaystyle C_{1},C_{2},\ldots ,C_{s}}$. Then ${\displaystyle |A|\leq s}$, since every chain can pass through an antichain at most once, that is, ${\displaystyle |C_{i}\cap A|\leq 1}$ for all ${\displaystyle i=1,2,\ldots ,s}$. Therefore, we only need to show that there exist an antichain ${\displaystyle A\subseteq P}$ and a partition of ${\displaystyle P}$ into at most ${\displaystyle |A|}$ chains.

Let ${\displaystyle n=|P|}$. Define a bipartite graph ${\displaystyle G(U,V,E)}$ where ${\displaystyle U=V=P}$, and for any ${\displaystyle u\in U}$ and ${\displaystyle v\in V}$, ${\displaystyle uv\in E}$ if and only if ${\displaystyle u<v}$ in the poset ${\displaystyle P}$. By the König-Egerváry theorem, there is a matching ${\displaystyle M\subseteq E}$ and a vertex cover ${\displaystyle C}$, so that every edge in ${\displaystyle E}$ is incident to at least one vertex in ${\displaystyle C}$, with ${\displaystyle |M|=|C|}$. Denote ${\displaystyle m=|M|=|C|}$.

Let ${\displaystyle A}$ be the set of elements of ${\displaystyle P}$ that do not correspond to any vertex in ${\displaystyle C}$. The ${\displaystyle m}$ vertices of ${\displaystyle C}$ correspond to at most ${\displaystyle m}$ elements of ${\displaystyle P}$, so ${\displaystyle |A|\geq n-m}$. We claim that ${\displaystyle A}$ is an antichain. By contradiction, assume there exist ${\displaystyle x,y\in A}$ with ${\displaystyle x<y}$. Let ${\displaystyle u_{x}\in U,v_{x}\in V}$ be the vertices corresponding to ${\displaystyle x}$, and ${\displaystyle u_{y}\in U,v_{y}\in V}$ those corresponding to ${\displaystyle y}$; then ${\displaystyle u_{x}v_{y}\in E}$. But ${\displaystyle A}$ includes only elements whose corresponding vertices are not in ${\displaystyle C}$, so none of ${\displaystyle u_{x},v_{x},u_{y},v_{y}}$ is in ${\displaystyle C}$, contradicting the fact that ${\displaystyle C}$ is a vertex cover of ${\displaystyle G}$.

Let ${\displaystyle B}$ be the family of chains formed by putting ${\displaystyle u}$ and ${\displaystyle v}$ in the same chain whenever ${\displaystyle uv\in M}$. A moment's thought shows that the number of chains in ${\displaystyle B}$ equals the number of unmatched vertices in ${\displaystyle U}$ (or ${\displaystyle V}$); thus ${\displaystyle |B|=n-m}$.

Altogether, we have constructed an antichain of size ${\displaystyle |A|\geq n-m}$ and a partition of ${\displaystyle P}$ into ${\displaystyle |B|=n-m}$ disjoint chains. The theorem is proved.
${\displaystyle \square }$
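
The proof yields an algorithm for the chain partition: build the bipartite comparability graph, compute a maximum matching by augmenting paths, and join matched pairs into chains. A Python sketch of this construction (my own illustration; `less` is a user-supplied strict comparability test):

```python
def chain_partition(elements, less):
    """Partition a finite poset into disjoint chains, following the proof
    of Dilworth's theorem: build a bipartite graph with an edge (u, v)
    whenever u < v, compute a maximum matching by augmenting paths, and
    join u, v into one chain for every matched edge.  The number of
    chains is n - |matching|."""
    n = len(elements)
    match_right = {}                   # right vertex -> matched left vertex

    def augment(u, seen):
        for v in range(n):
            if less(elements[u], elements[v]) and v not in seen:
                seen.add(v)
                if v not in match_right or augment(match_right[v], seen):
                    match_right[v] = u
                    return True
        return False

    for u in range(n):
        augment(u, set())

    successor = {u: v for v, u in match_right.items()}
    chains = []
    for u in range(n):
        if u not in match_right:       # u has no predecessor: chain start
            chain = [elements[u]]
            while u in successor:
                u = successor[u]
                chain.append(elements[u])
            chains.append(chain)
    return chains
```

For example, the divisibility poset on `{1, 2, 3, 4, 6, 12}` has largest antichain of size 2 (e.g. `{4, 6}`), and the sketch partitions it into 2 chains.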

### Application: Erdős-Szekeres Theorem

Let ${\displaystyle (a_{1},a_{2},\ldots ,a_{n})}$ be a sequence of ${\displaystyle n}$ distinct real numbers. A subsequence of ${\displaystyle (a_{1},a_{2},\ldots ,a_{n})}$ is a sequence ${\displaystyle (a_{i_{1}},a_{i_{2}},\ldots ,a_{i_{k}})}$ with ${\displaystyle i_{1}<i_{2}<\cdots <i_{k}}$.

A sequence ${\displaystyle (a_{1},a_{2},\ldots ,a_{n})}$ is increasing if ${\displaystyle a_{1}<a_{2}<\cdots <a_{n}}$, and decreasing if ${\displaystyle a_{1}>a_{2}>\cdots >a_{n}}$.

Recall that the Erdős-Szekeres theorem states the existence of a long increasing or decreasing subsequence. Last time we proved this by the pigeonhole principle. Now we use Dilworth's theorem to prove it, which is also the original proof of Erdős and Szekeres.

 Erdős-Szekeres Theorem A sequence of more than ${\displaystyle mn}$ different real numbers must contain either an increasing subsequence of length ${\displaystyle m+1}$, or a decreasing subsequence of length ${\displaystyle n+1}$.
Proof by Dilworth's theorem
 (Original proof of Erdős-Szekeres) Let ${\displaystyle (a_{1},a_{2},\ldots ,a_{N})}$ be a sequence of ${\displaystyle N>mn}$ distinct real numbers. Define the poset ${\displaystyle P=\{(i,a_{i})\mid i=1,2,\ldots ,N\}}$ with ${\displaystyle (i,a_{i})\leq (j,a_{j})}$ if and only if ${\displaystyle i\leq j}$ and ${\displaystyle a_{i}\leq a_{j}}$.

A chain ${\displaystyle (i_{1},a_{i_{1}})<(i_{2},a_{i_{2}})<\cdots <(i_{k},a_{i_{k}})}$ must have ${\displaystyle i_{1}<i_{2}<\cdots <i_{k}}$ and ${\displaystyle a_{i_{1}}<a_{i_{2}}<\cdots <a_{i_{k}}}$. Thus each chain corresponds to an increasing subsequence.

Let ${\displaystyle (j_{1},a_{j_{1}}),(j_{2},a_{j_{2}}),\ldots ,(j_{k},a_{j_{k}})}$ be an antichain. Without loss of generality, assume that ${\displaystyle j_{1}<j_{2}<\cdots <j_{k}}$. The only way these elements can be pairwise incomparable is that ${\displaystyle a_{j_{1}}>a_{j_{2}}>\cdots >a_{j_{k}}}$: otherwise ${\displaystyle a_{j_{s}}<a_{j_{t}}}$ for some ${\displaystyle s<t}$, so ${\displaystyle (j_{s},a_{j_{s}})<(j_{t},a_{j_{t}})}$, contradicting the fact that it is an antichain. Thus each antichain corresponds to a decreasing subsequence.

If ${\displaystyle P}$ has an antichain of size ${\displaystyle n+1}$, then ${\displaystyle (a_{1},a_{2},\ldots ,a_{N})}$ has a decreasing subsequence of length ${\displaystyle n+1}$, and we are done. Otherwise, the largest antichain in ${\displaystyle P}$ has size at most ${\displaystyle n}$, so by Dilworth's theorem ${\displaystyle P}$ can be partitioned into at most ${\displaystyle n}$ disjoint chains; since ${\displaystyle N>mn}$, by the pigeonhole principle one of the chains has length at least ${\displaystyle m+1}$, which means ${\displaystyle (a_{1},a_{2},\ldots ,a_{N})}$ has an increasing subsequence of length ${\displaystyle m+1}$.
${\displaystyle \square }$

### Application: Hall's Theorem

To recognize the power of Dilworth's theorem, we show that it contains Hall's theorem as a special case!

 Hall's Theorem The sets ${\displaystyle S_{1},S_{2},\ldots ,S_{m}}$ have a system of distinct representatives (SDR) if and only if ${\displaystyle \left|\bigcup _{i\in I}S_{i}\right|\geq |I|}$ for all ${\displaystyle I\subseteq \{1,2,\ldots ,m\}}$.
Proof by Dilworth's theorem
 As discussed before, the necessity of Hall's condition for the existence of an SDR is easy. We prove its sufficiency by Dilworth's theorem. Denote ${\displaystyle X=\bigcup _{i=1}^{m}S_{i}}$. Construct a poset ${\displaystyle P}$ by letting ${\displaystyle P=X\cup \{S_{1},S_{2},\ldots ,S_{m}\}}$ with ${\displaystyle x<S_{i}}$ for all ${\displaystyle x\in S_{i}}$; there are no other comparabilities.

It is obvious that ${\displaystyle X}$ is an antichain. We claim it is also a largest one. To prove this, let ${\displaystyle A}$ be an arbitrary antichain, and let ${\displaystyle I=\{i\mid S_{i}\in A\}}$. Then ${\displaystyle A}$ contains no element of ${\displaystyle \bigcup _{i\in I}S_{i}}$, since if ${\displaystyle x\in A}$ and ${\displaystyle x\in S_{i}}$ for some ${\displaystyle S_{i}\in A}$, then ${\displaystyle x<S_{i}}$ and ${\displaystyle A}$ cannot be an antichain. Thus ${\displaystyle |A|\leq |I|+|X|-\left|\bigcup _{i\in I}S_{i}\right|}$, and by Hall's condition ${\displaystyle \left|\bigcup _{i\in I}S_{i}\right|\geq |I|}$, so ${\displaystyle |A|\leq |X|}$, as claimed.

Now Dilworth's theorem implies that ${\displaystyle P}$ can be partitioned into ${\displaystyle |X|}$ chains. Since ${\displaystyle X}$ is an antichain and each chain passes through an antichain at most once, each of the ${\displaystyle |X|}$ chains contains precisely one ${\displaystyle x\in X}$. And since ${\displaystyle \{S_{1},\ldots ,S_{m}\}}$ is also an antichain, each of these chains contains at most one ${\displaystyle S_{i}}$. Altogether, renumbering the elements of ${\displaystyle X}$ if necessary, the ${\displaystyle |X|}$ chains are of the form ${\displaystyle \{x_{1},S_{1}\},\{x_{2},S_{2}\},\ldots ,\{x_{m},S_{m}\},\{x_{m+1}\},\ldots ,\{x_{|X|}\}}$. Since the only comparabilities in our poset are ${\displaystyle x<S_{i}}$ for ${\displaystyle x\in S_{i}}$ and the chains are disjoint, we have ${\displaystyle x_{1}\in S_{1},x_{2}\in S_{2},\ldots ,x_{m}\in S_{m}}$ as an SDR.
${\displaystyle \square }$

## References

• van Lint and Wilson. A Course in Combinatorics. Cambridge University Press. Chapters 5 and 6.