随机算法 (Spring 2013)/Moment and Deviation and 组合数学 (Spring 2013)/Sieve methods
= Stable marriage =
Suppose that there are <math>n</math> men and <math>n</math> women. Every man has a preference list of the women, which can be represented as a permutation of <math>[n]</math>; similarly, every woman has a preference list of the men, which is also a permutation of <math>[n]</math>. A ''marriage'' is a 1-1 correspondence between men and women. The [http://en.wikipedia.org/wiki/Stable_marriage_problem '''stable marriage problem'''] or '''stable matching problem''' (SMP) is to find a marriage which is ''stable'' in the following sense:
:There is no man and woman who are not married to each other but prefer each other to their current partners.

The famous '''proposal algorithm''' solves this problem by finding a stable marriage. The algorithm is described as follows:
:Each round (called a '''proposal''')
:* An unmarried man proposes to the most desirable woman according to his preference list who has not already rejected him.
:* Upon receiving his proposal, the woman accepts the proposal if:
::# she is not married; or
::# her current partner is less desirable than the proposing man according to her preference list. (Her current partner then becomes available again.)

== Principle of Inclusion-Exclusion ==
Let <math>A</math> and <math>B</math> be two finite sets. The cardinality of their union is
:<math>|A\cup B|=|A|+|B|-{\color{Blue}|A\cap B|}</math>.
For three sets <math>A</math>, <math>B</math>, and <math>C</math>, the cardinality of the union of these three sets is computed as
:<math>|A\cup B\cup C|=|A|+|B|+|C|-{\color{Blue}|A\cap B|}-{\color{Blue}|A\cap C|}-{\color{Blue}|B\cap C|}+{\color{Red}|A\cap B\cap C|}</math>.
This is illustrated by the following figure.
::[[Image:Inclusion-exclusion.png|200px|border|center]]

Generally, the '''Principle of Inclusion-Exclusion''' gives the rule for computing the cardinality of the union of <math>n</math> finite sets <math>A_1,A_2,\ldots,A_n</math>:
{{Equation|
<math>
\begin{align}
\left|\bigcup_{i=1}^nA_i\right|
&=
\sum_{\emptyset\neq I\subseteq\{1,\ldots,n\}}(-1)^{|I|-1}\left|\bigcap_{i\in I}A_i\right|.
\end{align}
</math>
}}
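The general formula is easy to check numerically on small instances. The following Python sketch (our own illustration, not part of the lecture notes; all names are ours) compares the inclusion-exclusion sum with a direct computation of the union:

```python
import itertools
import random

def union_size_by_inclusion_exclusion(sets):
    """Compute |A_1 ∪ ... ∪ A_n| via the inclusion-exclusion formula."""
    n = len(sets)
    total = 0
    for k in range(1, n + 1):  # sum over nonempty index sets I, |I| = k
        for I in itertools.combinations(range(n), k):
            inter = set.intersection(*(sets[i] for i in I))
            total += (-1) ** (k - 1) * len(inter)
    return total

random.seed(1)
sets = [set(random.sample(range(20), 8)) for _ in range(4)]
assert union_size_by_inclusion_exclusion(sets) == len(set.union(*sets))
```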
-----
 
In combinatorial enumeration, the Principle of Inclusion-Exclusion is usually applied in its complement form.
 
Let <math>A_1,A_2,\ldots,A_n\subseteq U</math> be subsets of some finite set <math>U</math>. Here <math>U</math> is some universe of combinatorial objects, whose cardinality is easy to calculate (e.g. all strings, tuples, permutations), and each <math>A_i</math> contains the objects with some specific property (e.g. a "pattern") which we want to avoid. The problem is to count the number of objects without any of the <math>n</math> properties. We write <math>\bar{A_i}=U-A_i</math>. The number of objects without any of the properties <math>A_1,A_2,\ldots,A_n</math> is
{{Equation|
<math>
\begin{align}
\left|\bar{A_1}\cap\bar{A_2}\cap\cdots\cap\bar{A_n}\right|=\left|U-\bigcup_{i=1}^nA_i\right|
&=
|U|+\sum_{\emptyset\neq I\subseteq\{1,\ldots,n\}}(-1)^{|I|}\left|\bigcap_{i\in I}A_i\right|.
\end{align}
</math>
}}
For an <math>I\subseteq\{1,2,\ldots,n\}</math>, we denote
:<math>A_I=\bigcap_{i\in I}A_i</math>
with the convention that <math>A_\emptyset=U</math>. The above equation is stated as:
{{Theorem|Principle of Inclusion-Exclusion|
:Let <math>A_1,A_2,\ldots,A_n</math> be a family of subsets of <math>U</math>. Then the number of elements of <math>U</math> which lie in none of the subsets <math>A_i</math> is
::<math>\sum_{I\subseteq\{1,\ldots, n\}}(-1)^{|I|}|A_I|</math>.
}}
 
Let <math>S_k=\sum_{|I|=k}|A_I|\,</math>. Conventionally, <math>S_0=|A_\emptyset|=|U|</math>. The principle of inclusion-exclusion can be expressed as
{{Equation|<math>
S_0-S_1+S_2-\cdots+(-1)^nS_n.
</math>
}}
 
=== Surjections ===
In the twelvefold way, we discuss the counting problems arising from mappings <math>f:N\rightarrow M</math>. The basic case is that elements of both <math>N</math> and <math>M</math> are distinguishable. In this case, it is easy to count the number of arbitrary mappings (which is <math>m^n</math>) and the number of injective (one-to-one) mappings (which is <math>(m)_n</math>), but the number of surjective mappings is harder to obtain. Here we apply the principle of inclusion-exclusion to count the number of surjective (onto) mappings.
{{Theorem|Theorem|
:The number of surjective mappings from an <math>n</math>-set to an <math>m</math>-set is given by
::<math>\sum_{k=1}^m(-1)^{m-k}{m\choose k}k^n</math>.
}}
{{Proof|
Let <math>U=\{f:[n]\rightarrow[m]\}</math> be the set of mappings from <math>[n]</math> to <math>[m]</math>. Then <math>|U|=m^n</math>.
 
For <math>i\in[m]</math>, let <math>A_i</math> be the set of mappings <math>f:[n]\rightarrow[m]</math> such that no <math>j\in[n]</math> is mapped to <math>i</math>, i.e. <math>A_i=\{f:[n]\rightarrow[m]\setminus\{i\}\}</math>; thus <math>|A_i|=(m-1)^n</math>.
 
More generally, for <math>I\subseteq [m]</math>, <math>A_I=\bigcap_{i\in I}A_i</math> contains the mappings <math>f:[n]\rightarrow[m]\setminus I</math>. And <math>|A_I|=(m-|I|)^n\,</math>.
 
A mapping <math>f:[n]\rightarrow[m]</math> is surjective if <math>f</math> lies in none of <math>A_i</math>. By the principle of inclusion-exclusion, the number of surjective <math>f:[n]\rightarrow[m]</math> is
:<math>\sum_{I\subseteq[m]}(-1)^{|I|}\left|A_I\right|=\sum_{I\subseteq[m]}(-1)^{|I|}(m-|I|)^n=\sum_{j=0}^m(-1)^j{m\choose j}(m-j)^n</math>.
Substituting <math>k=m-j</math> gives the claimed formula.
}}
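The surjection formula can be cross-checked against brute-force enumeration for small parameters. A Python sketch (ours, for illustration; the function names are our own):

```python
from itertools import product
from math import comb

def surjections(n, m):
    """Number of surjective mappings [n] -> [m], by inclusion-exclusion."""
    return sum((-1) ** (m - k) * comb(m, k) * k ** n for k in range(1, m + 1))

def surjections_brute(n, m):
    """Enumerate all m^n mappings and keep the onto ones."""
    return sum(1 for f in product(range(m), repeat=n) if set(f) == set(range(m)))

assert all(surjections(n, m) == surjections_brute(n, m)
           for n in range(1, 6) for m in range(1, 5))
```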


Recall that, in the twelvefold way, we establish a relation between surjections and partitions.

* Surjection to ordered partition:
:For a surjective <math>f:[n]\rightarrow[m]</math>, <math>(f^{-1}(0),f^{-1}(1),\ldots,f^{-1}(m-1))</math> is an '''ordered partition''' of <math>[n]</math>.
* Ordered partition to surjection:
:For an ordered <math>m</math>-partition <math>(B_0,B_1,\ldots, B_{m-1})</math> of <math>[n]</math>, we can define a function <math>f:[n]\rightarrow[m]</math> by letting <math>f(i)=j</math> if and only if <math>i\in B_j</math>. <math>f</math> is surjective since, as a partition, no <math>B_j</math> is empty.

Therefore, we have a one-to-one correspondence between surjective mappings from an <math>n</math>-set to an <math>m</math>-set and the ordered <math>m</math>-partitions of an <math>n</math>-set.

The Stirling number of the second kind <math>S(n,m)</math> is the number of <math>m</math>-partitions of an <math>n</math>-set. There are <math>m!</math> ways to order an <math>m</math>-partition; thus the number of surjective mappings <math>f:[n]\rightarrow[m]</math> is <math>m! S(n,m)</math>. Combining this with what we have proved for surjections, we obtain the following result for the Stirling number of the second kind.
{{Theorem|Proposition|
:<math>S(n,m)=\frac{1}{m!}\sum_{k=1}^m(-1)^{m-k}{m\choose k}k^n</math>.
}}


The algorithm terminates when the last available woman receives a proposal. The algorithm returns a marriage, because it is easy to see that:
:once a woman is proposed to, she gets married and stays married (and only switches to more desirable men).

It can be seen that this algorithm always finds a stable marriage:
:If, to the contrary, there were a man <math>A</math> and a woman <math>b</math> who prefer each other to their current partners <math>a</math> (<math>A</math>'s wife) and <math>B</math> (<math>b</math>'s husband), then <math>A</math> must have proposed to <math>b</math> before he proposed to <math>a</math>, by which time <math>b</math> must either have been available or have been with a less desirable man (since her current partner <math>B</math> is less desirable than <math>A</math>), which means <math>b</math> must have accepted <math>A</math>'s proposal.

Our interest is the average-case performance of this algorithm, which is measured by the expected number of proposals, assuming that each man/woman has a uniformly random permutation as his/her preference list.

Applying the '''principle of deferred decisions''', each man can be seen as sampling, at each step, a uniformly random woman from those who have not already rejected him, and proposing to her. This can only be more efficient than sampling a uniformly and independently random woman to propose to. When all <math>n</math> men propose to uniformly and independently random women, the proposals (regardless of which men they are from) are sent to the women uniformly and independently at random. The algorithm ends when all <math>n</math> women have received a proposal. By our analysis of the coupon collector problem, the expected number of proposals is <math>O(n\ln n)</math>.

= Tail Inequalities =
When applying probabilistic analysis, we often want a bound of the form <math>\Pr[X\ge t]<\epsilon</math> for some random variable <math>X</math> (think of <math>X</math> as a cost, such as the running time of a randomized algorithm). We call this a '''tail bound''', or a '''tail inequality'''.

Besides directly computing the probability <math>\Pr[X\ge t]</math>, we want some general way of estimating tail probabilities from measurable information about the random variables.
==Markov's Inequality==
One of the most natural pieces of information about a random variable is its expectation, which is the first moment of the random variable. Markov's inequality derives a tail bound for a random variable from its expectation.
{{Theorem
|Theorem (Markov's Inequality)|
:Let <math>X</math> be a random variable assuming only nonnegative values. Then, for all <math>t>0</math>,
::<math>\begin{align}
\Pr[X\ge t]\le \frac{\mathbf{E}[X]}{t}.
\end{align}</math>
}}
{{Proof| Let <math>Y</math> be the indicator such that
:<math>\begin{align}
Y &=
\begin{cases}
1 & \mbox{if }X\ge t,\\
0 & \mbox{otherwise.}
\end{cases}
\end{align}</math>

It holds that <math>Y\le\frac{X}{t}</math>. Since <math>Y</math> is 0-1 valued, <math>\mathbf{E}[Y]=\Pr[Y=1]=\Pr[X\ge t]</math>. Therefore,
:<math>
\Pr[X\ge t]
=
\mathbf{E}[Y]
\le
\mathbf{E}\left[\frac{X}{t}\right]
=\frac{\mathbf{E}[X]}{t}.
</math>
}}


=== Derangements ===
We now count the number of bijections from a set to itself with no fixed points. This is the '''derangement problem'''.

For a permutation <math>\pi</math> of <math>\{1,2,\ldots,n\}</math>, a '''fixed point''' is an <math>i\in\{1,2,\ldots,n\}</math> such that <math>\pi(i)=i</math>.
A [http://en.wikipedia.org/wiki/Derangement '''derangement'''] of <math>\{1,2,\ldots,n\}</math> is a permutation of <math>\{1,2,\ldots,n\}</math> that has no fixed points.

{{Theorem|Theorem|
:The number of derangements of <math>\{1,2,\ldots,n\}</math> is given by
::<math>n!\sum_{k=0}^n\frac{(-1)^k}{k!}\approx \frac{n!}{\mathrm{e}}</math>.
}}
{{Proof|
Let <math>U</math> be the set of all permutations of <math>\{1,2,\ldots,n\}</math>. So <math>|U|=n!</math>.

Let <math>A_i</math> be the set of permutations with fixed point <math>i</math>; so <math>|A_i|=(n-1)!</math>. More generally, for any <math>I\subseteq \{1,2,\ldots,n\}</math>, <math>A_I=\bigcap_{i\in I}A_i</math>, and <math>|A_I|=(n-|I|)!</math>, since permutations in <math>A_I</math> fix every point in <math>I</math> and permute the remaining points arbitrarily. A permutation is a derangement if and only if it lies in none of the sets <math>A_i</math>. So the number of derangements is
:<math>\sum_{I\subseteq\{1,2,\ldots,n\}}(-1)^{|I|}(n-|I|)!=\sum_{k=0}^n(-1)^k{n\choose k}(n-k)!=n!\sum_{k=0}^n\frac{(-1)^k}{k!}.</math>
By Taylor's series,
:<math>\frac{1}{\mathrm{e}}=\sum_{k=0}^\infty\frac{(-1)^k}{k!}=\sum_{k=0}^n\frac{(-1)^k}{k!}\pm o\left(\frac{1}{n!}\right)</math>.
It is not hard to see that <math>n!\sum_{k=0}^n\frac{(-1)^k}{k!}</math> is the closest integer to <math>\frac{n!}{\mathrm{e}}</math>.
}}
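The derangement formula, and the claim that it rounds <math>n!/\mathrm{e}</math> to the nearest integer, can be verified for small <math>n</math>. A Python sketch (ours; the function name is our own):

```python
from itertools import permutations
from math import e, factorial

def derangements(n):
    """Number of derangements of an n-set, via inclusion-exclusion."""
    # n! * sum_k (-1)^k / k!, kept in exact integer arithmetic
    return sum((-1) ** k * factorial(n) // factorial(k) for k in range(n + 1))

for n in range(1, 8):
    brute = sum(1 for p in permutations(range(n))
                if all(p[i] != i for i in range(n)))
    assert derangements(n) == brute
    assert derangements(n) == round(factorial(n) / e)  # closest integer to n!/e
```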


Therefore, about a <math>\frac{1}{\mathrm{e}}</math> fraction of all permutations have no fixed points.

=== Permutations with restricted positions ===
We introduce a general theory of counting permutations with restricted positions. In the derangement problem, we count the permutations with <math>\pi(i)\neq i</math> for all <math>i</math>. We now generalize to the problem of counting permutations which avoid an arbitrarily specified set of positions.

It is traditionally described using terminology from the game of chess. Let <math>B\subseteq \{1,\ldots,n\}\times \{1,\ldots,n\}</math>, called a '''board'''. As illustrated below, we can think of <math>B</math> as a chess board, with the positions in <math>B</math> marked by "<math>\times</math>".
{{Chess diagram small
|
|
|=
8 |__|xx|xx|__|xx|__|__|xx|=
7 |xx|__|__|xx|__|__|xx|__|=
6 |xx|__|xx|xx|__|xx|xx|__|=
5 |__|xx|__|__|xx|__|xx|__|=
4 |xx|__|__|__|xx|xx|xx|__|=
3 |__|xx|__|xx|__|__|__|xx|=
2 |__|__|xx|__|xx|__|__|xx|=
1 |xx|__|__|xx|__|xx|__|__|=
a b c d e f g h
|
}}
For a permutation <math>\pi</math> of <math>\{1,\ldots,n\}</math>, define the '''graph''' <math>G_\pi(V,E)</math> as
:<math>
\begin{align}
G_\pi &= \{(i,\pi(i))\mid i\in \{1,2,\ldots,n\}\}.
\end{align}
</math>
This can also be viewed as a set of marked positions on a chess board. Each row and each column has exactly one marked position, because <math>\pi</math> is a permutation. Thus, we can identify each <math>G_\pi</math> with a placement of <math>n</math> rooks which do not attack each other.

For example, the following is the <math>G_\pi</math> of the <math>\pi</math> with <math>\pi(i)=i</math>.
{{Chess diagram small
|
|
|=
8 |rl|__|__|__|__|__|__|__|=
7 |__|rl|__|__|__|__|__|__|=
6 |__|__|rl|__|__|__|__|__|=
5 |__|__|__|rl|__|__|__|__|=
4 |__|__|__|__|rl|__|__|__|=
3 |__|__|__|__|__|rl|__|__|=
2 |__|__|__|__|__|__|rl|__|=
1 |__|__|__|__|__|__|__|rl|=
a b c d e f g h
|
}}
Now define
:<math>\begin{align}
N_0 &= \left|\left\{\pi\mid B\cap G_\pi=\emptyset\right\}\right|\\
r_k &= \mbox{number of }k\mbox{-subsets of }B\mbox{ such that no two elements have a common coordinate}\\
&=\left|\left\{S\in{B\choose k} \,\bigg|\, \forall (i_1,j_1),(i_2,j_2)\in S, i_1\neq i_2, j_1\neq j_2 \right\}\right|
\end{align}
</math>
Interpreted in the chess game,
* <math>B</math>: a set of marked positions on an <math>[n]\times [n]</math> chess board.
* <math>N_0</math>: the number of ways of placing <math>n</math> non-attacking rooks on the chess board such that none of these rooks lies in <math>B</math>.
* <math>r_k</math>: the number of ways of placing <math>k</math> non-attacking rooks on <math>B</math>.

Our goal is to count <math>N_0</math> in terms of the <math>r_k</math>. This gives the number of permutations avoiding all positions in <math>B</math>.


===Example (from Las Vegas to Monte Carlo)===
Let <math>A</math> be a Las Vegas randomized algorithm for a decision problem <math>f</math>, whose expected running time is within <math>T(n)</math> on any input of size <math>n</math>. We transform <math>A</math> into a Monte Carlo randomized algorithm <math>B</math> with bounded one-sided error as follows:
:<math>B(x)</math>:
:*Run <math>A(x)</math> for at most <math>2T(n)</math> time, where <math>n</math> is the size of <math>x</math>.
:*If <math>A(x)</math> returned within <math>2T(n)</math> time, then return what <math>A(x)</math> just returned; else return 1.

Since <math>A</math> is Las Vegas, its output is always correct, so <math>B(x)</math> can only err when it returns 1; thus the error is one-sided. The error probability is bounded by the probability that <math>A(x)</math> runs longer than <math>2T(n)</math>. Since the expected running time of <math>A(x)</math> is at most <math>T(n)</math>, by Markov's inequality,
:<math>
\Pr[\mbox{the running time of }A(x)\ge2T(n)]\le\frac{\mathbf{E}[\mbox{running time of }A(x)]}{2T(n)}\le\frac{1}{2},
</math>
thus the error probability is bounded.

=== Generalization ===
For any random variable <math>X</math> and any non-negative real function <math>h</math>, <math>h(X)</math> is a non-negative random variable. Applying Markov's inequality, we directly have that
:<math>
\Pr[h(X)\ge t]\le\frac{\mathbf{E}[h(X)]}{t}.
</math>

This trivial application of Markov's inequality gives us a powerful tool for proving tail inequalities. With a function <math>h</math> which extracts more information about the random variable, we can prove sharper tail inequalities.
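Markov's bound can be observed empirically. The following Python sketch (our own illustration) samples a simple nonnegative variable, the number of fair-coin flips until the first heads (with expectation 2), and checks that the empirical tail never exceeds <math>\mathbf{E}[X]/t</math>:

```python
import random

random.seed(0)

def flips_until_heads():
    """Number of fair-coin flips until the first heads (geometric, mean 2)."""
    n = 1
    while random.random() >= 0.5:
        n += 1
    return n

samples = [flips_until_heads() for _ in range(100000)]
mean = sum(samples) / len(samples)
for t in (2, 4, 8):
    tail = sum(1 for x in samples if x >= t) / len(samples)
    assert tail <= mean / t  # Markov's bound; quite loose for this variable
```

Note how loose the bound is here: the true tail decays geometrically, while Markov only gives a <math>1/t</math> decay.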
== Variance ==
{{Theorem
|Definition (variance)|
:The '''variance''' of a random variable <math>X</math> is defined as
::<math>\begin{align}
\mathbf{Var}[X]=\mathbf{E}\left[(X-\mathbf{E}[X])^2\right]=\mathbf{E}\left[X^2\right]-(\mathbf{E}[X])^2.
\end{align}</math>
:The '''standard deviation''' of random variable <math>X</math> is
::<math>
\delta[X]=\sqrt{\mathbf{Var}[X]}.
</math>
}}

We have seen that, due to the linearity of expectations, the expectation of a sum of variables is the sum of the expectations of the variables. It is natural to ask whether the same holds for variances. We find that the variance of a sum has an extra term, called the covariance.

{{Theorem
|Definition (covariance)|
:The '''covariance''' of two random variables <math>X</math> and <math>Y</math> is
::<math>\begin{align}
\mathbf{Cov}(X,Y)=\mathbf{E}\left[(X-\mathbf{E}[X])(Y-\mathbf{E}[Y])\right].
\end{align}</math>
}}

We have the following theorem for the variance of a sum.

{{Theorem
|Theorem|
:For any two random variables <math>X</math> and <math>Y</math>,
::<math>\begin{align}
\mathbf{Var}[X+Y]=\mathbf{Var}[X]+\mathbf{Var}[Y]+2\mathbf{Cov}(X,Y).
\end{align}</math>
:Generally, for any random variables <math>X_1,X_2,\ldots,X_n</math>,
::<math>\begin{align}
\mathbf{Var}\left[\sum_{i=1}^n X_i\right]=\sum_{i=1}^n\mathbf{Var}[X_i]+\sum_{i\neq j}\mathbf{Cov}(X_i,X_j).
\end{align}</math>
}}
{{Proof| The equation for two variables is directly due to the definitions of variance and covariance. The equation for <math>n</math> variables can be deduced from the equation for two variables.
}}


{{Theorem|Theorem|
:<math>N_0=\sum_{k=0}^n(-1)^kr_k(n-k)!</math>.
}}
{{Proof|
For each <math>i\in[n]</math>, let <math>A_i=\{\pi\mid (i,\pi(i))\in B\}</math> be the set of permutations <math>\pi</math> whose <math>i</math>-th position is in <math>B</math>.
<math>N_0</math> is the number of permutations avoiding all positions in <math>B</math>. Thus, our goal is to count the number of permutations <math>\pi</math> in none of the <math>A_i</math> for <math>i\in [n]</math>.
For each <math>I\subseteq [n]</math>, let <math>A_I=\bigcap_{i\in I}A_i</math>, which is the set of permutations <math>\pi</math> such that <math>(i,\pi(i))\in B</math> for all <math>i\in I</math>. By the principle of inclusion-exclusion,
:<math>N_0=\sum_{I\subseteq [n]} (-1)^{|I|}|A_I|=\sum_{k=0}^n(-1)^k\sum_{I\in{[n]\choose k}}|A_I|</math>.

The next observation is that
:<math>\sum_{I\in{[n]\choose k}}|A_I|=r_k(n-k)!</math>,
because both sides count the configurations obtained by first placing <math>k</math> non-attacking rooks on <math>B</math> and then placing <math>n-k</math> additional non-attacking rooks on <math>[n]\times [n]</math>, which can be done in <math>(n-k)!</math> ways.

Therefore,
:<math>N_0=\sum_{k=0}^n(-1)^kr_k(n-k)!</math>.
}}

====Derangement problem====
We use the above general method to solve the derangement problem again.

Take <math>B=\{(1,1),(2,2),\ldots,(n,n)\}</math> as the chess board. A derangement <math>\pi</math> is a placement of <math>n</math> non-attacking rooks such that none of them is in <math>B</math>.
{{Chess diagram small
|
|
|=
8 |xx|__|__|__|__|__|__|__|=
7 |__|xx|__|__|__|__|__|__|=
6 |__|__|xx|__|__|__|__|__|=
5 |__|__|__|xx|__|__|__|__|=
4 |__|__|__|__|xx|__|__|__|=
3 |__|__|__|__|__|xx|__|__|=
2 |__|__|__|__|__|__|xx|__|=
1 |__|__|__|__|__|__|__|xx|=
a b c d e f g h
|
}}
Clearly, the number of ways of placing <math>k</math> non-attacking rooks on <math>B</math> is <math>r_k={n\choose k}</math>. We want to compute <math>N_0</math>, the number of ways of placing <math>n</math> non-attacking rooks such that none of them lies in <math>B</math>.

By the above theorem,
:<math>
N_0=\sum_{k=0}^n(-1)^kr_k(n-k)!=\sum_{k=0}^n(-1)^k{n\choose k}(n-k)!=\sum_{k=0}^n(-1)^k\frac{n!}{k!}=n!\sum_{k=0}^n(-1)^k\frac{1}{k!}\approx\frac{n!}{e}.
</math>
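The theorem <math>N_0=\sum_k(-1)^kr_k(n-k)!</math> works for any board, which we can check by computing <math>r_k</math> by brute force and comparing against direct enumeration of permutations. A Python sketch (ours; names are our own), using the diagonal board so the answer is the derangement number:

```python
from itertools import combinations, permutations
from math import factorial

def count_avoiding(n, B):
    """N_0 = sum_k (-1)^k r_k (n-k)!, with r_k computed by brute force:
    r_k = number of k-subsets of B with all rows and columns distinct."""
    total = 0
    for k in range(n + 1):
        r_k = sum(1 for S in combinations(B, k)
                  if len({i for i, j in S}) == k and len({j for i, j in S}) == k)
        total += (-1) ** k * r_k * factorial(n - k)
    return total

n = 6
B = [(i, i) for i in range(n)]  # diagonal board: N_0 = number of derangements
forbidden = set(B)
brute = sum(1 for p in permutations(range(n))
            if all((i, p[i]) not in forbidden for i in range(n)))
assert count_avoiding(n, B) == brute  # 265 derangements of a 6-set
```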
 
====Problème des ménages====
Suppose that at a banquet we want to seat <math>n</math> couples at a circular table, satisfying the following constraints:
* Men and women sit in alternate places.
* No one sits next to his/her spouse.

In how many ways can this be done?

(For convenience, we assume that every seat at the table is marked differently, so that rotating the seats clockwise or anti-clockwise gives a '''different''' solution.)

First, let the <math>n</math> ladies find their seats. They may sit either at the odd-numbered seats or at the even-numbered seats; in either case, there are <math>n!</math> different orders. Thus, there are <math>2(n!)</math> ways to seat the <math>n</math> ladies.

After seating the wives, we label the remaining <math>n</math> places clockwise as <math>0,1,\ldots, n-1</math>. A seating of the <math>n</math> husbands is then given by a permutation <math>\pi</math> of <math>[n]</math> defined as follows: let <math>\pi(i)</math> be the seat of the husband of the lady sitting at the <math>i</math>-th place.

It is easy to see that <math>\pi</math> satisfies <math>\pi(i)\neq i</math> and <math>\pi(i)\not\equiv i+1\pmod n</math>, and every permutation <math>\pi</math> with these properties gives a feasible seating of the <math>n</math> husbands. Thus, we only need to count the number of permutations <math>\pi</math> such that <math>\pi(i)\not\equiv i, i+1\pmod n</math>.

Take <math>B=\{(0,0),(1,1),\ldots,(n-1,n-1), (0,1),(1,2),\ldots,(n-2,n-1),(n-1,0)\}</math> as the chess board. A permutation <math>\pi</math> which defines a way of seating the husbands is a placement of <math>n</math> non-attacking rooks such that none of them is in <math>B</math>.
{{Chess diagram small
|
|
|=
8 |xx|xx|__|__|__|__|__|__|=
7 |__|xx|xx|__|__|__|__|__|=
6 |__|__|xx|xx|__|__|__|__|=
5 |__|__|__|xx|xx|__|__|__|=
4 |__|__|__|__|xx|xx|__|__|=
3 |__|__|__|__|__|xx|xx|__|=
2 |__|__|__|__|__|__|xx|xx|=
1 |xx|__|__|__|__|__|__|xx|=
a b c d e f g h
|
}}
We need to compute <math>r_k</math>, the number of ways of placing <math>k</math> non-attacking rooks on <math>B</math>. For our choice of <math>B</math>, <math>r_k</math> is the number of ways of choosing <math>k</math> points, no two consecutive, from a collection of <math>2n</math> points arranged in a circle.

We first see how to do this on a ''line''.
{{Theorem|Lemma|
:The number of ways of choosing <math>k</math> ''non-consecutive'' objects from a collection of <math>m</math> objects arranged in a ''line'' is <math>{m-k+1\choose k}</math>.
}}
{{Proof|
We draw a line of <math>m-k</math> black points, and then insert <math>k</math> red points into the <math>m-k+1</math> spaces between the black points (including the beginning and the end).
::<math>
\begin{align}
&\sqcup \, \bullet \, \sqcup \, \bullet \, \sqcup \, \bullet \, \sqcup \, \bullet \, \sqcup \, \bullet \, \sqcup \, \bullet \, \sqcup \, \bullet \, \sqcup \\
&\qquad\qquad\qquad\quad\Downarrow\\
&\sqcup \, \bullet \,\, {\color{Red}\bullet} \, \bullet \,\, {\color{Red}\bullet} \, \bullet \, \sqcup \, \bullet \,\, {\color{Red}\bullet}\, \, \bullet \, \sqcup \, \bullet \, \sqcup \, \bullet \,\, {\color{Red}\bullet}
\end{align}
</math>
This gives us a line of <math>m</math> points, in which the red points specify the chosen objects, which are non-consecutive. The mapping is a 1-1 correspondence.
There are <math>{m-k+1\choose k}</math> ways of placing <math>k</math> red points into the <math>m-k+1</math> spaces.
}}


We will see that when random variables are independent, the variance of their sum is equal to the sum of their variances. To prove this, we first establish a very useful result regarding the expectation of a product.

{{Theorem
|Theorem|
:For any two independent random variables <math>X</math> and <math>Y</math>,
::<math>\begin{align}
\mathbf{E}[X\cdot Y]=\mathbf{E}[X]\cdot\mathbf{E}[Y].
\end{align}</math>
}}
{{Proof|
:<math>
\begin{align}
\mathbf{E}[X\cdot Y]
&=
\sum_{x,y}xy\Pr[X=x\wedge Y=y]\\
&=
\sum_{x,y}xy\Pr[X=x]\Pr[Y=y]\\
&=
\sum_{x}x\Pr[X=x]\sum_{y}y\Pr[Y=y]\\
&=
\mathbf{E}[X]\cdot\mathbf{E}[Y].
\end{align}
</math>
}}
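The product rule for independent variables is easy to observe by simulation. A Python sketch (our own illustration), using two independent fair dice, for which <math>\mathbf{E}[X]=\mathbf{E}[Y]=3.5</math> and <math>\mathbf{E}[XY]=12.25</math>:

```python
import random

random.seed(42)

# Two independent fair dice, sampled N times each.
N = 200000
xs = [random.randint(1, 6) for _ in range(N)]
ys = [random.randint(1, 6) for _ in range(N)]

e_x = sum(xs) / N
e_y = sum(ys) / N
e_xy = sum(x * y for x, y in zip(xs, ys)) / N

# E[XY] = E[X]E[Y] up to sampling error, since the dice are independent.
assert abs(e_xy - e_x * e_y) < 0.1
```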


With the above theorem, we can show that the covariance of two independent variables is always zero.

{{Theorem
|Theorem|
:For any two independent random variables <math>X</math> and <math>Y</math>,
::<math>\begin{align}
\mathbf{Cov}(X,Y)=0.
\end{align}</math>
}}
{{Proof|
:<math>\begin{align}
\mathbf{Cov}(X,Y)
&=\mathbf{E}\left[(X-\mathbf{E}[X])(Y-\mathbf{E}[Y])\right]\\
&= \mathbf{E}\left[X-\mathbf{E}[X]\right]\mathbf{E}\left[Y-\mathbf{E}[Y]\right] &\qquad(\mbox{Independence})\\
&=0.
\end{align}</math>
}}

We then have the following theorem for the variance of the sum of pairwise independent random variables.

{{Theorem
|Theorem|
:For '''pairwise''' independent random variables <math>X_1,X_2,\ldots,X_n</math>,
::<math>\begin{align}
\mathbf{Var}\left[\sum_{i=1}^n X_i\right]=\sum_{i=1}^n\mathbf{Var}[X_i].
\end{align}</math>
}}


The problem of choosing non-consecutive objects in a circle can be reduced to the case where the objects are on a line.

{{Theorem|Lemma|
:The number of ways of choosing <math>k</math> ''non-consecutive'' objects from a collection of <math>m</math> objects arranged in a ''circle'' is <math>\frac{m}{m-k}{m-k\choose k}</math>.
}}
{{Proof|
Let <math>f(m,k)</math> be the desired number, and let <math>g(m,k)</math> be the number of ways of choosing <math>k</math> non-consecutive points from <math>m</math> points arranged in a circle, then coloring the <math>k</math> points red, and then coloring one of the uncolored points blue.
Clearly, <math>g(m,k)=(m-k)f(m,k)</math>.

But we can also compute <math>g(m,k)</math> as follows:
* Choose one of the <math>m</math> points and color it blue. This gives us <math>m</math> ways.
* Cut the circle to make a line of <math>m-1</math> points by removing the blue point.
* Choose <math>k</math> non-consecutive points from the line of <math>m-1</math> points and color them red. This gives <math>{m-k\choose k}</math> ways due to the previous lemma.

Thus, <math>g(m,k)=m{m-k\choose k}</math>. Therefore, the desired number is <math>f(m,k)=\frac{m}{m-k}{m-k\choose k}</math>.
}}
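The circle lemma can be cross-checked by enumerating all <math>k</math>-subsets of an <math>m</math>-cycle and keeping those with no two adjacent points. A Python sketch (ours; function names are our own):

```python
from itertools import combinations
from math import comb

def circle_count(m, k):
    """Choices of k pairwise non-adjacent points on an m-cycle, by the lemma."""
    return m * comb(m - k, k) // (m - k)

def circle_brute(m, k):
    """Enumerate subsets; adjacency on the cycle is i ~ (i+1) mod m."""
    return sum(1 for S in map(set, combinations(range(m), k))
               if all((i + 1) % m not in S for i in S))

assert all(circle_count(m, k) == circle_brute(m, k)
           for m in range(3, 12) for k in range(1, m // 2 + 1))
```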


By the above lemma, we have <math>r_k=\frac{2n}{2n-k}{2n-k\choose k}</math>. Then, applying the theorem for counting permutations with restricted positions,
:<math>
N_0=\sum_{k=0}^n(-1)^kr_k(n-k)!=\sum_{k=0}^n(-1)^k\frac{2n}{2n-k}{2n-k\choose k}(n-k)!.
</math>

This gives the number of ways of seating the <math>n</math> husbands ''after the ladies are seated''. Recall that there are <math>2(n!)</math> ways of seating the <math>n</math> ladies. Thus, the total number of ways of seating the <math>n</math> couples as required by the problème des ménages is
:<math>
2(n!)\sum_{k=0}^n(-1)^k\frac{2n}{2n-k}{2n-k\choose k}(n-k)!.
</math>

=== The Euler totient function ===
Two integers <math>m, n</math> are said to be '''relatively prime''' if their greatest common divisor is <math>\mathrm{gcd}(m,n)=1</math>. For a positive integer <math>n</math>, let <math>\phi(n)</math> be the number of positive integers from <math>\{1,2,\ldots,n\}</math> that are relatively prime to <math>n</math>. This function, called the Euler <math>\phi</math> function or '''the Euler totient function''', is fundamental in number theory.

We now derive a formula for this function by using the principle of inclusion-exclusion.
{{Theorem|Theorem (The Euler totient function)|
Suppose <math>n</math> is divisible by precisely <math>r</math> different primes, denoted <math>p_1,\ldots,p_r</math>. Then
:<math>\phi(n)=n\prod_{i=1}^r\left(1-\frac{1}{p_i}\right)</math>.
}}
{{Proof|
Let <math>U=\{1,2,\ldots,n\}</math> be the universe. The number of positive integers from <math>U</math> that are divisible by each of <math>p_{i_1},p_{i_2},\ldots,p_{i_s}\in\{p_1,\ldots,p_r\}</math> is <math>\frac{n}{p_{i_1}p_{i_2}\cdots p_{i_s}}</math>.

<math>\phi(n)</math> is the number of integers from <math>U</math> that are divisible by none of <math>p_1,\ldots,p_r</math>.
By the principle of inclusion-exclusion,
:<math>
\begin{align}
\phi(n)
&=n+\sum_{k=1}^r(-1)^k\sum_{1\le i_1<i_2<\cdots <i_k\le r}\frac{n}{p_{i_1}p_{i_2}\cdots p_{i_k}}\\
&=n-\sum_{1\le i\le r}\frac{n}{p_i}+\sum_{1\le i<j\le r}\frac{n}{p_i p_j}-\sum_{1\le i<j<k\le r}\frac{n}{p_{i} p_{j} p_{k}}+\cdots + (-1)^r\frac{n}{p_{1}p_{2}\cdots p_{r}}\\
&=n\left(1-\sum_{1\le i\le r}\frac{1}{p_i}+\sum_{1\le i<j\le r}\frac{1}{p_i p_j}-\sum_{1\le i<j<k\le r}\frac{1}{p_{i} p_{j} p_{k}}+\cdots + (-1)^r\frac{1}{p_{1}p_{2}\cdots p_{r}}\right)\\
&=n\prod_{i=1}^r\left(1-\frac{1}{p_i}\right).
\end{align}
</math>
}}


;Remark
:The theorem holds for '''pairwise''' independent random variables, a much weaker independence requirement than '''mutual''' independence. This makes variance-based probability tools work even for weakly random cases. We will see what this exactly means in future lectures.

=== Variance of binomial distribution ===
For a Bernoulli trial with parameter <math>p</math>,
:<math>
X=\begin{cases}
1& \mbox{with probability }p\\
0& \mbox{with probability }1-p.
\end{cases}
</math>
The variance is
:<math>
\mathbf{Var}[X]=\mathbf{E}[X^2]-(\mathbf{E}[X])^2=\mathbf{E}[X]-(\mathbf{E}[X])^2=p-p^2=p(1-p).
</math>

Let <math>Y</math> be a binomial random variable with parameters <math>n</math> and <math>p</math>, i.e. <math>Y=\sum_{i=1}^nY_i</math>, where the <math>Y_i</math>'s are i.i.d. Bernoulli trials with parameter <math>p</math>. The variance is
:<math>
\begin{align}
\mathbf{Var}[Y]
&=
\mathbf{Var}\left[\sum_{i=1}^nY_i\right]\\
&=
\sum_{i=1}^n\mathbf{Var}\left[Y_i\right] &\qquad (\mbox{Independence})\\
&=
\sum_{i=1}^np(1-p) &\qquad (\mbox{Bernoulli})\\
&=
p(1-p)n.
\end{align}
</math>
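The identity <math>\mathbf{Var}[Y]=np(1-p)</math> can be verified exactly from the binomial probability mass function. A Python sketch (ours; the function name is our own):

```python
from math import comb

def binom_var(n, p):
    """Variance of Bin(n, p), computed directly from the pmf."""
    pmf = [comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1)]
    mean = sum(k * q for k, q in enumerate(pmf))
    return sum((k - mean) ** 2 * q for k, q in enumerate(pmf))

for n in (1, 5, 20):
    for p in (0.1, 0.5, 0.9):
        assert abs(binom_var(n, p) - n * p * (1 - p)) < 1e-9
```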
== Chebyshev's inequality ==
With information about the expectation and the variance of a random variable, one can derive a stronger tail bound, known as Chebyshev's inequality.
{{Theorem
|Theorem (Chebyshev's Inequality)|
:For any <math>t>0</math>,
::<math>\begin{align}
\Pr\left[|X-\mathbf{E}[X]| \ge t\right] \le \frac{\mathbf{Var}[X]}{t^2}.
\end{align}</math>
}}
{{Proof| Observe that
:<math>\Pr[|X-\mathbf{E}[X]| \ge t] = \Pr[(X-\mathbf{E}[X])^2 \ge t^2].</math>
Since <math>(X-\mathbf{E}[X])^2</math> is a nonnegative random variable, we can apply Markov's inequality, which gives
:<math>
\Pr[(X-\mathbf{E}[X])^2 \ge t^2] \le
\frac{\mathbf{E}[(X-\mathbf{E}[X])^2]}{t^2}
=\frac{\mathbf{Var}[X]}{t^2}.
</math>
}}
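Chebyshev's bound can be compared with the actual tail of a concrete distribution. A Python sketch (our own illustration) samples <math>Y\sim\mathrm{Bin}(100, 1/2)</math>, for which <math>\mathbf{E}[Y]=50</math> and <math>\mathbf{Var}[Y]=25</math>:

```python
import random

random.seed(7)

n, p, N = 100, 0.5, 50000
samples = [sum(random.random() < p for _ in range(n)) for _ in range(N)]
var = n * p * (1 - p)  # Var[Y] = np(1-p) = 25

for t in (10, 15, 20):
    tail = sum(1 for y in samples if abs(y - n * p) >= t) / N
    assert tail <= var / t**2  # Chebyshev's bound holds (and is loose here)
```

The true binomial tail decays exponentially in <math>t</math>, so the <math>1/t^2</math> bound is far from tight; it is, however, obtained from the variance alone.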


=Median Selection=
The [http://en.wikipedia.org/wiki/Selection_algorithm selection problem] is the problem of finding the <math>k</math>th smallest element in a set <math>S</math>. A typical case of the selection problem is finding the '''median'''.

{{Theorem
|Definition|
:The median of a set <math>S</math> of <math>n</math> elements is the <math>(\lceil n/2\rceil)</math>th element in the sorted order of <math>S</math>.
}}

The median can be found in <math>O(n\log n)</math> time by sorting. There is also a linear-time deterministic algorithm, the [http://en.wikipedia.org/wiki/Selection_algorithm#Linear_general_selection_algorithm_-_.22Median_of_Medians_algorithm.22 "median of medians" algorithm], which is quite sophisticated. Here we introduce a much simpler randomized algorithm that also runs in linear time.

== The LazySelect algorithm ==
We introduce a randomized median selection algorithm called '''LazySelect''', which is a variant of a randomized algorithm due to [http://en.wikipedia.org/wiki/Robert_Floyd Floyd] and [http://en.wikipedia.org/wiki/Ron_Rivest Rivest].

The idea of this algorithm is random sampling. For a set <math>S</math>, let <math>m\in S</math> denote the median. We observe that if we can find two elements <math>d,u\in S</math> satisfying the following properties:
# the median is between <math>d</math> and <math>u</math> in the sorted order, i.e. <math>d\le m\le u</math>;
# the total number of elements between <math>d</math> and <math>u</math> is small; specifically, for <math>C=\{x\in S\mid d\le x\le u\}</math>, <math>|C|=o(n/\log n)</math>,

then, provided such <math>d</math> and <math>u</math>, within linear time we can compute the rank of <math>d</math> in <math>S</math>, construct <math>C</math>, and sort <math>C</math>. Therefore, the median <math>m</math> of <math>S</math> can be picked from <math>C</math> in linear time.

So how can we select such elements <math>d</math> and <math>u</math> from <math>S</math>? Certainly sorting <math>S</math> would give us the elements, but isn't that exactly what we want to avoid in the first place?

Observe that <math>d</math> and <math>u</math> are only asked to roughly satisfy some constraints. This hints that we may construct a ''sketch'' of <math>S</math> which is small enough to sort cheaply and roughly represents <math>S</math>, and then pick <math>d</math> and <math>u</math> from this sketch. We construct the sketch by randomly sampling a relatively small number of elements from <math>S</math>. The strategy of the algorithm is then outlined by:
* Sample a set <math>R</math> of elements from <math>S</math>.
* Sort <math>R</math> and choose <math>d</math> and <math>u</math> somewhere around the median of <math>R</math>.
* If <math>d</math> and <math>u</math> have the desirable properties, we can compute the median in linear time; otherwise the algorithm fails.

The parameters to be fixed are: the size of <math>R</math> (small enough to sort in linear time, yet large enough to carry sufficient information about <math>S</math>); and the positions of <math>d</math> and <math>u</math> in <math>R</math> (not too close together, so that <math>m</math> falls between them, and not too far apart, so that <math>C</math> is sortable in linear time).

We choose the size of <math>R</math> to be <math>n^{3/4}</math>, and <math>d</math> and <math>u</math> to be within a <math>\sqrt{n}</math> range around the median of <math>R</math>.

{{Theorem
|LazySelect|
'''Input:''' a set <math>S</math> of <math>n</math> elements over a totally ordered domain.
# Pick a multi-set <math>R</math> of <math>\left\lceil n^{3/4}\right\rceil</math> elements in <math>S</math>, chosen independently and uniformly at random with replacement, and sort <math>R</math>.
# Let <math>d</math> be the <math>\left\lfloor\frac{1}{2}n^{3/4}-\sqrt{n}\right\rfloor</math>-th smallest element in <math>R</math>, and let <math>u</math> be the <math>\left\lceil\frac{1}{2}n^{3/4}+\sqrt{n}\right\rceil</math>-th smallest element in <math>R</math>.
# Construct <math>C=\{x\in S\mid d\le x\le u\}</math> and compute the ranks <math>r_d=|\{x\in S\mid x<d\}|</math> and <math>r_u=|\{x\in S\mid x<u\}|</math>.
# If <math>r_d>\frac{n}{2}</math> or <math>r_u<\frac{n}{2}</math> or <math>|C|>4n^{3/4}</math> then return FAIL.
# Sort <math>C</math> and return the <math>\left(\left\lfloor\frac{n}{2}\right\rfloor-r_d+1\right)</math>th element in the sorted order of <math>C</math>.
}}

"Sample with replacement" means that after sampling an element, we put the element back into the set. In this way, the sampled elements are independent and identically distributed (''i.i.d.''). In the above algorithm, this is for our convenience of analysis.

== Analysis ==
The algorithm always terminates in linear time because each line of the algorithm costs at most linear time. The last three lines guarantee that the algorithm returns the correct median if it does not fail.

We then only need to bound the probability that the algorithm returns FAIL. Let <math>m\in S</math> be the median of <math>S</math>. By Line 4, the algorithm returns FAIL if and only if at least one of the following events occurs:
* <math>\mathcal{E}_1: Y=|\{x\in R\mid x\le m\}|<\frac{1}{2}n^{3/4}-\sqrt{n}</math>;
* <math>\mathcal{E}_2: Z=|\{x\in R\mid x\ge m\}|<\frac{1}{2}n^{3/4}-\sqrt{n}</math>;
* <math>\mathcal{E}_3: |C|>4n^{3/4}</math>.

<math>\mathcal{E}_3</math> directly corresponds to the third condition in Line 4. <math>\mathcal{E}_1</math> and <math>\mathcal{E}_2</math> are a bit trickier. The first condition in Line 4, that <math>r_d>\frac{n}{2}</math>, does not look exactly the same as <math>\mathcal{E}_1</math>, but both are equivalent to the same event: the <math>\left\lfloor\frac{1}{2}n^{3/4}-\sqrt{n}\right\rfloor</math>-th smallest element in <math>R</math> is greater than <math>m</math>; thus they are actually equivalent. Similarly, <math>\mathcal{E}_2</math> is equivalent to the second condition in Line 4.

We now bound the probabilities of these events one by one.

{{Theorem
|Lemma 1|
:<math>\Pr[\mathcal{E}_1]\le \frac{1}{4}n^{-1/4}</math>.
}}
{{Proof| Let <math>X_i</math> be the <math>i</math>th sampled element in Line 1 of the algorithm. Let <math>Y_i</math> be an indicator random variable such that
:<math>
Y_i=
\begin{cases}
1 & \mbox{if }X_i\le m,\\
0 & \mbox{otherwise.}
\end{cases}
</math>
It is obvious that <math>Y=\sum_{i=1}^{n^{3/4}}Y_i</math>, where <math>Y</math> is as defined in <math>\mathcal{E}_1</math>. For every <math>X_i</math>, there are <math>\left\lceil\frac{n}{2}\right\rceil</math> elements in <math>S</math> that are less than or equal to the median. The probability that <math>Y_i=1</math> is
:<math>
p=\Pr[Y_i=1]=\Pr[X_i\le m]=\frac{1}{n}\left\lceil\frac{n}{2}\right\rceil,
</math>
which is within the range <math>\left[\frac{1}{2},\frac{1}{2}+\frac{1}{2n}\right]</math>. Thus
:<math>
\mathbf{E}[Y]=n^{3/4}p\ge \frac{1}{2}n^{3/4}.
</math>

The event <math>\mathcal{E}_1</math> is that <math>Y<\frac{1}{2}n^{3/4}-\sqrt{n}</math>.

Note that the <math>Y_i</math>'s are Bernoulli trials, and <math>Y</math> is the sum of <math>n^{3/4}</math> Bernoulli trials, which follows the binomial distribution with parameters <math>n^{3/4}</math> and <math>p</math>. Thus, the variance is
:<math>\mathbf{Var}[Y]=n^{3/4}p(1-p)\le \frac{1}{4}n^{3/4}.
</math>

Applying Chebyshev's inequality,
:<math>
\begin{align}
\Pr[\mathcal{E}_1]
&=
\Pr\left[Y<\frac{1}{2}n^{3/4}-\sqrt{n}\right]\\
&\le
\Pr\left[|Y-\mathbf{E}[Y]|>\sqrt{n}\right]\\
&\le
\frac{\mathbf{Var}[Y]}{n}\\
&\le\frac{1}{4}n^{-1/4}.
\end{align}
</math>
}}

By a similar analysis, we can obtain the following bound for the event <math>\mathcal{E}_2</math>.

{{Theorem
|Lemma 2|
:<math>\Pr[\mathcal{E}_2]\le \frac{1}{4}n^{-1/4}</math>.
}}

We now bound the probability of the event <math>\mathcal{E}_3</math>.

{{Theorem
|Lemma 3|
:<math>\Pr[\mathcal{E}_3]\le \frac{1}{2}n^{-1/4}</math>.
}}
{{Proof| The event <math>\mathcal{E}_3</math> is that <math>|C|>4 n^{3/4}</math>, which by the pigeonhole principle implies that at least one of the following must be true:
* <math>\mathcal{E}_3'</math>: at least <math>2n^{3/4}</math> elements of <math>C</math> are greater than <math>m</math>;
* <math>\mathcal{E}_3''</math>: at least <math>2n^{3/4}</math> elements of <math>C</math> are smaller than <math>m</math>.

We bound the probability that <math>\mathcal{E}_3'</math> occurs; the second event has the same bound by symmetry.

Recall that <math>C</math> is the region of <math>S</math> between <math>d</math> and <math>u</math>. If there are at least <math>2n^{3/4}</math> elements of <math>C</math> greater than the median <math>m</math> of <math>S</math>, then the rank of <math>u</math> in the sorted order of <math>S</math> must be at least <math>\frac{1}{2}n+2n^{3/4}</math>, and thus <math>R</math> has at least <math>\frac{1}{2}n^{3/4}-\sqrt{n}</math> samples among the <math>\frac{1}{2}n-2n^{3/4}</math> largest elements in <math>S</math>.

Let <math>X_i\in\{0,1\}</math> indicate whether the <math>i</math>th sample is among the <math>\frac{1}{2}n-2n^{3/4}</math> largest elements in <math>S</math>. Let <math>X=\sum_{i=1}^{n^{3/4}}X_i</math> be the number of samples in <math>R</math> among the <math>\frac{1}{2}n-2n^{3/4}</math> largest elements in <math>S</math>. It holds that
:<math>p=\Pr[X_i=1]=\frac{\frac{1}{2}n-2n^{3/4}}{n}=\frac{1}{2}-2n^{-1/4}</math>.

<math>X</math> is a binomial random variable with
:<math>
\mathbf{E}[X]=n^{3/4}p=\frac{1}{2}n^{3/4}-2\sqrt{n},
</math>
and
:<math>
\mathbf{Var}[X]=n^{3/4}p(1-p)=\frac{1}{4}n^{3/4}-4n^{1/4}<\frac{1}{4}n^{3/4}.
</math>
Applying Chebyshev's inequality,
:<math>
\begin{align}
\Pr[\mathcal{E}_3']
&=
\Pr\left[X\ge\frac{1}{2}n^{3/4}-\sqrt{n}\right]\\
&\le
\Pr\left[|X-\mathbf{E}[X]|\ge\sqrt{n}\right]\\
&\le
\frac{\mathbf{Var}[X]}{n}\\
&\le\frac{1}{4}n^{-1/4}.
\end{align}
</math>

Symmetrically, we have that <math>\Pr[\mathcal{E}_3'']\le\frac{1}{4}n^{-1/4}</math>.

Applying the union bound,
:<math>\Pr[\mathcal{E}_3]\le \Pr[\mathcal{E}_3']+\Pr[\mathcal{E}_3'']\le\frac{1}{2}n^{-1/4}.
</math>
}}

Combining the three bounds and applying the union bound to them, the probability that the algorithm returns FAIL is at most
:<math>
\Pr[\mathcal{E}_1]+\Pr[\mathcal{E}_2]+\Pr[\mathcal{E}_3]\le n^{-1/4}.
</math>

Therefore the algorithm always terminates in linear time and returns the correct median with high probability.

== Möbius inversion ==

=== Posets ===
{{Theorem
|Definition|
A '''partially ordered set''', or '''poset''' for short, is a set <math>P</math> together with a binary relation denoted <math>\le_P</math> (or just <math>\le</math> if no confusion is caused), satisfying
* (''reflexivity'') For all <math>x\in P</math>, <math>x\le x</math>.
* (''antisymmetry'') If <math>x\le y</math> and <math>y\le x</math>, then <math>x=y</math>.
* (''transitivity'') If <math>x\le y</math> and <math>y\le z</math>, then <math>x\le z</math>.
}}

We say two elements <math>x</math> and <math>y</math> are '''comparable''' if <math>x\le y</math> or <math>y\le x</math>; otherwise <math>x</math> and <math>y</math> are '''incomparable'''.

;Notation
* <math>x\ge y</math> means <math>y\le x</math>.
* <math>x<y</math> means <math>x\le y</math> and <math>x\neq y</math>.
* <math>x>y</math> means <math>y<x</math>.

=== The Möbius function===
Let <math>P</math> be a finite poset. Consider functions of the form <math>\alpha:P\times P\rightarrow\mathbb{R}</math> defined over the domain <math>P\times P</math>. It is convenient to treat such functions as matrices whose rows and columns are indexed by <math>P</math>.

;Incidence algebra of a poset
:Let
::<math>I(P)=\{\alpha:P\times P\rightarrow\mathbb{R}\mid \alpha(x,y)=0\text{ for all }x\not\le_P y\}</math>
:be the class of functions <math>\alpha</math> such that <math>\alpha(x,y)</math> is non-zero only for <math>x\le_P y</math>.

:Treating <math>\alpha</math> as a matrix, it is trivial to see that <math>I(P)</math> is closed under addition and scalar multiplication, that is,
:* if <math>\alpha,\beta\in I(P)</math> then <math>\alpha+\beta\in I(P)</math>;
:* if <math>\alpha\in I(P)</math> then <math>c\alpha\in I(P)</math> for any <math>c\in\mathbb{R}</math>;
:where <math>\alpha,\beta</math> are treated as matrices.

:In this spirit, it is natural to define matrix multiplication in <math>I(P)</math>. For <math>\alpha,\beta\in I(P)</math>,
::<math>(\alpha\beta)(x,y)=\sum_{z\in P}\alpha(x,z)\beta(z,y)=\sum_{x\le z\le y}\alpha(x,z)\beta(z,y)</math>.
:The second equation holds because for <math>\alpha,\beta\in I(P)</math> and every <math>z</math> other than those with <math>x\le z\le y</math>, <math>\alpha(x,z)\beta(z,y)</math> is zero.
:By the transitivity of the relation <math>\le_P</math>, it is also easy to prove that <math>I(P)</math> is closed under matrix multiplication (the detailed proof is left as an exercise). Therefore, <math>I(P)</math> is closed under addition, scalar multiplication, and matrix multiplication, so we have an algebra <math>I(P)</math>, called the '''incidence algebra''', over functions on <math>P\times P</math>.

;Zeta function and Möbius function
:A special function in <math>I(P)</math> is the so-called '''zeta function''' <math>\zeta</math>, defined as
::<math>\zeta(x,y)=\begin{cases}1&\text{if }x\le_P y,\\0 &\text{otherwise.}\end{cases}</math>
:As a matrix (or more accurately, as an element of the incidence algebra), <math>\zeta</math> is invertible, and its inverse, denoted by <math>\mu</math>, is called the '''Möbius function'''. More precisely, <math>\mu</math> is also in the incidence algebra <math>I(P)</math>, and <math>\mu\zeta=I</math>, where <math>I</math> is the identity matrix (the identity of the incidence algebra <math>I(P)</math>).

There is an equivalent explicit definition of the Möbius function.
{{Theorem|Definition (Möbius function)|
:<math>\mu(x,y)=\begin{cases}
-\sum_{x\le z< y}\mu(x,z)&\text{if }x<y,\\
1&\text{if }x=y,\\
0&\text{if }x\not\le y.
\end{cases}
</math>
}}

To see the equivalence between this definition and the inverse of the zeta function, we have the following proposition, which is proved by directly evaluating <math>\mu\zeta</math>.
{{Theorem|Proposition|
:For any <math>x,y\in P</math>,
::<math>\sum_{x\le z\le y}\mu(x,z)=\begin{cases}1 &\text{if }x=y,\\
0 &\text{otherwise.}\end{cases}</math>
}}
{{Proof|
It holds that
:<math>(\mu\zeta)(x,y)=\sum_{x\le z\le y}\mu(x,z)\zeta(z,y)=\sum_{x\le z\le y}\mu(x,z)</math>.
On the other hand, <math>\mu\zeta=I</math>, i.e.
:<math>(\mu\zeta)(x,y)=\begin{cases}1 &\text{if }x=y,\\
0 &\text{otherwise.}\end{cases}</math>
The proposition follows.
}}
Note that <math>\mu(x,y)=\sum_{x\le z\le y}\mu(x,z)-\sum_{x\le z< y}\mu(x,z)</math>, which gives the above inductive definition of the Möbius function.

=== Computing Möbius functions===
We consider the simple poset <math>P=[n]</math>, where <math>\le</math> is the total order. It follows directly from the recursive definition of the Möbius function that
:<math>\mu(i,j)=\begin{cases}1 & \text{if }i=j,\\
-1 & \text{if }i+1=j,\\
0 & \text{otherwise.}
\end{cases}
</math>

Usually, for general posets it is difficult to compute the Möbius function directly from its definition. We introduce a rule that helps us compute the Möbius function by decomposing the poset into posets with simple structures.

{{Theorem|Theorem (the product rule)|
: Let <math>P</math> and <math>Q</math> be two finite posets, and <math>P\times Q</math> be the poset resulting from the Cartesian product of <math>P</math> and <math>Q</math>, where for all <math>(x,y), (x',y')\in P\times Q</math>, <math>(x,y)\le (x',y')</math> if and only if <math>x\le x'</math> and <math>y\le y'</math>. Then
::<math>\mu_{P\times Q}((x,y),(x',y'))=\mu_P(x,x')\mu_Q(y,y')</math>.
}}
{{Proof|
We use the recursive definition
:<math>\mu(x,y)=\begin{cases}
-\sum_{x\le z< y}\mu(x,z)&\text{if }x<y,\\
1&\text{if }x=y,\\
0&\text{if }x\not\le y.
\end{cases}
</math>
to prove the equation in the theorem.

If <math>(x,y)=(x',y')</math>, then <math>x=x'</math> and <math>y=y'</math>. It is easy to see that both sides of the equation are 1. If <math>(x,y)\not\le(x',y')</math>, then either <math>x\not\le x'</math> or <math>y\not\le y'</math>. It is also easy to see that both sides are 0.

The only remaining case is that <math>(x,y)<(x',y')</math>, in which case either <math>x<x'</math> or <math>y<y'</math>.
:<math>
\begin{align}
\sum_{(x,y)\le (u,v)\le (x',y')}\mu_P(x,u)\mu_Q(y,v)
&=\left(\sum_{x\le u\le x'}\mu_P(x,u)\right)\left(\sum_{y\le v\le y'}\mu_Q(y,v)\right)=I(x,x')I(y,y')=0,
\end{align}
</math>
where the last two equations are due to the proposition for <math>\mu</math>. Thus
:<math>\mu_P(x,x')\mu_Q(y,y')=-\sum_{(x,y)\le (u,v)< (x',y')}\mu_P(x,u)\mu_Q(y,v)</math>.

By induction, assume that the equation <math>\mu_{P\times Q}((x,y),(u,v))=\mu_P(x,u)\mu_Q(y,v)</math> is true for all <math>(u,v)< (x',y')</math>. Then
:<math>
\begin{align}
\mu_{P\times Q}((x,y),(x',y'))
&=-\sum_{(x,y)\le (u,v)< (x',y')}\mu_{P\times Q}((x,y),(u,v))\\
&=-\sum_{(x,y)\le (u,v)< (x',y')}\mu_P(x,u)\mu_Q(y,v)\\
&=\mu_P(x,x')\mu_Q(y,y'),
\end{align}
</math>
which completes the proof.
}}

;Poset of subsets
:Consider the poset defined by all subsets of a finite universe <math>U</math>, that is, <math>P=2^U</math>, and for <math>S,T\subseteq U</math>, <math>S\le_P T</math> if and only if <math>S\subseteq T</math>.

{{Theorem| Möbius function for subsets|
:The Möbius function for the above defined poset <math>P</math> is: for <math>S,T\subseteq U</math>,
::<math>\mu(S,T)=
\begin{cases}
(-1)^{|T|-|S|} & \text{if }S\subseteq T,\\
0 &\text{otherwise.}
\end{cases}
</math>
}}
{{Proof|
We can equivalently represent each <math>S\subseteq U</math> by a boolean string <math>S\in\{0,1\}^U</math>, where <math>S(x)=1</math> if and only if <math>x\in S</math>.

For each element <math>x\in U</math>, we can define a poset <math>P_x=\{0, 1\}</math> with <math>0\le 1</math>. By the definition of the Möbius function, the Möbius function of this elementary poset is given by <math>\mu_x(0,0)=\mu_x(1,1)=1</math>, <math>\mu_x(0,1)=-1</math>, and <math>\mu_x(1,0)=0</math>.

The poset <math>P</math> of all subsets of <math>U</math> is the Cartesian product of all <math>P_x</math>, <math>x\in U</math>. By the product rule,
:<math>\mu(S,T)=\prod_{x\in U}\mu_x(S(x), T(x))=\prod_{x\in S\atop x\in T}1\prod_{x\not\in S\atop x\not\in T}1\prod_{x\in S\atop x\not\in T}0\prod_{x\not\in S\atop x\in T}(-1)=\begin{cases}
(-1)^{|T|-|S|} & \text{if }S\subseteq T,\\
0 &\text{otherwise.}
\end{cases}</math>
}}

:Note that the poset <math>P</math> is actually the [http://en.wikipedia.org/wiki/Boolean_algebra_(structure) Boolean algebra] of rank <math>|U|</math>. The proof relies only on the fact that the poset is a Boolean algebra, thus the theorem holds for Boolean algebra posets.

;Poset of divisors
:Consider the poset defined by all divisors of a positive integer <math>n</math>, that is, <math>P=\{a>0\mid a|n\}</math>, and for <math>a,b\in P</math>, <math>a\le_P b</math> if and only if <math>a|b\,</math>.

{{Theorem|Möbius function for divisors|
:The Möbius function for the above defined poset <math>P</math> is: for <math>a,b>0</math> with <math>a|n</math> and <math>b|n</math>,
::<math>\mu(a,b)=
\begin{cases}
(-1)^{r} & \text{if }\frac{b}{a}\text{ is the product of }r\text{ distinct primes},\\
0 &\text{otherwise, i.e. if }a\not|b\text{ or }\frac{b}{a}\text{ is not squarefree.}
\end{cases}
</math>
}}
{{Proof|
Denote <math>n=p_1^{n_1}p_2^{n_2}\cdots p_k^{n_k}</math>. Represent <math>n</math> by the tuple <math>(n_1,n_2,\ldots,n_k)</math>. Every <math>a\in P</math> corresponds in this way to a tuple <math>(a_1,a_2,\ldots,a_k)</math> with <math>a_i\le n_i</math> for all <math>1\le i\le k</math>.

Let <math>P_i=\{0,1,\ldots,n_i\}</math> be the poset with <math>\le</math> being the total order. The poset <math>P</math> of divisors of <math>n</math> is thus isomorphic to the poset constructed as the Cartesian product of all <math>P_i</math>, <math>1\le i\le k</math>. Then
:<math>
\begin{align}
\mu(a,b)
&=\prod_{1\le i\le k}\mu(a_i,b_i)=\prod_{1\le i\le k\atop a_i=b_i}1\prod_{1\le i\le k\atop b_i-a_i=1}(-1)\prod_{1\le i\le k\atop b_i-a_i\not\in\{0,1\}}0
=\begin{cases}
(-1)^{\sum_{i}(b_i-a_i)} & \text{if all }b_i-a_i\in\{0,1\},\\
0 &\text{otherwise.}
\end{cases}\\
&=\begin{cases}
(-1)^{r} & \text{if }\frac{b}{a}\text{ is the product of }r\text{ distinct primes},\\
0 &\text{otherwise.}
\end{cases}
\end{align}
</math>
}}

=== Principle of Möbius inversion ===
We now introduce the famous Möbius inversion formula.
{{Theorem|Möbius inversion formula|
:Let <math>P</math> be a finite poset and <math>\mu</math> its Möbius function. Let <math>f,g:P\rightarrow \mathbb{R}</math>. Then
::<math>\forall x\in P,\,\, g(x)=\sum_{y\le x} f(y)</math>,
:if and only if
::<math>\forall x\in P,\,\, f(x)=\sum_{y\le x}g(y)\mu(y,x)</math>.
}}
The functions <math>f,g:P\rightarrow\mathbb{R}</math> are vectors. Evaluate the matrix multiplications <math>f\zeta</math> and <math>g\mu</math> as follows:
:<math>(f\zeta)(x)=\sum_{y\in P}f(y)\zeta(y,x)=\sum_{y\le x}f(y)</math>,
and
:<math>(g\mu)(x)=\sum_{y\in P}g(y)\mu(y,x)=\sum_{y\le x}g(y)\mu(y,x)</math>.
The Möbius inversion formula is nothing but the statement
:<math>f\zeta=g\Leftrightarrow f=g\mu</math>,
which is trivially true, due to <math>\mu\zeta=I</math>, by basic linear algebra.

The following dual form of the inversion formula is also useful.
{{Theorem|Möbius inversion formula, dual form|
:Let <math>P</math> be a finite poset and <math>\mu</math> its Möbius function. Let <math>f,g:P\rightarrow \mathbb{R}</math>. Then
::<math>\forall x\in P, \,\, g(x)=\sum_{y{\color{red}\ge} x} f(y)</math>,
: if and only if
::<math>\forall x\in P, \,\, f(x)=\sum_{y{\color{red}\ge} x}\mu(x,y)g(y)</math>.
}}
To prove the dual form, we only need to evaluate the matrix multiplications on the left:
:<math>\zeta f=g\Leftrightarrow f=\mu g</math>.

;Principle of Inclusion-Exclusion
:Let <math>A_1,A_2,\ldots,A_n\subseteq U</math>. For any <math>J\subseteq\{1,2,\ldots,n\}</math>,
:*let <math>f(J)</math> be the number of elements that belong to ''exactly'' the sets <math>A_i, i\in J</math> and to no others, i.e.
:::<math>f(J)=\left|\left(\bigcap_{i\in J}A_i\right)\setminus\left(\bigcup_{i\not\in J}A_i\right)\right|</math>;
:*let <math>g(J)=\left|\bigcap_{i\in J}A_i\right|</math>.
:For any <math>J\subseteq\{1,2,\ldots,n\}</math>, the following relation holds for the above defined <math>f</math> and <math>g</math>:
::<math>g(J)=\sum_{I\supseteq J}f(I)</math>.
:Applying the dual form of the Möbius inversion formula, we have that for any <math>J\subseteq\{1,2,\ldots,n\}</math>,
::<math>f(J)=\sum_{I\supseteq J}\mu(J,I)g(I)=\sum_{I\supseteq J}\mu(J,I)\left|\bigcap_{i\in I}A_i\right|</math>,
:where the Möbius function is for the poset of all subsets of <math>\{1,2,\ldots,n\}</math>, ordered by <math>\subseteq</math>; thus <math>\mu(J,I)=(-1)^{|I|-|J|}\,</math> for <math>J\subseteq I</math>. Therefore,
::<math>f(J)=\sum_{I\supseteq J}(-1)^{|I|-|J|}\left|\bigcap_{i\in I}A_i\right|</math>.
:This is a formula for the number of elements with exactly the properties <math>A_i, i\in J</math>, for any <math>J\subseteq\{1,2,\ldots,n\}</math>. For the special case <math>J=\emptyset</math>, <math>f(\emptyset)</math> is the number of elements satisfying none of the properties <math>A_1,A_2,\ldots,A_n</math>, and
::<math>f(\emptyset)=\left|U\setminus\bigcup_iA_i\right|=\sum_{I\subseteq \{1,\ldots,n\}}(-1)^{|I|}\left|\bigcap_{i\in I}A_i\right|</math>,
:which is precisely the Principle of Inclusion-Exclusion.

;Möbius inversion formula for number theory
:The number-theoretical Möbius inversion formula is stated as follows: let <math>N</math> be a positive integer; then
::<math>g(n)=\sum_{d|n}f(d)\,</math> for all <math>n|N</math>
:if and only if
::<math>f(n)=\sum_{d|n}g(d)\mu\left(\frac{n}{d}\right)\,</math> for all <math>n|N</math>,
:where <math>\mu</math> is the [http://en.wikipedia.org/wiki/M%C3%B6bius_function number-theoretical Möbius function], defined as
::<math>\mu(n)=\begin{cases}1 & \text{if }n\text{ is a product of an even number of distinct primes,}\\
-1 &\text{if }n\text{ is a product of an odd number of distinct primes,}\\
0 &\text{otherwise.}\end{cases}</math>
:The number-theoretical Möbius inversion formula is just a special case of the Möbius inversion formula for posets, where the poset is the set of divisors of <math>N</math>, and for any <math>a,b\in P</math>, <math>a\le_P b</math> if <math>a|b</math>.

== Reference ==
* ''Stanley,'' Enumerative Combinatorics, Volume 1, Chapter 2.
* ''van Lint and Wilson'', A course in combinatorics, Chapters 10, 25.
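As a concrete illustration of the LazySelect algorithm analyzed above, here is a minimal Python sketch. It follows the five lines of the algorithm; function and variable names are our own, and it simply returns <code>None</code> in the FAIL case, where in practice the algorithm would be rerun. An odd <math>n</math> is used so that the median rank <math>\lceil n/2\rceil</math> matches the index returned by Line 5.

```python
import math
import random

def lazy_select(S):
    """One round of LazySelect: returns the median of S, or None on FAIL."""
    n = len(S)
    r_size = math.ceil(n ** 0.75)
    # Line 1: sample with replacement and sort the sample R.
    R = sorted(random.choice(S) for _ in range(r_size))
    # Line 2: pick d and u about sqrt(n) below/above the median of R
    # (indices clamped to R's range; 0-indexed, hence the -1).
    lo = max(0, math.floor(r_size / 2 - math.sqrt(n)) - 1)
    hi = min(r_size - 1, math.ceil(r_size / 2 + math.sqrt(n)) - 1)
    d, u = R[lo], R[hi]
    # Line 3: construct C and compute the ranks of d and u in S.
    C = [x for x in S if d <= x <= u]
    r_d = sum(1 for x in S if x < d)
    r_u = sum(1 for x in S if x < u)
    # Line 4: FAIL if the median is not bracketed or C is too large.
    if r_d > n / 2 or r_u < n / 2 or len(C) > 4 * n ** 0.75:
        return None
    # Line 5: the (floor(n/2) - r_d + 1)th smallest element of C.
    C.sort()
    return C[n // 2 - r_d]

random.seed(1)
S = random.sample(range(10 ** 6), 10001)   # 10001 distinct elements, n odd
m = lazy_select(S)
# With probability >= 1 - n^{-1/4} the round succeeds and m is the median.
assert m is None or m == sorted(S)[len(S) // 2]
```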

Revision as of 05:56, 20 March 2013

Principle of Inclusion-Exclusion

Let [math]\displaystyle{ A }[/math] and [math]\displaystyle{ B }[/math] be two finite sets. The cardinality of their union is

[math]\displaystyle{ |A\cup B|=|A|+|B|-{\color{Blue}|A\cap B|} }[/math].

For three sets [math]\displaystyle{ A }[/math], [math]\displaystyle{ B }[/math], and [math]\displaystyle{ C }[/math], the cardinality of the union of these three sets is computed as

[math]\displaystyle{ |A\cup B\cup C|=|A|+|B|+|C|-{\color{Blue}|A\cap B|}-{\color{Blue}|A\cap C|}-{\color{Blue}|B\cap C|}+{\color{Red}|A\cap B\cap C|} }[/math].

This is illustrated by the following figure.

Generally, the Principle of Inclusion-Exclusion states the rule for computing the union of [math]\displaystyle{ n }[/math] finite sets [math]\displaystyle{ A_1,A_2,\ldots,A_n }[/math], such that

[math]\displaystyle{ \begin{align} \left|\bigcup_{i=1}^nA_i\right| &= \sum_{\emptyset\neq I\subseteq\{1,\ldots,n\}}(-1)^{|I|-1}\left|\bigcap_{i\in I}A_i\right|. \end{align} }[/math]


In combinatorial enumeration, the Principle of Inclusion-Exclusion is usually applied in its complement form.

Let [math]\displaystyle{ A_1,A_2,\ldots,A_n\subseteq U }[/math] be subsets of some finite set [math]\displaystyle{ U }[/math]. Here [math]\displaystyle{ U }[/math] is some universe of combinatorial objects, whose cardinality is easy to calculate (e.g. all strings, tuples, permutations), and each [math]\displaystyle{ A_i }[/math] contains the objects with some specific property (e.g. a "pattern") which we want to avoid. The problem is to count the number of objects without any of the [math]\displaystyle{ n }[/math] properties. We write [math]\displaystyle{ \bar{A_i}=U-A_i }[/math]. The number of objects without any of the properties [math]\displaystyle{ A_1,A_2,\ldots,A_n }[/math] is

[math]\displaystyle{ \begin{align} \left|\bar{A_1}\cap\bar{A_2}\cap\cdots\cap\bar{A_n}\right|=\left|U-\bigcup_{i=1}^nA_i\right| &= |U|+\sum_{\emptyset\neq I\subseteq\{1,\ldots,n\}}(-1)^{|I|}\left|\bigcap_{i\in I}A_i\right|. \end{align} }[/math]

For an [math]\displaystyle{ I\subseteq\{1,2,\ldots,n\} }[/math], we denote

[math]\displaystyle{ A_I=\bigcap_{i\in I}A_i }[/math]

with the convention that [math]\displaystyle{ A_\emptyset=U }[/math]. The above equation is stated as:

Principle of Inclusion-Exclusion
Let [math]\displaystyle{ A_1,A_2,\ldots,A_n }[/math] be a family of subsets of [math]\displaystyle{ U }[/math]. Then the number of elements of [math]\displaystyle{ U }[/math] which lie in none of the subsets [math]\displaystyle{ A_i }[/math] is
[math]\displaystyle{ \sum_{I\subseteq\{1,\ldots, n\}}(-1)^{|I|}|A_I| }[/math].

Let [math]\displaystyle{ S_k=\sum_{|I|=k}|A_I|\, }[/math]. Conventionally, [math]\displaystyle{ S_0=|A_\emptyset|=|U| }[/math]. The principle of inclusion-exclusion can be expressed as

[math]\displaystyle{ S_0-S_1+S_2-\cdots+(-1)^nS_n. }[/math]
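The alternating sum over intersections can be checked directly on a small instance. The following Python sketch (the universe and the sets are arbitrary choices of ours) compares the inclusion-exclusion sum with a direct count of the elements lying in none of the subsets:

```python
from itertools import combinations

# A small universe and some arbitrary subsets A_1, ..., A_n of it.
U = set(range(12))
A = [{0, 1, 2, 3}, {2, 3, 4, 5}, {5, 6, 7}, {1, 7, 8}]
n = len(A)

def A_I(I):
    """Intersection of A_i over i in I, with the convention A_emptyset = U."""
    result = set(U)
    for i in I:
        result &= A[i]
    return result

# Sum over all index sets I, signed by (-1)^{|I|}; the k = 0 term is |U|.
alternating_sum = sum(
    (-1) ** k * len(A_I(I))
    for k in range(n + 1)
    for I in combinations(range(n), k)
)
# Direct count of elements of U lying in none of the A_i.
direct_count = len(U - set().union(*A))
assert alternating_sum == direct_count
```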

Surjections

In the twelvefold way, we discuss the counting problems induced by the mappings [math]\displaystyle{ f:N\rightarrow M }[/math]. The basic case is that elements from both [math]\displaystyle{ N }[/math] and [math]\displaystyle{ M }[/math] are distinguishable. In this case, it is easy to count the number of arbitrary mappings (which is [math]\displaystyle{ m^n }[/math]) and the number of injective (one-to-one) mappings (which is [math]\displaystyle{ (m)_n }[/math]), but counting the surjective mappings is harder. Here we apply the principle of inclusion-exclusion to count the number of surjective (onto) mappings.

Theorem
The number of surjective mappings from an [math]\displaystyle{ n }[/math]-set to an [math]\displaystyle{ m }[/math]-set is given by
[math]\displaystyle{ \sum_{k=1}^m(-1)^{m-k}{m\choose k}k^n }[/math].
Proof.

Let [math]\displaystyle{ U=\{f:[n]\rightarrow[m]\} }[/math] be the set of mappings from [math]\displaystyle{ [n] }[/math] to [math]\displaystyle{ [m] }[/math]. Then [math]\displaystyle{ |U|=m^n }[/math].

For [math]\displaystyle{ i\in[m] }[/math], let [math]\displaystyle{ A_i }[/math] be the set of mappings [math]\displaystyle{ f:[n]\rightarrow[m] }[/math] such that no [math]\displaystyle{ j\in[n] }[/math] is mapped to [math]\displaystyle{ i }[/math], i.e. [math]\displaystyle{ A_i=\{f:[n]\rightarrow[m]\setminus\{i\}\} }[/math]; thus [math]\displaystyle{ |A_i|=(m-1)^n }[/math].

More generally, for [math]\displaystyle{ I\subseteq [m] }[/math], [math]\displaystyle{ A_I=\bigcap_{i\in I}A_i }[/math] contains the mappings [math]\displaystyle{ f:[n]\rightarrow[m]\setminus I }[/math]. And [math]\displaystyle{ |A_I|=(m-|I|)^n\, }[/math].

A mapping [math]\displaystyle{ f:[n]\rightarrow[m] }[/math] is surjective if and only if [math]\displaystyle{ f }[/math] lies in none of the [math]\displaystyle{ A_i }[/math]. By the principle of inclusion-exclusion, the number of surjective [math]\displaystyle{ f:[n]\rightarrow[m] }[/math] is

[math]\displaystyle{ \sum_{I\subseteq[m]}(-1)^{|I|}\left|A_I\right|=\sum_{I\subseteq[m]}(-1)^{|I|}(m-|I|)^n=\sum_{j=0}^m(-1)^j{m\choose j}(m-j)^n }[/math].

Let [math]\displaystyle{ k=m-j }[/math]. The theorem is proved.

[math]\displaystyle{ \square }[/math]

Recall that, in the twelvefold way, we establish a relation between surjections and partitions.

  • Surjection to ordered partition:
For a surjective [math]\displaystyle{ f:[n]\rightarrow[m] }[/math], [math]\displaystyle{ (f^{-1}(0),f^{-1}(1),\ldots,f^{-1}(m-1)) }[/math] is an ordered partition of [math]\displaystyle{ [n] }[/math].
  • Ordered partition to surjection:
For an ordered [math]\displaystyle{ m }[/math]-partition [math]\displaystyle{ (B_0,B_1,\ldots, B_{m-1}) }[/math] of [math]\displaystyle{ [n] }[/math], we can define a function [math]\displaystyle{ f:[n]\rightarrow[m] }[/math] by letting [math]\displaystyle{ f(i)=j }[/math] if and only if [math]\displaystyle{ i\in B_j }[/math]. [math]\displaystyle{ f }[/math] is surjective since as a partition, none of [math]\displaystyle{ B_i }[/math] is empty.

Therefore, we have a one-to-one correspondence between surjective mappings from an [math]\displaystyle{ n }[/math]-set to an [math]\displaystyle{ m }[/math]-set and the ordered [math]\displaystyle{ m }[/math]-partitions of an [math]\displaystyle{ n }[/math]-set.

The Stirling number of the second kind [math]\displaystyle{ S(n,m) }[/math] is the number of [math]\displaystyle{ m }[/math]-partitions of an [math]\displaystyle{ n }[/math]-set. There are [math]\displaystyle{ m! }[/math] ways to order an [math]\displaystyle{ m }[/math]-partition, thus the number of surjective mappings [math]\displaystyle{ f:[n]\rightarrow[m] }[/math] is [math]\displaystyle{ m! S(n,m) }[/math]. Combining with what we have proved for surjections, we give the following result for the Stirling number of the second kind.

Proposition
[math]\displaystyle{ S(n,m)=\frac{1}{m!}\sum_{k=1}^m(-1)^{m-k}{m\choose k}k^n }[/math].
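The inclusion-exclusion formula for the number of surjections, and hence for [math]\displaystyle{ S(n,m) }[/math], can be verified against brute-force enumeration for small parameters (a Python sketch; the values of [math]\displaystyle{ n,m }[/math] are arbitrary small choices):

```python
from itertools import product
from math import comb, factorial

def surjections_formula(n, m):
    """Number of surjective maps [n] -> [m] by inclusion-exclusion."""
    return sum((-1) ** (m - k) * comb(m, k) * k ** n for k in range(1, m + 1))

def surjections_brute(n, m):
    """Count surjective maps [n] -> [m] by enumerating all m^n maps."""
    return sum(1 for f in product(range(m), repeat=n)
               if set(f) == set(range(m)))

n, m = 6, 3
assert surjections_formula(n, m) == surjections_brute(n, m)

# Stirling number of the second kind: S(n, m) = (number of surjections) / m!.
S = surjections_formula(n, m) // factorial(m)
assert S == 90  # S(6, 3) = 90
```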

Derangements

We now count the number of bijections from a set to itself with no fixed points. This is the derangement problem.

For a permutation [math]\displaystyle{ \pi }[/math] of [math]\displaystyle{ \{1,2,\ldots,n\} }[/math], a fixed point is such an [math]\displaystyle{ i\in\{1,2,\ldots,n\} }[/math] that [math]\displaystyle{ \pi(i)=i }[/math]. A derangement of [math]\displaystyle{ \{1,2,\ldots,n\} }[/math] is a permutation of [math]\displaystyle{ \{1,2,\ldots,n\} }[/math] that has no fixed points.

Theorem
The number of derangements of [math]\displaystyle{ \{1,2,\ldots,n\} }[/math] is given by
[math]\displaystyle{ n!\sum_{k=0}^n\frac{(-1)^k}{k!}\approx \frac{n!}{\mathrm{e}} }[/math].
Proof.

Let [math]\displaystyle{ U }[/math] be the set of all permutations of [math]\displaystyle{ \{1,2,\ldots,n\} }[/math]. So [math]\displaystyle{ |U|=n! }[/math].

Let [math]\displaystyle{ A_i }[/math] be the set of permutations with fixed point [math]\displaystyle{ i }[/math]; so [math]\displaystyle{ |A_i|=(n-1)! }[/math]. More generally, for any [math]\displaystyle{ I\subseteq \{1,2,\ldots,n\} }[/math], [math]\displaystyle{ A_I=\bigcap_{i\in I}A_i }[/math], and [math]\displaystyle{ |A_I|=(n-|I|)! }[/math], since permutations in [math]\displaystyle{ A_I }[/math] fix every point in [math]\displaystyle{ I }[/math] and permute the remaining points arbitrarily. A permutation is a derangement if and only if it lies in none of the sets [math]\displaystyle{ A_i }[/math]. So the number of derangements is

[math]\displaystyle{ \sum_{I\subseteq\{1,2,\ldots,n\}}(-1)^{|I|}(n-|I|)!=\sum_{k=0}^n(-1)^k{n\choose k}(n-k)!=n!\sum_{k=0}^n\frac{(-1)^k}{k!}. }[/math]

By Taylor's series,

[math]\displaystyle{ \frac{1}{\mathrm{e}}=\sum_{k=0}^\infty\frac{(-1)^k}{k!}=\sum_{k=0}^n\frac{(-1)^k}{k!}\pm o\left(\frac{1}{n!}\right) }[/math].

It is not hard to see that [math]\displaystyle{ n!\sum_{k=0}^n\frac{(-1)^k}{k!} }[/math] is the closest integer to [math]\displaystyle{ \frac{n!}{\mathrm{e}} }[/math].

[math]\displaystyle{ \square }[/math]

Therefore, about a [math]\displaystyle{ \frac{1}{\mathrm{e}} }[/math] fraction of all permutations have no fixed points.
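As a sanity check, the closed form, a brute-force count, and the nearest integer to [math]\displaystyle{ n!/\mathrm{e} }[/math] can be compared for small [math]\displaystyle{ n }[/math] (a Python sketch; the function names are ours):

```python
from itertools import permutations
from math import e, factorial

def derangements_formula(n):
    # n! * sum_{k=0}^{n} (-1)^k / k!, computed exactly in integers
    return sum((-1) ** k * (factorial(n) // factorial(k)) for k in range(n + 1))

def derangements_brute(n):
    # count permutations with no fixed point directly
    return sum(1 for p in permutations(range(n))
               if all(p[i] != i for i in range(n)))

for n in range(1, 8):
    assert derangements_formula(n) == derangements_brute(n)
    assert derangements_formula(n) == round(factorial(n) / e)  # closest integer to n!/e
```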

Permutations with restricted positions

We introduce a general theory of counting permutations with restricted positions. In the derangement problem, we counted the permutations [math]\displaystyle{ \pi }[/math] with [math]\displaystyle{ \pi(i)\neq i }[/math] for every [math]\displaystyle{ i }[/math]. We now generalize to the problem of counting permutations which avoid a set of arbitrarily specified positions.

It is traditionally described using terminology from the game of chess. Let [math]\displaystyle{ B\subseteq \{1,\ldots,n\}\times \{1,\ldots,n\} }[/math], called a board. As illustrated below, we can think of [math]\displaystyle{ B }[/math] as a chess board, with the positions in [math]\displaystyle{ B }[/math] marked by "[math]\displaystyle{ \times }[/math]".

(Chess board figure: the positions in [math]\displaystyle{ B }[/math] are marked by "[math]\displaystyle{ \times }[/math]".)

For a permutation [math]\displaystyle{ \pi }[/math] of [math]\displaystyle{ \{1,\ldots,n\} }[/math], define the set of positions [math]\displaystyle{ G_\pi }[/math] as

[math]\displaystyle{ \begin{align} G_\pi &= \{(i,\pi(i))\mid i\in \{1,2,\ldots,n\}\}. \end{align} }[/math]

This can also be viewed as a set of marked positions on a chess board. Each row and each column has only one marked position, because [math]\displaystyle{ \pi }[/math] is a permutation. Thus, we can identify each [math]\displaystyle{ G_\pi }[/math] as a placement of [math]\displaystyle{ n }[/math] rooks (“城堡”,规则同中国象棋里的“车”) without attacking each other.

For example, the following is [math]\displaystyle{ G_\pi }[/math] for the identity permutation, i.e. the [math]\displaystyle{ \pi }[/math] with [math]\displaystyle{ \pi(i)=i }[/math] for all [math]\displaystyle{ i }[/math].

(Chess board figure: [math]\displaystyle{ n }[/math] non-attacking rooks placed along the main diagonal.)

Now define

[math]\displaystyle{ \begin{align} N_0 &= \left|\left\{\pi\mid B\cap G_\pi=\emptyset\right\}\right|\\ r_k &= \mbox{number of }k\mbox{-subsets of }B\mbox{ such that no two elements have a common coordinate}\\ &=\left|\left\{S\in{B\choose k} \,\bigg|\, \forall (i_1,j_1),(i_2,j_2)\in S, i_1\neq i_2, j_1\neq j_2 \right\}\right| \end{align} }[/math]

Interpreted in chess game,

  • [math]\displaystyle{ B }[/math]: a set of marked positions in an [math]\displaystyle{ [n]\times [n] }[/math] chess board.
  • [math]\displaystyle{ N_0 }[/math]: the number of ways of placing [math]\displaystyle{ n }[/math] non-attacking rooks on the chess board such that none of these rooks lie in [math]\displaystyle{ B }[/math].
  • [math]\displaystyle{ r_k }[/math]: number of ways of placing [math]\displaystyle{ k }[/math] non-attacking rooks on [math]\displaystyle{ B }[/math].

Our goal is to count [math]\displaystyle{ N_0 }[/math] in terms of [math]\displaystyle{ r_k }[/math]. This gives the number of permutations that avoid all positions in [math]\displaystyle{ B }[/math].

Theorem
[math]\displaystyle{ N_0=\sum_{k=0}^n(-1)^kr_k(n-k)! }[/math].
Proof.

For each [math]\displaystyle{ i\in[n] }[/math], let [math]\displaystyle{ A_i=\{\pi\mid (i,\pi(i))\in B\} }[/math] be the set of permutations [math]\displaystyle{ \pi }[/math] whose [math]\displaystyle{ i }[/math]-th position is in [math]\displaystyle{ B }[/math].

[math]\displaystyle{ N_0 }[/math] is the number of permutations that avoid all positions in [math]\displaystyle{ B }[/math]. Thus, our goal is to count the permutations [math]\displaystyle{ \pi }[/math] lying in none of the sets [math]\displaystyle{ A_i }[/math] for [math]\displaystyle{ i\in [n] }[/math].

For each [math]\displaystyle{ I\subseteq [n] }[/math], let [math]\displaystyle{ A_I=\bigcap_{i\in I}A_i }[/math], which is the set of permutations [math]\displaystyle{ \pi }[/math] such that [math]\displaystyle{ (i,\pi(i))\in B }[/math] for all [math]\displaystyle{ i\in I }[/math]. Due to the principle of inclusion-exclusion,

[math]\displaystyle{ N_0=\sum_{I\subseteq [n]} (-1)^{|I|}|A_I|=\sum_{k=0}^n(-1)^k\sum_{I\in{[n]\choose k}}|A_I| }[/math].

The next observation is that

[math]\displaystyle{ \sum_{I\in{[n]\choose k}}|A_I|=r_k(n-k)! }[/math],

because both sides count the same configurations: first place [math]\displaystyle{ k }[/math] non-attacking rooks on [math]\displaystyle{ B }[/math] (in [math]\displaystyle{ r_k }[/math] ways), then place [math]\displaystyle{ n-k }[/math] additional non-attacking rooks on the remaining rows and columns of [math]\displaystyle{ [n]\times [n] }[/math] (in [math]\displaystyle{ (n-k)! }[/math] ways).

Therefore,

[math]\displaystyle{ N_0=\sum_{k=0}^n(-1)^kr_k(n-k)! }[/math].
[math]\displaystyle{ \square }[/math]
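The theorem can be checked directly on small boards: compute the [math]\displaystyle{ r_k }[/math] by enumerating [math]\displaystyle{ k }[/math]-subsets of [math]\displaystyle{ B }[/math], and compare [math]\displaystyle{ N_0=\sum_k(-1)^kr_k(n-k)! }[/math] with a brute-force count over all permutations (a Python sketch; names are ours):

```python
from itertools import combinations, permutations
from math import factorial

def rook_N0(n, B):
    # r_k: number of ways to place k non-attacking rooks on B
    def non_attacking(S):
        return (len({i for i, j in S}) == len(S)
                and len({j for i, j in S}) == len(S))
    r = [sum(1 for S in combinations(B, k) if non_attacking(S))
         for k in range(n + 1)]
    return sum((-1) ** k * r[k] * factorial(n - k) for k in range(n + 1))

def brute_N0(n, B):
    # count permutations avoiding every position in B
    return sum(1 for p in permutations(range(n))
               if all((i, p[i]) not in B for i in range(n)))

B = {(0, 0), (1, 1), (2, 2), (3, 3), (0, 2)}
assert rook_N0(4, B) == brute_N0(4, B)
```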

Derangement problem

We use the above general method to solve the derangement problem again.

Take [math]\displaystyle{ B=\{(1,1),(2,2),\ldots,(n,n)\} }[/math] as the chess board. A derangement [math]\displaystyle{ \pi }[/math] is a placement of [math]\displaystyle{ n }[/math] non-attacking rooks such that none of them is in [math]\displaystyle{ B }[/math].

(Chess board figure: the diagonal positions of [math]\displaystyle{ B }[/math] are marked by "[math]\displaystyle{ \times }[/math]".)

Clearly, the number of ways of placing [math]\displaystyle{ k }[/math] non-attacking rooks on [math]\displaystyle{ B }[/math] is [math]\displaystyle{ r_k={n\choose k} }[/math]. We want to count [math]\displaystyle{ N_0 }[/math], which gives the number of ways of placing [math]\displaystyle{ n }[/math] non-attacking rooks such that none of these rooks lie in [math]\displaystyle{ B }[/math].

By the above theorem

[math]\displaystyle{ N_0=\sum_{k=0}^n(-1)^kr_k(n-k)!=\sum_{k=0}^n(-1)^k{n\choose k}(n-k)!=\sum_{k=0}^n(-1)^k\frac{n!}{k!}=n!\sum_{k=0}^n(-1)^k\frac{1}{k!}\approx\frac{n!}{e}. }[/math]

Problème des ménages

Suppose that in a banquet, we want to seat [math]\displaystyle{ n }[/math] couples at a circular table, satisfying the following constraints:

  • Men and women are in alternate places.
  • No one sits next to his/her spouse.

In how many ways can this be done?

(For convenience, we assume that every seat at the table is labeled, so that rotating the seats clockwise or anti-clockwise yields a different solution.)

First, let the [math]\displaystyle{ n }[/math] ladies find their seats. They may sit either at the odd-numbered seats or at the even-numbered seats; in either case, there are [math]\displaystyle{ n! }[/math] different orders. Thus, there are [math]\displaystyle{ 2(n!) }[/math] ways to seat the [math]\displaystyle{ n }[/math] ladies.

After seating the ladies, we label the remaining [math]\displaystyle{ n }[/math] places clockwise as [math]\displaystyle{ 0,1,\ldots, n-1 }[/math]. A seating of the [math]\displaystyle{ n }[/math] husbands is then given by a permutation [math]\displaystyle{ \pi }[/math] of [math]\displaystyle{ [n] }[/math] defined as follows: let [math]\displaystyle{ \pi(i) }[/math] be the seat of the husband of the lady sitting at the [math]\displaystyle{ i }[/math]-th place.

It is easy to see that [math]\displaystyle{ \pi }[/math] satisfies that [math]\displaystyle{ \pi(i)\neq i }[/math] and [math]\displaystyle{ \pi(i)\not\equiv i+1\pmod n }[/math], and every permutation [math]\displaystyle{ \pi }[/math] with these properties gives a feasible seating of the [math]\displaystyle{ n }[/math] husbands. Thus, we only need to count the number of permutations [math]\displaystyle{ \pi }[/math] such that [math]\displaystyle{ \pi(i)\not\equiv i, i+1\pmod n }[/math].

Take [math]\displaystyle{ B=\{(0,0),(1,1),\ldots,(n-1,n-1), (0,1),(1,2),\ldots,(n-2,n-1),(n-1,0)\} }[/math] as the chess board. A permutation [math]\displaystyle{ \pi }[/math] which defines a way of seating the husbands, is a placement of [math]\displaystyle{ n }[/math] non-attacking rooks such that none of them is in [math]\displaystyle{ B }[/math].

(Chess board figure: the positions [math]\displaystyle{ (i,i) }[/math] and [math]\displaystyle{ (i,i+1\bmod n) }[/math] of [math]\displaystyle{ B }[/math] are marked by "[math]\displaystyle{ \times }[/math]".)

We need to compute [math]\displaystyle{ r_k }[/math], the number of ways of placing [math]\displaystyle{ k }[/math] non-attacking rooks on [math]\displaystyle{ B }[/math]. For our choice of [math]\displaystyle{ B }[/math], [math]\displaystyle{ r_k }[/math] is the number of ways of choosing [math]\displaystyle{ k }[/math] points, no two consecutive, from a collection of [math]\displaystyle{ 2n }[/math] points arranged in a circle.

We first see how to do this in a line.

Lemma
The number of ways of choosing [math]\displaystyle{ k }[/math] non-consecutive objects from a collection of [math]\displaystyle{ m }[/math] objects arranged in a line, is [math]\displaystyle{ {m-k+1\choose k} }[/math].
Proof.

We draw a line of [math]\displaystyle{ m-k }[/math] black points, and then insert [math]\displaystyle{ k }[/math] red points into the [math]\displaystyle{ m-k+1 }[/math] spaces between the black points (including the beginning and end).

[math]\displaystyle{ \begin{align} &\sqcup \, \bullet \, \sqcup \, \bullet \, \sqcup \, \bullet \, \sqcup \, \bullet \, \sqcup \, \bullet \, \sqcup \, \bullet \, \sqcup \, \bullet \, \sqcup \\ &\qquad\qquad\qquad\quad\Downarrow\\ &\sqcup \, \bullet \,\, {\color{Red}\bullet} \, \bullet \,\, {\color{Red}\bullet} \, \bullet \, \sqcup \, \bullet \,\, {\color{Red}\bullet}\, \, \bullet \, \sqcup \, \bullet \, \sqcup \, \bullet \,\, {\color{Red}\bullet} \end{align} }[/math]

This gives us a line of [math]\displaystyle{ m }[/math] points, where the red points specify the chosen objects, which are non-consecutive. This mapping is a 1-1 correspondence. There are [math]\displaystyle{ {m-k+1\choose k} }[/math] ways of placing [math]\displaystyle{ k }[/math] red points into [math]\displaystyle{ m-k+1 }[/math] spaces.

[math]\displaystyle{ \square }[/math]
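The lemma can be verified by brute force for small [math]\displaystyle{ m }[/math] (a Python sketch; `count_line` is our name):

```python
from itertools import combinations
from math import comb

def count_line(m, k):
    # k-subsets of m points in a line with no two chosen points consecutive
    return sum(1 for S in combinations(range(m), k)
               if all(b - a > 1 for a, b in zip(S, S[1:])))

assert all(count_line(m, k) == comb(m - k + 1, k)
           for m in range(1, 10) for k in range(m + 1))
```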

The problem of choosing non-consecutive objects in a circle can be reduced to the case that the objects are in a line.

Lemma
The number of ways of choosing [math]\displaystyle{ k }[/math] non-consecutive objects from a collection of [math]\displaystyle{ m }[/math] objects arranged in a circle, is [math]\displaystyle{ \frac{m}{m-k}{m-k\choose k} }[/math].
Proof.

Let [math]\displaystyle{ f(m,k) }[/math] be the desired number; and let [math]\displaystyle{ g(m,k) }[/math] be the number of ways of choosing [math]\displaystyle{ k }[/math] non-consecutive points from [math]\displaystyle{ m }[/math] points arranged in a circle, coloring these [math]\displaystyle{ k }[/math] points red, and then coloring one of the uncolored points blue.

Clearly, [math]\displaystyle{ g(m,k)=(m-k)f(m,k) }[/math].

But we can also compute [math]\displaystyle{ g(m,k) }[/math] as follows:

  • Choose one of the [math]\displaystyle{ m }[/math] points and color it blue. This gives us [math]\displaystyle{ m }[/math] ways.
  • Cut the circle to make a line of [math]\displaystyle{ m-1 }[/math] points by removing the blue point.
  • Choose [math]\displaystyle{ k }[/math] non-consecutive points from the line of [math]\displaystyle{ m-1 }[/math] points and color them red. This gives [math]\displaystyle{ {m-k\choose k} }[/math] ways due to the previous lemma.

Thus, [math]\displaystyle{ g(m,k)=m{m-k\choose k} }[/math]. Therefore we have the desired number [math]\displaystyle{ f(m,k)=\frac{m}{m-k}{m-k\choose k} }[/math].

[math]\displaystyle{ \square }[/math]
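The circular version can be checked the same way (a Python sketch; `count_circle` is our name):

```python
from itertools import combinations
from math import comb

def count_circle(m, k):
    # k-subsets of m points in a circle with no two cyclically consecutive
    return sum(1 for S in combinations(range(m), k)
               if all((x + 1) % m not in S for x in S))

assert all(count_circle(m, k) == m * comb(m - k, k) // (m - k)
           for m in range(3, 10) for k in range(1, m // 2 + 1))
```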

By the above lemma, we have that [math]\displaystyle{ r_k=\frac{2n}{2n-k}{2n-k\choose k} }[/math]. Then apply the theorem of counting permutations with restricted positions,

[math]\displaystyle{ N_0=\sum_{k=0}^n(-1)^kr_k(n-k)!=\sum_{k=0}^n(-1)^k\frac{2n}{2n-k}{2n-k\choose k}(n-k)!. }[/math]

This gives the number of ways of seating the [math]\displaystyle{ n }[/math] husbands after the ladies are seated. Recall that there are [math]\displaystyle{ 2(n!) }[/math] ways of seating the [math]\displaystyle{ n }[/math] ladies. Thus, the total number of ways of seating [math]\displaystyle{ n }[/math] couples as required by the problème des ménages is

[math]\displaystyle{ 2(n!)\sum_{k=0}^n(-1)^k\frac{2n}{2n-k}{2n-k\choose k}(n-k)!. }[/math]
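For small [math]\displaystyle{ n }[/math], the husband-seating count [math]\displaystyle{ \sum_k(-1)^k\frac{2n}{2n-k}{2n-k\choose k}(n-k)! }[/math] can be checked against a direct enumeration of the permutations with [math]\displaystyle{ \pi(i)\not\equiv i, i+1\pmod n }[/math] (a Python sketch; names are ours):

```python
from itertools import permutations
from math import comb, factorial

def menage_husbands(n):
    # sum_k (-1)^k * (2n/(2n-k)) * C(2n-k, k) * (n-k)!, exact integer arithmetic
    return sum((-1) ** k * 2 * n * comb(2 * n - k, k) * factorial(n - k)
               // (2 * n - k) for k in range(n + 1))

def menage_brute(n):
    # permutations with pi(i) != i and pi(i) != i+1 (mod n)
    return sum(1 for p in permutations(range(n))
               if all(p[i] != i and p[i] != (i + 1) % n for i in range(n)))

assert all(menage_husbands(n) == menage_brute(n) for n in range(3, 8))
```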

The Euler totient function

Two integers [math]\displaystyle{ m, n }[/math] are said to be relatively prime if their greatest common divisor [math]\displaystyle{ \mathrm{gcd}(m,n)=1 }[/math]. For a positive integer [math]\displaystyle{ n }[/math], let [math]\displaystyle{ \phi(n) }[/math] be the number of positive integers from [math]\displaystyle{ \{1,2,\ldots,n\} }[/math] that are relatively prime to [math]\displaystyle{ n }[/math]. This function, called the Euler [math]\displaystyle{ \phi }[/math] function or the Euler totient function, is fundamental in number theory.

We now derive a formula for this function by using the principle of inclusion-exclusion.

Theorem (The Euler totient function)

Suppose [math]\displaystyle{ n }[/math] is divisible by precisely [math]\displaystyle{ r }[/math] different primes, denoted [math]\displaystyle{ p_1,\ldots,p_r }[/math]. Then

[math]\displaystyle{ \phi(n)=n\prod_{i=1}^r\left(1-\frac{1}{p_i}\right) }[/math].
Proof.

Let [math]\displaystyle{ U=\{1,2,\ldots,n\} }[/math] be the universe. The number of positive integers from [math]\displaystyle{ U }[/math] that are divisible by all of [math]\displaystyle{ p_{i_1},p_{i_2},\ldots,p_{i_s}\in\{p_1,\ldots,p_r\} }[/math] is [math]\displaystyle{ \frac{n}{p_{i_1}p_{i_2}\cdots p_{i_s}} }[/math].

[math]\displaystyle{ \phi(n) }[/math] is the number of integers from [math]\displaystyle{ U }[/math] that are not divisible by any of [math]\displaystyle{ p_1,\ldots,p_r }[/math]. By the principle of inclusion-exclusion,

[math]\displaystyle{ \begin{align} \phi(n) &=n+\sum_{k=1}^r(-1)^k\sum_{1\le i_1\lt i_2\lt \cdots \lt i_k\le r}\frac{n}{p_{i_1}p_{i_2}\cdots p_{i_k}}\\ &=n-\sum_{1\le i\le r}\frac{n}{p_i}+\sum_{1\le i\lt j\le r}\frac{n}{p_i p_j}-\sum_{1\le i\lt j\lt k\le r}\frac{n}{p_{i} p_{j} p_{k}}+\cdots + (-1)^r\frac{n}{p_{1}p_{2}\cdots p_{r}}\\ &=n\left(1-\sum_{1\le i\le r}\frac{1}{p_i}+\sum_{1\le i\lt j\le r}\frac{1}{p_i p_j}-\sum_{1\le i\lt j\lt k\le r}\frac{1}{p_{i} p_{j} p_{k}}+\cdots + (-1)^r\frac{1}{p_{1}p_{2}\cdots p_{r}}\right)\\ &=n\prod_{i=1}^r\left(1-\frac{1}{p_i}\right). \end{align} }[/math]
[math]\displaystyle{ \square }[/math]
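The formula is easy to test against the definition (a Python sketch; `phi_formula` and `phi_brute` are our names):

```python
from math import gcd

def phi_formula(n):
    # n * prod over prime divisors p of (1 - 1/p), computed exactly in integers
    primes = [p for p in range(2, n + 1)
              if n % p == 0 and all(p % q != 0 for q in range(2, p))]
    result = n
    for p in primes:
        result = result * (p - 1) // p
    return result

def phi_brute(n):
    # count integers in {1, ..., n} relatively prime to n
    return sum(1 for m in range(1, n + 1) if gcd(m, n) == 1)

assert all(phi_formula(n) == phi_brute(n) for n in range(1, 60))
```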

Möbius inversion

Posets

A partially ordered set or poset for short is a set [math]\displaystyle{ P }[/math] together with a binary relation denoted [math]\displaystyle{ \le_P }[/math] (or just [math]\displaystyle{ \le }[/math] if no confusion is caused), satisfying

  • (reflexivity) For all [math]\displaystyle{ x\in P, x\le x }[/math].
  • (antisymmetry) If [math]\displaystyle{ x\le y }[/math] and [math]\displaystyle{ y\le x }[/math], then [math]\displaystyle{ x=y }[/math].
  • (transitivity) If [math]\displaystyle{ x\le y }[/math] and [math]\displaystyle{ y\le z }[/math], then [math]\displaystyle{ x\le z }[/math].

We say two elements [math]\displaystyle{ x }[/math] and [math]\displaystyle{ y }[/math] are comparable if [math]\displaystyle{ x\le y }[/math] or [math]\displaystyle{ y\le x }[/math]; otherwise [math]\displaystyle{ x }[/math] and [math]\displaystyle{ y }[/math] are incomparable.

Notation
  • [math]\displaystyle{ x\ge y }[/math] means [math]\displaystyle{ y\le x }[/math].
  • [math]\displaystyle{ x\lt y }[/math] means [math]\displaystyle{ x\le y }[/math] and [math]\displaystyle{ x\neq y }[/math].
  • [math]\displaystyle{ x\gt y }[/math] means [math]\displaystyle{ y\lt x }[/math].

The Möbius function

Let [math]\displaystyle{ P }[/math] be a finite poset. Consider functions in form of [math]\displaystyle{ \alpha:P\times P\rightarrow\mathbb{R} }[/math] defined over domain [math]\displaystyle{ P\times P }[/math]. It is convenient to treat such functions as matrices whose rows and columns are indexed by [math]\displaystyle{ P }[/math].

Incidence algebra of poset
Let
[math]\displaystyle{ I(P)=\{\alpha:P\times P\rightarrow\mathbb{R}\mid \alpha(x,y)=0\text{ for all }x\not\le_P y\} }[/math]
be the class of [math]\displaystyle{ \alpha }[/math] such that [math]\displaystyle{ \alpha(x,y) }[/math] is non-zero only for [math]\displaystyle{ x\le_P y }[/math].
Treating [math]\displaystyle{ \alpha }[/math] as matrix, it is trivial to see that [math]\displaystyle{ I(P) }[/math] is closed under addition and scalar multiplication, that is,
  • if [math]\displaystyle{ \alpha,\beta\in I(P) }[/math] then [math]\displaystyle{ \alpha+\beta\in I(P) }[/math];
  • if [math]\displaystyle{ \alpha\in I(P) }[/math] then [math]\displaystyle{ c\alpha\in I(P) }[/math] for any [math]\displaystyle{ c\in\mathbb{R} }[/math];
where [math]\displaystyle{ \alpha,\beta }[/math] are treated as matrices.
With this spirit, it is natural to define the matrix multiplication in [math]\displaystyle{ I(P) }[/math]. For [math]\displaystyle{ \alpha,\beta\in I(P) }[/math],
[math]\displaystyle{ (\alpha\beta)(x,y)=\sum_{z\in P}\alpha(x,z)\beta(z,y)=\sum_{x\le z\le y}\alpha(x,z)\beta(z,y) }[/math].
The second equation holds because for [math]\displaystyle{ \alpha,\beta\in I(P) }[/math], the term [math]\displaystyle{ \alpha(x,z)\beta(z,y) }[/math] is zero for every [math]\displaystyle{ z }[/math] not satisfying [math]\displaystyle{ x\le z\le y }[/math].
By the transitivity of relation [math]\displaystyle{ \le_P }[/math], it is also easy to prove that [math]\displaystyle{ I(P) }[/math] is closed under matrix multiplication (the detailed proof is left as an exercise). Therefore, [math]\displaystyle{ I(P) }[/math] is closed under addition, scalar multiplication and matrix multiplication, so we have an algebra [math]\displaystyle{ I(P) }[/math], called the incidence algebra, over functions on [math]\displaystyle{ P\times P }[/math].
Zeta function and Möbius function
A special function in [math]\displaystyle{ I(P) }[/math] is the so-called zeta function [math]\displaystyle{ \zeta }[/math], defined as
[math]\displaystyle{ \zeta(x,y)=\begin{cases}1&\text{if }x\le_P y,\\0 &\text{otherwise.}\end{cases} }[/math]
As a matrix (or more accurately, as an element of the incidence algebra), [math]\displaystyle{ \zeta }[/math] is invertible and its inverse, denoted by [math]\displaystyle{ \mu }[/math], is called the Möbius function. More precisely, [math]\displaystyle{ \mu }[/math] is also in the incidence algebra [math]\displaystyle{ I(P) }[/math], and [math]\displaystyle{ \mu\zeta=I }[/math] where [math]\displaystyle{ I }[/math] is the identity matrix (the identity of the incidence algebra [math]\displaystyle{ I(P) }[/math]).

There is an equivalent explicit definition of Möbius function.

Definition (Möbius function)
[math]\displaystyle{ \mu(x,y)=\begin{cases} -\sum_{x\le z\lt y}\mu(x,z)&\text{if }x\lt y,\\ 1&\text{if }x=y,\\ 0&\text{if }x\not\le y. \end{cases} }[/math]

To see the equivalence between this definition and the inverse of the zeta function, we prove the following proposition by directly evaluating [math]\displaystyle{ \mu\zeta }[/math].

Proposition
For any [math]\displaystyle{ x,y\in P }[/math],
[math]\displaystyle{ \sum_{x\le z\le y}\mu(x,z)=\begin{cases}1 &\text{if }x=y,\\ 0 &\text{otherwise.}\end{cases} }[/math]
Proof.

It holds that

[math]\displaystyle{ (\mu\zeta)(x,y)=\sum_{x\le z\le y}\mu(x,z)\zeta(z,y)=\sum_{x\le z\le y}\mu(x,z) }[/math].

On the other hand, [math]\displaystyle{ \mu\zeta=I }[/math], i.e.

[math]\displaystyle{ (\mu\zeta)(x,y)=\begin{cases}1 &\text{if }x=y,\\ 0 &\text{otherwise.}\end{cases} }[/math]

The proposition follows.

[math]\displaystyle{ \square }[/math]

Note that [math]\displaystyle{ \mu(x,y)=\sum_{x\le z\le y}\mu(x,z)-\sum_{x\le z\lt y}\mu(x,z) }[/math], which gives the above inductive definition of Möbius function.

Computing Möbius functions

We consider the simple poset [math]\displaystyle{ P=[n] }[/math], where [math]\displaystyle{ \le }[/math] is the total order. It follows directly from the recursive definition of Möbius function that

[math]\displaystyle{ \mu(i,j)=\begin{cases}1 & \text{if }i=j,\\ -1 & \text{if }i+1=j,\\ 0 & \text{otherwise.} \end{cases} }[/math]
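The recursive definition translates directly into code. The sketch below (our names; it assumes the poset is given by its elements and an order predicate) computes [math]\displaystyle{ \mu }[/math] and reproduces the pattern above for the chain:

```python
from functools import lru_cache

def mobius(elements, leq):
    # mu(x, x) = 1; mu(x, y) = -sum_{x <= z < y} mu(x, z); mu(x, y) = 0 if x !<= y
    @lru_cache(maxsize=None)
    def mu(x, y):
        if x == y:
            return 1
        if not leq(x, y):
            return 0
        return -sum(mu(x, z) for z in elements
                    if leq(x, z) and leq(z, y) and z != y)
    return mu

# the chain [n] with the usual total order
mu = mobius(tuple(range(1, 7)), lambda a, b: a <= b)
assert mu(3, 3) == 1 and mu(3, 4) == -1 and mu(3, 6) == 0
```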

Usually for general posets, it is difficult to directly compute the Möbius function from its definition. We introduce a rule helping us compute the Möbius function by decomposing the poset into posets with simple structures.

Theorem (the product rule)
Let [math]\displaystyle{ P }[/math] and [math]\displaystyle{ Q }[/math] be two finite posets, and [math]\displaystyle{ P\times Q }[/math] be the poset resulted from Cartesian product of [math]\displaystyle{ P }[/math] and [math]\displaystyle{ Q }[/math], where for all [math]\displaystyle{ (x,y), (x',y')\in P\times Q }[/math], [math]\displaystyle{ (x,y)\le (x',y') }[/math] if and only if [math]\displaystyle{ x\le x' }[/math] and [math]\displaystyle{ y\le y' }[/math]. Then
[math]\displaystyle{ \mu_{P\times Q}((x,y),(x',y'))=\mu_P(x,x')\mu_Q(y,y') }[/math].
Proof.

We use the recursive definition

[math]\displaystyle{ \mu(x,y)=\begin{cases} -\sum_{x\le z\lt y}\mu(x,z)&\text{if }x\lt y,\\ 1&\text{if }x=y,\\ 0&\text{if }x\not\le y. \end{cases} }[/math]

to prove the equation in the theorem.

If [math]\displaystyle{ (x,y)=(x',y') }[/math], then [math]\displaystyle{ x=x' }[/math] and [math]\displaystyle{ y=y' }[/math]. It is easy to see that both sides of the equation are 1. If [math]\displaystyle{ (x,y)\not\le(x',y') }[/math], then either [math]\displaystyle{ x\not\le x' }[/math] or [math]\displaystyle{ y\not\le y' }[/math]. It is also easy to see that both sides are 0.

The only remaining case is that [math]\displaystyle{ (x,y)\lt (x',y') }[/math], in which case either [math]\displaystyle{ x\lt x' }[/math] or [math]\displaystyle{ y\lt y' }[/math].

[math]\displaystyle{ \begin{align} \sum_{(x,y)\le (u,v)\le (x',y')}\mu_P(x,u)\mu_Q(y,v) &=\left(\sum_{x\le u\le x'}\mu_P(x,u)\right)\left(\sum_{y\le v\le y'}\mu_Q(y,v)\right)=I(x,x')I(y,y')=0, \end{align} }[/math]

where the last two equations are due to the proposition for [math]\displaystyle{ \mu }[/math]. Thus

[math]\displaystyle{ \mu_P(x,x')\mu_Q(y,y')=-\sum_{(x,y)\le (u,v)\lt (x',y')}\mu_P(x,u)\mu_Q(y,v) }[/math].

By induction, assume that the equation [math]\displaystyle{ \mu_{P\times Q}((x,y),(u,v))=\mu_P(x,u)\mu_Q(y,v) }[/math] is true for all [math]\displaystyle{ (u,v)\lt (x',y') }[/math]. Then

[math]\displaystyle{ \begin{align} \mu_{P\times Q}((x,y),(x',y')) &=-\sum_{(x,y)\le (u,v)\lt (x',y')}\mu_{P\times Q}((x,y),(u,v))\\ &=-\sum_{(x,y)\le (u,v)\lt (x',y')}\mu_P(x,u)\mu_Q(y,v)\\ &=\mu_P(x,x')\mu_Q(y,y'), \end{align} }[/math]

which completes the proof.

[math]\displaystyle{ \square }[/math]
Poset of subsets
Consider the poset defined by all subsets of a finite universe [math]\displaystyle{ U }[/math], that is [math]\displaystyle{ P=2^U }[/math], and for [math]\displaystyle{ S,T\subseteq U }[/math], [math]\displaystyle{ S\le_P T }[/math] if and only if [math]\displaystyle{ S\subseteq T }[/math].
Möbius function for subsets
The Möbius function for the above defined poset [math]\displaystyle{ P }[/math] is that for [math]\displaystyle{ S,T\subseteq U }[/math],
[math]\displaystyle{ \mu(S,T)= \begin{cases} (-1)^{|T|-|S|} & \text{if }S\subseteq T,\\ 0 &\text{otherwise.} \end{cases} }[/math]
Proof.

We can equivalently represent each [math]\displaystyle{ S\subseteq U }[/math] by a boolean string [math]\displaystyle{ S\in\{0,1\}^U }[/math], where [math]\displaystyle{ S(x)=1 }[/math] if and only if [math]\displaystyle{ x\in S }[/math].

For each element [math]\displaystyle{ x\in U }[/math], we can define a poset [math]\displaystyle{ P_x=\{0, 1\} }[/math] with [math]\displaystyle{ 0\le 1 }[/math]. By definition of Möbius function, the Möbius function of this elementary poset is given by [math]\displaystyle{ \mu_x(0,0)=\mu_x(1,1)=1 }[/math], [math]\displaystyle{ \mu_x(0,1)=-1 }[/math] and [math]\displaystyle{ \mu_x(1,0)=0 }[/math].

The poset [math]\displaystyle{ P }[/math] of all subsets of [math]\displaystyle{ U }[/math] is the Cartesian product of all [math]\displaystyle{ P_x }[/math], [math]\displaystyle{ x\in U }[/math]. By the product rule,

[math]\displaystyle{ \mu(S,T)=\prod_{x\in U}\mu_x(S(x), T(x))=\prod_{x\in S\atop x\in T}1\prod_{x\not\in S\atop x\not\in T}1\prod_{x\in S\atop x\not\in T}0\prod_{x\not\in S\atop x\in T}(-1)=\begin{cases} (-1)^{|T|-|S|} & \text{if }S\subseteq T,\\ 0 &\text{otherwise.} \end{cases} }[/math]
[math]\displaystyle{ \square }[/math]
Note that the poset [math]\displaystyle{ P }[/math] is actually the Boolean algebra of rank [math]\displaystyle{ |U| }[/math]. The proof relies only on the fact that the poset is a Boolean algebra, so the theorem holds for all Boolean algebra posets.
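The closed form [math]\displaystyle{ (-1)^{|T|-|S|} }[/math] can also be confirmed by evaluating the recursive definition on the subset poset directly (a Python sketch; `mu_subsets` is our name):

```python
from itertools import combinations

def mu_subsets(S, T):
    # recursive definition of the Mobius function on the subset poset
    if not S <= T:
        return 0
    if S == T:
        return 1
    # -sum over all Z with S subseteq Z strictly below T
    return -sum(mu_subsets(S, frozenset(Z))
                for k in range(len(S), len(T))
                for Z in combinations(T, k) if S <= frozenset(Z))

U = frozenset(range(4))
for k in range(len(U) + 1):
    for S in map(frozenset, combinations(U, k)):
        assert mu_subsets(S, U) == (-1) ** (len(U) - len(S))
```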
Posets of divisors
Consider the poset defined by all divisors of a positive integer [math]\displaystyle{ n }[/math], that is [math]\displaystyle{ P=\{a\gt 0\mid a|n\} }[/math], and for [math]\displaystyle{ a,b\in P }[/math], [math]\displaystyle{ a\le_P b }[/math] if and only if [math]\displaystyle{ a|b\, }[/math].
Möbius function for divisors
The Möbius function for the above defined poset [math]\displaystyle{ P }[/math] is that for [math]\displaystyle{ a,b\gt 0 }[/math] that [math]\displaystyle{ a|n }[/math] and [math]\displaystyle{ b|n }[/math],
[math]\displaystyle{ \mu(a,b)= \begin{cases} (-1)^{r} & \text{if }\frac{b}{a}\text{ is the product of }r\text{ distinct primes},\\ 0 &\text{otherwise, i.e. if }a\not|b\text{ or }\frac{b}{a}\text{ is not squarefree.} \end{cases} }[/math]
Proof.

Denote [math]\displaystyle{ n=p_1^{n_1}p_2^{n_2}\cdots p_k^{n_k} }[/math]. Represent [math]\displaystyle{ n }[/math] by the tuple [math]\displaystyle{ (n_1,n_2,\ldots,n_k) }[/math]. Every [math]\displaystyle{ a\in P }[/math] corresponds in this way to a tuple [math]\displaystyle{ (a_1,a_2,\ldots,a_k) }[/math] with [math]\displaystyle{ 0\le a_i\le n_i }[/math] for all [math]\displaystyle{ 1\le i\le k }[/math].

Let [math]\displaystyle{ P_i=\{0,1,\ldots,n_i\} }[/math] be the poset with [math]\displaystyle{ \le }[/math] being the total order. The poset [math]\displaystyle{ P }[/math] of divisors of [math]\displaystyle{ n }[/math] is thus isomorphic to the poset constructed by the Cartesian product of all [math]\displaystyle{ P_i }[/math], [math]\displaystyle{ 1\le i\le k }[/math]. Then

[math]\displaystyle{ \begin{align} \mu(a,b) &=\prod_{1\le i\le k}\mu(a_i,b_i)=\prod_{1\le i\le k\atop a_i=b_i}1\prod_{1\le i\le k\atop b_i-a_i=1}(-1)\prod_{1\le i\le k\atop b_i-a_i\not\in\{0,1\}}0 =\begin{cases} (-1)^{\sum_{i}(b_i-a_i)} & \text{if all }b_i-a_i\in\{0,1\},\\ 0 &\text{otherwise.} \end{cases}\\ &=\begin{cases} (-1)^{r} & \text{if }\frac{b}{a}\text{ is the product of }r\text{ distinct primes},\\ 0 &\text{otherwise.} \end{cases} \end{align} }[/math]
[math]\displaystyle{ \square }[/math]

Principle of Möbius inversion

We now introduce the famous Möbius inversion formula.

Möbius inversion formula
Let [math]\displaystyle{ P }[/math] be a finite poset and [math]\displaystyle{ \mu }[/math] its Möbius function. Let [math]\displaystyle{ f,g:P\rightarrow \mathbb{R} }[/math]. Then
[math]\displaystyle{ \forall x\in P,\,\, g(x)=\sum_{y\le x} f(y) }[/math],
if and only if
[math]\displaystyle{ \forall x\in P,\,\, f(x)=\sum_{y\le x}g(y)\mu(y,x) }[/math].

The functions [math]\displaystyle{ f,g:P\rightarrow\mathbb{R} }[/math] can be treated as row vectors indexed by [math]\displaystyle{ P }[/math]. Evaluate the matrix multiplications [math]\displaystyle{ f\zeta }[/math] and [math]\displaystyle{ g\mu }[/math] as follows:

[math]\displaystyle{ (f\zeta)(x)=\sum_{y\in P}f(y)\zeta(y,x)=\sum_{y\le x}f(y) }[/math],

and

[math]\displaystyle{ (g\mu)(x)=\sum_{y\in P}g(y)\mu(y,x)=\sum_{y\le x}g(y)\mu(y,x) }[/math].

The Möbius inversion formula is nothing but the following statement

[math]\displaystyle{ f\zeta=g\Leftrightarrow f=g\mu }[/math],

which is trivially true due to [math]\displaystyle{ \mu\zeta=I }[/math] by basic linear algebra.
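The inversion formula can be demonstrated concretely on the subset poset, where [math]\displaystyle{ \mu(S,T)=(-1)^{|T|-|S|} }[/math] (a Python sketch with an arbitrary [math]\displaystyle{ f }[/math] of our choosing):

```python
from itertools import combinations

U = frozenset(range(3))
P = [frozenset(c) for k in range(len(U) + 1) for c in combinations(U, k)]

# an arbitrary function f on the poset; g(x) = sum_{y <= x} f(y)
f = {S: 3 * len(S) + sum(S) for S in P}
g = {T: sum(f[S] for S in P if S <= T) for T in P}

# Mobius inversion: f(x) = sum_{y <= x} g(y) * mu(y, x), mu(S, T) = (-1)^{|T|-|S|}
f_recovered = {T: sum(g[S] * (-1) ** (len(T) - len(S)) for S in P if S <= T)
               for T in P}
assert f_recovered == f
```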

The following dual form of the inversion formula is also useful.

Möbius inversion formula, dual form
Let [math]\displaystyle{ P }[/math] be a finite poset and [math]\displaystyle{ \mu }[/math] its Möbius function. Let [math]\displaystyle{ f,g:P\rightarrow \mathbb{R} }[/math]. Then
[math]\displaystyle{ \forall x\in P, \,\, g(x)=\sum_{y{\color{red}\ge} x} f(y) }[/math],
if and only if
[math]\displaystyle{ \forall x\in P, \,\, f(x)=\sum_{y{\color{red}\ge} x}\mu(x,y)g(y) }[/math].

To prove the dual form, we treat [math]\displaystyle{ f,g }[/math] as column vectors and evaluate the matrix multiplications on the left:

[math]\displaystyle{ \zeta f=g\Leftrightarrow f=\mu g }[/math].
Principle of Inclusion-Exclusion
Let [math]\displaystyle{ A_1,A_2,\ldots,A_n\subseteq U }[/math]. For any [math]\displaystyle{ J\subseteq\{1,2,\ldots,n\} }[/math],
  • let [math]\displaystyle{ f(J) }[/math] be the number of elements that belong to all the sets [math]\displaystyle{ A_i, i\in J }[/math] and to no others, i.e.
[math]\displaystyle{ f(J)=\left|\left(\bigcap_{i\in J}A_i\right)\setminus\left(\bigcup_{i\not\in J}A_i\right)\right| }[/math];
  • let [math]\displaystyle{ g(J)=\left|\bigcap_{i\in J}A_i\right| }[/math].
For any [math]\displaystyle{ J\subseteq\{1,2,\ldots,n\} }[/math], the following relation holds for the above defined [math]\displaystyle{ f }[/math] and [math]\displaystyle{ g }[/math]:
[math]\displaystyle{ g(J)=\sum_{I\supseteq J}f(I) }[/math].
Applying the dual form of the Möbius inversion formula, we have that for any [math]\displaystyle{ J\subseteq\{1,2,\ldots,n\} }[/math],
[math]\displaystyle{ f(J)=\sum_{I\supseteq J}\mu(J,I)g(I)=\sum_{I\supseteq J}\mu(J,I)\left|\bigcap_{i\in I}A_i\right| }[/math],
where the Möbius function is for the poset of all subsets of [math]\displaystyle{ \{1,2,\ldots,n\} }[/math], ordered by [math]\displaystyle{ \subseteq }[/math], thus it holds that [math]\displaystyle{ \mu(J,I)=(-1)^{|I|-|J|}\, }[/math] for [math]\displaystyle{ J\subseteq I }[/math]. Therefore,
[math]\displaystyle{ f(J)=\sum_{I\supseteq J}(-1)^{|I|-|J|}\left|\bigcap_{i\in I}A_i\right| }[/math].
We have a formula for the number of elements with exactly those properties [math]\displaystyle{ A_i, i\in J }[/math] for any [math]\displaystyle{ J\subseteq\{1,2,\ldots,n\} }[/math]. For the special case that [math]\displaystyle{ J=\emptyset }[/math], [math]\displaystyle{ f(\emptyset) }[/math] is the number of elements satisfying no property of [math]\displaystyle{ A_1,A_2,\ldots,A_n }[/math], and
[math]\displaystyle{ f(\emptyset)=\left|U\setminus\bigcup_iA_i\right|=\sum_{I\subseteq \{1,\ldots,n\}}(-1)^{|I|}\left|\bigcap_{i\in I}A_i\right| }[/math]
which gives precisely the Principle of Inclusion-Exclusion.
Möbius inversion formula for number theory
The number-theoretical Möbius inversion formula is stated as follows: let [math]\displaystyle{ N }[/math] be a positive integer. Then
[math]\displaystyle{ g(n)=\sum_{d|n}f(d)\, }[/math] for all [math]\displaystyle{ n|N }[/math]
if and only if
[math]\displaystyle{ f(n)=\sum_{d|n}g(d)\mu\left(\frac{n}{d}\right)\, }[/math] for all [math]\displaystyle{ n|N }[/math],
where [math]\displaystyle{ \mu }[/math] is the number-theoretical Möbius function, defined as
[math]\displaystyle{ \mu(n)=\begin{cases}1 & \text{if }n\text{ is product of an even number of distinct primes,}\\ -1 &\text{if }n\text{ is product of an odd number of distinct primes,}\\ 0 &\text{otherwise.}\end{cases} }[/math]
The number-theoretical Möbius inversion formula is just a special case of the Möbius inversion formula for posets, when the poset is the set of divisors of [math]\displaystyle{ N }[/math], and for any [math]\displaystyle{ a,b\in P }[/math], [math]\displaystyle{ a\le_P b }[/math] if and only if [math]\displaystyle{ a|b }[/math].
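As an illustration, combining the classical identity [math]\displaystyle{ n=\sum_{d|n}\phi(d) }[/math] with the number-theoretical inversion formula yields [math]\displaystyle{ \phi(n)=\sum_{d|n}\mu(n/d)\,d }[/math], which the following Python sketch verifies (names are ours):

```python
from math import gcd

def mu(n):
    # number-theoretical Mobius function, by trial division
    result, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0          # square factor: mu = 0
            result = -result
        p += 1
    if m > 1:
        result = -result          # one remaining prime factor
    return result

def phi(n):
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

for n in range(1, 60):
    divisors = [d for d in range(1, n + 1) if n % d == 0]
    assert n == sum(phi(d) for d in divisors)               # g(n) = n
    assert phi(n) == sum(mu(n // d) * d for d in divisors)  # inverted form
```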

References

  • Stanley, Enumerative Combinatorics, Volume 1, Chapter 2.
  • van Lint and Wilson, A Course in Combinatorics, Chapters 10 and 25.