Randomized Algorithms (Fall 2011)/Markov Chains


The Markov property

A stochastic process [math]\displaystyle{ \{X_t\mid t\in T\} }[/math] is a collection of random variables. The index [math]\displaystyle{ t }[/math] is often called time, as the process represents the value of a random variable changing over time. Let [math]\displaystyle{ \mathcal{S} }[/math] be the set of values assumed by the random variables [math]\displaystyle{ X_t }[/math]. We call each element of [math]\displaystyle{ \mathcal{S} }[/math] a state, as [math]\displaystyle{ X_t }[/math] represents the state of the process at time [math]\displaystyle{ t }[/math].

The model of stochastic processes can be very general. In this class, we only consider the stochastic processes with the following properties:

discrete time
The index set [math]\displaystyle{ T }[/math] is countable. Specifically, we assume that [math]\displaystyle{ T=\{0,1,2,\ldots\} }[/math] and the process is [math]\displaystyle{ X_0,X_1,X_2,\ldots }[/math]
discrete space
The state space [math]\displaystyle{ \mathcal{S} }[/math] is countable. We are especially interested in the case that [math]\displaystyle{ \mathcal{S} }[/math] is finite, in which case the process is called a finite process.

The next property is about the dependency structure among random variables. The simplest dependency structure for [math]\displaystyle{ X_0,X_1,\ldots }[/math] is no dependency at all, that is, independence. We consider the next simplest dependency structure called the Markov property.

Definition (the Markov property)
A process [math]\displaystyle{ X_0,X_1,\ldots }[/math] satisfies the Markov property if
[math]\displaystyle{ \Pr[X_{n+1}=x_{n+1}\mid X_{0}=x_{0}, X_{1}=x_{1},\ldots,X_{n}=x_{n}]=\Pr[X_{n+1}=x_{n+1}\mid X_{n}=x_{n}] }[/math]
for all [math]\displaystyle{ n }[/math] and all [math]\displaystyle{ x_0,\ldots,x_{n+1}\in \mathcal{S} }[/math].

Informally, the Markov property means: "conditioning on the present, the future does not depend on the past." Hence, the Markov property is also called the memoryless property.

A stochastic process [math]\displaystyle{ X_0,X_1,\ldots }[/math] of discrete time and discrete space is a Markov chain if it has the Markov property.

Transition matrix

Consider a Markov chain with a finite state space [math]\displaystyle{ \mathcal{S}=[N] }[/math], and let [math]\displaystyle{ P^{(t+1)}_{i,j}=\Pr[X_{t+1}=j\mid X_t=i] }[/math]. This gives us a transition matrix [math]\displaystyle{ P^{(t+1)} }[/math] at time [math]\displaystyle{ t }[/math]. The transition matrix is an [math]\displaystyle{ N\times N }[/math] matrix of nonnegative entries such that every row of [math]\displaystyle{ P^{(t+1)} }[/math] sums to 1, since

[math]\displaystyle{ \begin{align}\sum_{j}P^{(t+1)}_{i,j}=\sum_{j}\Pr[X_{t+1}=j\mid X_t=i]=1\end{align} }[/math].

In linear algebra, matrices of this type are called stochastic matrices.

Let [math]\displaystyle{ \pi^{(t)} }[/math] be the distribution of the chain at time [math]\displaystyle{ t }[/math], that is, [math]\displaystyle{ \begin{align}\pi^{(t)}_i=\Pr[X_t=i]\end{align} }[/math]. For a finite chain, [math]\displaystyle{ \pi^{(t)} }[/math] is a vector of [math]\displaystyle{ N }[/math] nonnegative entries such that [math]\displaystyle{ \begin{align}\sum_{i}\pi^{(t)}_i=1\end{align} }[/math]. In linear algebra, vectors of this type are called stochastic vectors. Then, it holds that

[math]\displaystyle{ \begin{align}\pi^{(t+1)}=\pi^{(t)}P^{(t+1)}\end{align} }[/math].

To see this, we apply the law of total probability,

[math]\displaystyle{ \begin{align} \pi^{(t+1)}_j &= \Pr[X_{t+1}=j]\\ &= \sum_{i}\Pr[X_{t+1}=j\mid X_t=i]\Pr[X_t=i]\\ &=\sum_{i}\pi^{(t)}_iP^{(t+1)}_{i,j}\\ &=(\pi^{(t)}P^{(t+1)})_j. \end{align} }[/math]

Therefore, a finite Markov chain [math]\displaystyle{ X_0,X_1,\ldots }[/math] is specified by an initial distribution [math]\displaystyle{ \pi^{(0)} }[/math] and a sequence of transition matrices [math]\displaystyle{ P^{(1)},P^{(2)},\ldots }[/math]. The transitions of the chain can then be described by a sequence of matrix products:

[math]\displaystyle{ \pi^{(0)}\stackrel{P^{(1)}}{\longrightarrow}\pi^{(1)}\stackrel{P^{(2)}}{\longrightarrow}\pi^{(2)}\stackrel{P^{(3)}}{\longrightarrow}\cdots\cdots\pi^{(t)}\stackrel{P^{(t+1)}}{\longrightarrow}\pi^{(t+1)}\cdots }[/math]

A Markov chain is said to be homogeneous if the transitions depend only on the current state but not on the time, that is,

[math]\displaystyle{ P^{(t)}_{i,j}=P_{i,j} }[/math] for all [math]\displaystyle{ t }[/math].

The transitions of a homogeneous Markov chain are given by a single matrix [math]\displaystyle{ P }[/math]. Suppose that [math]\displaystyle{ \pi^{(0)} }[/math] is the initial distribution. At each time [math]\displaystyle{ t }[/math],

[math]\displaystyle{ \begin{align}\pi^{(t+1)}=\pi^{(t)}P\end{align} }[/math].

Expanding this recursion, we have

[math]\displaystyle{ \begin{align}\pi^{(n)}=\pi^{(0)}P^n\end{align} }[/math].

From now on, we restrict ourselves to homogeneous Markov chains, and the term "Markov chain" means "homogeneous Markov chain" unless stated otherwise.

Definition (finite Markov chain)
Let [math]\displaystyle{ P }[/math] be an [math]\displaystyle{ N\times N }[/math] stochastic matrix. A process [math]\displaystyle{ X_0,X_1,\ldots }[/math] with finite state space [math]\displaystyle{ \mathcal{S}=[N] }[/math] is said to be a (homogeneous) Markov chain with transition matrix [math]\displaystyle{ P }[/math] if for all [math]\displaystyle{ n\ge0, }[/math] all [math]\displaystyle{ i,j\in[N] }[/math] and all [math]\displaystyle{ i_0,\ldots,i_{n-1}\in[N] }[/math] we have
[math]\displaystyle{ \begin{align} \Pr[X_{n+1}=j\mid X_0=i_0,\ldots,X_{n-1}=i_{n-1},X_n=i] &=\Pr[X_{n+1}=j\mid X_n=i]\\ &=P_{i,j}. \end{align} }[/math]

To describe a Markov chain, we only need to specify:

  • initial distribution [math]\displaystyle{ \pi^{(0)} }[/math];
  • transition matrix [math]\displaystyle{ P }[/math].

Then the transitions can be simulated by matrix products:

[math]\displaystyle{ \pi^{(0)}\stackrel{P}{\longrightarrow}\pi^{(1)}\stackrel{P}{\longrightarrow}\pi^{(2)}\stackrel{P}{\longrightarrow}\cdots\cdots\pi^{(t)}\stackrel{P}{\longrightarrow}\pi^{(t+1)}\stackrel{P}{\longrightarrow}\cdots }[/math]

The distribution of the chain at time [math]\displaystyle{ n }[/math] can be computed by [math]\displaystyle{ \pi^{(n)}=\pi^{(0)}P^n }[/math].
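The evolution [math]\displaystyle{ \pi^{(n)}=\pi^{(0)}P^n }[/math] is straightforward to compute numerically. Below is a minimal sketch, assuming Python with NumPy; the transition matrix and initial distribution are illustrative placeholders, not values taken from the text.

```python
# A minimal sketch of how the distribution of a finite Markov chain evolves
# by repeated multiplication with the transition matrix P.
import numpy as np

P = np.array([[0.0, 1.0, 0.0],
              [1/3, 0.0, 2/3],
              [1/3, 1/3, 1/3]])    # a stochastic matrix: each row sums to 1

pi0 = np.array([1.0, 0.0, 0.0])    # start deterministically in state 0

def distribution_at(pi0, P, n):
    """Return pi^(n) = pi^(0) P^n by iterating pi <- pi P."""
    pi = pi0.copy()
    for _ in range(n):
        pi = pi @ P
    return pi

print(distribution_at(pi0, P, 20))  # approaches the stationary distribution
```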

Transition graph

Another way to picture a Markov chain is by its transition graph. A weighted directed graph [math]\displaystyle{ G(V,E,w) }[/math] is said to be a transition graph of a finite Markov chain with transition matrix [math]\displaystyle{ P }[/math] if:

  • [math]\displaystyle{ V=\mathcal{S} }[/math], i.e. each node of the transition graph corresponds to a state of the Markov chain;
  • for any [math]\displaystyle{ i,j\in V }[/math], [math]\displaystyle{ (i,j)\in E }[/math] if and only if [math]\displaystyle{ P_{i,j}\gt 0 }[/math], and the weight [math]\displaystyle{ w(i,j)=P_{i,j} }[/math].

A transition graph defines a natural random walk: at each time step, from the current node, the walk moves along an outgoing edge with probability equal to the weight of that edge. This is a well-defined random walk, since [math]\displaystyle{ \begin{align}\sum_j P_{i,j}=1\end{align} }[/math] for every [math]\displaystyle{ i }[/math]. Therefore, a Markov chain is equivalent to a random walk, and these two terms are often used interchangeably.
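A small simulation sketch of this random walk, assuming only Python's standard library; the example transition matrix is an illustrative placeholder.

```python
# Simulate the random walk defined by a transition graph: from the current
# state, move along an outgoing edge with probability equal to its weight.
import random

P = [[0.0, 1.0, 0.0],
     [1/3, 0.0, 2/3],
     [1/3, 1/3, 1/3]]           # example edge weights / transition matrix

def random_walk(P, start, steps):
    """Simulate the walk for `steps` steps and return the visited states."""
    states = range(len(P))
    path = [start]
    current = start
    for _ in range(steps):
        current = random.choices(states, weights=P[current])[0]
        path.append(current)
    return path

print(random_walk(P, start=0, steps=10))
```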

Stationary distributions

Suppose [math]\displaystyle{ \pi }[/math] is a distribution over the state space [math]\displaystyle{ \mathcal{S} }[/math] such that, if the Markov chain starts with initial distribution [math]\displaystyle{ \pi^{(0)}=\pi }[/math], then after a transition, the distribution of the chain is still [math]\displaystyle{ \pi^{(1)}=\pi }[/math]. Then the chain will stay in the distribution [math]\displaystyle{ \pi }[/math] forever:

[math]\displaystyle{ \pi\stackrel{P}{\longrightarrow}\pi\stackrel{P}{\longrightarrow}\pi\stackrel{P}{\longrightarrow}\cdots\cdots }[/math]

Such [math]\displaystyle{ \pi }[/math] is called a stationary distribution.

Definition (stationary distribution)
A stationary distribution of a finite Markov chain with transition matrix [math]\displaystyle{ P }[/math] is a probability distribution [math]\displaystyle{ \pi }[/math] such that
[math]\displaystyle{ \begin{align}\pi P=\pi\end{align} }[/math].
Example
An [math]\displaystyle{ N\times N }[/math] matrix is called doubly stochastic if every row sums to 1 and every column sums to 1. If the transition matrix [math]\displaystyle{ P }[/math] of the chain is doubly stochastic, then the uniform distribution, [math]\displaystyle{ \pi_i=1/N }[/math] for all [math]\displaystyle{ i }[/math], is a stationary distribution. (Check by yourself.)
If the transition matrix [math]\displaystyle{ P }[/math] is symmetric, the uniform distribution is a stationary distribution. This is because a symmetric stochastic matrix is doubly stochastic. (Check by yourself.)
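The claim about doubly stochastic matrices is easy to check numerically. A small sketch, assuming NumPy and an illustrative doubly stochastic matrix:

```python
# Check: if P is doubly stochastic, the uniform distribution is stationary,
# i.e. pi P = pi for pi = (1/N, ..., 1/N).
import numpy as np

P = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])   # rows and columns all sum to 1

N = P.shape[0]
pi = np.full(N, 1.0 / N)          # uniform distribution

print(np.allclose(pi @ P, pi))    # True: the uniform distribution is stationary
```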

Every finite Markov chain has a stationary distribution. This is a consequence of the Perron–Frobenius theorem in linear algebra.

For some Markov chains, no matter what the initial distribution is, after running the chain for a while, the distribution of the chain approaches the stationary distribution. For example, consider the transition matrix:

[math]\displaystyle{ P=\begin{bmatrix} 0 & 1 & 0\\ \frac{1}{3} & 0 & \frac{2}{3}\\ \frac{1}{3} & \frac{1}{3} & \frac{1}{3} \end{bmatrix}. }[/math]

Running the chain for a while, we have:

[math]\displaystyle{ P^5\approx\begin{bmatrix} 0.2469 & 0.4074 & 0.3457\\ 0.2510 & 0.3621 & 0.3868\\ 0.2510 & 0.3663 & 0.3827 \end{bmatrix}, P^{10}\approx\begin{bmatrix} 0.2500 & 0.3747 & 0.3752\\ 0.2500 & 0.3751 & 0.3749\\ 0.2500 & 0.3751 & 0.3749 \end{bmatrix}, P^{20}\approx\begin{bmatrix} 0.2500 & 0.3750 & 0.3750\\ 0.2500 & 0.3750 & 0.3750\\ 0.2500 & 0.3750 & 0.3750 \end{bmatrix}. }[/math]

Therefore, no matter what the initial distribution [math]\displaystyle{ \pi^{(0)} }[/math] is, after 20 steps, [math]\displaystyle{ \pi^{(0)}P^{20} }[/math] is very close to the distribution [math]\displaystyle{ (0.25,0.375,0.375) }[/math], which is a stationary distribution for [math]\displaystyle{ P }[/math]. So the Markov chain converges to the same stationary distribution no matter what the initial distribution is.
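The convergence of the matrix powers above can be reproduced in a few lines; a sketch assuming NumPy:

```python
# Raising the example transition matrix P to higher powers makes all rows
# approach the same stationary distribution (0.25, 0.375, 0.375).
import numpy as np

P = np.array([[0.0, 1.0, 0.0],
              [1/3, 0.0, 2/3],
              [1/3, 1/3, 1/3]])

for n in (5, 10, 20):
    print(f"P^{n} =\n{np.linalg.matrix_power(P, n).round(4)}")
```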

However, this is not always true. For example, for the Markov chain with the following transition matrix:

[math]\displaystyle{ P=\begin{bmatrix} \frac{1}{2} & \frac{1}{2} & 0 & 0\\ \frac{1}{3} & \frac{2}{3} & 0 & 0 \\ 0 & 0 & \frac{3}{4} & \frac{1}{4}\\ 0 & 0 & \frac{1}{4} & \frac{3}{4} \end{bmatrix}. }[/math]

Raising it to the 20th power gives

[math]\displaystyle{ P^{20}\approx \begin{bmatrix} 0.4 & 0.6 & 0 & 0\\ 0.4 & 0.6 & 0 & 0\\ 0 & 0 & 0.5 & 0.5\\ 0 & 0 & 0.5 & 0.5 \end{bmatrix}. }[/math]

So the chain converges, but not always to the same stationary distribution. Depending on the initial distribution, the chain could converge to any distribution that is a convex combination of [math]\displaystyle{ (0.4, 0.6, 0, 0) }[/math] and [math]\displaystyle{ (0, 0, 0.5, 0.5) }[/math]. We observe that this is because the original chain [math]\displaystyle{ P }[/math] can be broken into two disjoint Markov chains, each with its own stationary distribution. We say that the chain is reducible.

Another example is as follows:

[math]\displaystyle{ P=\begin{bmatrix} 0 & 1\\ 1& 0 \end{bmatrix}. }[/math]

The chain oscillates between the two states. Then

[math]\displaystyle{ P^t=\begin{bmatrix} 0 & 1\\ 1& 0 \end{bmatrix} }[/math] for any odd [math]\displaystyle{ t }[/math], and
[math]\displaystyle{ P^t=\begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix} }[/math] for any even [math]\displaystyle{ t }[/math].

So the chain does not converge. We say that the chain is periodic.

We will see that for finite Markov chains, reducibility and periodicity are the only two ways in which a Markov chain can fail to converge to a unique stationary distribution.

Irreducibility and aperiodicity

Definition (irreducibility)
State [math]\displaystyle{ j }[/math] is accessible from state [math]\displaystyle{ i }[/math] if it is possible for the chain to visit state [math]\displaystyle{ j }[/math] when it starts in state [math]\displaystyle{ i }[/math], or, in other words,
[math]\displaystyle{ \begin{align}P^n(i,j)\gt 0\end{align} }[/math]
for some integer [math]\displaystyle{ n\ge 0 }[/math]. State [math]\displaystyle{ i }[/math] communicates with state [math]\displaystyle{ j }[/math] if [math]\displaystyle{ j }[/math] is accessible from [math]\displaystyle{ i }[/math] and [math]\displaystyle{ i }[/math] is accessible from [math]\displaystyle{ j }[/math].
We say that the Markov chain is irreducible if all pairs of states communicate.

It is clearer to interpret these concepts in terms of the transition graph:

  • [math]\displaystyle{ j }[/math] is accessible from [math]\displaystyle{ i }[/math] means that [math]\displaystyle{ j }[/math] is reachable from [math]\displaystyle{ i }[/math] in the transition graph, i.e. there is a directed path from [math]\displaystyle{ i }[/math] to [math]\displaystyle{ j }[/math].
  • [math]\displaystyle{ i }[/math] communicates with [math]\displaystyle{ j }[/math] means that [math]\displaystyle{ i }[/math] and [math]\displaystyle{ j }[/math] are strongly connected in the transition graph.
  • A finite Markov chain is irreducible if and only if its transition graph is strongly connected.

It is easy to see that communicating is an equivalence relation; that is, it is reflexive, symmetric, and transitive. Thus, the communicating relation partitions the state space into disjoint equivalence classes, called communicating classes. For a finite Markov chain, communicating classes correspond to the strongly connected components of the transition graph. It is possible for the chain to move from one communicating class to another, but in that case it is impossible to return to the original class.
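Since irreducibility of a finite chain is exactly strong connectivity of its transition graph, it can be checked with two graph searches. A minimal sketch, assuming only Python's standard library, tested on the reducible four-state example above:

```python
# A finite chain is irreducible iff its transition graph is strongly connected:
# every state is reachable from state 0, and state 0 is reachable from every state.
from collections import deque

def reachable(adj, source):
    """Set of nodes reachable from `source` in the directed graph `adj`."""
    seen = {source}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

def is_irreducible(P):
    n = len(P)
    adj = [[j for j in range(n) if P[i][j] > 0] for i in range(n)]    # edges i -> j
    radj = [[j for j in range(n) if P[j][i] > 0] for i in range(n)]   # reversed edges
    return len(reachable(adj, 0)) == n and len(reachable(radj, 0)) == n

# The reducible 4-state example above has two disconnected blocks:
P = [[1/2, 1/2, 0, 0], [1/3, 2/3, 0, 0], [0, 0, 3/4, 1/4], [0, 0, 1/4, 3/4]]
print(is_irreducible(P))   # False
```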


Definition (aperiodicity)
The period of a state [math]\displaystyle{ i }[/math] is the greatest common divisor (gcd)
[math]\displaystyle{ \begin{align}d_i=\gcd\{n\mid (P^n)_{i,i}\gt 0\}\end{align} }[/math].
A state is aperiodic if its period is 1. A Markov chain is aperiodic if all its states are aperiodic.

For example, suppose that the period of state [math]\displaystyle{ i }[/math] is [math]\displaystyle{ d_i=3 }[/math]. Then, starting from state [math]\displaystyle{ i }[/math],

[math]\displaystyle{ i,\bigcirc,\bigcirc,\square,\bigcirc,\bigcirc,\square,\bigcirc,\bigcirc,\square,\bigcirc,\bigcirc,\square,\cdots\cdots }[/math]

the chain can possibly be back at [math]\displaystyle{ i }[/math] only at the time steps marked by squares.

In the transition graph of a finite Markov chain, [math]\displaystyle{ (P^n)_{i,i}\gt 0 }[/math] is equivalent to saying that [math]\displaystyle{ i }[/math] is on a cycle of length [math]\displaystyle{ n }[/math]. The period of a state [math]\displaystyle{ i }[/math] is the greatest common divisor of the lengths of the cycles passing through [math]\displaystyle{ i }[/math].
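The period of each state can be computed from the diagonal entries of the matrix powers. A sketch assuming NumPy; the cutoff max_n on the lengths considered is a heuristic of this sketch, not part of the definition:

```python
# Period of state i: gcd of all return lengths n <= max_n with (P^n)_{i,i} > 0.
import numpy as np
from math import gcd
from functools import reduce

def periods(P, max_n=50):
    n_states = P.shape[0]
    returns = [[] for _ in range(n_states)]
    Q = np.eye(n_states)
    for n in range(1, max_n + 1):
        Q = Q @ P                      # Q = P^n
        for i in range(n_states):
            if Q[i, i] > 0:
                returns[i].append(n)   # a cycle of length n passes through i
    return [reduce(gcd, rs, 0) for rs in returns]

# The two-state oscillating chain above has period 2 in both states:
P = np.array([[0.0, 1.0], [1.0, 0.0]])
print(periods(P))   # [2, 2]
```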

The next theorem shows that period is in fact a class property.

Theorem
If the states [math]\displaystyle{ i }[/math] and [math]\displaystyle{ j }[/math] communicate, then [math]\displaystyle{ d_i=d_j }[/math].
Proof.
For communicating [math]\displaystyle{ i }[/math] and [math]\displaystyle{ j }[/math], there is a path [math]\displaystyle{ P_1 }[/math] from [math]\displaystyle{ i }[/math] to [math]\displaystyle{ j }[/math] of length [math]\displaystyle{ n_1 }[/math], and there is a path [math]\displaystyle{ P_2 }[/math] from [math]\displaystyle{ j }[/math] to [math]\displaystyle{ i }[/math] of length [math]\displaystyle{ n_2 }[/math]. Then [math]\displaystyle{ P_1P_2 }[/math] gives a cycle starting at [math]\displaystyle{ i }[/math] of length [math]\displaystyle{ n_1+n_2 }[/math], and

for any cycle [math]\displaystyle{ C }[/math] starting at [math]\displaystyle{ j }[/math] of length [math]\displaystyle{ n }[/math], [math]\displaystyle{ P_1CP_2 }[/math] gives a cycle starting at [math]\displaystyle{ i }[/math] of length [math]\displaystyle{ n_1+n_2+n }[/math]. Since the period of [math]\displaystyle{ i }[/math] is [math]\displaystyle{ d_i }[/math], both [math]\displaystyle{ (n_1+n_2) }[/math] and [math]\displaystyle{ (n_1+n_2+n) }[/math] are divisible by [math]\displaystyle{ d_i }[/math]. Subtracting the two, [math]\displaystyle{ n }[/math] is divisible by [math]\displaystyle{ d_i }[/math]. Note that this holds for an arbitrary cycle [math]\displaystyle{ C }[/math] starting at [math]\displaystyle{ j }[/math], so [math]\displaystyle{ d_i }[/math] is a common divisor of all [math]\displaystyle{ n }[/math] such that [math]\displaystyle{ P^n_{j,j}\gt 0 }[/math]. Since [math]\displaystyle{ d_j }[/math] is defined to be the greatest common divisor of the same set of [math]\displaystyle{ n }[/math], it holds that [math]\displaystyle{ d_j\ge d_i }[/math]. Interchanging the roles of [math]\displaystyle{ i }[/math] and [math]\displaystyle{ j }[/math], we can show that [math]\displaystyle{ d_i\ge d_j }[/math]. Therefore [math]\displaystyle{ d_i=d_j }[/math].

[math]\displaystyle{ \square }[/math]

Due to the above theorem, an irreducible Markov chain is aperiodic if one of the states is aperiodic.

The Markov chain convergence theorem

Theorem (Markov chain convergence theorem)
Let [math]\displaystyle{ X_0,X_1,\ldots, }[/math] be an irreducible aperiodic Markov chain with finite state space [math]\displaystyle{ \mathcal{S}=[N] }[/math], transition matrix [math]\displaystyle{ P }[/math], and arbitrary initial distribution [math]\displaystyle{ \pi^{(0)} }[/math]. Then, there exists a stationary distribution [math]\displaystyle{ \pi }[/math] such that [math]\displaystyle{ \pi P=\pi }[/math], and
[math]\displaystyle{ \lim_{n\rightarrow\infty}\left(\pi^{(0)}P^n\right)_i=\pi_i }[/math]
for all states [math]\displaystyle{ i\in\mathcal{S} }[/math].

The theorem says that if we run an irreducible aperiodic finite Markov chain for a sufficiently long time [math]\displaystyle{ n }[/math], then, regardless of the initial distribution, the distribution at time [math]\displaystyle{ n }[/math] will be close to the stationary distribution [math]\displaystyle{ \pi }[/math].

Three pieces of information are delivered by the theorem regarding the stationary distribution:

  • Existence: there exists a stationary distribution.
  • Uniqueness: the stationary distribution is unique.
  • Convergence: starting from any initial distribution, the chain converges to the stationary distribution.

First, neither irreducibility nor aperiodicity is necessary for the existence of a stationary distribution: in fact, any finite Markov chain has a stationary distribution. Irreducibility and aperiodicity guarantee the uniqueness of the stationary distribution and the convergence to it.

  • For a reducible chain, there could be more than one stationary distribution. We have seen such examples. Note that there do exist reducible Markov chains with just one stationary distribution. For example, the chain
[math]\displaystyle{ P=\begin{bmatrix} 1/2 & 1/2\\ 0 & 1\\ \end{bmatrix} }[/math]
is reducible, but has only one stationary distribution [math]\displaystyle{ (0,1) }[/math]: the second state is absorbing and the first state is transient, so every stationary distribution must put all its probability on the second state.
  • For a periodic chain, the stationary probability [math]\displaystyle{ \pi_i }[/math] of state [math]\displaystyle{ i }[/math] is not the limiting probability of being in state [math]\displaystyle{ i }[/math] but instead just the long-term frequency of visiting state [math]\displaystyle{ i }[/math].

Second, the theorem itself only guarantees the existence and the convergence. The uniqueness is a consequence of these two. To see this, suppose that there exists a [math]\displaystyle{ \pi' }[/math] such that [math]\displaystyle{ \pi'P=\pi' }[/math]. Starting from the distribution [math]\displaystyle{ \pi' }[/math], due to the theorem, [math]\displaystyle{ \lim_{n\rightarrow\infty}\pi'P^{n}=\pi }[/math]. Since [math]\displaystyle{ \pi'P^n=\pi' }[/math] for any [math]\displaystyle{ n }[/math], it holds that [math]\displaystyle{ \lim_{n\rightarrow\infty}\pi'=\pi }[/math]. But [math]\displaystyle{ \pi' }[/math] does not depend on [math]\displaystyle{ n }[/math], hence [math]\displaystyle{ \pi'=\pi }[/math].

Coupling

The convergence theorem is proved by coupling, which is an important idea in probabilistic arguments and a powerful tool for the analysis of Markov chains.

To illustrate the idea of coupling, we consider an example which has nothing to do with Markov chains:

Example: connectivity of random graphs
Consider two distributions of random graphs, [math]\displaystyle{ G(n,\frac{1}{2}) }[/math] and [math]\displaystyle{ G(n,\frac{1}{3}) }[/math]. We have seen this notation before: in [math]\displaystyle{ G(n,p) }[/math], an edge is drawn independently between each pair of the [math]\displaystyle{ n }[/math] vertices with probability [math]\displaystyle{ p }[/math]. We want to show that
[math]\displaystyle{ \Pr\left[G(n,\frac{1}{2})\mbox{ is connected }\right]\ge\Pr\left[G(n,\frac{1}{3})\mbox{ is connected }\right] }[/math].
It seems obvious: in expectation, [math]\displaystyle{ G(n,\frac{1}{2}) }[/math] has more edges than [math]\displaystyle{ G(n,\frac{1}{3}) }[/math], so it should be more likely to be connected. However, formally proving this is not so easy. If we could compute the probability that a [math]\displaystyle{ G(n,p) }[/math] is connected, then we could compare the probabilities, but computing the probability that a random graph is connected is a very non-trivial task.
We then show that with coupling, we can compare two distributions without actually computing the probabilities.
Suppose that [math]\displaystyle{ G(n,p) }[/math] is generated as follows: for the [math]\displaystyle{ i }[/math]th pair of vertices, where [math]\displaystyle{ i=1,\ldots,{n\choose 2} }[/math], let [math]\displaystyle{ U_i }[/math] be an independent random variable distributed uniformly over [math]\displaystyle{ [0,1] }[/math], and draw an edge between the pair of vertices if [math]\displaystyle{ U_i\le p }[/math]. Note that this is exactly the same as the definition of [math]\displaystyle{ G(n,p) }[/math]. So a random graph [math]\displaystyle{ G(n,p) }[/math] is generated from a sequence of random sources [math]\displaystyle{ U_1,U_2,\ldots,U_{n\choose 2} }[/math].
Our trick is to use the same sequence [math]\displaystyle{ U_1,U_2,\ldots,U_{n\choose 2} }[/math] to generate both [math]\displaystyle{ G(n,\frac{1}{2}) }[/math] and [math]\displaystyle{ G(n,\frac{1}{3}) }[/math]. Note that both [math]\displaystyle{ G(n,\frac{1}{2}) }[/math] and [math]\displaystyle{ G(n,\frac{1}{3}) }[/math] are distributed the same as before, but now [math]\displaystyle{ e\in G(n,\frac{1}{3}) }[/math] implies [math]\displaystyle{ e\in G(n,\frac{1}{2}) }[/math], since [math]\displaystyle{ U_e\le\frac{1}{3}\Rightarrow U_e\le\frac{1}{2} }[/math]. Therefore, if [math]\displaystyle{ G(n,\frac{1}{3}) }[/math] is connected then [math]\displaystyle{ G(n,\frac{1}{2}) }[/math] must also be connected. Then,
[math]\displaystyle{ \Pr\left[G(n,\frac{1}{2})\mbox{ is connected }\right]\ge\Pr\left[G(n,\frac{1}{3})\mbox{ is connected }\right] }[/math].
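The coupling in this example is easy to implement: generate one uniform variable per vertex pair and use it for both graphs. A minimal sketch assuming only Python's standard library:

```python
# The same uniform variable U per pair drives both G(n, 1/3) and G(n, 1/2),
# so every edge of the sparser graph is also present in the denser one.
import random
from itertools import combinations

def coupled_graphs(n, p_small=1/3, p_large=1/2):
    edges_small, edges_large = set(), set()
    for pair in combinations(range(n), 2):
        u = random.random()            # one shared source of randomness per pair
        if u <= p_small:
            edges_small.add(pair)
        if u <= p_large:
            edges_large.add(pair)
    return edges_small, edges_large

small, large = coupled_graphs(10)
print(small <= large)                  # True: G(n,1/3) is a subgraph of G(n,1/2)
```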

In the last example, we saw that coupling means forcing two distributions to use the same source of randomness. Now we see how this idea helps analyze the convergence of Markov chains.

A Markov chain is a sequence of random variables

[math]\displaystyle{ \begin{align}X_0,X_1,X_2\ldots\end{align} }[/math]

where the distribution of [math]\displaystyle{ X_0 }[/math] is given by an initial distribution [math]\displaystyle{ \pi^{(0)} }[/math]; and for each [math]\displaystyle{ t=1,2,\ldots }[/math], assuming that [math]\displaystyle{ X_{t-1}=i }[/math], the distribution of [math]\displaystyle{ X_t }[/math] is given by the [math]\displaystyle{ i }[/math]th row [math]\displaystyle{ P_i }[/math] of the transition matrix [math]\displaystyle{ P }[/math].

So we can generate the chain by a sequence of uniform and independent random variables [math]\displaystyle{ U_0,U_1,\ldots }[/math] ranging over [math]\displaystyle{ [0,1] }[/math]. Initially

[math]\displaystyle{ X_0=j }[/math] if [math]\displaystyle{ \sum_{k\lt j}\pi^{(0)}_k\le U_0\lt \sum_{k\le j}\pi^{(0)}_k }[/math];

and for each [math]\displaystyle{ t=1,2,\ldots }[/math], assuming [math]\displaystyle{ X_{t-1}=i }[/math],

[math]\displaystyle{ X_t=j }[/math] if [math]\displaystyle{ \sum_{k\lt j}P_{i,k}\le U_t\lt \sum_{k\le j}P_{i,k} }[/math].

The Markov chain generated in this way is distributed exactly the same as having initial distribution [math]\displaystyle{ \pi^{(0)} }[/math] and transition matrix [math]\displaystyle{ P }[/math].
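A minimal sketch of this construction, assuming only Python's standard library: each uniform variable [math]\displaystyle{ U_t }[/math] is turned into a state by locating it in the cumulative distribution of the relevant row.

```python
# Drive the chain by independent uniforms U_0, U_1, ...; each state is chosen
# by locating U_t in the cumulative distribution of the appropriate row.
import random

def sample_from(dist, u):
    """Return j such that sum_{k<j} dist[k] <= u < sum_{k<=j} dist[k]."""
    acc = 0.0
    for j, p in enumerate(dist):
        acc += p
        if u < acc:
            return j
    return len(dist) - 1               # guard against floating-point round-off

def run_chain(pi0, P, steps):
    us = [random.random() for _ in range(steps + 1)]   # the random sources
    x = sample_from(pi0, us[0])
    path = [x]
    for t in range(1, steps + 1):
        x = sample_from(P[x], us[t])
        path.append(x)
    return path

P = [[0.0, 1.0, 0.0], [1/3, 0.0, 2/3], [1/3, 1/3, 1/3]]
print(run_chain([1.0, 0.0, 0.0], P, 10))
```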

Let [math]\displaystyle{ X_0,X_1,\ldots }[/math] be a finite Markov chain with initial distribution [math]\displaystyle{ \pi^{(0)} }[/math] and transition matrix [math]\displaystyle{ P }[/math], and generated by the uniform and independent random variables [math]\displaystyle{ U_0,U_1,\ldots }[/math]. Suppose that the Markov chain has a stationary distribution [math]\displaystyle{ \pi }[/math], such that [math]\displaystyle{ \pi P=\pi }[/math]. We run another chain [math]\displaystyle{ X_0',X_1',\ldots }[/math] with the initial distribution [math]\displaystyle{ \pi }[/math], transition matrix [math]\displaystyle{ P }[/math], and independent random sources [math]\displaystyle{ U_0',U_1',\ldots }[/math]. So we have two independent sequences:

[math]\displaystyle{ \begin{align} X_0,X_1,X_2\ldots\end{align} }[/math] and [math]\displaystyle{ \begin{align}X_0',X_1',X_2'\ldots\end{align} }[/math].

We define another chain, which starts as [math]\displaystyle{ \begin{align}X_0,X_1,X_2\ldots\end{align} }[/math] and, at the first time [math]\displaystyle{ n }[/math] that [math]\displaystyle{ X_n=X_n' }[/math], switches to [math]\displaystyle{ X_n',X_{n+1}',X_{n+2}',\ldots }[/math]. The transitions are illustrated by the following figure.

[math]\displaystyle{ \begin{matrix} \pi^{(0)}:& X_0 & \rightarrow & X_1 & \rightarrow & \cdots & \rightarrow & X_{n} \\ &\nparallel & & \nparallel & & \nparallel & & \parallel & \searrow\\ \pi^{}:& X_0' & \rightarrow & X_1' & \rightarrow & \cdots & \rightarrow & X_{n}' & \rightarrow & X_{n+1}' \rightarrow X_{n+2}' \rightarrow \cdots \end{matrix} }[/math]

It is not hard to see that the chain [math]\displaystyle{ \begin{align}X_0,X_1,\ldots, X_{n}, X_{n+1}',X_{n+2}',\ldots\end{align} }[/math] is distributed identically to the original chain [math]\displaystyle{ \begin{align}X_0,X_1,\ldots\end{align} }[/math], since we do nothing except switch the source of randomness from [math]\displaystyle{ U_{n+1},U_{n+2},\ldots }[/math] to [math]\displaystyle{ U_{n+1}',U_{n+2}',\ldots }[/math], which does not affect the distribution of the chain.

On the other hand, since the chain [math]\displaystyle{ \begin{align}X_0',X_1',\ldots\end{align} }[/math] starts from a stationary distribution [math]\displaystyle{ \pi }[/math], by the definition of stationary distribution, it will stay in that distribution forever. Thus, the distribution of every one of [math]\displaystyle{ \begin{align}X_{n+1}',X_{n+2}',\ldots\end{align} }[/math] is [math]\displaystyle{ \pi }[/math]. Therefore, once [math]\displaystyle{ X_n=X_n' }[/math] for a finite [math]\displaystyle{ n }[/math], the chain [math]\displaystyle{ \begin{align}X_0,X_1,\ldots\end{align} }[/math] converges to the stationary distribution [math]\displaystyle{ \pi }[/math].

For an irreducible aperiodic Markov chain with finite state space, starting from an arbitrary initial distribution (including the stationary distribution), after running the chain for a sufficiently long finite time [math]\displaystyle{ n }[/math], [math]\displaystyle{ X_n }[/math] can be any state, that is, [math]\displaystyle{ \Pr[X_n=i]\gt 0 }[/math] for any [math]\displaystyle{ i }[/math] (we will not formally prove this, but it is not very hard to imagine why it is true). This implies that [math]\displaystyle{ X_n=X_n' }[/math] must happen for some finite [math]\displaystyle{ n }[/math], which supports the convergence of the Markov chain.
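A simulation sketch of the coupling argument, assuming only Python's standard library: run two independent copies of the chain, one from an arbitrary state and one from the stationary distribution, and record the first time they meet. The stationary vector below is the one computed earlier for the three-state example.

```python
# Two independent copies of the chain; once they meet, the first chain can be
# switched to the second, which has been stationary from the start.
import random

def meeting_time(P, pi, start, max_steps=10_000):
    states = range(len(P))
    x = start
    y = random.choices(states, weights=pi)[0]        # second chain starts from pi
    for t in range(max_steps):
        if x == y:
            return t            # from the first meeting on, X_t is distributed as pi
        x = random.choices(states, weights=P[x])[0]
        y = random.choices(states, weights=P[y])[0]
    return None                 # the chains did not meet within max_steps

P = [[0.0, 1.0, 0.0], [1/3, 0.0, 2/3], [1/3, 1/3, 1/3]]
pi = [0.25, 0.375, 0.375]
print(meeting_time(P, pi, start=0))
```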

Hitting time and the stationary distribution

We will see that the stationary distribution of a Markov chain is related to its hitting times. For a Markov chain starting from state [math]\displaystyle{ i }[/math], let

[math]\displaystyle{ T_{i,j}=\min\{n\gt 0\mid X_n=j\} }[/math],

which is the first time that a chain starting from state [math]\displaystyle{ i }[/math] visits state [math]\displaystyle{ j }[/math], with the convention that [math]\displaystyle{ T_{i,j}=\infty }[/math] if the chain never visits [math]\displaystyle{ j }[/math]. We define the hitting time

[math]\displaystyle{ \tau_{i,j}=\mathbf{E}[T_{i,j}] }[/math].

The special case [math]\displaystyle{ \tau_{i,i} }[/math] gives the expected time for a chain starting from state [math]\displaystyle{ i }[/math] to return to state [math]\displaystyle{ i }[/math].

Theorem
Any irreducible aperiodic Markov chain with finite state space [math]\displaystyle{ \mathcal{S} }[/math] and transition matrix [math]\displaystyle{ P }[/math] has a stationary distribution [math]\displaystyle{ \pi }[/math] such that
[math]\displaystyle{ \pi_i=\lim_{n\rightarrow\infty}(P^n)_{j,i}=\frac{1}{\tau_{i,i}} }[/math] for any [math]\displaystyle{ i\in\mathcal{S} }[/math].

Note that in the above theorem, the limit [math]\displaystyle{ \lim_{n\rightarrow\infty}(P^n)_{j,i} }[/math] does not depend on [math]\displaystyle{ j }[/math], which means that the rows of [math]\displaystyle{ P^n }[/math] become identical in the limit.

We will not prove the theorem, but only give an informal justification: the expected time between consecutive visits to state [math]\displaystyle{ i }[/math] is [math]\displaystyle{ \tau_{i,i} }[/math], and therefore state [math]\displaystyle{ i }[/math] is visited a [math]\displaystyle{ \frac{1}{\tau_{i,i}} }[/math] fraction of the time. Note that [math]\displaystyle{ \lim_{n\rightarrow\infty}(P^n)_{j,i} }[/math] represents the probability that the chain is in state [math]\displaystyle{ i }[/math] at a time far in the future (at time [math]\displaystyle{ n\rightarrow\infty }[/math]) when it starts at state [math]\displaystyle{ j }[/math]; but if the future is far away, the starting state [math]\displaystyle{ j }[/math] does not really matter, and [math]\displaystyle{ \lim_{n\rightarrow\infty}(P^n)_{j,i} }[/math] is just the frequency with which [math]\displaystyle{ i }[/math] is visited, which is [math]\displaystyle{ \frac{1}{\tau_{i,i}} }[/math].
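The relation [math]\displaystyle{ \pi_i=\frac{1}{\tau_{i,i}} }[/math] can be checked empirically by simulating return times. A sketch assuming only Python's standard library, using the three-state example whose stationary distribution is [math]\displaystyle{ (0.25,0.375,0.375) }[/math]:

```python
# Estimate the expected return time to state i by simulation and compare it
# with the reciprocal of the stationary probability of i.
import random

def mean_return_time(P, i, trials=10_000):
    states = range(len(P))
    total = 0
    for _ in range(trials):
        x, steps = i, 0
        while True:
            x = random.choices(states, weights=P[x])[0]
            steps += 1
            if x == i:
                break
        total += steps
    return total / trials

P = [[0.0, 1.0, 0.0], [1/3, 0.0, 2/3], [1/3, 1/3, 1/3]]
print(mean_return_time(P, 0))   # approximately 1 / 0.25 = 4
```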

PageRank

PageRank is the algorithm reportedly used by Google to assign a numerical rank to every web page. The rank of a page measures the "importance" of the page. A page has higher rank if it is pointed to by more high-rank pages. Low-rank pages have less influence on the rank of a page. If one page points to many others, it will have less influence on their ranking than if it just points to a few.

This intuitive idea can be formalized as follows. The World Wide Web is treated as a directed graph [math]\displaystyle{ G(V,E) }[/math], with web pages as vertices and hyperlinks as directed edges. The rank of vertex [math]\displaystyle{ v }[/math] is denoted [math]\displaystyle{ r(v) }[/math], and is supposed to satisfy:

[math]\displaystyle{ r(v)=\sum_{u:(u,v)\in E}\frac{r(u)}{d_+(u)}, \qquad (*) }[/math]

where [math]\displaystyle{ d_+(u) }[/math] is the number of edges going out of [math]\displaystyle{ u }[/math]. Note that the sum is over the edges coming into [math]\displaystyle{ v }[/math].

This formula nicely models both intuitions: a page gets a higher rank if it is pointed to by more high-rank pages, and the influence of a page is penalized by the number of pages it points to. Let [math]\displaystyle{ P }[/math] be a matrix with rows and columns corresponding to vertices, and [math]\displaystyle{ \forall u,v\in V }[/math],

[math]\displaystyle{ P(u,v)=\begin{cases} \frac{1}{d_+(u)}& \mbox{if }(u,v)\in E,\\ 0& \mbox{otherwise}. \end{cases} }[/math]

Then the formula [math]\displaystyle{ \begin{align}(*)\end{align} }[/math] can be expressed as

[math]\displaystyle{ \begin{align} rP=r. \end{align} }[/math]

It is also easy to verify that [math]\displaystyle{ P }[/math] is stochastic, that is, [math]\displaystyle{ \begin{align}\sum_{v}P(u,v)=1\end{align} }[/math] for all [math]\displaystyle{ u\in V }[/math]. Then the rank vector [math]\displaystyle{ r }[/math] is actually a stationary distribution of the Markov chain with transition matrix [math]\displaystyle{ P }[/math]. This is not entirely a coincidence: [math]\displaystyle{ P }[/math] is the transition matrix of the random walk over the web pages in which, at each time step, the walk picks a page pointed to by the current page uniformly at random and moves to it. This can be imagined as a "tireless random surfer" who starts from an arbitrary page, randomly follows the hyperlinks, and, given infinitely long time, eventually approaches the stationary distribution. The importance of a web page is reflected by the frequency with which the random surfer visits the page, which is its stationary probability.
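A minimal sketch of computing the ranks as a stationary distribution by power iteration, assuming NumPy; the tiny link graph below is a made-up illustration, not real web data.

```python
# Power iteration for the random-surfer chain: start from the uniform
# distribution and repeatedly apply the transition matrix P.
import numpy as np

links = {0: [1, 2], 1: [2], 2: [0]}       # page -> pages it links to
N = len(links)

P = np.zeros((N, N))
for u, outs in links.items():
    for v in outs:
        P[u, v] = 1.0 / len(outs)         # P(u,v) = 1/d_+(u) for each edge (u,v)

r = np.full(N, 1.0 / N)                   # start the surfer uniformly at random
for _ in range(100):
    r = r @ P                             # power iteration: r <- r P
print(r)                                  # approximate solution of r P = r
```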

We assume the World Wide Web is strongly connected, so the Markov chain is irreducible. And given the huge number of web pages on the Internet, it is almost impossible that the lengths of all cycles have a common divisor greater than 1, so the Markov chain is aperiodic. Therefore, the random surfer model indeed converges.

In practice, PageRank also uses a damping factor, since a typical surfer cannot browse the web forever. The damping factor effectively bounds the expected number of hyperlinks the surfer follows before restarting at a uniformly random page.
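A hedged sketch of the standard damping-factor modification (the exact formula is not spelled out in the text above, so this follows the commonly cited formulation): with probability [math]\displaystyle{ d }[/math] the surfer follows a random outgoing link, and with probability [math]\displaystyle{ 1-d }[/math] restarts at a uniformly random page.

```python
import numpy as np

def damped_transition(P, d=0.85):
    """Damped random-surfer matrix: follow a link w.p. d, restart w.p. 1-d.

    d = 0.85 is a conventional choice and an assumption of this sketch,
    not a value prescribed by the text above.
    """
    N = P.shape[0]
    return d * P + (1 - d) * np.full((N, N), 1.0 / N)
```

Every entry of the damped matrix is positive, so the corresponding chain is irreducible and aperiodic regardless of the link structure, and the power iteration above converges even without assuming the web graph is strongly connected.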