Randomized Algorithms (Spring 2010)/Markov chains and random walks

== Markov Chains ==


=== The Markov property and transition matrices ===
{|border="1"
|'''Definition (the Markov property)'''
: A process <math>X_0,X_1,\ldots</math> satisfies the '''Markov property''' if
::<math>
\Pr[X_{n+1}=x_{n+1}\mid X_{0}=x_{0}, X_{1}=x_{1},\ldots,X_{n}=x_{n}]=\Pr[X_{n+1}=x_{n+1}\mid X_{n}=x_{n}]
</math>
:for all <math>n</math> and all <math>x_0,\ldots,x_{n+1}\in \mathcal{S}</math>.
|}
 
The Markov property describes the ''memoryless'' property of a Markov chain: "conditioning on the present, the future does not depend on the past."
 
A ''discrete time'' stochastic process <math>X_0,X_1,\ldots</math> is a '''Markov chain''' if it has the Markov property.
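
The memoryless property can be seen in a small simulation. The sketch below (not part of the original notes) assumes the usual representation of a finite chain by a row-stochastic transition matrix <math>P</math> with <math>P(i,j)=\Pr[X_{n+1}=j\mid X_n=i]</math>; the function name <code>simulate_chain</code> and the toy two-state matrix are illustrative choices, not from the course.

<pre>
import random

def simulate_chain(P, x0, steps):
    """Sample a trajectory X_0, X_1, ..., X_steps of a finite Markov chain.

    P is assumed row-stochastic: P[i][j] = Pr[X_{n+1} = j | X_n = i].
    """
    trajectory = [x0]
    x = x0
    for _ in range(steps):
        # Markov property: the next state is drawn using only the current
        # state x, never the earlier history stored in `trajectory`.
        x = random.choices(range(len(P)), weights=P[x])[0]
        trajectory.append(x)
    return trajectory

# A toy two-state chain.
P = [[0.9, 0.1],
     [0.5, 0.5]]
print(simulate_chain(P, x0=0, steps=10))
</pre>

Because each step reads only the current state, conditioning on <math>X_n</math> makes the sampled future independent of <math>X_0,\ldots,X_{n-1}</math>, which is exactly the property in the definition above.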


=== Stationary distributions ===

{|border="1"
|'''Theorem (Basic limit theorem)'''
: Let <math>X_0,X_1,\ldots</math> be an irreducible, aperiodic Markov chain having a stationary distribution <math>\pi</math>. Let <math>X_0</math> have the distribution <math>\pi_0</math>, an arbitrary initial distribution. Then
::<math>
\lim_{n\rightarrow\infty}\pi_n(i)=\pi(i)
</math>
:for all states <math>i</math>.
|}
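
A minimal numerical illustration of this theorem, under the standard convention that <math>\pi_n</math> is the distribution of <math>X_n</math>, so <math>\pi_n=\pi_{n-1}P</math>: for the toy two-state chain below (values chosen for illustration, not from the notes) the stationary distribution is <math>\pi=(5/6,\,1/6)</math>, and iterating from an arbitrary <math>\pi_0</math> converges to it. The helper name <code>step_distribution</code> is hypothetical.

<pre>
def step_distribution(pi, P):
    """One step on distributions: (pi P)(j) = sum_i pi(i) * P[i][j]."""
    n = len(P)
    return [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

# Toy irreducible, aperiodic two-state chain.
P = [[0.9, 0.1],
     [0.5, 0.5]]
# Its stationary distribution solves pi P = pi, giving pi = (5/6, 1/6).

pi_n = [0.0, 1.0]              # an arbitrary initial distribution pi_0
for _ in range(50):
    pi_n = step_distribution(pi_n, P)

print(pi_n)                    # numerically close to [0.8333..., 0.1666...]
</pre>

Changing the initial distribution <code>pi_n = [0.0, 1.0]</code> to any other probability vector gives the same limit, which is the content of the basic limit theorem for this small chain.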

=== Coupling ===

== Random Walks on Graphs ==

=== Hitting and covering ===

=== Mixing ===