Randomized Algorithms (Spring 2010)/Markov chains and random walks
Revision as of 09:34, 27 April 2010
Markov Chains
The Markov property and transition matrices
Definition (the Markov property)
A discrete time stochastic process [math]\displaystyle{ X_0,X_1,\ldots }[/math] has the Markov property if for every [math]\displaystyle{ n\ge 0 }[/math] and all states [math]\displaystyle{ x_0,\ldots,x_n,y }[/math],
[math]\displaystyle{ \Pr[X_{n+1}=y\mid X_0=x_0,\ldots,X_n=x_n]=\Pr[X_{n+1}=y\mid X_n=x_n] }[/math].
The Markov property captures the memorylessness of the process: "conditioning on the present, the future does not depend on the past." A discrete time stochastic process [math]\displaystyle{ X_0,X_1,\ldots }[/math] is a Markov chain if it has the Markov property. For a time-homogeneous chain, these conditional probabilities are collected in the transition matrix [math]\displaystyle{ P }[/math], where [math]\displaystyle{ P(x,y)=\Pr[X_{n+1}=y\mid X_n=x] }[/math].
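As a minimal illustration, a Markov chain can be simulated by sampling each step from the row of the transition matrix indexed by the current state; only the present state is ever consulted, which is exactly the Markov property. The three-state matrix below is a made-up example for the sketch, not taken from the notes.

```python
import random

# A hypothetical 3-state transition matrix: P[x][y] = Pr[X_{n+1}=y | X_n=x].
# Each row is a probability distribution over the next state.
P = [
    [0.5, 0.5, 0.0],
    [0.25, 0.5, 0.25],
    [0.0, 0.5, 0.5],
]

def step(x):
    """Sample the next state from the current state x using row P[x]."""
    r = random.random()
    acc = 0.0
    for y, p in enumerate(P[x]):
        acc += p
        if r < acc:
            return y
    return len(P) - 1  # guard against floating-point round-off

def walk(x0, n):
    """Run the chain for n steps from x0.  Only the current state is kept:
    by the Markov property, nothing else is needed to continue the walk."""
    x = x0
    for _ in range(n):
        x = step(x)
    return x
```

Note that `walk` keeps no history at all; the chain's entire memory is the single current state.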
Stationary distributions
Definition (stationary distribution)
A stationary distribution of a Markov chain with transition matrix [math]\displaystyle{ P }[/math] is a probability distribution [math]\displaystyle{ \pi }[/math] such that
[math]\displaystyle{ \begin{align}\pi P=\pi\end{align} }[/math].
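Concretely, a stationary distribution can be approximated by power iteration: start from any distribution and repeatedly apply [math]\displaystyle{ \pi\leftarrow \pi P }[/math] until it stops changing, at which point [math]\displaystyle{ \pi P=\pi }[/math] holds. The sketch below uses a hypothetical three-state chain (not from the notes), whose stationary distribution works out to [math]\displaystyle{ (1/4,1/2,1/4) }[/math].

```python
def stationary(P, iters=10_000):
    """Approximate the stationary distribution by power iteration:
    repeatedly apply pi <- pi P until reaching the fixed point pi P = pi."""
    n = len(P)
    pi = [1.0 / n] * n  # start from the uniform distribution
    for _ in range(iters):
        pi = [sum(pi[x] * P[x][y] for x in range(n)) for y in range(n)]
    return pi

# Hypothetical 3-state chain used for the sketch.
P = [[0.5, 0.5, 0.0],
     [0.25, 0.5, 0.25],
     [0.0, 0.5, 0.5]]

pi = stationary(P)  # approximately (0.25, 0.5, 0.25)

# Verify the fixed-point equation pi P = pi componentwise.
piP = [sum(pi[x] * P[x][y] for x in range(3)) for y in range(3)]
```

The iteration converges here because the chain is irreducible and aperiodic; for a general chain, power iteration need not converge (e.g., a periodic chain oscillates).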
The basic limit theorem
Theorem (Basic limit theorem)
Let [math]\displaystyle{ X_0,X_1,\ldots }[/math] be an irreducible, aperiodic Markov chain with stationary distribution [math]\displaystyle{ \pi }[/math]. Then, for any initial state [math]\displaystyle{ x }[/math] and every state [math]\displaystyle{ y }[/math],
[math]\displaystyle{ \lim_{n\rightarrow\infty}\Pr[X_n=y\mid X_0=x]=\pi(y) }[/math].
In other words, the distribution of the chain converges to [math]\displaystyle{ \pi }[/math] regardless of the starting state.
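The theorem can be checked numerically: since [math]\displaystyle{ \Pr[X_n=y\mid X_0=x] }[/math] is the [math]\displaystyle{ (x,y) }[/math] entry of [math]\displaystyle{ P^n }[/math], every row of [math]\displaystyle{ P^n }[/math] should approach the stationary distribution. The sketch below uses a hypothetical three-state chain (not from the notes) whose stationary distribution is [math]\displaystyle{ (1/4,1/2,1/4) }[/math], and computes [math]\displaystyle{ P^{32} }[/math] by repeated squaring.

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Hypothetical irreducible, aperiodic 3-state chain.
P = [[0.5, 0.5, 0.0],
     [0.25, 0.5, 0.25],
     [0.0, 0.5, 0.5]]

# Repeated squaring: P^2, P^4, P^8, P^16, P^32.
Pn = P
for _ in range(5):
    Pn = matmul(Pn, Pn)

# Every row of Pn is now close to the stationary distribution,
# i.e. the chain has forgotten its starting state.
```

That all three rows of `Pn` agree is precisely the "starting state does not matter" content of the basic limit theorem.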