Randomized Algorithms (Spring 2010)/Markov chains and random walks
== Markov Chains ==
=== The Markov property and transition matrices ===
{|border="1"
|'''Definition (the Markov property)'''
: A process <math>X_0,X_1,\ldots</math> satisfies the '''Markov property''' if
::<math>
\Pr[X_{n+1}=x_{n+1}\mid X_{0}=x_{0}, X_{1}=x_{1},\ldots,X_{n}=x_{n}]=\Pr[X_{n+1}=x_{n+1}\mid X_{n}=x_{n}]
</math>
:for all <math>n</math> and all <math>x_0,\ldots,x_{n+1}\in \mathcal{S}</math>.
|}
The Markov property describes the ''memoryless'' property of a Markov chain: "conditioning on the present, the future does not depend on the past."
A ''discrete time'' stochastic process <math>X_0,X_1,\ldots</math> is a '''Markov chain''' if it has the Markov property.
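The memoryless property shows up directly in simulation: to sample <math>X_{n+1}</math> we only ever need the current state <math>X_n</math>, never the earlier history. A minimal Python sketch, using a made-up two-state transition matrix <code>P</code> that is not from the notes:

```python
import random

# Hypothetical transition matrix on states {0, 1}; row i is the
# conditional distribution of X_{n+1} given X_n = i.
P = [[0.9, 0.1],
     [0.5, 0.5]]

def step(state, p=P):
    """Sample X_{n+1} given X_n = state.

    By the Markov property, the current state is the only input needed.
    """
    return 0 if random.random() < p[state][0] else 1

def run(x0, n, p=P):
    """Simulate the trajectory X_0, X_1, ..., X_n."""
    xs = [x0]
    for _ in range(n):
        xs.append(step(xs[-1], p))
    return xs

random.seed(1)
print(run(0, 10))
```

Note that `step` receives only the last state `xs[-1]`: the rest of the trajectory is kept purely for output, which is exactly the "conditioning on the present, the future does not depend on the past" slogan.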
=== Stationary distributions ===
Revision as of 09:11, 27 April 2010
{|border="1"
|'''Theorem (Basic limit theorem)'''
|}
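For a finite, irreducible, aperiodic chain, the basic limit theorem guarantees that the distribution of <math>X_n</math> converges to the stationary distribution <math>\pi</math> satisfying <math>\pi=\pi P</math>, regardless of the starting state. A minimal Python sketch of this convergence by repeatedly applying the transition matrix, again with a made-up two-state matrix that is not from the notes:

```python
# Hypothetical two-state transition matrix (irreducible and aperiodic).
P = [[0.9, 0.1],
     [0.5, 0.5]]

def step_dist(pi, p):
    """One step of the distribution: (pi P)_j = sum_i pi_i * p[i][j]."""
    n = len(pi)
    return [sum(pi[i] * p[i][j] for i in range(n)) for j in range(n)]

pi = [1.0, 0.0]  # start deterministically in state 0
for _ in range(200):
    pi = step_dist(pi, P)

print(pi)  # approaches the stationary distribution (5/6, 1/6) of this chain
```

For this particular matrix one can check by hand that <math>\pi=(5/6,1/6)</math> solves <math>\pi=\pi P</math>; the iteration converges to it geometrically fast because the second eigenvalue of <code>P</code> is <math>0.4<1</math>.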