Randomized Algorithms (Spring 2010)/Random sampling
Conductance
Recap
- A Markov chain with finite state space [math]\displaystyle{ \mathcal{S} }[/math], where [math]\displaystyle{ N=|\mathcal{S}| }[/math], transition matrix [math]\displaystyle{ P }[/math] whose eigenvalues are [math]\displaystyle{ \lambda_1\ge\cdots\ge\lambda_N }[/math], and stationary distribution [math]\displaystyle{ \pi }[/math].
- The mixing time [math]\displaystyle{ \tau(\epsilon)\, }[/math]: the time to get within total variation distance [math]\displaystyle{ \epsilon }[/math] of [math]\displaystyle{ \pi }[/math], starting from a worst-case state (see the sketch after this list).
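The definition of [math]\displaystyle{ \tau(\epsilon) }[/math] can be made concrete by brute force. The following is a minimal Python sketch, assuming a lazy random walk on a cycle with [math]\displaystyle{ N=8 }[/math] states and [math]\displaystyle{ \epsilon=1/4 }[/math]; the chain and the parameters are illustrative choices, not part of the original notes.

```python
import numpy as np

N = 8
P = np.zeros((N, N))
for i in range(N):
    P[i, i] = 0.5                 # lazy: stay put with probability 1/2
    P[i, (i - 1) % N] = 0.25      # step to the left neighbor
    P[i, (i + 1) % N] = 0.25      # step to the right neighbor

pi = np.full(N, 1.0 / N)          # uniform stationary distribution

def mixing_time(P, pi, eps):
    """Smallest t with max_i ||P^t(i, .) - pi||_TV <= eps (worst-case start)."""
    Pt = np.eye(len(pi))
    t = 0
    # total variation distance is half the L1 distance between distributions
    while 0.5 * np.abs(Pt - pi).sum(axis=1).max() > eps:
        Pt = Pt @ P
        t += 1
    return t

print(mixing_time(P, pi, 0.25))   # tau(1/4) for the worst starting state
```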
Conditions:
- Lazy random walk: [math]\displaystyle{ P_{i,i}\ge\frac{1}{2} }[/math] for any [math]\displaystyle{ i\in\mathcal{S} }[/math], so [math]\displaystyle{ 1=\lambda_1\ge\cdots\ge\lambda_N\ge 0 }[/math] and [math]\displaystyle{ \lambda_{\max}=\max(|\lambda_2|,|\lambda_N|)=\lambda_2\, }[/math].
- The Markov chain is time-reversible: [math]\displaystyle{ \pi_{i}P_{i,j}=\pi_{j}P_{j,i} }[/math] for all [math]\displaystyle{ i,j\in\mathcal{S} }[/math].
- The stationary distribution [math]\displaystyle{ \pi }[/math] is uniform, that is, [math]\displaystyle{ \pi_i=\frac{1}{N} }[/math] for all [math]\displaystyle{ i\in\mathcal{S} }[/math].
Then:
Theorem
- With the above three conditions, the mixing time satisfies
- [math]\displaystyle{ \tau(\epsilon)=O\left(\frac{\ln N+\ln(1/\epsilon)}{1-\lambda_2}\right). }[/math]
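As a sanity check of the quantities in the theorem, the following sketch computes [math]\displaystyle{ \lambda_2 }[/math] for the lazy cycle walk from the sketch above and evaluates the bound with the hidden constant taken to be [math]\displaystyle{ 1 }[/math]; both choices are assumptions for illustration.

```python
import numpy as np

N = 8
P = np.zeros((N, N))
for i in range(N):
    P[i, i], P[i, (i - 1) % N], P[i, (i + 1) % N] = 0.5, 0.25, 0.25

# P is symmetric here (time-reversible with uniform pi), so its
# eigenvalues are real; sort them as lambda_1 >= ... >= lambda_N
eigs = np.sort(np.linalg.eigvalsh(P))[::-1]
lambda_2 = eigs[1]

eps = 0.25
bound = (np.log(N) + np.log(1 / eps)) / (1 - lambda_2)
print(lambda_2, bound)   # the spectral gap 1 - lambda_2 controls mixing
```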
Conductance and the mixing time
For many problems, such as card shuffling, the state space is exponentially large, so estimating [math]\displaystyle{ \lambda_2 }[/math] directly becomes very difficult. The following technique overcomes this issue by bounding the spectral gap in terms of a combinatorial quantity of the chain, its conductance.
Definition (conductance)
- The conductance of a Markov chain with state space [math]\displaystyle{ \mathcal{S} }[/math], transition matrix [math]\displaystyle{ P }[/math], and stationary distribution [math]\displaystyle{ \pi }[/math] is
- [math]\displaystyle{ \Phi=\min_{\overset{S\subset\mathcal{S}}{0<\pi(S)\le\frac{1}{2}}}\frac{\sum_{i\in S,j\not\in S}\pi_iP_{i,j}}{\pi(S)}, }[/math]
- where [math]\displaystyle{ \pi(S)=\sum_{i\in S}\pi_i }[/math].
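For a tiny state space the definition can be evaluated by brute force over all subsets. The sketch below does exactly that, reusing the lazy cycle walk as an assumed example; the enumeration is exponential in [math]\displaystyle{ N }[/math], so this is purely illustrative.

```python
from itertools import combinations
import numpy as np

def conductance(P, pi):
    """Minimize the ergodic flow out of S over pi(S), for 0 < pi(S) <= 1/2."""
    n = len(pi)
    best = float("inf")
    for size in range(1, n):
        for S in combinations(range(n), size):
            pS = pi[list(S)].sum()
            if pS > 0.5:          # restrict to pi(S) <= 1/2
                continue
            T = [j for j in range(n) if j not in S]
            flow = sum(pi[i] * P[i, j] for i in S for j in T)
            best = min(best, flow / pS)
    return best

N = 8
P = np.zeros((N, N))
for i in range(N):
    P[i, i], P[i, (i - 1) % N], P[i, (i + 1) % N] = 0.5, 0.25, 0.25
print(conductance(P, np.full(N, 1.0 / N)))   # 1/8 for the lazy cycle
```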
The definition of conductance looks quite similar to the expansion ratio of graphs. In fact, for the random walk on an undirected [math]\displaystyle{ d }[/math]-regular graph [math]\displaystyle{ G }[/math], there is a straightforward relation between the conductance [math]\displaystyle{ \Phi }[/math] of the walk and the expansion ratio [math]\displaystyle{ \phi(G) }[/math] of the graph:
- [math]\displaystyle{ \Phi=\frac{\phi(G)}{d}. }[/math]
Very informally, the conductance can be seen as a weighted, normalized version of the expansion ratio.
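The relation can be checked numerically. The following sketch compares the brute-force conductance of the simple (non-lazy) walk [math]\displaystyle{ P=A/d }[/math] on the 3-dimensional hypercube, an assumed 3-regular example, with [math]\displaystyle{ \phi(G)/d }[/math].

```python
from itertools import combinations
import numpy as np

n = 3
N = 2 ** n                        # 8 vertices
d = n                             # the hypercube is n-regular
# adjacency matrix: two vertices are adjacent iff they differ in one bit
A = np.array([[1 if bin(i ^ j).count("1") == 1 else 0
               for j in range(N)] for i in range(N)])
P = A / d                         # simple random walk; pi is uniform
pi = np.full(N, 1.0 / N)

# expansion ratio phi(G) = min over |S| <= N/2 of |edges leaving S| / |S|
phi = float("inf")
for size in range(1, N // 2 + 1):
    for S in combinations(range(N), size):
        T = [j for j in range(N) if j not in S]
        phi = min(phi, sum(A[i, j] for i in S for j in T) / size)

# conductance; with uniform pi, pi(S) <= 1/2 is the same as |S| <= N/2
Phi = float("inf")
for size in range(1, N // 2 + 1):
    for S in combinations(range(N), size):
        T = [j for j in range(N) if j not in S]
        flow = sum(pi[i] * P[i, j] for i in S for j in T)
        Phi = min(Phi, flow / (size / N))

print(Phi, phi / d)               # both equal 1/3
```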
The following lemma states a Cheeger inequality for the conductance.
Lemma (Jerrum-Sinclair 1988)
- In a time-reversible Markov chain, [math]\displaystyle{ 1-2\Phi\le\lambda_2\le 1-\frac{\Phi^2}{2} }[/math].
The inequality can be equivalently written in terms of the spectral gap:
- [math]\displaystyle{ \frac{\Phi^2}{2}\le1-\lambda_2\le 2\Phi }[/math]
thus a large conductance implies a large spectral gap, which in turn implies the rapid mixing of the random walk.
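Both sides of the inequality can be verified numerically. The sketch below checks [math]\displaystyle{ \frac{\Phi^2}{2}\le 1-\lambda_2\le 2\Phi }[/math] for the lazy cycle walk used in the earlier sketches, again an assumed example rather than one from the notes.

```python
from itertools import combinations
import numpy as np

N = 8
P = np.zeros((N, N))
for i in range(N):
    P[i, i], P[i, (i - 1) % N], P[i, (i + 1) % N] = 0.5, 0.25, 0.25
pi = np.full(N, 1.0 / N)

lambda_2 = np.sort(np.linalg.eigvalsh(P))[::-1][1]   # second largest eigenvalue

# brute-force conductance; uniform pi makes pi(S) <= 1/2 the same as |S| <= N/2
Phi = float("inf")
for size in range(1, N // 2 + 1):
    for S in combinations(range(N), size):
        T = [j for j in range(N) if j not in S]
        flow = sum(pi[i] * P[i, j] for i in S for j in T)
        Phi = min(Phi, flow / (size / N))

print(Phi ** 2 / 2, 1 - lambda_2, 2 * Phi)   # 0.0078... <= 0.1464... <= 0.25
```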
Proposition
- For a time-reversible Markov chain with finite state space [math]\displaystyle{ \mathcal{S} }[/math], transition matrix [math]\displaystyle{ P }[/math], and conductance [math]\displaystyle{ \Phi }[/math], if [math]\displaystyle{ P_{i,i}\ge\frac{1}{2} }[/math] for all states [math]\displaystyle{ i\in\mathcal{S} }[/math] and the stationary distribution [math]\displaystyle{ \pi }[/math] is uniform, then the mixing time
- [math]\displaystyle{ \tau(\epsilon)=O\left(\frac{\ln N+\ln(1/\epsilon)}{\Phi^{2}}\right), }[/math]
- where [math]\displaystyle{ N=|\mathcal{S}| }[/math].

The proposition follows by combining the theorem in the recap with the lower bound [math]\displaystyle{ 1-\lambda_2\ge\frac{\Phi^2}{2} }[/math] from the lemma.