Randomized Algorithms (Spring 2010)/Random sampling
Conductance
Recap
- A Markov chain with finite state space <math>\mathcal{S}</math>, where <math>N=|\mathcal{S}|</math>, transition matrix <math>P</math> whose eigenvalues are <math>\lambda_1\ge\cdots\ge\lambda_N</math>, and stationary distribution <math>\pi</math>.
- Mixing time <math>\tau(\epsilon)</math>: the time needed to get within total variation distance <math>\epsilon</math> of <math>\pi</math>, starting from a worst-case state.
Conditions:
- Lazy random walk: <math>P_{i,i}\ge\frac{1}{2}</math> for every <math>i\in\mathcal{S}</math>, so that <math>1=\lambda_1\ge\cdots\ge\lambda_N\ge 0</math> and <math>\lambda_{\max}=\max(|\lambda_2|,|\lambda_N|)=\lambda_2</math>.
- The Markov chain is time-reversible: <math>\pi_{i}P_{i,j}=\pi_{j}P_{j,i}</math> for all <math>i,j\in\mathcal{S}</math>.
- The stationary distribution <math>\pi</math> is the uniform distribution, that is, <math>\pi_i=\frac{1}{N}</math> for all <math>i\in\mathcal{S}</math>.
Then:
Theorem
: The mixing time <math>\tau(\epsilon)=O\left(\frac{\ln N+\ln(1/\epsilon)}{1-\lambda_2}\right)</math>.
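To see the theorem in action, here is a minimal numerical sketch (assuming Python with NumPy; the example chain, a lazy random walk on a cycle, is an illustrative choice and not part of the notes). It builds the transition matrix, reads off <math>\lambda_2</math>, and evaluates the expression inside the <math>O(\cdot)</math>:

```python
import numpy as np

N = 16  # number of states

# Lazy random walk on a cycle of N states: stay put with probability 1/2,
# otherwise step to one of the two neighbors uniformly at random.
P = np.zeros((N, N))
for i in range(N):
    P[i, i] = 0.5
    P[i, (i - 1) % N] = 0.25
    P[i, (i + 1) % N] = 0.25

# The walk is time-reversible with uniform stationary distribution, so P is
# symmetric and its eigenvalues are real; laziness makes them nonnegative.
eigenvalues = np.sort(np.linalg.eigvalsh(P))[::-1]
lambda_2 = eigenvalues[1]

eps = 0.01
# The quantity inside the O(.) bound on the mixing time tau(eps).
bound = (np.log(N) + np.log(1 / eps)) / (1 - lambda_2)
print(f"lambda_2 = {lambda_2:.4f}, mixing time = O({bound:.1f})")
```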
Conductance and the mixing time
For many problems, such as card shuffling, the state space is exponentially large, so estimating <math>\lambda_2</math> directly becomes very difficult. The following technique, based on conductance, is due to Jerrum and Sinclair and is fundamental to the theory of rapidly mixing random walks.
Definition (conductance)
: The conductance of an irreducible Markov chain with finite state space <math>\mathcal{S}</math>, transition matrix <math>P</math>, and stationary distribution <math>\pi</math> is
:: <math>\Phi=\min_{\overset{S\subset\mathcal{S}}{0<\pi(S)\le\frac{1}{2}}}\frac{\sum_{i\in S,j\not\in S}\pi_iP_{i,j}}{\pi(S)},</math>
: where <math>\pi(S)=\sum_{i\in S}\pi_i</math> is the probability mass of <math>S</math> under the stationary distribution.
The definition of conductance looks quite similar to the expansion ratio of graphs. In fact, for the random walk on an undirected <math>d</math>-regular graph <math>G</math>, there is a straightforward relation between the conductance <math>\Phi</math> of the walk and the expansion ratio <math>\phi(G)</math> of the graph:
- <math>\Phi=\frac{\phi(G)}{d}.</math>
This is because, with uniform <math>\pi</math> and <math>P_{i,j}=\frac{1}{d}</math> along every edge, the flow out of a set <math>S</math> is <math>\frac{|\partial S|}{dN}</math> while <math>\pi(S)=\frac{|S|}{N}</math>. Very informally, the conductance can be seen as a weighted, normalized version of the expansion ratio.
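The relation can be checked by brute force on a small example. Below is a sketch (Python with NumPy assumed; the 2-regular cycle and the helper functions `conductance` and `expansion` are illustrative choices, not from the notes) that computes both sides of the identity:

```python
import numpy as np
from itertools import combinations

N, d = 8, 2                                  # the N-cycle is 2-regular
adj = [[(i - 1) % N, (i + 1) % N] for i in range(N)]

# Simple random walk on the cycle: move to a uniform random neighbor.
P = np.zeros((N, N))
for i in range(N):
    for j in adj[i]:
        P[i, j] = 1 / d
pi = np.full(N, 1 / N)                       # uniform stationary distribution

def conductance(P, pi, n):
    # Brute-force minimum over all S with pi(S) <= 1/2 (fine for tiny n).
    best = float("inf")
    for k in range(1, n // 2 + 1):
        for S in map(set, combinations(range(n), k)):
            flow = sum(pi[i] * P[i, j] for i in S for j in range(n) if j not in S)
            best = min(best, flow / sum(pi[i] for i in S))
    return best

def expansion(adj, n):
    # Expansion ratio phi(G): minimum of |boundary(S)| / |S| over |S| <= n/2.
    best = float("inf")
    for k in range(1, n // 2 + 1):
        for S in map(set, combinations(range(n), k)):
            boundary = sum(1 for i in S for j in adj[i] if j not in S)
            best = min(best, boundary / k)
    return best

print(conductance(P, pi, N), expansion(adj, N) / d)  # both print 0.25
```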
The following theorem states a version of Cheeger's inequality for the conductance.
Theorem (Jerrum-Sinclair 1988)
: Under the conditions above,
:: <math>1-2\Phi\le\lambda_2\le 1-\frac{\Phi^2}{2}.</math>
The inequality can be equivalently written in terms of the spectral gap:
- <math>\frac{\Phi^2}{2}\le 1-\lambda_2\le 2\Phi,</math>
thus a large conductance implies a large spectral gap, which in turn implies rapid mixing of the random walk.
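Both sides of the inequality can also be verified numerically on a small chain. A sketch (Python with NumPy assumed; the lazy cycle walk is again an illustrative choice) that computes <math>\lambda_2</math> exactly and the conductance by brute force:

```python
import numpy as np
from itertools import combinations

N = 8
# Lazy random walk on the N-cycle: hold with probability 1/2.
P = np.zeros((N, N))
for i in range(N):
    P[i, i] = 0.5
    P[i, (i - 1) % N] = P[i, (i + 1) % N] = 0.25
pi = np.full(N, 1 / N)  # uniform stationary distribution

lambda_2 = np.sort(np.linalg.eigvalsh(P))[::-1][1]  # P is symmetric here
gap = 1 - lambda_2

# Conductance by brute force over all S with pi(S) <= 1/2.
Phi = min(
    sum(pi[i] * P[i, j] for i in S for j in range(N) if j not in S)
    / sum(pi[i] for i in S)
    for k in range(1, N // 2 + 1)
    for S in map(set, combinations(range(N), k))
)

# Check the Cheeger inequality: Phi^2/2 <= 1 - lambda_2 <= 2*Phi.
print(f"Phi = {Phi:.4f}, gap = {gap:.4f}")
print(Phi**2 / 2 <= gap <= 2 * Phi)  # True
```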