Randomized Algorithms (Spring 2010)/Random sampling

== Conductance ==

=== Recap ===





=== Conductance and the mixing time ===

We have seen how the mixing time of a reversible Markov chain is related to [math]\displaystyle{ \lambda_\max }[/math]. But how do we compute [math]\displaystyle{ \lambda_\max }[/math] of a Markov chain? There are two levels of complication:

  • Is [math]\displaystyle{ \lambda_\max=|\lambda_2| }[/math] or [math]\displaystyle{ \lambda_\max=|\lambda_n| }[/math]?
This is rather easy to resolve by considering lazy walks. For a time-reversible Markov chain with transition matrix [math]\displaystyle{ P }[/math], we can make a lazy random walk out of it by letting [math]\displaystyle{ Q=\frac{1}{2}(I+P) }[/math] be the new transition matrix. It is easy to see that [math]\displaystyle{ Q }[/math] is also time-reversible and has the same stationary distribution. A lovely property of [math]\displaystyle{ Q }[/math] is that it always has [math]\displaystyle{ \lambda_\max=\lambda_2 }[/math], which is easy to check: each eigenvalue of [math]\displaystyle{ Q }[/math] is of the form [math]\displaystyle{ \frac{1+\lambda_i}{2} }[/math] for an eigenvalue [math]\displaystyle{ \lambda_i }[/math] of [math]\displaystyle{ P }[/math], hence nonnegative. Therefore, from now on, we only consider lazy random walks and always assume that [math]\displaystyle{ \lambda_\max=\lambda_2 }[/math] (see the sketch after this list).
  • Another complication is that [math]\displaystyle{ \lambda_2 }[/math] is hard to estimate. For many problems, such as card shuffling, the state space is exponentially large, so estimating [math]\displaystyle{ \lambda_2 }[/math] directly becomes especially hard.
This problem is quite difficult. The following technique, based on conductance, is due to Jerrum and Sinclair, and is fundamental to the theory of rapidly mixing random walks.
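To see the lazy-walk trick in action, here is a minimal Python sketch (the 3-state chain is a hypothetical example, not from the notes) verifying that [math]\displaystyle{ Q=\frac{1}{2}(I+P) }[/math] has only nonnegative eigenvalues:

<pre>
# Minimal sketch: the eigenvalues of the lazy chain Q = (I + P)/2 are
# (1 + lambda_i)/2 for the eigenvalues lambda_i of P, hence all >= 0.
# The 3-state chain (random walk on a triangle) is a hypothetical example.
import numpy as np

P = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])   # reversible, uniform stationary distribution

Q = 0.5 * (np.eye(3) + P)        # lazy walk: stay put with probability 1/2

print(np.linalg.eigvalsh(P))     # [-0.5 -0.5  1. ]  -- lambda_max = |lambda_n|
print(np.linalg.eigvalsh(Q))     # [ 0.25  0.25  1. ] -- lambda_max = lambda_2
</pre>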
Definition (conductance)
The conductance of an irreducible Markov chain with finite state space [math]\displaystyle{ \Omega }[/math], transition matrix [math]\displaystyle{ P }[/math], and stationary distribution [math]\displaystyle{ \pi }[/math], is defined as
[math]\displaystyle{ \Phi=\min_{\overset{S\subset\Omega}{0\lt \pi(S)\le\frac{1}{2}}} \frac{\sum_{i\in S, j\not\in S}\pi_ip_{ij}}{\pi(S)} }[/math]
where [math]\displaystyle{ \pi(S)=\sum_{i\in S}\pi_i }[/math] is the probability of [math]\displaystyle{ S\subset \Omega }[/math] under the stationary distribution [math]\displaystyle{ \pi }[/math].
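For intuition, the conductance can be computed directly from this definition when [math]\displaystyle{ \Omega }[/math] is small. The following brute-force Python sketch (an illustration only, not an efficient algorithm) enumerates all subsets [math]\displaystyle{ S }[/math] with [math]\displaystyle{ 0\lt \pi(S)\le\frac{1}{2} }[/math]:

<pre>
# Brute-force conductance from the definition; only feasible when the
# state space is small enough to enumerate all subsets.
import itertools

def conductance(P, pi):
    n = len(pi)
    Phi = float('inf')
    for r in range(1, n):                       # all nonempty proper subsets S
        for S in itertools.combinations(range(n), r):
            pS = sum(pi[i] for i in S)
            if not (0 < pS <= 0.5):             # restrict to pi(S) <= 1/2
                continue
            flow = sum(pi[i] * P[i][j]          # probability flow leaving S
                       for i in S for j in range(n) if j not in S)
            Phi = min(Phi, flow / pS)
    return Phi
</pre>

For instance, for the lazy triangle walk [math]\displaystyle{ Q }[/math] in the sketch above, conductance(Q, [1/3]*3) returns 0.5.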

The definition of conductance looks quite similar to the expansion ratio of graphs. In fact, for the random walk on an undirected [math]\displaystyle{ d }[/math]-regular graph [math]\displaystyle{ G }[/math], there is a straightforward relation between the conductance [math]\displaystyle{ \Phi }[/math] of the walk and the expansion ratio [math]\displaystyle{ \phi(G) }[/math] of the graph:

[math]\displaystyle{ \Phi=\frac{\phi(G)}{d}. }[/math]

Very informally, the conductance can be seen as a weighted, normalized version of the expansion ratio.
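As a sanity check of this relation (a hypothetical example where [math]\displaystyle{ G }[/math] is the 4-cycle, so [math]\displaystyle{ d=2 }[/math]), the sketch below computes [math]\displaystyle{ \phi(G) }[/math] by brute force and compares it with [math]\displaystyle{ d\cdot\Phi }[/math], reusing conductance() from the sketch above:

<pre>
# Check Phi = phi(G)/d on the 4-cycle C_4 (2-regular), a hypothetical example.
import itertools
import numpy as np

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]])     # adjacency matrix of C_4
d, n = 2, 4
P = A / d                        # simple random walk on C_4
pi = [1 / n] * n                 # stationary distribution is uniform

def expansion(A):
    n = len(A)
    phi = float('inf')
    for r in range(1, n // 2 + 1):            # subsets with |S| <= n/2
        for S in itertools.combinations(range(n), r):
            cut = sum(A[i][j] for i in S for j in range(n) if j not in S)
            phi = min(phi, cut / len(S))      # |boundary(S)| / |S|
    return phi

print(expansion(A))              # 1.0
print(d * conductance(P, pi))    # 1.0, matching phi(G)
</pre>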

The following theorem states a Cheeger-type inequality for the conductance.

Theorem (Jerrum-Sinclair 1988)
In a reversible Markov chain, [math]\displaystyle{ 1-2\Phi\le\lambda_2\le 1-\frac{\Phi^2}{2} }[/math].

The inequality can be equivalently written in terms of the spectral gap:

[math]\displaystyle{ \frac{\Phi^2}{2}\le1-\lambda_2\le 2\Phi }[/math]

thus a large conductance implies a large spectral gap, which in turn implies rapid mixing of the random walk.
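To illustrate, the following sketch numerically checks both bounds on the lazy walk on the 4-cycle (continuing the hypothetical example, and reusing P, pi, and conductance() from the sketches above):

<pre>
# Numerical check of Phi^2/2 <= 1 - lambda_2 <= 2*Phi on the lazy walk on C_4.
import numpy as np

Q = 0.5 * (np.eye(4) + P)                  # lazy walk, so lambda_max = lambda_2
lam2 = np.linalg.eigvalsh(Q)[-2]           # second largest eigenvalue: 0.5
Phi = conductance(Q, pi)                   # 0.25

print(Phi**2 / 2, 1 - lam2, 2 * Phi)       # 0.03125  0.5  0.5
print(Phi**2 / 2 <= 1 - lam2 <= 2 * Phi)   # True: the upper bound is tight here
</pre>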

=== Canonical paths ===

== Counting via sampling ==