Randomized Algorithms (Spring 2014): Expander Graphs and Mixing
=Expander Graphs=
According to wikipedia:
:"[http://en.wikipedia.org/wiki/Expander_graph Expander graphs] have found extensive applications in computer science, in designing algorithms, [http://en.wikipedia.org/wiki/Error_correcting_code error correcting codes], [http://en.wikipedia.org/wiki/Extractor_(mathematics) extractors], [http://en.wikipedia.org/wiki/Pseudorandom_generator pseudorandom generators], [http://en.wikipedia.org/wiki/Sorting_network sorting networks] and robust computer networks. They have also been used in proofs of many important results in computational complexity theory, such as [http://en.wikipedia.org/wiki/SL_(complexity) SL]=[http://en.wikipedia.org/wiki/L_(complexity) L] and the [http://en.wikipedia.org/wiki/PCP_theorem PCP theorem]. In [http://en.wikipedia.org/wiki/Cryptography cryptography] too, expander graphs are used to construct hash functions."

We will not explore everything about expander graphs, but will focus on the performance of random walks on expander graphs.

Consider an undirected (multi-)graph <math>G(V,E)</math>, where parallel edges between two vertices are allowed.

Some notations:
* For <math>S,T\subset V</math>, let <math>E(S,T)=\{uv\in E\mid u\in S,v\in T\}</math>.
* The '''Edge Boundary''' of a set <math>S\subset V</math>, denoted <math>\partial S\,</math>, is <math>\partial S = E(S, \bar{S})</math>.
 
{{Theorem
|Definition (Graph expansion)|
:The '''expansion ratio''' of an undirected graph <math>G</math> on <math>n</math> vertices, is defined as
::<math>
\phi(G)=\min_{\overset{S\subset V}{|S|\le\frac{n}{2}}} \frac{|\partial S|}{|S|}.
</math>
}}
 
'''Expander graphs''' are '''<math>d</math>-regular''' (multi)graphs with <math>d=O(1)</math> and <math>\phi(G)=\Omega(1)</math>.
 
This definition captures the following properties of expander graphs:
* Expander graphs are sparse graphs. This is because the number of edges is <math>dn/2=O(n)</math>.
* Despite the sparsity, expander graphs have good connectivity, guaranteed by the constant lower bound on the expansion ratio.
* This one is implicit: an expander graph is really a ''family of graphs'' <math>\{G_n\}</math>, where <math>n</math> is the number of vertices. The asymptotic orders <math>O(1)</math> and <math>\Omega(1)</math> in the definition are relative to the number of vertices <math>n</math>, which tends to infinity.
 
The following fact is directly implied by the definition.
:;An expander graph has diameter <math>O(\log n)</math>.
The proof is left as an exercise.
 
For a vertex set <math>S</math>, the size of the edge boundary <math>|\partial S|</math> can be seen as the "perimeter" of <math>S</math>, and <math>|S|</math> can be seen as the "volume" of <math>S</math>. The expansion property can be interpreted as a combinatorial version of the [http://en.wikipedia.org/wiki/Isoperimetric_inequality isoperimetric inequality].
 
;Vertex expansion
:We can alternatively define the '''vertex expansion'''. For a vertex set <math>S\subset V</math>, its '''vertex boundary''', denoted <math>\delta S\,</math>, is defined as
:: <math>\delta S=\{u\not\in S\mid uv\in E \mbox{ and }v\in S\}</math>,
:and the '''vertex expansion''' of a graph <math>G</math> is <math>\psi(G)=\min_{\overset{S\subset V}{|S|\le\frac{n}{2}}} \frac{|\delta S|}{|S|}</math>.
 
== Existence of expander graph ==
We will show the existence of expander graphs by the probabilistic method. In order to do so, we need to generate random <math>d</math>-regular graphs.
 
Suppose that <math>d</math> is even. We can generate a random <math>d</math>-regular graph <math>G(V,E)</math> as follows:
* Let <math>V</math> be the vertex set. Uniformly and independently choose <math>\frac{d}{2}</math> Hamiltonian cycles on <math>V</math>, i.e., <math>\frac{d}{2}</math> uniformly random cyclic orderings of all the vertices.
* For each vertex <math>v</math> and each cycle, assuming that the two neighbors of <math>v</math> in that cycle are <math>w</math> and <math>u</math>, add the two edges <math>wv</math> and <math>uv</math> to <math>E</math>.
 
The resulting <math>G(V,E)</math> is a multigraph. That is, it may have multiple edges between two vertices. We will show that <math>G(V,E)</math> is an expander graph with high probability. Formally, for some constant <math>d</math> and constant <math>\alpha</math>,
:<math>\Pr[\phi(G)\ge \alpha]=1-o(1)</math>.
 
By the probabilistic method, this shows that there exist expander graphs. In fact, the above probability bound shows something much stronger: it shows that almost every regular graph is an expander.
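As a concrete illustration, the cycle-based construction can be sketched in a few lines of Python; this sketch (including the adjacency-matrix representation) is our own and not part of the notes:

```python
import random

def random_regular_multigraph(n, d, seed=None):
    """Generate a random d-regular multigraph on n vertices (d even) as the
    union of d/2 independent, uniformly random Hamiltonian cycles on V.
    Returns the adjacency matrix A, where A[u][v] counts parallel edges."""
    assert d % 2 == 0 and n >= 3, "the construction needs even d and n >= 3"
    rng = random.Random(seed)
    A = [[0] * n for _ in range(n)]
    for _ in range(d // 2):
        perm = list(range(n))
        rng.shuffle(perm)                  # a uniformly random cyclic ordering of V
        for i in range(n):
            u, v = perm[i], perm[(i + 1) % n]
            A[u][v] += 1                   # each cycle contributes one edge uv
            A[v][u] += 1
    return A

A = random_regular_multigraph(10, 4, seed=1)
degrees = [sum(row) for row in A]          # each cycle gives every vertex degree 2
```

Each cycle contributes exactly two edge-endpoints to every vertex, so the result is <math>d</math>-regular by construction; parallel edges may appear when different cycles happen to use the same pair of vertices.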
 
Recall that <math>\phi(G)=\min_{S:|S|\le\frac{n}{2}}\frac{|\partial S|}{|S|}</math>. We call such <math>S\subset V</math> that <math>\frac{|\partial S|}{|S|}<\alpha</math> a "bad <math>S</math>". Then <math>\phi(G)< \alpha</math> if and only if there exists a bad <math>S</math> of size at most <math>\frac{n}{2}</math>. Therefore,
:<math>
\begin{align}
\Pr[\phi(G)< \alpha]
&=
\Pr\left[\min_{S:|S|\le\frac{n}{2}}\frac{|\partial S|}{|S|}<\alpha\right]\\
&\le
\sum_{k=1}^\frac{n}{2}\Pr[\,\exists \mbox{bad }S\mbox{ of size }k\,]\\
&\le
\sum_{k=1}^\frac{n}{2}\sum_{S\in{V\choose k}}\Pr[\,S\mbox{ is bad}\,]
\end{align}
</math>
Let <math>R\subset S</math> be the set of vertices in <math>S</math> which have neighbors in <math>\bar{S}</math>, and let <math>r=|R|</math>. It is obvious that <math>|\partial S|\ge r</math>; thus, for a bad <math>S</math>, <math>r<\alpha k</math>. Therefore, there are at most <math>\sum_{r=1}^{\alpha k}{k \choose r}</math> possible choices of such <math>R</math>. For any fixed choice of <math>R</math>, the probability that an edge picked by a vertex in <math>S-R</math> connects to a vertex in <math>S</math> is at most <math>k/n</math>, and there are <math>d(k-r)</math> such edges. Hence, for any fixed <math>S</math> of size <math>k</math> and <math>R</math> of size <math>r</math>, the probability that all neighbors of all vertices in <math>S-R</math> are in <math>S</math> is at most <math>\left(\frac{k}{n}\right)^{d(k-r)}</math>. By the union bound, for any fixed <math>S</math> of size <math>k</math>,
:<math>
\begin{align}
\Pr[\,S\mbox{ is bad}\,]
&\le
\sum_{r=1}^{\alpha k}{k \choose r}\left(\frac{k}{n}\right)^{d(k-r)}
\le
\alpha k {k \choose \alpha k}\left(\frac{k}{n}\right)^{dk(1-\alpha)}
\end{align}
</math>
Therefore,
:<math>
\begin{align}
\Pr[\phi(G)< \alpha]
&\le
\sum_{k=1}^\frac{n}{2}\sum_{S\in{V\choose k}}\Pr[\,S\mbox{ is bad}\,]\\
&\le
\sum_{k=1}^\frac{n}{2}{n\choose k}\alpha k {k \choose \alpha k}\left(\frac{k}{n}\right)^{dk(1-\alpha)} \\
&\le
\sum_{k=1}^\frac{n}{2}\left(\frac{en}{k}\right)^k\alpha k \left(\frac{ek}{\alpha k}\right)^{\alpha k}\left(\frac{k}{n}\right)^{dk(1-\alpha)}&\quad (\mbox{Stirling formula }{n\choose k}\le\left(\frac{en}{k}\right)^k)\\
&\le
\sum_{k=1}^\frac{n}{2}\exp(O(k))\left(\frac{k}{n}\right)^{k(d(1-\alpha)-1)}.
\end{align}
</math>
The last line is <math>o(1)</math> when <math>d\ge\frac{2}{1-\alpha}</math>. Therefore, <math>G</math> is an expander graph with expansion ratio <math>\alpha</math> with high probability for suitable choices of constant <math>d</math> and constant <math>\alpha</math>.
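To get a sense of how fast this probability vanishes, one can evaluate the explicit bound from the third line of the calculation above for concrete parameters (the values <math>n=10000</math>, <math>d=8</math>, <math>\alpha=0.5</math> below are our own choices for illustration):

```python
import math

def expansion_failure_bound(n, d, alpha):
    """Evaluate sum_k (en/k)^k * (alpha*k) * (e/alpha)^(alpha*k) * (k/n)^(d*k*(1-alpha)),
    the explicit upper bound on Pr[phi(G) < alpha], working in log-space."""
    total = 0.0
    for k in range(1, n // 2 + 1):
        log_term = (k * math.log(math.e * n / k)
                    + math.log(alpha * k)
                    + alpha * k * math.log(math.e / alpha)
                    + d * k * (1 - alpha) * math.log(k / n))
        if log_term > -700:               # math.exp underflows below about -745
            total += math.exp(log_term)
    return total

p_bound = expansion_failure_bound(10000, 8, 0.5)   # tiny: almost every graph expands
```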
 
== Computing graph expansion ==
Computing the graph expansion seems hard, because the definition involves a minimum over exponentially many subsets of vertices. In fact, the problem of deciding whether a graph is an expander is '''co-NP-complete'''. For a non-expander <math>G</math>, a vertex set <math>S\subset V</math> with low expansion ratio is a certificate that <math>G</math> is not an expander, and it can be verified in polynomial time. However, there is no efficient algorithm for computing <math>\phi(G)</math> unless '''NP'''='''P'''.
 
The expansion ratio of a graph is closely related to the [http://en.wikipedia.org/wiki/Cut_(graph_theory)#Sparsest_cut sparsest cut] of the graph, which is the dual problem of the [http://en.wikipedia.org/wiki/Multi-commodity_flow_problem multicommodity flow] problem, both NP-complete. Studies of these two problems revolutionized the area of approximation algorithms.
 
We will now see that although it is hard to compute the expansion ratio exactly, the expansion ratio can be approximated by an efficiently computable algebraic quantity of the graph.
 
= Spectral Graph Theory=
== Graph spectrum==
The '''adjacency matrix''' of an <math>n</math>-vertex graph <math>G</math>, denoted <math>A = A(G)</math>, is an <math>n\times n</math> matrix where <math>A(u,v)</math> is the number of edges in <math>G</math> between vertex <math>u</math> and vertex <math>v</math>. Because <math>A</math> is a symmetric matrix with real entries, due to the [http://en.wikipedia.org/wiki/Spectral_theorem spectral theorem], it has real eigenvalues <math>\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_n</math>, which associate with an orthonormal system of eigenvectors <math>v_1,v_2,\ldots, v_n\,</math> with <math>Av_i=\lambda_i v_i\,</math>. We call the eigenvalues of <math>A</math> the '''spectrum''' of the graph <math>G</math>.
 
The spectrum of a graph contains a lot of information about the graph. For example, suppose that <math>G</math> is <math>d</math>-regular; then the following lemma holds.
{{Theorem
|Lemma|
# <math>|\lambda_i|\le d</math> for all <math>1\le i\le n</math>.
# <math>\lambda_1=d</math> and the corresponding eigenvector is <math>(\frac{1}{\sqrt{n}},\frac{1}{\sqrt{n}},\ldots,\frac{1}{\sqrt{n}})</math>.
# <math>G</math> is connected if and only if <math>\lambda_1>\lambda_2</math>.
# If <math>G</math> is bipartite then <math>\lambda_1=-\lambda_n</math>.
}}
{{Proof| Let <math>A</math> be the adjacency matrix of <math>G</math>, with entries <math>a_{ij}</math>. It is obvious that <math>\sum_{j}a_{ij}=d\,</math> for any <math>i</math>.
*(1) Suppose that <math>Ax=\lambda x, x\neq \mathbf{0}</math>, and let <math>x_i</math> be an entry of <math>x</math> with the largest absolute value. Since <math>(Ax)_i=\lambda x_i</math>, we have
::<math>
\sum_{j}a_{ij}x_j=\lambda x_i,\,
</math>
:and so
::<math>
|\lambda||x_i|=\left|\sum_{j}a_{ij}x_j\right|\le \sum_{j}a_{ij}|x_j|\le \sum_{j}a_{ij}|x_i| \le d|x_i|.
</math>
:Thus <math>|\lambda|\le d</math>.
*(2) is easy to check.
*(3) Let <math>x</math> be the nonzero vector for which <math>Ax=dx</math>, and let <math>x_i</math> be an entry of <math>x</math> with the largest absolute value. Since <math>(Ax)_i=d x_i</math>, we have
::<math>
\sum_{j}a_{ij}x_j=d x_i.\,
</math>
:Since <math>\sum_{j}a_{ij}=d\,</math> and by the maximality of <math>x_i</math>, it follows that <math>x_j=x_i</math> for all <math>j</math> such that <math>a_{ij}>0</math>. Thus, <math>x_i=x_j</math> if <math>i</math> and <math>j</math> are adjacent, and hence <math>x_i=x_j</math> whenever <math>i</math> and <math>j</math> are connected by a path. If <math>G</math> is connected, all vertices are connected to each other, thus all <math>x_i</math> are equal. This shows that for connected <math>G</math>, the eigenvalue <math>d=\lambda_1</math> has multiplicity 1, thus <math>\lambda_1>\lambda_2</math>.
 
:If otherwise <math>G</math> is disconnected, then for two different components, we have <math>Ax=dx</math> and <math>Ay=dy</math>, where the entries of <math>x</math> and <math>y</math> are nonzero only for the vertices in their respective components. Then <math>A(\alpha x+\beta y)=d(\alpha x+\beta y)</math>. Thus, the multiplicity of <math>d</math> is greater than 1, so <math>\lambda_1=\lambda_2</math>.
 
*(4) If <math>G</math> is bipartite, then the vertex set can be partitioned into two disjoint nonempty sets <math>V_1</math> and <math>V_2</math> such that all edges have one endpoint in each of <math>V_1</math> and <math>V_2</math>. Algebraically, this means that the adjacency matrix can be organized into the form
::<math>
P^TAP=\begin{bmatrix}
0 & B\\
B^T & 0
\end{bmatrix}
</math>
:where <math>P</math> is a permutation matrix, which does not change the eigenvalues.
:If <math>x</math> is an eigenvector corresponding to the eigenvalue <math>\lambda</math>, then <math>x'</math> which is obtained from <math>x</math> by changing the sign of the entries corresponding to vertices in <math>V_2</math>, is an eigenvector corresponding to the eigenvalue <math>-\lambda</math>. It follows that the spectrum of a bipartite graph is symmetric with respect to 0.
}}
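The four claims of the lemma can be checked numerically on a small example, say the 6-cycle, which is 2-regular, connected, and bipartite (the example and the numpy check are ours, not part of the notes):

```python
import numpy as np

n, d = 6, 2
A = np.zeros((n, n))
for i in range(n):                           # adjacency matrix of the cycle C_6
    A[i][(i + 1) % n] = A[(i + 1) % n][i] = 1

lam = np.sort(np.linalg.eigvalsh(A))[::-1]   # lambda_1 >= ... >= lambda_n
uniform = np.ones(n) / np.sqrt(n)            # claimed eigenvector of lambda_1

# (1) |lambda_i| <= d; (2) lambda_1 = d with the uniform eigenvector;
# (3) C_6 is connected, so lambda_1 > lambda_2;
# (4) C_6 is bipartite, so lambda_n = -lambda_1
```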
 
== Cheeger's Inequality ==
One of the most exciting results in spectral graph theory is the following theorem, which relates the graph expansion to the spectral gap.
{{Theorem
|Theorem (Cheeger's inequality)|
:Let <math>G</math> be a <math>d</math>-regular graph with spectrum <math>\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_n</math>. Then
::<math>
\frac{d-\lambda_2}{2}\le \phi(G) \le \sqrt{2d(d-\lambda_2)}.
</math>
}}
The theorem was first stated for Riemannian manifolds, and was proved by Cheeger and Buser (for different directions of the inequality). The discrete case was proved independently by Dodziuk and by Alon-Milman.
 
For a <math>d</math>-regular graph, the quantity <math>(d-\lambda_2)</math> is called the '''spectral gap'''. The name is due to the fact that it is the gap between the first and the second largest eigenvalues of a graph.
 
If we write <math>\alpha=1-\frac{\lambda_2}{d}</math> (sometimes called the normalized spectral gap), Cheeger's inequality takes a nicer form:
:<math>
\frac{\alpha}{2}\le \frac{\phi}{d}\le\sqrt{2\alpha} </math> or equivalently <math>
\frac{1}{2}\left(\frac{\phi}{d}\right)^2\le \alpha\le 2\left(\frac{\phi}{d}\right)
</math>.
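Both directions of the inequality are easy to verify numerically on a small regular graph. The sketch below (our own example, not part of the notes) uses the 3-dimensional hypercube, which is 3-regular on 8 vertices, and computes <math>\phi(G)</math> by brute force:

```python
import itertools
import math
import numpy as np

n, d = 8, 3
A = np.zeros((n, n))
for u in range(n):                      # hypercube Q_3: vertices are 3-bit strings,
    for b in range(d):                  # adjacent iff they differ in exactly one bit
        A[u][u ^ (1 << b)] = 1

lam2 = np.sort(np.linalg.eigvalsh(A))[::-1][1]    # second largest eigenvalue

phi = min(                              # brute-force expansion ratio over |S| <= n/2
    sum(A[u][v] for u in S for v in range(n) if v not in S) / len(S)
    for k in range(1, n // 2 + 1)
    for S in itertools.combinations(range(n), k)
)

lower = (d - lam2) / 2                  # Cheeger: lower <= phi <= upper
upper = math.sqrt(2 * d * (d - lam2))
```

For <math>Q_3</math> the spectrum of <math>A</math> is <math>\{3,1,1,1,-1,-1,-1,-3\}</math>, so the lower bound <math>(d-\lambda_2)/2=1</math> is attained exactly by a subcube of 4 vertices.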
 
=== Optimization Characterization of Eigenvalues ===
{{Theorem
|Theorem (Rayleigh-Ritz theorem)|
:Let <math>A</math> be a symmetric <math>n\times n</math> matrix. Let <math>\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_n</math> be the eigenvalues of <math>A</math> and <math>v_1,v_2,\ldots,v_n</math> be the corresponding eigenvectors. Then
::<math>
\begin{align}
\lambda_1
&=\max_{x\in\mathbb{R}^n}\frac{x^TAx}{x^Tx}
\end{align}
</math>  and  <math>
\begin{align}
\lambda_2
&=\max_{x\bot v_1}\frac{x^TAx}{x^Tx}.
\end{align}
</math>
}}
{{Proof|
Without loss of generality, we may assume that <math>v_1,v_2,\ldots,v_n</math> form an orthonormal eigenbasis.
Then it holds that
:<math>\frac{v_1^TAv_1}{v_1^Tv_1}=\frac{\lambda_1v_1^Tv_1}{v_1^Tv_1}=\lambda_1</math>,
thus we have <math>\max_{x\in\mathbb{R}^n}\frac{x^TAx}{x^Tx}\ge\lambda_1</math>.
 
Let <math>x\in\mathbb{R}^n</math> be an arbitrary vector and let <math>y=\frac{x}{\sqrt{x^Tx}}=\frac{x}{\|x\|}</math> be its normalization.
Since <math>v_1,v_2,\ldots,v_n</math> are orthonormal basis, <math>y</math> can be expressed as <math>y=\sum_{i=1}^nc_iv_i</math>. Then
:<math>
\begin{align}
\frac{x^TAx}{x^Tx}
&=y^TAy
=\left(\sum_{i=1}^nc_iv_i\right)^TA\left(\sum_{i=1}^nc_iv_i\right)
=\left(\sum_{i=1}^nc_iv_i\right)^T\left(\sum_{i=1}^n\lambda_ic_iv_i\right)\\
&=\sum_{i=1}^n\lambda_ic_i^2
\le\lambda_1\sum_{i=1}^nc_i^2
=\lambda_1\|y\|^2
=\lambda_1.
\end{align}
</math>
Therefore, <math>\max_{x\in\mathbb{R}^n}\frac{x^TAx}{x^Tx}\le\lambda_1</math>. Altogether we have <math>\max_{x\in\mathbb{R}^n}\frac{x^TAx}{x^Tx}=\lambda_1</math>.
 
The equality <math>\max_{x\bot v_1}\frac{x^TAx}{x^Tx}=\lambda_2</math> is proved similarly: in the first part take <math>x=v_2</math> to show that <math>\max_{x\bot v_1}\frac{x^TAx}{x^Tx}\ge\lambda_2</math>; and in the second part take an arbitrary <math>x\bot v_1</math> and <math>y=\frac{x}{\|x\|}</math>. Notice that <math>y\bot v_1</math>, thus <math>y=\sum_{i=1}^nc_iv_i</math> with <math>c_1=0</math>.
}}
 
The Rayleigh-Ritz Theorem is a special case of a fundamental theorem in linear algebra, called the [http://en.wikipedia.org/wiki/Min-max_theorem Courant-Fischer theorem], which characterizes the eigenvalues of a symmetric matrix by a series of optimizations:
{{Theorem
|Theorem (Courant-Fischer theorem)|
:Let <math>A</math> be a symmetric matrix with eigenvalues <math>\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_n</math>. Then
::<math>
\begin{align}
\lambda_k
&=\max_{v_1,v_2,\ldots,v_{n-k}\in \mathbb{R}^n}\min_{\overset{x\in\mathbb{R}^n, x\neq \mathbf{0}}{x\bot v_1,v_2,\ldots,v_{n-k}}}\frac{x^TAx}{x^Tx}\\
&=
\min_{v_1,v_2,\ldots,v_{k-1}\in \mathbb{R}^n}\max_{\overset{x\in\mathbb{R}^n, x\neq \mathbf{0}}{x\bot v_1,v_2,\ldots,v_{k-1}}}\frac{x^TAx}{x^Tx}.
\end{align}
</math>
}}
 
=== Graph Laplacian ===
Let <math>G(V,E)</math> be a <math>d</math>-regular graph of <math>n</math> vertices and let <math>A</math> be its adjacency matrix. We define <math>L=dI-A</math> to be the '''Laplacian''' of the graph <math>G</math>. For a vector <math>x\in \mathbb{R}^V</math>, viewed as a real-valued function over the vertices, the Laplacian quadratic form <math>x^TLx</math> measures the "smoothness" of <math>x</math> over the graph topology, just as the [http://en.wikipedia.org/wiki/Laplace_operator Laplace operator] does for differentiable functions.
 
{{Theorem|Laplacian Property|
:For any vector <math>x\in\mathbb{R}^n</math>, it holds that
::<math>x^TLx=\sum_{uv\in E}(x_u-x_v)^2</math>.
}}
{{Proof|
:<math>\begin{align}
x^TLx
&=
\sum_{u,v\in V}x_u(dI-A)_{uv}x_v\\
&=
\sum_{u}\left(dx_u^2-\sum_{uv\in E}x_ux_v\right)\\
&=
\sum_{u\in V}\sum_{uv\in E}(x_u^2-x_ux_v).
\end{align}
</math>
On the other hand,
:<math>
\begin{align}
\sum_{uv\in E}(x_u-x_v)^2
&=
\sum_{uv\in E}\left(x_u^2-2x_ux_v+x_v^2\right)\\
&=
\sum_{uv\in E}\left((x_u^2-x_ux_v)+(x_v^2-x_vx_u)\right)\\
&=
\sum_{u\in V}\sum_{uv\in E}(x_u^2-x_ux_v).
\end{align}
</math>
}}
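The identity <math>x^TLx=\sum_{uv\in E}(x_u-x_v)^2</math> can be sanity-checked numerically with a random vector; the small test below on the 6-cycle is our own illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 2
edges = [(i, (i + 1) % n) for i in range(n)]   # edge list of the cycle C_6
A = np.zeros((n, n))
for u, v in edges:
    A[u][v] = A[v][u] = 1

L = d * np.eye(n) - A                          # graph Laplacian L = dI - A
x = rng.standard_normal(n)

quadratic_form = x @ L @ x                     # x^T L x
edge_sum = sum((x[u] - x[v]) ** 2 for u, v in edges)
```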
 
Applying the Rayleigh-Ritz theorem to the Laplacian matrix of the graph, we have the following "variational characterization" of the spectral gap <math>d-\lambda_2</math>.
 
{{Theorem
|Theorem (Variational Characterization)|
:Let <math>G(V,E)</math> be a <math>d</math>-regular graph of <math>n</math> vertices. Suppose that its adjacency matrix is <math>A</math>, whose eigenvalues are <math>\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_n</math>. Let <math>L=dI-A</math> be the Laplacian matrix. Then
::<math>
\begin{align}
d-\lambda_2
&=\min_{x\bot \boldsymbol{1}}\frac{x^TLx}{x^Tx}
=\min_{x\bot \boldsymbol{1}}\frac{\sum_{uv\in E}(x_u-x_v)^2}{\sum_{v\in V}x_v^2}.
\end{align}
</math>
}}
{{proof|
For a <math>d</math>-regular graph, we know that <math>\lambda_1=d</math> and <math>A\boldsymbol{1}=d\boldsymbol{1}</math>, thus <math>\boldsymbol{1}</math> is an eigenvector of <math>\lambda_1</math>. Due to the Rayleigh-Ritz theorem, it holds that <math>\lambda_2
=\max_{x\bot \boldsymbol{1}}\frac{x^TAx}{x^Tx}</math>. Then
:<math>
\begin{align}
\min_{x\bot \boldsymbol{1}}\frac{x^TLx}{x^Tx}
&=\min_{x\bot \boldsymbol{1}}\frac{x^T(dI-A)x}{x^Tx}\\
&=\min_{x\bot \boldsymbol{1}}\frac{dx^Tx-x^TAx}{x^Tx}\\
&=\min_{x\bot \boldsymbol{1}}\left(d-\frac{x^TAx}{x^Tx}\right)\\
&=d-\max_{x\bot \boldsymbol{1}}\frac{x^TAx}{x^Tx}\\
&=d-\lambda_2.
\end{align}
</math>
We know it holds for the graph Laplacian that <math>x^TLx=\sum_{uv\in E}(x_u-x_v)^2</math>. So the variational characterization of the second eigenvalue of graph is proved.
}}
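The variational characterization can be observed numerically: every vector orthogonal to <math>\boldsymbol{1}</math> has Rayleigh quotient of <math>L</math> at least <math>d-\lambda_2</math>. The random sampling below on the hypercube <math>Q_3</math> is our own sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 8, 3
A = np.zeros((n, n))
for u in range(n):                    # hypercube Q_3 as a small d-regular graph
    for b in range(d):
        A[u][u ^ (1 << b)] = 1
L = d * np.eye(n) - A                 # graph Laplacian

lam = np.sort(np.linalg.eigvalsh(A))[::-1]
gap = d - lam[1]                      # the spectral gap d - lambda_2

ones = np.ones(n)
quotients = []
for _ in range(1000):
    x = rng.standard_normal(n)
    x -= (x @ ones) / n * ones        # project x onto the subspace orthogonal to 1
    quotients.append((x @ L @ x) / (x @ x))
```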
 
=== Proof of Cheeger's Inequality ===
We will first give an informal explanation why Cheeger's inequality holds.
 
Recall that the expansion is defined as
:<math>
\phi(G)=\min_{\overset{S\subset V}{|S|\le\frac{n}{2}}}\frac{|\partial S|}{|S|}.
</math>
Let <math>\chi_S</math> be the '''characteristic vector''' of the set <math>S</math> such that
:<math>\chi_S(v)=\begin{cases}
1 & v\in S,\\
0 & v\not\in S.
\end{cases}</math>
It is easy to see that
:<math>
\frac{\sum_{uv\in E}(\chi_S(u)-\chi_S(v))^2}{\sum_{v\in V}\chi_S(v)^2}=\frac{|\partial S|}{|S|}.
</math>
Thus, the expansion can be expressed algebraically as
:<math>
\phi(G)=\min_{\overset{S\subset V}{|S|\le\frac{n}{2}}}\frac{\sum_{uv\in E}(\chi_S(u)-\chi_S(v))^2}{\sum_{v\in V}\chi_S(v)^2}=\min_{\overset{x\in\{0,1\}^n}{1\le\|x\|_1\le\frac{n}{2}}}\frac{\sum_{uv\in E}(x_u-x_v)^2}{\sum_{v\in V}x_v^2}.
</math>
On the other hand, due to the variational characterization of the spectral gap, we have
:<math>
d-\lambda_2=\min_{x\bot\boldsymbol{1}}\frac{\sum_{uv\in E}(x_u-x_v)^2}{\sum_{v\in V}x_v^2}.
</math>
 
We can easily observe the similarity between the two formulas. Both the expansion ratio <math>\phi(G)</math> and the spectral gap <math>d-\lambda_2</math> can be characterized by optimizations of the same objective function <math>\frac{\sum_{uv\in E}(x_u-x_v)^2}{\sum_{v\in V}x_v^2}</math> over different domains (for the spectral gap, the optimization is over all <math>x\bot\boldsymbol{1}</math>; and for the expansion ratio, it is over all nonzero vectors <math>x\in\{0,1\}^n</math> with at most <math>n/2</math> many 1-entries).
 
-------
;Notations
Throughout the proof, we assume that <math>G(V,E)</math> is the <math>d</math>-regular graph of <math>n</math> vertices, <math>A</math> is the adjacency matrix, whose eigenvalues are <math>\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_n</math>, and <math>L=(dI-A)</math> is the graph Laplacian.
 
==== Large spectral gap implies high expansion ====
{{Theorem
|Cheeger's inequality (lower bound)|
:<math>
\phi(G)\ge\frac{d-\lambda_2}{2}.
</math>
}}
{{Proof|
Let <math>S^*\subset V</math>, <math>|S^*|\le\frac{n}{2}</math>, be the vertex set achieving the optimal expansion ratio <math>\phi(G)=\min_{\overset{S\subset V}{|S|\le\frac{n}{2}}} \frac{|\partial S|}{|S|}=\frac{|\partial S^*|}{|S^*|}</math>, and <math>x\in\mathbb{R}^n</math> be a vector defined as
:<math>
x_v=\begin{cases}
1/|S^*| & \text{if } v\in S^*,\\
-1/\left|\overline{S^*}\right| &\text{if }v\in\overline{S^*}.
\end{cases}
</math>
Clearly, <math>x\cdot \boldsymbol{1}=\sum_{v\in S^*}\frac{1}{|S^*|} -\sum_{v\in\overline{S^*}}\frac{1}{\left|\overline{S^*}\right|}=0</math>, thus <math>x\bot\boldsymbol{1}</math>.
 
Due to the variational characterization of the second eigenvalue,
:<math>
\begin{align}
d-\lambda_2
&\le\frac{\sum_{uv\in E}(x_u-x_v)^2}{\sum_{v\in V}x_v^2}\\
&=\frac{\sum_{u\in S^*,v\in\overline{S^*},uv\in E}\left(1/|S^*|+1/|\overline{S^*}|\right)^2}{1/|S^*|+1/|\overline{S^*}|}\\
&=\left(\frac{1}{|S^*|}+\frac{1}{\left|\overline{S^*}\right|}\right)\cdot|\partial S^*|\\
&\le \frac{2|\partial S^*|}{|S^*|} &(\text{since }|S^*|\le\frac{n}{2})\\
&=2\phi(G).
\end{align}
</math>
}}
 
==== High expansion implies large spectral gap ====
We next prove the upper bound direction of Cheeger's inequality:
{{Theorem|Cheeger's inequality (upper bound)|
:<math>
\phi(G) \le \sqrt{2d(d-\lambda_2)}.
</math>
}}
 
This direction is harder than the lower bound direction. But it is mathematically more interesting and also more useful to us for analyzing the mixing time of random walks.
 
We prove the following equivalent inequality:
:<math>
\frac{\phi^2}{2d} \le d-\lambda_2.
</math>
 
Let <math>x</math> be a vector such that
* <math>Ax=\lambda_2x</math>, i.e., it is an eigenvector for <math>\lambda_2</math>;
* <math>|\{v\in V\mid x_v>0\}|\le\frac{n}{2}</math>, i.e., <math>x</math> has at most <math>n/2</math> positive entries. (We can always replace <math>x</math> by <math>-x</math> if this is not satisfied.)
And let nonnegative vector <math>y</math> be defined as
:<math>y_v=\begin{cases}
x_v & x_v>0,\\
0 & \text{otherwise.}
\end{cases}</math>
We then prove the following inequalities:
# <math>\frac{y^TLy}{y^Ty}\le d-\lambda_2</math>;
# <math>\frac{\phi^2}{2d}\le\frac{y^TLy}{y^Ty}</math>.
The theorem is then a simple consequence by combining these two inequalities.
 
We prove the first inequality:
{{Theorem|Lemma|
:<math>\frac{y^TLy}{y^Ty}\le d-\lambda_2</math>.
}}
{{Proof|
If <math>x_u\ge 0</math>, then
:<math>
\begin{align}
(Ly)_u
&=((dI-A)y)_u
=dy_u-\sum_{v}A(u,v)y_v
=dx_u-\sum_{v:x_v\ge 0}A(u,v)x_v\\
&\le dx_u-\sum_{v}A(u,v)x_v
=((dI-A)x)_u
=(d-\lambda_2)x_u.
\end{align}
</math>
Then
:<math>
\begin{align}
y^TLy
&=\sum_{u}y_u(Ly)_u
=\sum_{u:x_u\ge 0}y_u(Ly)_u
=\sum_{u:x_u\ge 0}x_u(Ly)_u\\
&\le (d-\lambda_2)\sum_{u:x_u\ge 0}x_u^2
=(d-\lambda_2)\sum_{u}y_u^2
=(d-\lambda_2)y^Ty,
\end{align}
</math>
which proves the lemma.
}}
 
We then prove the second inequality:
{{Theorem|Lemma|
:<math>\frac{\phi^2}{2d}\le\frac{y^TLy}{y^Ty}</math>.
}}
{{Proof|
To prove this, we introduce a new quantity <math>\frac{\sum_{uv\in E}|y_u^2-y_v^2|}{y^Ty}</math> and show that
:<math>\phi\le\frac{\sum_{uv\in E}|y_u^2-y_v^2|}{y^Ty}\le\sqrt{2d}\cdot\sqrt{\frac{y^TLy}{y^Ty}}</math>.
This will give us the desired inequality <math>\frac{\phi^2}{2d}\le\frac{y^TLy}{y^Ty}</math>.
 
{{Theorem|Lemma|
:<math>\frac{\sum_{uv\in E}|y_u^2-y_v^2|}{y^Ty}\le\sqrt{2d}\cdot\sqrt{\frac{y^TLy}{y^Ty}}</math>.
}}
{{Proof|
By the Cauchy-Schwarz Inequality,
:<math>
\begin{align}
\sum_{uv\in E}|y_u^2-y_v^2|
&=\sum_{uv\in E}|y_u-y_v||y_u+y_v|\\
&\le\sqrt{\sum_{uv\in E}(y_u-y_v)^2}\cdot\sqrt{\sum_{uv\in E}(y_u+y_v)^2}.
\end{align}
</math>
By the Laplacian property, the first term <math>\sqrt{\sum_{uv\in E}(y_u-y_v)^2}=\sqrt{y^TLy}</math>. By the Inequality of Arithmetic and Geometric Means, the second term
:<math>
\sqrt{\sum_{uv\in E}(y_u+y_v)^2}
\le\sqrt{2\sum_{uv\in E}(y_u^2+y_v^2)}
=\sqrt{2d\sum_{u\in V}y_u^2}
=\sqrt{2dy^Ty}.
</math>
Combining them together, we have
:<math>\sum_{uv\in E}|y_u^2-y_v^2|\le\sqrt{2d}\cdot\sqrt{y^TLy}\cdot\sqrt{y^Ty}</math>.
}}
 
{{Theorem|Lemma|
:<math>\phi\le\frac{\sum_{uv\in E}|y_u^2-y_v^2|}{y^Ty}</math>.
}}
{{Proof|
Suppose that <math>y</math> has <math>t</math> nonzero entries. We know that <math>t\le n/2</math> due to the definition of <math>y</math>. We enumerate the vertices <math>u_1,u_2,\ldots,u_n\in V</math> such that
:<math>y_{u_1}\ge y_{u_2}\ge\cdots\ge y_{u_t}>y_{u_{t+1}}=\cdots=y_{u_n}=0</math>.
Then
:<math>
\begin{align}
\sum_{uv\in E}|y_u^2-y_v^2|
&=\sum_{u_iu_j\in E\atop i<j}(y_{u_i}^2-y_{u_j}^2)
=\sum_{u_iu_j\in E\atop i<j}\sum_{k=i}^{j-1}(y_{u_k}^2-y_{u_{k+1}}^2)\\
&=\sum_{i=1}^n\sum_{j>i}A(u_i,u_j)\sum_{k=i}^{j-1}(y_{u_k}^2-y_{u_{k+1}}^2)
=\sum_{i=1}^n\sum_{j>i}\sum_{k=i}^{j-1}A(u_i,u_j)(y_{u_k}^2-y_{u_{k+1}}^2).
\end{align}
</math>
Exchanging the order of summation gives:
:<math>
\begin{align}
\sum_{i=1}^n\sum_{j>i}\sum_{k=i}^{j-1}A(u_i,u_j)(y_{u_k}^2-y_{u_{k+1}}^2)
&=\sum_{k=1}^n\sum_{i\le k}\sum_{j>k}A(u_i,u_j)(y_{u_k}^2-y_{u_{k+1}}^2)\\
&=\sum_{k=1}^t(y_{u_k}^2-y_{u_{k+1}}^2)\sum_{i\le k}\sum_{j>k}A(u_i,u_j)
\end{align}
</math>
Notice that <math>\sum_{i\le k}\sum_{j>k}A(u_i,u_j)=|\partial\{u_1,\ldots, u_k\}|</math>, which is at least <math>\phi k</math> since <math>k\le t\le n/2</math>. Therefore, combining these together, we have
:<math>
\begin{align}
\sum_{uv\in E}|y_u^2-y_v^2|
&=\sum_{k=1}^t(y_{u_k}^2-y_{u_{k+1}}^2)\sum_{i\le k}\sum_{j>k}A(u_i,u_j)\\
&=\sum_{k=1}^t(y_{u_k}^2-y_{u_{k+1}}^2)|\partial\{u_1,\ldots, u_k\}|\\
&\ge \phi\sum_{k=1}^t(y_{u_k}^2-y_{u_{k+1}}^2)k\\
&=\phi\sum_{k=1}^ty_{u_k}^2\\
&=\phi y^Ty.
\end{align}
</math>
}}
}}
 
= Spectral approach for symmetric chain =
We consider the '''symmetric Markov chains''' defined on the state space <math>\Omega</math>, where the transition matrix <math>P</math> is symmetric.
 
We have the following powerful spectral theorem for symmetric matrices.
 
{{Theorem|Theorem (Spectral theorem)|
:Let <math>S</math> be a symmetric <math>n\times n</math> matrix, whose eigenvalues are <math>\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_n</math>. There exist eigenvectors <math>v_1,v_2,\ldots,v_n</math> such that <math>v_iS=\lambda_iv_i</math> for all <math>i=1,\ldots, n</math> and <math>v_1,v_2,\ldots,v_n</math> are '''orthonormal'''.
}}
A set of orthonormal vectors <math>v_1,v_2,\ldots,v_n</math> satisfies:
* for any <math>i\neq j</math>, <math>v_i</math> is orthogonal to <math>v_j</math>, which means that <math>v_i^Tv_j=0</math>, denoted <math>v_i\bot v_j</math>;
* all <math>v_i</math> have unit length, i.e., <math>\|v_i\|_2=1</math>.
Since the eigenvectors <math>v_1,v_2,\ldots,v_n</math> are orthonormal, we can use them as an orthogonal basis, so that any vector <math>x\in\mathbb{R}^n</math> can be expressed as <math>x=\sum_{i=1}^nc_iv_i</math> where <math>c_i=x^Tv_i</math>, therefore
:<math>xS=\sum_{i=1}^nc_iv_iS=\sum_{i=1}^nc_i\lambda_iv_i.</math>
So multiplying by <math>S</math> scales the component of <math>x</math> along the direction of each eigenvector by a factor of the corresponding eigenvalue.
 
-----
Back to the symmetric Markov chain. Let <math>\Omega</math> be a finite state space of size <math>N</math>, and <math>P</math> be a symmetric transition matrix, whose eigenvalues are <math>\lambda_1\ge\lambda_2\ge\ldots\ge \lambda_N</math>. The followings hold for a symmetric transition matrix <math>P</math>:
*Due to the spectral theorem,  there exist orthonormal eigenvectors <math>v_1,v_2,\ldots,v_N</math> such that <math>v_iP=\lambda_iv_i</math> for <math>i=1,2,\ldots,N</math> and any distribution <math>q</math> over <math>\Omega</math> can be expressed as <math>q=\sum_{i=1}^Nc_iv_i</math> where <math>c_i=q^Tv_i</math>.
*A symmetric <math>P</math> must be doubly stochastic, thus the stationary distribution <math>\pi</math> is the uniform distribution.
 
Recall that due to the Perron-Frobenius theorem, <math>\lambda_1=1</math>. And <math>\boldsymbol{1}P=\boldsymbol{1}</math> since <math>P</math> is doubly stochastic, thus <math>v_1=\frac{\boldsymbol{1}}{\|\boldsymbol{1}\|_2}=\left(\frac{1}{\sqrt{N}},\ldots,\frac{1}{\sqrt{N}}\right)</math>.
 
When <math>q</math> is a distribution, i.e., <math>q</math> is a nonnegative vector and <math>\|q\|_1=1</math>, it holds that <math>c_1=q^Tv_1=\frac{1}{\sqrt{N}}</math> and
<math>c_1v_1=\left(\frac{1}{N},\ldots,\frac{1}{N}\right)=\pi</math>, thus
:<math>
q=\sum_{i=1}^Nc_iv_i=\pi+\sum_{i=2}^Nc_iv_i,
</math>
and the distribution at time <math>t</math> when the initial distribution is <math>q</math>, is given by
:<math>
qP^t=\pi P^t+\sum_{i=2}^Nc_iv_iP^t=\pi+\sum_{i=2}^Nc_i\lambda_i^tv_i.
</math>
It is easy to see that this distribution converges to <math>\pi</math> when the absolute values of <math>\lambda_2,\ldots,\lambda_N</math> are all less than 1. The rate at which it converges to <math>\pi</math>, namely the mixing rate, is determined by the quantity <math>\lambda_{\max}=\max\{|\lambda_2|,|\lambda_N|\}\,</math>, the largest absolute value among the eigenvalues other than <math>\lambda_1</math>.
 
{{Theorem|Theorem|
:Let <math>P</math> be the transition matrix for a symmetric Markov chain on state space <math>\Omega</math> where <math>|\Omega|=N</math>. Let <math>\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_N</math> be the spectrum of <math>P</math> and <math>\lambda_{\max}=\max\{|\lambda_2|,|\lambda_N|\}\,</math>. Then the mixing time of the Markov chain satisfies
::<math>\tau(\epsilon)\le\frac{\frac{1}{2}\ln N+\ln\frac{1}{2\epsilon}}{1-\lambda_{\text{max}}}</math>.
}}
{{Proof|
As analysed above, if <math>P</math> is symmetric, it has orthonormal eigenvectors <math>v_1,\ldots,v_N</math> such that any distribution <math>q</math> over <math>\Omega</math> can be expressed as
:<math>q=\sum_{i=1}^Nc_iv_i=\pi+\sum_{i=2}^Nc_iv_i</math>
with <math>c_i=q^Tv_i</math>, and
:<math>
qP^t=\pi+\sum_{i=2}^Nc_i\lambda_i^tv_i.
</math>
Thus,
:<math>
\begin{align}
\|qP^t-\pi\|_1
&=
\left\|\sum_{i=2}^Nc_i\lambda_i^tv_i\right\|_1\\
&\le
\sqrt{N}\left\|\sum_{i=2}^Nc_i\lambda_i^tv_i\right\|_2 &\quad\text{(Cauchy-Schwarz)}\\
&=
\sqrt{N}\sqrt{\sum_{i=2}^Nc_i^2\lambda_i^{2t}}\\
&\le
\sqrt{N}\lambda_{\max}^t\sqrt{\sum_{i=2}^Nc_i^2}\\
&=
\sqrt{N}\lambda_{\max}^t\|q\|_2\\
&\le
\sqrt{N}\lambda_{\max}^t.
\end{align}
</math>
The last inequality follows from the universal relation <math>\|q\|_2\le\|q\|_1</math> and the fact that <math>q</math> is a distribution, so <math>\|q\|_1=1</math>.
 
Then for any <math>x\in\Omega</math>, denote by <math>\boldsymbol{1}_x</math> the indicator vector for <math>x</math>, i.e., <math>\boldsymbol{1}_x(x)=1</math> and <math>\boldsymbol{1}_x(y)=0</math> for <math>y\neq x</math>. We have
:<math>
\begin{align}
\Delta_x(t)
&=\left\|\boldsymbol{1}_x P^t-\pi\right\|_{TV}=\frac{1}{2}\left\|\boldsymbol{1}_x P^t-\pi\right\|_1\\
&\le\frac{\sqrt{N}}{2}\lambda_{\max}^t\le\frac{\sqrt{N}}{2}\mathrm{e}^{-t(1-\lambda_{\max})}.
\end{align}
</math>
Therefore, we have
:<math>
\tau_x(\epsilon)
=\min\{t\mid\Delta_x(t)\le\epsilon\}
\le\frac{\frac{1}{2}\ln N+\ln\frac{1}{2\epsilon}}{1-\lambda_{\max}}
</math>
for any <math>x\in\Omega</math>, thus the bound holds for <math>\tau(\epsilon)=\max_{x}\tau_x(\epsilon)</math>.
}}
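A quick numerical sanity check of this bound, using an assumed small example (the lazy walk on an 8-cycle, which is symmetric): the empirically measured mixing time stays below the theorem's upper bound.

```python
# Sketch (assumed example): compare the empirical tau_x(eps) of the lazy
# walk on an 8-cycle with the bound of the theorem above.
import numpy as np

n = 8
P = 0.5 * np.eye(n)                      # lazy: stay put with prob. 1/2
for i in range(n):
    P[i, (i + 1) % n] += 0.25
    P[i, (i - 1) % n] += 0.25
pi = np.full(n, 1.0 / n)

ev = np.linalg.eigvalsh(P)               # eigenvalues in ascending order
lam_max = max(abs(ev[-2]), abs(ev[0]))   # max{|lambda_2|, |lambda_N|}

eps = 0.05
bound = (0.5 * np.log(n) + np.log(1 / (2 * eps))) / (1 - lam_max)

q = np.zeros(n); q[0] = 1.0              # start at a fixed vertex
t = 0
while 0.5 * np.abs(q - pi).sum() > eps:  # total variation distance
    q = q @ P
    t += 1
print(t, bound)                          # empirical time vs. upper bound
```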


== Rapid mixing of expander walk ==
Let <math>G(V,E)</math> be a <math>d</math>-regular graph on <math>n</math> vertices. Let <math>A</math> be its adjacency matrix. The transition matrix of the lazy random walk on <math>G</math> is given by <math>P=\frac{1}{2}\left(I+\frac{1}{d}A\right)</math>. Specifically,
:<math>P(u,v)=\begin{cases}
\frac{1}{2} & \text{if }u=v,\\
\frac{1}{2d} & \text{if }uv\in E,\\
0 & \text{otherwise.}
\end{cases}</math>
Obviously <math>P</math> is symmetric.
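As a sanity check, the lazy walk matrix can be built explicitly. The sketch below uses the 3-dimensional hypercube, an assumed example of a <math>d</math>-regular graph, and verifies that <math>P</math> is symmetric, stochastic, and lazy.

```python
# Sketch (assumed example graph): the lazy random walk matrix
# P = (I + A/d)/2 for the 3-regular hypercube graph on 8 vertices.
import numpy as np

n, d = 8, 3
A = np.zeros((n, n))
for u in range(n):
    for b in range(3):                    # flip each of the 3 coordinate bits
        A[u, u ^ (1 << b)] = 1.0

P = 0.5 * (np.eye(n) + A / d)
assert np.allclose(P, P.T)                # symmetric
assert np.allclose(P.sum(axis=1), 1.0)    # stochastic
assert np.allclose(np.diag(P), 0.5)       # lazy: stays put with prob. 1/2
```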


Let <math>\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_n</math> be the eigenvalues of <math>A</math>, and <math>\nu_1\ge\nu_2\ge\cdots\ge\nu_n</math> be the eigenvalues of <math>P</math>. It is easy to verify that
:<math>\nu_i=\frac{1}{2}\left(1+\frac{\lambda_i}{d}\right)</math>.
We know that <math>-d\le\lambda_i\le d</math>, thus <math>0\le\nu_i\le1</math>. Therefore, <math>\nu_{\max}=\max\{|\nu_2|,|\nu_n|\}=\nu_2\,</math>. By the above analysis of symmetric Markov chains,
:<math>\tau(\epsilon)\le\frac{\frac{1}{2}\ln n+\ln\frac{1}{2\epsilon}}{1-\nu_{\max}}=\frac{\frac{1}{2}\ln n+\ln\frac{1}{2\epsilon}}{1-\nu_2}=\frac{d\left(\ln n+2\ln\frac{1}{2\epsilon}\right)}{d-\lambda_2}</math>.
This proves the following theorem for the lazy random walk on <math>d</math>-regular graphs.
{{Theorem|Theorem|
:Let <math>G(V,E)</math> be a <math>d</math>-regular graph on <math>n</math> vertices, with spectrum <math>\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_n</math>. The mixing rate of lazy random walk on <math>G</math> is
::<math>\tau(\epsilon)\le\frac{d\left(\ln n+2\ln\frac{1}{2\epsilon}\right)}{d-\lambda_2}</math>.
}}
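The eigenvalue relation <math>\nu_i=\frac{1}{2}(1+\lambda_i/d)</math> underlying this theorem can be verified numerically. The sketch below uses the 3-regular hypercube graph on 8 vertices as an assumed example.

```python
# Sketch (assumed example): check nu_i = (1 + lambda_i/d)/2 for the lazy
# walk on the 3-regular hypercube graph on 8 vertices.
import numpy as np

n, d = 8, 3
A = np.zeros((n, n))
for u in range(n):
    for b in range(3):                    # hypercube: flip one coordinate bit
        A[u, u ^ (1 << b)] = 1.0
P = 0.5 * (np.eye(n) + A / d)

lam = np.sort(np.linalg.eigvalsh(A))[::-1]   # spectrum of A, descending
nu = np.sort(np.linalg.eigvalsh(P))[::-1]    # spectrum of P, descending
assert np.allclose(nu, 0.5 * (1 + lam / d))  # the eigenvalue relation
assert np.all(nu >= -1e-12) and np.all(nu <= 1 + 1e-12)  # 0 <= nu_i <= 1
```

Since <math>P</math> is an affine function of <math>A</math>, the two matrices share eigenvectors and the map <math>\lambda\mapsto\frac{1}{2}(1+\lambda/d)</math> is monotone, so the sorted spectra match term by term.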
 
Due to Cheeger's inequality, the spectral gap is bounded by expansion ratio as
:<math>d-\lambda_2\ge\frac{\phi^2}{2d},</math>
where <math>\phi=\phi(G)</math> is the expansion ratio of the graph <math>G</math>. Therefore, we have the following corollary which bounds the mixing time by graph expansion.
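For small graphs the expansion ratio can be brute-forced, which allows checking this direction of Cheeger's inequality directly. The sketch below again uses the 3-regular hypercube on 8 vertices as an assumed example.

```python
# Sketch (assumed example): brute-force phi(G) for the 3-regular hypercube
# on 8 vertices and check Cheeger's direction d - lambda_2 >= phi^2 / (2d).
from itertools import combinations
import numpy as np

n, d = 8, 3
A = np.zeros((n, n))
for u in range(n):
    for b in range(3):
        A[u, u ^ (1 << b)] = 1.0

# phi(G) = min over nonempty S with |S| <= n/2 of |edges(S, V\S)| / |S|
phi = min(
    sum(A[u, v] for u in S for v in range(n) if v not in S) / len(S)
    for k in range(1, n // 2 + 1)
    for S in combinations(range(n), k)
)

lam2 = np.sort(np.linalg.eigvalsh(A))[-2]    # second largest eigenvalue of A
assert d - lam2 >= phi**2 / (2 * d)
print(phi, d - lam2)    # for the hypercube: phi = 1, spectral gap = 2
```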
{{Theorem|Corollary|
:Let <math>G(V,E)</math> be a <math>d</math>-regular graph on <math>n</math> vertices, whose expansion ratio is <math>\phi=\phi(G)</math>. The mixing rate of lazy random walk on <math>G</math> is
::<math>\tau(\epsilon)\le\frac{2d^2\left(\ln n+2\ln\frac{1}{2\epsilon}\right)}{\phi^2}</math>.
:In particular, the mixing time is
::<math>\tau_{\text{mix}}=\tau(1/2\mathrm{e})\le\frac{2d^2(\ln n+2)}{\phi^2}</math>.
}}


For expander graphs, both <math>d</math> and <math>\phi</math> are constants, so the mixing time of the lazy random walk is <math>\tau_{\text{mix}}=O(\ln n)</math>; that is, the random walk is rapidly mixing.

Revision as of 08:45, 2 June 2014

== Problem 1 ==
A proper <math>q</math>-coloring of a graph <math>G(V,E)</math> is a mapping <math>f:V\to [q]</math> such that for any edge <math>uv\in E</math> we have <math>f(u)\neq f(v)</math>.

Consider the following Markov chain for proper <math>q</math>-colorings of a graph <math>G(V,E)</math>:

{{Theorem|Markov Chain for Graph Coloring|
:Start with a proper <math>q</math>-coloring of <math>G(V,E)</math>. At each step:
:# Pick a vertex <math>v\in V</math> and a color <math>c\in[q]</math> uniformly at random.
:# Change the color of <math>v</math> to <math>c</math> if the resulting coloring is proper; do nothing otherwise.
}}
Show that the Markov chain is:
# ergodic (i.e., aperiodic);
# irreducible if <math>q\ge \Delta+2</math>, where <math>\Delta</math> is the maximum degree of <math>G</math>;
# with uniform stationary distribution.
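To make the dynamics concrete (this is only an illustration of the chain's transition rule, not a solution to the problem), the update step can be simulated on a hypothetical small instance, a triangle with <math>q=5</math> colors:

```python
# Illustration only: simulating the coloring chain above on a triangle
# with q = 5 colors (a hypothetical small instance).
import random

random.seed(0)
edges = [(0, 1), (1, 2), (0, 2)]          # a triangle, so Delta = 2
n, q = 3, 5                                # q >= Delta + 2 = 4

def is_proper(col):
    return all(col[u] != col[v] for u, v in edges)

col = [0, 1, 2]                            # start from a proper coloring
for _ in range(1000):
    v = random.randrange(n)                # pick a vertex uniformly at random
    c = random.randrange(q)                # pick a color uniformly at random
    old = col[v]
    col[v] = c
    if not is_proper(col):                 # reject improper recolorings
        col[v] = old

assert is_proper(col)                      # the chain never leaves proper colorings
```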

== Problem 2 ==
Let <math>G(V,E)</math> be an undirected graph with known maximum degree <math>\Delta</math>. You want to design random walks over the vertices in <math>V</math> such that at each step the walk may only move to an adjacent vertex or stay put (i.e., the transition matrix <math>P</math> is defined on <math>V\times V</math> and <math>P(u,v)>0</math> only if <math>uv\in E</math> or <math>u=v</math>). You are allowed to design your transition matrix to achieve different stationary distributions.

* Suppose that <math>G</math> is not necessarily regular (a graph is regular if every vertex has the same degree). Design a time-reversible lazy random walk which is ergodic and always converges to the uniform stationary distribution.
* Given an arbitrary distribution <math>\pi</math> over <math>V</math>, design a time-reversible lazy random walk which is ergodic and always converges to the stationary distribution <math>\pi</math>.
* (bonus question) Try to give sufficient conditions in terms of <math>G</math> and <math>\pi</math> for the rapid mixing of the above two random walks.

== Problem 3 ==
Let <math>G(V,E)</math> be a connected undirected simple graph (no self-loops or parallel edges) on <math>n</math> vertices. Let <math>\phi(G)</math> be the expansion ratio of <math>G</math>. <math>G</math> is NOT necessarily regular. For any <math>v\in V</math>, let <math>d_v</math> be the degree of vertex <math>v</math>.

* What is the largest possible value for <math>\phi(G)</math>? Construct a graph <math>G</math> with this expansion ratio and explain why it is the largest.
* What is the smallest possible value for <math>\phi(G)</math>? Construct a graph <math>G</math> with this expansion ratio and explain why it is the smallest.
* Run a lazy random walk on <math>G</math>. What is the stationary distribution? Starting from an arbitrary vertex in an arbitrary unknown <math>G</math>, how long in the worst case should you run the random walk to guarantee that the distribution of the current position is within total variation distance <math>\epsilon</math> of the stationary distribution? Give an upper bound on the time in terms of <math>n</math> and <math>\epsilon</math>.
* Suppose that the maximum degree of <math>G</math> is known but the graph is not necessarily regular. Design a random walk on <math>G</math> with uniform stationary distribution. How long should you run the random walk to be <math>\epsilon</math>-close to the uniform distribution in the worst case?